We have no pathway to AGI yet. The “sparks of AGI” hype about LLMs is like trying to get to the Moon by building a bigger ladder.
Far better chance that someone in the Pentagon gets overconfident in the capabilities of unintelligent ML, hooks a glorified chatbot into NORAD, and triggers another Minuteman false-alarm crisis, one that goes the wrong way this time because the launch order looks too confident to be a false positive.
My opinion is that the uncertain part is whether AGI itself is possible. If it is, it will not only lead to ASI (maybe even quickly), but that ASI will be misaligned no matter how prepared we are. Humans aren’t very aligned among themselves; how can we expect a totally alien intelligence to be?
And btw, we are not prepared at all. AI safety is an inconvenience for AI companies, where it hasn’t been shelved entirely in favor of profit.
Misaligned artificial superintelligence is also a possibility.
I never said I thought we would get to ASI through LLMs. But we still have a good chance of getting there soon.