If only it could take out the suicidal people, that’d be pretty great
Lemmy should integrate the microblogging that mbin has tbh. It’s a great feature and a nice way to differentiate the platform.
I hope I’m one of the first people to go.
Cause I know I’d be stubborn enough to try and survive whatever the aftermath is.

Once again an article misusing the word supervolcano. This time, it already fails to explain what they actually are.
For those interested, a volcano is considered a supervolcano when it has had at least one eruption with a Volcanic Explosivity Index (VEI) of 8, meaning it released at least 1,000 cubic kilometers of volcanic material. There are no known supervolcanoes in Italy
yet
This one had at least one eruption with a VEI of 7. Give it time; there are no supervolcano calderas in Italy, so far.
At least I found a source there saying it has the potential for a VEI 8 eruption (which still doesn’t make it a supervolcano now). Though they are generally far more concerned about the local population when smaller eruptions happen.
Dunno, I just get easily irritated by the sensationalism around supervolcanoes. A volcano does not need to be “super” to pose a great threat
@Coreidan@lemmy.world You’re famous!
Makes me think of this Onion vid: https://m.youtube.com/watch?v=iKC21wDarBo&pp=ygUWb25pb24gZW5kIG9mIHRoZSB3b3JsZA%3D%3D
As long as it isn’t the Deccan or Siberian Traps…
Misaligned artificial superintelligence is also a possibility.
We have no pathway to AGI yet. The “sparks of AGI” hype about LLMs is like trying to get to the Moon by building a bigger ladder.
Far better chance that someone in the Pentagon gets overconfident in the capabilities of unintelligent ML, hooks a glorified chatbot into NORAD, and triggers another Minuteman missile crisis that goes the wrong way this time because the warning looks too confident to be a false positive.
I never said I thought we would get to ASI through LLMs. But we still have a good chance of getting there soon.
My opinion is that the uncertain part is whether AGI itself is possible. If it happens, it will not only lead to ASI (maybe even quickly), but that ASI will be misaligned no matter how prepared we are. Humans aren’t very aligned among themselves; how can we expect a totally alien intelligence to be?
And btw, we are not prepared at all. AI safety is an inconvenience for AI companies, if it hasn’t been shelved entirely in favor of profit.