In hot pursuit of those juicy defense dollars, OpenAI has signed a new contract with the U.S. National Laboratories to “supercharge their scientific research” with OpenAI’s latest o1-series confabu…
There is no complex reasoning needed for nukes. You just need a simple conditional: if nukes were launched, then you have to launch – mutually assured destruction. If they weren’t launched, then don’t launch.
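The whole “reasoning” task above fits in one line of Python – a tongue-in-cheek sketch, with the function name my own invention:

```python
def retaliate(nukes_launched: bool) -> bool:
    # Mutually assured destruction as a single conditional:
    # if they launched, launch; otherwise, don't.
    return nukes_launched
```

No o1-series model required.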
The most efficient solution is simply leaving humans with the key. Surely DOGE will save us here, right? Right?
So stupid.