Yes that’s right (that it’s only as reliable as the prior probabilities that go into it).
Look at this another way, using the perspective you just shared: before applying Bayesian reasoning, one might think that AI as an X-risk sounds super fantastical, and assign it an ultra-low probability. But when you break it into constituent components like I did, it starts to sound much more plausible. We’re replacing how one feels intuitively about a certain (improbable-seeming) event with how one feels intuitively about other (more plausible) events. That isn’t a fallacy, that’s actually good off-the-cuff reasoning. Now we can look at whichever of those sounds the most implausible and break it down further.
My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”
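(For concreteness, here’s a minimal sketch of what I mean by “breaking it into constituent components.” The step descriptions and the numbers are purely illustrative placeholders, not my actual estimates, and this is just one possible decomposition. The point is only that estimating each conditional step separately can leave you with a combined number that feels very different from a gut estimate of the whole event at once.)

```python
# Minimal sketch of decomposing one big, implausible-sounding event
# into a chain of more plausible-sounding conditional steps.
# All probabilities below are placeholder assumptions for illustration.

steps = {
    "AGI is developed this century": 0.5,                # placeholder
    "its goals are misaligned with ours": 0.3,           # placeholder, given the previous step
    "it gains a decisive capability advantage": 0.4,     # placeholder, given the previous steps
    "that advantage leads to catastrophe": 0.5,          # placeholder, given the previous steps
}

p_combined = 1.0
for description, p_conditional in steps.items():
    p_combined *= p_conditional
    print(f"{description}: {p_conditional:.2f}")

print(f"Combined estimate: {p_combined:.3f}")  # 0.030 with these placeholder numbers
```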