Bayesian reasoning is only as reliable as the prior probabilities that go into the algorithm. If you can’t justify your priors, it’s no better than saying “this just feels likely to me,” only with a window dressing of mathematics. It’s a great algorithm for updating concrete, known probabilities in the face of new concrete evidence, but that is not at all what’s going on with the vast majority of what the Rationalists do.
Even if you want to use it for estimating the probability of very uncertain events, the uncertainty compounds at each step. Once you get more than a step or two down that path of “let’s say the probability of x is p” without empirical justification, you should have no confidence at all that the number you’re getting bears any relationship to “true” probabilities. Again, it’s just a fancy way of saying “this feels true to me.”
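To make the compounding concrete, here’s a rough sketch (the five steps, the 50/50 point guesses, and the 0.2–0.8 “plausible range” on each guess are purely illustrative assumptions, not anyone’s real numbers):

```python
# Illustrative sketch: five chained probability guesses, each one only
# "somewhere around 0.5, give or take". The point estimate looks precise,
# but the range implied by the wiggle room in each guess spans ~3 orders
# of magnitude.
from math import prod

point_estimates = [0.5, 0.5, 0.5, 0.5, 0.5]   # "this feels about 50/50"
low_guesses     = [0.2, 0.2, 0.2, 0.2, 0.2]   # plausible low end of each guess
high_guesses    = [0.8, 0.8, 0.8, 0.8, 0.8]   # plausible high end of each guess

print(f"point estimate: {prod(point_estimates):.4f}")  # 0.0312
print(f"low end:        {prod(low_guesses):.4f}")      # 0.0003
print(f"high end:       {prod(high_guesses):.4f}")     # 0.3277
# The final "probability" moves by a factor of ~1000 depending entirely on
# how each step happened to feel.
```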
Yes, that’s right (that it’s only as reliable as the prior probabilities that go into it).
Look at this another way, using the perspective you just shared: before applying Bayesian reasoning, one might think that AI as an X-risk sounds super fantastical and assign it an ultra-low probability. But when you break it into constituent components like I did, it starts to sound much more plausible. We’re replacing how one feels intuitively about a certain (improbable-seeming) event with how one feels intuitively about other (more plausible) events. That isn’t a fallacy; that’s actually good off-the-cuff reasoning. Now we can look at whichever of those sounds the most implausible and break it down further.
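As a rough sketch of what I mean (the sub-events and the numbers attached to them here are placeholders for the shape of the argument, not estimates I’d actually defend):

```python
# Illustrative decomposition: each factor is something one can form an
# intuition about far more easily than the headline "AI apocalypse" event.
steps = {
    "AI reaches roughly human-level capability this century": 0.5,
    "such a system goes well past human capability soon after": 0.4,
    "its goals end up meaningfully misaligned with ours": 0.3,
    "that misalignment leads to catastrophic outcomes": 0.3,
}

p = 1.0
for description, prob in steps.items():
    p *= prob
    print(f"{prob:.2f}  {description}")

print(f"\ncombined estimate: {p:.3f}")  # ~0.018 -- small, but not negligible
```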
My goal here isn’t to actually find the exact probability of an AI apocalypse, it’s to raise a warning flag that says “hey, this is more plausible than you might initially think!”