rational enlightened beings that think the terminator from the movies is real i-cant

  • jsomae · 20 hours ago

    Those are good points, I’ll take a look at the resources you suggested. I think my counter-argument to you right now can basically be summed up as: I do agree that the danger of AI you are talking about is serious and is the more current and pressing concern, but that doesn’t really invalidate the X-risk factor of AI. I am not saying that X-risk is the only risk, and your point warning about a “hero” (which I agree with!) also doesn’t invalidate the concern. I mean, if it turns out that only a heroic space agency can save us from that asteroid, does that mean the threat from the asteroid isn’t real?

    • qbduubdp [they/them, he/him]@hexbear.net · 20 hours ago

      Following the asteroid analogy, I view it as this: If there’s a 20% chance that an asteroid could hit us in 2050, does that supplant the threat of climate change today?

      I’m not trying to say that AI systems won’t kill us all, just that they are being used to directly harm entire populations right now, and the appeal to a future danger is being used to minimize that discussion.

      Another thing to consider: If an AI system does kill us all, it will still be a human or organization that gave it the ability to do so, whether through training practices or by plugging it into weapons systems. Placing the blame on the AI itself absolves any person or organization of the responsibility, which is in line with how AI is used today (i.e. the promise of algorithmic ‘neutrality’). Put another way, do the bombs kill us all in a nuclear armageddon, or do the people who pressed the button? Does the gun kill me, or does the person pulling the trigger?

      • jsomae · 20 hours ago

        On each of your paragraphs:

        1. I think we completely agree – there can be both a 20% threat of extinction and also the threat of climate change.

        2. No, I don’t agree with this; that’s like saying the threat of the asteroid supplants the threat of climate change. The X-risk threat of AI does not invalidate the other threats of AI, and I disagree with anyone who thinks it does. I have not seen anyone use the X-risk threat of AI to invalidate the other threats of AI, and I implore you to not let such an argument sway you for or against either of those threats, which are both real (!).

        3. I do not blame the gun, I blame the manufacturer. I am calling for more oversight of AI companies and for people who research AI to take this threat more seriously. If an AI apocalypse happens, it will of course be the fault of the idiotic AI development companies that did not take this threat seriously because they were blinded by profits. What did I say that made you think I was blaming the AI itself?