rational enlightened beings that think the terminator from the movies is real i-cant

  • jsomae · 1 day ago (edited)

    Oops my bad.

    I think we have a semantic disagreement over what “intelligence” means, so let’s just put that word away. Machines can already solve problems that are out of human reach – this has been true for a very long time; take the four-colour theorem, which I think is a pretty clear example of a machine outputting something that wasn’t part of its input. It doesn’t really matter whether an AI is “intelligent” if it is capable of writing code to advance a goal it was given, or of analyzing a situation and producing a good recommendation for how to proceed toward that goal. LLMs are already capable of this, and if we imagine AIs being 100x better at it in 10 years, that is already too scary for me.

    By the way, I don’t really know what I did to deserve that hostility.

    • Mardoniush [she/her]@hexbear.net · 1 day ago

      Look, I know you’re arguing in good faith here, but… quite a few of us are very familiar with Big Yud’s arguments, and heck, I’m even partially sympathetic to a couple. There is some motivated suspicion here, since rationalists and their singularitarian forebears have a habit of going reactionary (Anissimov, Bostrom, SSC, countless others) or having close ties to reactionaries (Thiel, most notably). I see no reason to believe you are like them, but do realise that a lot of transhumanist arguments are weaponised as right-wing stalking horses.

      But as someone who has been following and occasionally interacting with him since our halcyon youth on the Extropian mailing list, I’ve learned that:

      a) Yud can be spectacularly convincing and also spectacularly wrong, by his own admission (see early work like “Staring into the Singularity” for this), so you should cognitively guard against his rhetorical tools.

      b) He has a habit of eliding and abstracting concepts so that several distinct things appear as a single consistent thing, while seeming to do the exact opposite. This is most apparent in subjects where he is not an expert, quantum mechanics most notably. (I think many-worlds is correct; I think his argument for many-worlds is bad. I am semi-competent at the mathematics of quantum mechanics.)

      I have specific, nuanced critiques of his main (not entirely invalid) argument on AI doom, but these are generally hard to get across because most of his own supporters don’t fully understand it.

      • jsomae · 21 hours ago

        I’m more convinced by SSC/ACX’s arguments about AI doom. I agree that Yudkowsky is weird, and I don’t like his political trend in recent years either – though I wouldn’t consider ACX reactionary just because he tolerates fascists leaving comments; that’s really not germane anyway. It sounds like you think Yudkowsky’s perspective on AI doom is not entirely invalid, so do you also believe there is a chance that AI could be an x-risk?

        • Mardoniush [she/her]@hexbear.net · 18 hours ago

          I’m not talking about his open comments, and I know the FAQ. I’m talking about a couple of letters between him and alt-right/neoreactionary figures that displayed a lot more sympathy with their views than you might give credit for. I can’t find them in a quick search and I’m pressed for time right now, so I will link them later.

          My p(doom) for AI is maybe 5%; my p(dystopia significantly worse than today) is substantially higher. We should definitely be looking into it, and almost none of the people who should be looking into it are in Silicon Valley.

          • jsomae · 17 hours ago (edited)

            Sounds like we basically agree. I’d be interested to see which alt-right views Scott Alexander agrees with. I’ve perused r/sneerclub, but they seem kinda kooky when it comes to SSC. For instance, I know he is in favour of “““eugenics”””, but his version of that is so different from the Nazi version that I just roll my eyes when people make the comparison. It’s hard for me to separate the genuinely valid criticism of him from the people who can’t get past stuff like that.

            (Edit: just to be 100% clear: it’s not master-race stuff. It’s more like optimizing alleles in vitro, and at the most extreme end he might favour consensually persuading meth addicts not to have children unintentionally if they are at risk of certain serious drug-induced diseases. I’m on the fence about it. It’s very Peter Singer.)

            • Mardoniush [she/her]@hexbear.net · 8 hours ago

              I think most people here would consider Singer to be, if not openly right-wing, at least worryingly adjacent – rather too close to the beige technocratic fascism of a Salazar.

      • jsomae · 1 day ago

        I do pride myself on being a silly person :)