Sam Altman, CEO of OpenAI, speaks at the meeting of the World Economic Forum in Davos, Switzerland. (Denis Balibouse/Reuters)

  • captainastronaut@seattlelunarsociety.org · 11 months ago

    But it should drive cars? Operate strike drones? Manage infrastructure like power grids and the water supply? Forecast tsunamis?

    Too little too late, Sam. 

    • halva@discuss.tchncs.de · 11 months ago

      Drive cars? As advanced cruise control, yes. Strike drones? No, though in practice it doesn’t change a thing, since humans can bomb civilians just fine themselves. Infrastructure and tsunamis? Yes and yes.

      If we’re not talking about LLMs (which are basically computer slop made up of books and sites pretending to be a brain), then using a tool for statistical analysis to crunch a shitload of data (optical, acoustic and mechanical data to assist driving, or seismic data to forecast tsunamis) is a bit of a no-brainer.
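
      For what it’s worth, the kind of statistical tooling being gestured at here fits in a few lines. Here’s a toy Python sketch (every name and threshold in it is made up for illustration, not taken from any real warning system) that flags seismic samples deviating sharply from a rolling baseline:

      ```python
      import statistics

      def flag_anomalies(readings, window=100, z_threshold=4.0):
          """Return indices of samples that deviate sharply from the recent baseline.

          A toy rolling z-score detector. Real tsunami-warning pipelines use far
          richer models, but the statistical core is this same idea: compare new
          data against what the recent past says is normal.
          """
          anomalies = []
          for i in range(window, len(readings)):
              baseline = readings[i - window:i]
              mean = statistics.fmean(baseline)
              stdev = statistics.stdev(baseline)
              if stdev > 0 and abs(readings[i] - mean) / stdev > z_threshold:
                  anomalies.append(i)  # index of the suspicious sample
          return anomalies
      ```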

    • pearsaltchocolatebar@discuss.online · 11 months ago

      Yes on everything but drone strikes.

      A computer would be better than humans in those scenarios. Especially driving cars, which humans are absolutely awful at.

                • wikibot@lemmy.world (bot) · 11 months ago

                  Here’s the summary for the Wikipedia article you mentioned in your comment:

                  No true Scotsman, or appeal to purity, is an informal fallacy in which one attempts to protect their generalized statement from a falsifying counterexample by excluding the counterexample improperly. Rather than abandoning the falsified universal generalization or providing evidence that would disqualify the falsifying counterexample, a slightly modified generalization is constructed ad-hoc to definitionally exclude the undesirable specific case and similar counterexamples by appeal to rhetoric. This rhetoric takes the form of emotionally charged but nonsubstantive purity platitudes such as “true”, “pure”, “genuine”, “authentic”, “real”, etc. Philosophy professor Bradley Dowden explains the fallacy as an “ad hoc rescue” of a refuted generalization attempt.


      • Deceptichum@kbin.social · 11 months ago

        So if it looks like it’s going to crash, should it automatically turn off and go “Lol good luck” to the driver now suddenly in charge of the life-and-death situation?

            • pearsaltchocolatebar@discuss.online · 11 months ago (edited)

              The computer, of course.

              A properly designed autonomous vehicle would be polling hundreds of sensors hundreds or thousands of times per second. A human’s reaction time is around 0.2 seconds, which is a hell of a long time in a crash scenario.
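
              Back-of-the-envelope, that gap is easy to quantify. A minimal sketch (the 100 Hz poll rate and 30 m/s speed are illustrative assumptions, not figures from this thread):

              ```python
              # Rough comparison of sensor-loop updates vs. human reaction time.
              # All numbers are illustrative assumptions, not measured values.
              POLL_RATE_HZ = 100       # conservative end of "hundreds or thousands of times per second"
              HUMAN_REACTION_S = 0.2   # the reaction time cited above
              SPEED_MPS = 30           # roughly 108 km/h, for context

              updates = POLL_RATE_HZ * HUMAN_REACTION_S
              distance_m = SPEED_MPS * HUMAN_REACTION_S

              print(f"Sensor updates per human reaction time: {updates:.0f}")
              print(f"Metres travelled before the human even starts to react: {distance_m:.1f}")
              # -> 20 updates and 6.0 m of travel in the time a human needs just to begin responding
              ```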

              It has a way better chance of a ‘life’ outcome than a human who’s either unaware of the impending crash, or is in fight-or-flight mode and reacting (likely wrongly) on instinct.

              Again, humans are absolutely terrible at operating giant hunks of metal that go fast. If every car on the road were autonomous, crashes would be extremely rare.