• ShimmeringKoi [comrade/them]@hexbear.net · 9 months ago

    One of the projects Corbi is working on (for an unnamed business mogul) is an island fortress with a flammable moat. Try and breach it and it bursts into flames.

    “We wound up literally building a 30-ft-deep lake [around the compound] skimmed with a lighter-than-water flammable liquid that can transform into a ring of fire,” Corbi explained to THR. “The only access to the island is a swing bridge.” And, of course, there are also backup water cannons to keep the poors out.

    “Hello, we have severed your water intake and are now setting up these comically large fans in preparation for the throwing of several thousand pounds of tires and horse shit into your flaming moat.”

  • Zerush · 9 months ago

    They're also developing orbital and Mars colonies, and not exactly for poor people.

    • ☆ Yσɠƚԋσʂ ☆OP · 9 months ago

      Don’t worry, I’m sure those colonies are going to require lots of indentured servants to operate them.

        • ☆ Yσɠƚԋσʂ ☆OP · 9 months ago

          Yeah, it’s a complete fantasy. It literally takes thousands of people on Earth to keep a small crew alive on the ISS. We’re nowhere close to being able to build self-sufficient colonies on another planet.

      • Zerush · 9 months ago

        Yes, Boston Dynamics and others are working on this.

        • ☆ Yσɠƚԋσʂ ☆OP · 9 months ago

          I think it’ll be a while before we reach the level of automation where skilled workers aren’t needed.

            • ☆ Yσɠƚԋσʂ ☆OP · 9 months ago

              Oh, I know how fancy Boston Dynamics robots are, but you gotta remember that a lot of that is scripted. Boston Dynamics figured out how to make a neural net that can produce really fluid movements and keep its balance, but somebody still has to control the robot and tell it what to do. You also need people to repair the robots, do maintenance, and so on. Until we have AGI, humans are still going to be needed for a lot of the work.

              • Zerush · 9 months ago

                True, but that won’t be the situation for much longer; progress in AI is currently exponential, and we already have AI being developed with the help of AI. By the time colonies are established on Mars, you can be sure there will be sufficiently advanced bots, with corresponding modular maintenance systems, able to keep functioning for many years.

                • ☆ Yσɠƚԋσʂ ☆OP · 9 months ago

                  We don’t really know where the plateau for current AI techniques is. A lot of what we see looks impressive, but it’s very superficial in practice. Pretty much all AI today boils down to feeding huge volumes of data into a neural network that ends up creating a compressed representation of that data, then making stochastic predictions based on the model. This is great for things like text or image generation, but it simply doesn’t work for applications where a specific correct result is needed. What’s worse, using such systems to control things in the physical world is incredibly dangerous, as we’re seeing with self-driving cars.
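
                  As a toy illustration of that last point (the numbers and candidate words below are made up, not taken from any real model): a trained network ultimately just assigns probabilities to possible outputs, and generation samples from that distribution, so two runs on the same input can disagree and nothing guarantees a single “correct” answer.

                      # Toy sketch: sample "next words" from a probability distribution,
                      # roughly the way a generative model's decoder does.
                      # All scores and words here are hypothetical.
                      import math
                      import random

                      def softmax(logits):
                          # Turn raw scores into probabilities that sum to 1.
                          m = max(logits)
                          exps = [math.exp(x - m) for x in logits]
                          total = sum(exps)
                          return [e / total for e in exps]

                      candidates = ["bridge", "moat", "lake", "fire"]  # hypothetical next words
                      logits = [2.1, 1.4, 0.3, -0.8]                   # hypothetical model scores
                      probs = softmax(logits)

                      # Sampling is stochastic: repeated runs can pick different words.
                      for _ in range(3):
                          print(random.choices(candidates, weights=probs, k=1)[0])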

                  Since the neural net is simply comparing numbers to make decisions, it doesn’t have any understanding of what it’s actually doing in a human sense. It can’t explain the reasoning behind its decisions to a human, it isn’t guaranteed to understand human instructions, and it isn’t aware of its own limitations.

                  In order to build an AI that can replace a human decision maker, it would need an internal representation of the physical world similar to our own. Then we would have to teach it language within the context of that world. That’s how we could build an AI that can be said to understand things, and that shares enough context with us to communicate in a meaningful way. People are experimenting with this, but it’s still at a very early stage, and it’s not clear that the techniques used for LLMs will work well for this approach.

                  I’d caution you to be highly skeptical of the AI claims we’re seeing, because most of them are made by people who have very little understanding of how this stuff actually works and whose job is to sell the tech to the public. Pretty much none of the actual experts in the field share that optimism.

                  Of course, nobody knows what the future brings, and we might make some amazing breakthroughs in the coming years. However, given what we know right now, there’s little reason to expect this sort of exponential growth to continue for long. It’s also worth noting that we already went through a wave of similar hype back in the 80s, when people started getting really impressive results with neural nets and symbolic logic, but scaling turned out to be much harder than anybody anticipated.

  • dev_null · 9 months ago

    Betteridge’s law of headlines strikes again