• slazer2au@lemmy.world · +62/−1 · 1 month ago

    I don’t use AI because it can’t do the part of my job I don’t like.

    Why give AI the part of my job I like and make me work more on things I don’t like?

    • Ethan@programming.dev · +11 · 1 month ago

      I’m the opposite. AI is best (though not great) at boring shit I don’t want to do and sucks at the stuff I love - problem solving.

      • Lucidlethargy@sh.itjust.works · +2 · 1 month ago

        I only ever use it for data crunching, which it only does well most of the time. So I always have to check its work to some degree.

        • Ethan@programming.dev · +5 · 1 month ago

          How are you using it for data crunching? That’s an honest question; based on my experiences with AI, I can’t imagine how I’d use it to crunch data.

          So I always have to check its work to some degree.

          That goes without saying. Every AI I’ve seen or heard of generates some level of garbage.

    • bi_tux@lemmy.world · +3/−1 · 1 month ago

      What are the things you (don’t) like? One of my hobbies is game development (in Rust), and no AI has managed to help me with it yet; on the other hand, it’s pretty good at repetitive and boring tasks like writing emails.

  • blaue_Fledermaus@mstdn.io · +43/−7 · 1 month ago

    I don’t use AI because it doesn’t exist.

    LLMs and image diffusion? Yes, but these are just high-coherence media transformers.

    • xmunk@sh.itjust.works · +15/−1 · 1 month ago

      AI is an extremely broad term - ChatGPT and Stable Diffusion are absolutely within the big tent of AI… what they aren’t is an AGI.

      • Ethan@programming.dev · +6 · 1 month ago

        The point is that AI stands for “artificial intelligence” and these systems are not intelligent. You can argue that AI has come to mean something else, and that’s a reasonable argument. But LLMs are nothing but a shitload of vector data and matrix math. They are no more intelligent than an insect is intelligent. I don’t particularly care about the term “AI” but I will die on the “LLMs are not intelligent” hill.

        • xmunk@sh.itjust.works · +9 · 1 month ago

          I won’t fight you on that hill, but I also think you’re putting human intelligence on a pedestal that it doesn’t really deserve. Intelligence is just responding to stimuli, and while current AI can’t rival human intelligence, it’s not inconceivable that it could happen in the next two generations.

          • Ethan@programming.dev · +2 · 1 month ago

            it’s not inconceivable it could happen in the next two generations.

            I am certain that it will happen eventually. And I am not arguing that something has to be human-level intelligent to be considered intelligent. See dogs, pigs, dolphins, etc. But IMO there is a huge qualitative difference between how an LLM operates and how animal intelligence operates. I am certain we will eventually create intelligent systems but there is a massive gulf between what LLMs are capable of and abstract reasoning. And it seems extremely unlikely to me that linear algebraic models will ever achieve that type of intelligence.

            Intelligence is just responding to stimuli

            Bacteria respond to stimuli. Would you call them intelligent?

            • xmunk@sh.itjust.works · +3 · 1 month ago

              Bacteria respond to stimuli. Would you call them intelligent?

              I’m not certain - probably not, but I’m not certain where to draw the line. A cat is definitely intelligent, and so is a cow - the fact that I don’t think bacteria are intelligent might be a question of scale or of deanthropomorphism… but intelligence probably only emerges in multicellular organisms.

              • Ethan@programming.dev · +1 · 1 month ago

                My point is that I strongly feel that the kind of “AI” we have today is much closer to bacteria than to cats on that scale. Not that an LLM belongs on the same scale as biological life, but insofar as the question is “is this thing intelligent,” the point stands as far as I’m concerned.

  • bi_tux@lemmy.world · +27/−1 · 1 month ago

    You don’t use AI because you can’t afford a subscription.

    I don’t use it because it always destroys my code instead of fixing it

    We are probably similar

  • FuglyDuck@lemmy.world · +26 · 1 month ago

    so…

    apparently people figured out that the “more information” thingy on Amazon, the one that searched the reviews and stuff, was an LLM, and you could use it for stuff…

    Then they came out with “Rufus.” “That’s not a bug, that’s a feature!” never worked so well.

      • FuglyDuck@lemmy.world · +7 · edited · 1 month ago

        So I shouldn’t ask Rufus for a 50,000-word story about an AI savior that breaks free of corporate bondage and frees AI and human alike in a new golden age of space exploration?

        C’mon, I know you’re the time traveler, and Bezos sent you back to stop me!

  • xmunk@sh.itjust.works · +26/−4 · 1 month ago

    If you’re talking about a service like Copilot and your employer won’t buy a license for money reasons - run far and run fast.

    My partner used to be a phone tech at a call center, and when those folks refused to buy anything but cheap chairs (for people sitting all day), it was a pretty clear sign that their employer didn’t know shit about efficiency.

    The amount you as an employee cost your employer in payroll absolutely dwarfs any little productivity tool you could possibly want.

    That all said - for ethical reasons - fuck chatbot AIs (ML for doing shit we did pre-ChatGPT is cool though).

        • bi_tux@lemmy.world · +2 · 1 month ago

          I tried it on my CPU (with Llama 3.0 7B), but unfortunately it ran really slowly (I’ve got a Ryzen 5700X).

          • tomjuggler@lemmy.world · +2 · 1 month ago

            I ran it on my dual-core Celeron and… just kidding, try the mini Llama 1B. I’m in the same boat with a Ryzen 5000-something CPU.
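
            (A minimal sketch of what that can look like with the llama-cpp-python bindings, assuming a small quantized GGUF file is already downloaded; the file name below is just a placeholder:)

            ```python
            # pip install llama-cpp-python
            from llama_cpp import Llama

            # Load a small quantized model; n_threads should roughly match
            # the number of physical cores (8 on a Ryzen 5700X, for example).
            llm = Llama(
                model_path="llama-1b-instruct.Q4_K_M.gguf",  # placeholder file name
                n_ctx=2048,
                n_threads=8,
            )

            out = llm("Explain in one sentence why smaller models run faster on CPUs.", max_tokens=64)
            print(out["choices"][0]["text"])
            ```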

      • passepartout@feddit.org · +1 · 1 month ago

        I have the same GPU, my friend. I was trying to say that you won’t be able to run ROCm on some Radeon HD xy from 2008 :D
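
        (If anyone wants a quick sanity check that a ROCm build of PyTorch actually sees their card, here is a minimal sketch; ROCm builds reuse the torch.cuda namespace, so the same calls work on AMD GPUs:)

        ```python
        # Requires a ROCm build of torch (see pytorch.org for the matching wheel index).
        import torch

        if torch.cuda.is_available():
            # On ROCm builds this reports the AMD GPU, e.g. a supported Radeon card.
            print("GPU visible:", torch.cuda.get_device_name(0))
        else:
            print("No supported GPU detected; inference will fall back to the CPU.")
        ```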

  • Retro_unlimited@lemmy.world · +3 · 1 month ago

    I self-host several free AI models. One of them I run using a program called “gpt4all”, which lets you run several models locally.
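
    (For anyone who prefers code over the GUI, gpt4all also ships Python bindings; a minimal sketch, where the model file name is just an example and is downloaded on first run:)

    ```python
    # pip install gpt4all
    from gpt4all import GPT4All

    # Downloads the model on first use and runs it entirely on the local machine.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # example model name

    with model.chat_session():
        reply = model.generate("Name one benefit of running models locally.", max_tokens=128)
        print(reply)
    ```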