• AdamEatsAss@lemmy.world

    I work as a programmer and I didn’t realize how many people have already adopted AI into their workflow. About half of my coworkers (mostly younger people) ask ChatGPT to write code for whatever they need to program before starting. Even after corporate emails about not sharing IP and trade secrets with AI, people still do it. AI is a powerful tool and it cannot be un-invented. People will use it as long as it continues to make their lives easier.

    • lemmyvore@feddit.nl

      People will use it as long as it continues to make their lives easier.

      Being fired and sued for divulging trade secrets doesn’t sound like an easy life.

      • bioemerl@kbin.social

        If you think those industries run on the competency of their programmers instead of the competency of their systems, you’ve got a big surprise coming for you.

      • AdamEatsAss@lemmy.world

        I don’t think my experience is unique here. People will use tools to help them however they can. What sector a programmer is in will likely have no effect on AI use. It may be harder to use on a secure network, but people can still do it on their phone or personal computer. And medical and aerospace software is heavily regulated and tested, so the likelihood of bad AI code getting through is very low (not to say coding mistakes don’t happen - just look at the Boeing 737 MAX crashes). And militaries also test their equipment before and after purchase. While they are often held to different standards than commercial equipment, a military usually has competent people reviewing code and equipment.

    • Khalic@kbin.social

      Using ChatGPT for coding is like copy-pasting from Stack Overflow: useful for someone who’s bad at their job, but it doesn’t change shit for a good developer.

      • kava@lemmy.world

        It depends how you use it, I think, like any tool. I might ask ChatGPT to help me write an algorithm in pseudocode so I can understand it, ask it a few questions about optimization just so I have a sense of how it works, and then implement it myself.

        It can also help you think through ideas. I will copy-paste a function or file and then ask questions like “what considerations do you think I should have?”, “is there anything I could be missing?”, “what could make this code better?”, “how would you optimize this?”, “how would you make it simpler?”

        Let me find a simple example:

        // Return the element of arr whose absolute value is largest.
        function findAbsoluteMax(arr) {
          return arr.reduce((val, next) => {
            if (Math.abs(next) > Math.abs(val)) {
              return next;
            } else {
              return val;
            }
          });
        }
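
        For instance:

          findAbsoluteMax([3, -7, 2]); // returns -7 (|-7| is the largest absolute value)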
        

        Let’s ask GPT-4: “how could we make this code better?”

        It offers two suggestions:

        1. There should be some simple error handling. For example, if arr has length 0, the function should throw an error or return null (reduce with no initial value throws a TypeError on an empty array). This makes sense and is a good thing to add - perhaps it would have saved me a lot of headache in some scenario where I’m getting a weird bug.

        2. Use a ternary operator to make the arr.reduce call shorter:

          return arr.reduce((val, next) => (Math.abs(next) > Math.abs(val) ? next : val));

        I think this actually does make it more readable and condenses it - a pretty good thing.
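
        Putting both suggestions together, the function would look something like this (my own sketch, not GPT-4’s verbatim output):

        // Hypothetical combined version: guard against empty input, then
        // use the condensed ternary form of the reduce callback.
        function findAbsoluteMax(arr) {
          if (!Array.isArray(arr) || arr.length === 0) {
            return null; // or throw, depending on what the caller expects
          }
          return arr.reduce((val, next) => (Math.abs(next) > Math.abs(val) ? next : val));
        }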

        Now, this is a simple function, but you can actually copy in a whole file and ask it to analyze things you might be missing or considerations you could make. It’s like talking to the rubber duck, except the duck talks back.

        There’s a lot of power in this technology and it doesn’t simply revolve around copy-pasting code. Perhaps my example wasn’t the greatest, but someone else can share how they use it.

        • sheogorath@lemmy.world

          Yeah, I pretty much use ChatGPT as an interactive rubber duck when working. However, I’ve refrained from pasting a file or code snippet from my work into it, since there have already been IP leaks at Samsung, and my company has shared a guideline on how to use AI tools to help with work. They know that people will keep using them regardless, so what they can do is keep people from leaking company or client IP by sharing a file with the AI tools.

  • bauhaus

    well that sounds like…

    😀 🕶️ 😎

    good news

  • Hardeehar@lemm.ee

    Tell me, how would you enforce it?

    It’s easy enough to launder generated text, and AI-text detectors don’t work.

    Is it really just an honor system?

  • JackGreenEarth@lemm.ee

    ChatGPT is unreliable, but AIs that can search the internet can be just as reliable and trustworthy as human authors. Of course, Bing Chat is not FOSS, so I don’t fully support it, but it is very good at writing accurate articles.

    • Khalic@kbin.social

      “Just as trustworthy as human authors” - OK, so you have no idea how these chatbots work, do you?

        • Khalic@kbin.social

          Oh, I do not, but the choice is: a human who might understand what’s happening vs. a probabilistic model that is unable to understand ANYTHING.

          • bioemerl@kbin.social

            probabilistic model that is unable to understand ANYTHING

            You’re the one who doesn’t understand how these things work.

            • Durotar

              deleted by creator

              • bioemerl@kbin.social

                And the other guy did?

                LLMs are massive neural networks which are Turing complete. There is real logic and understanding to their behavior, even if it isn’t human.

                • Durotar

                  deleted by creator

                • Dark Arc@social.packetloss.gg

                  Oh sure, there’s logic and understanding to their behavior, but they don’t understand what they’re saying (particularly the validity of it): https://arstechnica.com/?p=1961606

                  They’re like… a story author. They understand the rules of language well enough to write a story, but they don’t understand the data or reality well enough to know whether they’ve told you the truth, a lie, or something in between.

                  i.e., they have no idea whether they’ve told you fact or fiction; they just know they’ve done a convincing job of conveying the message based on language patterns, and that is an extremely big problem.

        • monkic@kbin.social

          LLM AI bases its responses on aggregated texts written by … human authors, just without having any sense of context or logic or understanding of the actual words being put together.

      • JackGreenEarth@lemm.ee

        I understand they are just fancy text-prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.

        • Khalic@kbin.social

          I’m not an ML expert, but we’ve been using them for a while in neuroscience (I’m a software dev in bioinformatics). They are impressive, but they have no semantics, no logic. It’s just a fancy mirror. That’s why, for example, World of Warcraft players have been able to trick those bots into writing an article about a feature that doesn’t exist.

          Do you really want to waste your time reading a blob of data with no coherency?

          • whataboutshutup@discuss.online

            Do you really want to waste your time reading a blob of data with no coherency?

            We are both on the internet, lol. And I mean it. LLMs are slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you’ve figured out how/where to search for direct and correct answers, they’re just the same or maybe worse. I find this skill a bit fascinating: we learn to read patterns and red flags without even opening a page. I doubt it’s possible to make a reliable model with that bullshit detector.

        • Excel@lemmy.megumin.org

          I don’t think it was ever turned off; it just requires the subscription to access GPT-4 and then enabling the plugins.

          It was a closed beta before, but it’s been available to everybody for a while now.

          There was also the version with Bing integration that they removed, which might be what you’re thinking of… but there are hundreds of other web-search plugins available beyond Bing.