It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.

Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • Feyter@programming.dev · 19 upvotes · 11 months ago

    Did we mention that it’s a closed-source, proprietary service controlled by a single company that can dictate the terms of its usage?

    • TehPers@beehaw.org · 2 upvotes · 11 months ago

      LLMs as a whole exist outside OpenAI, but ChatGPT does run exclusively on OpenAI’s services. And Azure I guess.

      • Feyter@programming.dev · 3 upvotes · 11 months ago

        Exactly. ChatGPT is just the most prominent service using an LLM. I’d be less concerned about the hype if all the free training data from thousands of users went back into an open system.

        Maybe AI is not stealing our jobs, but if you end up depending on it to stay competitive at your job, it would be good if it weren’t controlled by a single company…

        • blindsight@beehaw.org · 1 upvote · 11 months ago

          But there’s been huge movement in open-source LLMs since the Meta LLaMA weights leak (which within a few months evolved into an ecosystem that uses no proprietary code at all). And some of these models can be run on consumer laptops.

          I haven’t had a chance to do a deep dive on those yet, but I want to spin one up in the fall so I can present it to teachers/principals and try to convince schools not to buy snake-oil “AI detection” tools that are doomed to be ineffectual.
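
          A minimal sketch of what “spinning one up” locally could look like, assuming the llama-cpp-python bindings and a quantized GGUF model already downloaded to disk (the model path and prompt below are placeholders, not a specific recommendation):

            # Local LLM sketch using llama-cpp-python (pip install llama-cpp-python).
            # Everything runs on the local machine; no prompts or data leave the laptop.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./models/open-model.Q4_K_M.gguf",  # placeholder: any locally downloaded GGUF file
                n_ctx=2048,    # context window size
                n_threads=8,   # CPU threads; tune for the laptop
            )

            response = llm(
                "Explain in two sentences why essays can't be reliably 'AI-detected'.",
                max_tokens=128,
                temperature=0.7,
            )

            print(response["choices"][0]["text"])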