A new report from plagiarism detector Copyleaks found that 60% of OpenAI’s GPT-3.5 outputs contained some form of plagiarism.

Why it matters: Content creators from authors and songwriters to The New York Times are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.

  • Pennomi@lemmy.world · 9 months ago

    The individual GPT-3.5 output with the highest similarity score was in computer science (100%), followed by physics (92%), and psychology (88%).

    And that’s why this claim is mostly bullshit. These use cases are all sciences, where the correct solution is usually the same or highly similar no matter who writes it. Small snippets of computer code cannot be copyrighted anyway.

    Not surprisingly, softer subjects like “English” and “Theatre” rank extremely low on this scale.
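As a minimal illustration of that argument (my own example, not from the Copyleaks report): two people independently writing a textbook algorithm will produce near-identical code, so a very high similarity score on a computer-science answer tells you little about copying.

```python
# Two independently written implementations of Euclid's GCD algorithm.
# Any textbook, student, or language model is likely to produce
# essentially this exact structure.

def gcd_a(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

def gcd_b(x: int, y: int) -> int:
    while y:
        x, y = y, x % y
    return x

# Structurally the two differ only in variable names, which is
# exactly what a similarity scorer would flag as near-100% overlap.
print(gcd_a(48, 18), gcd_b(48, 18))  # prints 6 6
```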

    • FaceDeer@kbin.social · 9 months ago

      Not to mention that a response “containing” plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don’t even know how it works.

      I went and looked at how low theater and such were and it’s dramatic:

      The lowest similarity scores appeared in theater (0.9%), humanities (2.8%) and English language (5.4%).

    • LazaroFilm@lemmy.world · 9 months ago

So, if the AI gives you a correct answer to a science question, it’s “infringing copyright,” and if it spits out a bullshit answer, it’s giving you wrong and unsupported claims.

    • themoonisacheese@sh.itjust.works · 9 months ago

      Right? No doubt that output can be similar to training data, and I would believe that some of it is plagiarism, but plagiarism detectors are infamous among uni students for being completely unreliable, flagging pronouns, dates, and citations. Until someone can go “here’s an example of actual plagiarism” (which is obvious when pointed out), these claims make no sense.
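Copyleaks’ actual method is proprietary, but a toy sketch of a common building block of such detectors, word-trigram Jaccard overlap, shows how shared stock phrasing alone inflates a score (the example sentences are mine, purely hypothetical):

```python
# Toy similarity scorer: Jaccard overlap of word trigrams.
# Real detectors are far more elaborate, but the failure mode is
# the same: shared boilerplate counts as "similarity."
import re

def trigrams(text: str) -> set:
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Two unrelated sentences that share only a citation-style stock phrase
# still produce a nonzero "plagiarism" score.
a = "As shown in previous work, the results are consistent with theory."
b = "As shown in previous work, cows prefer grass over hay in winter."
print(similarity(a, b))  # nonzero purely from the shared stock phrase
```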

      • linearchaos@lemmy.world · 9 months ago

        If it’s plagiarizing, so are Google search results summaries.

        It’s not like it doesn’t cite where it found the data.

    • Ghostalmedia@lemmy.world · 9 months ago

      Eh, kinda. It’s not like a science paper is just going to be an equation and nothing else. An author’s synthesis of the results is always going to have unique language. And that is even more true for a social science paper.

      • Jojo@lemm.ee · 9 months ago

        Are those “best matches” paper-sized, or snippet-sized?

    • OpenStars@startrek.website · 9 months ago

      But also, there is far less training data to mix and match responses from, so naively I would expect a higher plagiarism rate, by its very nature.

      Less than 2% of the world’s population has a doctorate. According to the US Census Bureau, only 1.2% of the US population has a PhD.

      source

        • OpenStars@startrek.website · 9 months ago

          Surely many who have them earned them elsewhere before immigrating to America, and likewise I would expect the proportion of immigrants who have them to be outsized. Americans tend to be more greedy than anything else and don’t put in the effort required for such small (financial) rewards.

          Also, those with PhDs tend to congregate in certain areas that support those jobs, i.e. cities (and not even all that many of those), plus smaller college towns too ofc. As such, many in the general populace might rarely if ever run into one for most of their lives, unless traveling specifically to those areas for some reason.

          And ofc rural areas are far larger, geographically speaking, than places where a person with a PhD would (likely) go. So you could randomly pick a spot on a map 100 times and never manage to find someone with a PhD anywhere within tens of miles, I would expect, although that line of thinking reveals my own biases: do most educated farmers stop at like an MS and just follow up with their own (possibly even extensive) self-study, or go all the way to PhDs while working their actual farms? (I doubt it bc it does not sound practical, and being practical is a hallmark of farmers afaik, but I could be wrong…) Anyway, I expect the unequal distribution is a contributing / exacerbating factor in the general rarity.

      • ShittyBeatlesFCPres@lemmy.world · 9 months ago

        Ironically, in the article, the link to the original Census source of the 1.2% datum is now dead.

        Also, it’s 2.1% now (for people over 25), according to the Wikipedia article’s source: https://www.census.gov/data/tables/2018/demo/education-attainment/cps-detailed-tables.html

        Edit: the Wikipedia citation is from 2018 data. The 2023 tables are here: https://www.census.gov/data/tables/2022/demo/educational-attainment/cps-detailed-tables.html

        Citation party!

    • pulaskiwasright · 9 months ago

      You can’t write a paper covering scientific topics without citing your sources; a human would be required to. Generative AI should be held to at least as high a standard.

      • Pennomi@lemmy.world · 9 months ago

        Turns out ChatGPT isn’t writing a scientific paper though, it’s conversing with the user.

        • pulaskiwasright · 9 months ago

          If it’s regurgitating other people’s work then it needs citations.