Not everyone needs to have an opinion on AI


National Novel Writing Month is an American initiative that has become a worldwide pastime, in which participants attempt to write a 50,000-word manuscript in the month of November. Some of these first drafts eventually become novels — the initial version of what became Erin Morgenstern’s The Night Circus started life as a NaNoWriMo effort — but most don’t. And many participants cheerfully admit they are writing for the pleasure of creation rather than out of any expectation that they will gain either money or prestige from the activity.

In recent years, NaNoWriMo has been plagued by controversies. This year, the organisation has been hit by an entirely self-made argument, after declaring that while it does not have an explicit position on the use of generative artificial intelligence in writing, it believes that to “categorically condemn the use of AI writing tools” is both “ableist and classist”. (The implication that working-class people and people with disabilities can only write fiction with the help of generative AI, however, is apparently A-OK.)

The resulting blowback saw one of its board members, the writer Daniel José Older, resign in disgust. (NaNoWriMo has since apologised for, and retracted, its initial statement.)

There is very little at stake when you participate in NaNoWriMo, other, perhaps, than the goodwill of the friends and relations you might ask to read your work afterwards. Sign-ups on the website can talk to other participants on their discussion forums and are rewarded for hitting certain milestones with little graphics marking their achievement. If you want to write an experimental novel called A Mid-Career Academic’s Reflections Upon His Divorce that is simply the same four-letter expletive repeated over and over again, nothing is stopping you from doing so. If you want to type the words “write the first 50,000 words of a coming-of-age novel in the style of Paul Beatty” into ChatGPT and submit the rest, you can do so. In both cases, it is your own time you are wasting.

The whole argument is exceptionally silly but does hold two useful lessons.

One is that organisations and companies should have fewer opinions. Quite why NaNoWriMo needs a position on the use of generative AI is beyond me. Organisations should have a social conscience, but it should be limited to things they directly control. They should care about fairness in hiring and about the effects their supply chains have on the world, just as NaNoWriMo should care about whether its discussion forums are well moderated (the subject of a previous controversy). But they should have little or no interest in issues they have no meaningful way to prevent, such as what participants do with AI.

A good rule of thumb for an organisation considering whether to make a statement about a topic is to ask itself what material changes within its control it proposes to make as a result of doing so — and why. Those changes might range from donating money to changing how it hires. For example, the cosmetics retailer Lush has given large amounts of money to police reform charities, while Julian Richer, the founder of the home entertainment chain Richer Sounds, went so far as to turn his business into an employee-owned trust in 2019.

But if an organisation is either unwilling or unable to make real changes to how it operates or spends money, then nine times out of ten that is a sign that it will gain very little, and add very little, by speaking out.

The second lesson concerns how organisations should respond to the widespread adoption of generative AI. Just as NaNoWriMo can’t stop me asking Google Gemini to write a roman-à-clef about a dashingly handsome columnist who solves crimes, employers can’t reliably stop someone from writing their cover letter by the same method. That doesn’t mean they should necessarily embrace it, but it does mean that some forms of assessment have, inevitably, become a test of your ability to work well with generative AI as much as of your ability to write or to research independently. Hiring, already one of the most difficult things any organisation does, is becoming more difficult still, and probation periods will become more important as a result.

Both lessons have something in common: they are a reminder that organisations shouldn’t sweat the stuff outside of their control. Part of writing a good novel is choosing the right words in the right places at the right time. So too is knowing when it is time for an organisation to speak — and when it should stay silent.


Posting != Endorsing the writer’s views.

  • huginn@feddit.it · 2 months ago

    The plagiarism engine effect is exactly what you need for a good programming tool. Most problems you’re ever going to encounter are solved and GenAI becomes a very complex code autocomplete.

    An LLM constructed only out of open source data could do an excellent job as a tool in this capacity. No theft required.

    For writing prose it’s absolutely trash, and everyone using it for that purpose should feel ashamed.

    • Voroxpete@sh.itjust.works · 2 months ago

      The plagiarism engine effect is exactly what you need for a good programming tool. Most problems you’re ever going to encounter are solved and GenAI becomes a very complex code autocomplete.

      In theory, yes, although having actually tried using genAI as a programming tool, the actual results are deeply lacklustre at best. It sort of works, under the right circumstances, but only if you already know enough to confidently do the job yourself, at which point the value in having an AI do it for you, and then having to check the AI’s work for any of a million possible fuck ups, seems limited at best.

      • huginn@feddit.it · 2 months ago

        Yeah, my usage of it is similarly limited. But the plagiarism engine is more useful than it is annoying in my experience, especially for writing kdoc or unit test variations. Write one, write the name of the next, and have autocomplete fill it out with the expected conditional variation.
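The workflow described here — write one test by hand, type only the name of the next, and let the autocomplete propose the body with the expected conditional variation — might look roughly like this. The `clamp` function and test names below are my own hypothetical illustration, not anything from this thread:

```java
// Illustrative sketch of the autocomplete-driven test workflow described
// above; the function under test and the test names are hypothetical.
public class ClampTest {

    // Hypothetical function under test.
    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }

    public static void main(String[] args) {
        // You write the first assertion entirely by hand...
        check(clamp(5, 0, 10) == 5, "value within range returned unchanged");

        // ...then type only a descriptive name like the ones below and let
        // the assistant fill in the expected conditional variation.
        check(clamp(-3, 0, 10) == 0, "value below range clamps to lower bound");
        check(clamp(42, 0, 10) == 10, "value above range clamps to upper bound");

        System.out.println("all clamp tests pass");
    }
}
```

The value is that each variation is mechanical once the first test exists, which is exactly the "very complex code autocomplete" use case described earlier in the thread.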

          • huginn@feddit.it · 2 months ago

            Ish.

            You’ll have assertions that are entirely new or different, plus other pieces of setup or teardown. It really is one of the best use cases for GitHub’s Copilot that I’ve run across.

            In my day-to-day, the IntelliJ autocomplete is what I prefer.

            • Voroxpete@sh.itjust.works · 2 months ago

              Noted. I’ll have to play around with that sometime.

              Despite my obvious stance as an AI skeptic, I have no problem with putting it to use in places where it can be used effectively (and ethically). I’ve just found that in practice, those uses are vanishingly few. I’m not on some noble quest to rid the world of computers, I just don’t like being sold overhyped crap.

              I’m also hesitant to try to rebuild any part of my workflow around the current generation of these tools, when they obviously aren’t going to exist in a few years, or will exist but at an exorbitant price. The cost to run genAI is far, far higher than any entity (even Microsoft) has any willingness to sustain long term. We’re in the “give it away or make it super cheap to get everyone bought in” phase right now, but the enshittification will come hard and fast on this one, much sooner than anyone thinks. OpenAI are literally burning billions just in compute right now. It’s unsustainable. Short of some kind of magical innovation that brings those compute costs down a hundred or thousand fold, this isn’t going to stick around.

              • huginn@feddit.it · 2 months ago

                Oh I’m 1000% in agreement with you. I think Copilot for programming is more expensive than it’s worth right now, both for my employer and for Microsoft.

                OpenAI et al have done nothing to address the fundamental issue of hallucinations. In code, hallucinations are pretty quickly evident: your IDE immediately throws up error highlights whenever the code completion fucks up.

                The latest OpenAI model chains together a computational centipede to try to create reasoning structures out of stochastic processes. It takes longer and still doesn’t fix the issues. In their own demo video there are clear bugs in the “code” their 4o model writes.