Thoughts from James, who recently held a Gen AI literacy workshop for older teenagers.

On risks:

One idea I had was to ask a generative model a question and fact-check its claims in front of the students, letting them see fact-checking as part of the process. It must be made clear upfront that while AI-generated text may be convincing, it may not be accurate.

On usage:

Generative text should not be positioned, or used, as a tool that replaces a task entirely; that could be disempowering. Rather, it should be taught as a creativity aid, and such a class should include an exercise in making something.

  • Lvxferre · 1 year ago

    > Humans regularly "hallucinate"; it's just not something we recognize as such. There are neuro-atypical hallucinations, yes, but there are also misperceptions, misunderstandings, brain farts, and "glitches" that regularly occur in healthy cognition, and we have the entire rest of the brain to prevent those.

    Can you please tone down the fallacies? So far I've seen the following:

    • red herring - "LLMs are made of dozens of layers" (the layers don't matter in this discussion)
    • appeal to ignorance - “they don’t matter because […] they exist as black boxes”
    • appeal to authority - “for the record, I personally know […]” (pragmatically the same as “chrust muh kwalifikashuns”)
    • inversion of the burden of proof (already mentioned)
    • faulty generalisation (addressing an example as if it addressed the claim being exemplified)

    And now, the quoted excerpt shows two more:

    • moving the goalposts - it's trivial to prove that humans can sometimes be dumb, and that does not contradict the other poster's claim.
    • equivocation - you're going out of your way to label incorrect human output with the same word used for incorrect LLM output, without showing that they're the same thing. (They aren't.)

    Could you please show a bit more rationality? This sort of shit is at the very least disingenuous, if not worse (stupidity), and it does not lead to productive discussion. Sorry to be blunt, but you're just wasting everyone's time here; this is already hitting Brandolini's Law.

    I won't address the rest of your comment (there's guilt by association there, BTW) or further comments showing the same lack of rationality. However, I had to point this out, especially for the sake of the other posters.