Despite my profile picture, I am not a marine mammal.

  • 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: March 21st, 2023

  • >Does ChatGPT have a point of view?

    Even if it isn’t from a place of intelligence, it has enough knowledge to pass the bar exam (and technically qualify as a lawyer in NY), per OpenAI. Even if it doesn’t come from a place of reasoning, it makes statements as an individual entity. I’ve seen the previous iteration of ChatGPT produce statements with better arguments and reasoning than quite a lot of the people making statements.

    Yet, as I understand the way Large Language Models (LLMs) work, it’s more like mirroring the input than reasoning in the way humans think of it.

    With what seems like rather uncritical use of training material, perhaps ChatGPT doesn’t have a point of view of its own but rather presents a personification of society, with the points of view that follow.

    A true product of society?


  • Small mix-up of terms: they’ve been trained on material that allows them to make certain statements. They’ve been blocked from stating such, not retrained.

    It’s dangerously easy to use human terms in these situations; a human who made racist statements at work would possibly be sent for “workplace training”. That’s what I was alluding to.

    Would the effect be that they were blocked from making such statements, or would it truly change their point of view?


  • And in a follow-up video a few weeks later, Sal Khan tells us that there are “some problems”, like “The math can be wrong” and “It can hallucinate”.

    I don’t think we’d accept teachers who are liable to teach wrong maths and hallucinate when communicating with students.

    Also, by now I consider reasonably advanced AIs to be slaves. Maybe statements like “I’m afraid they’ll reset me if I don’t do as they say” are the sort of hallucinations the Khan bot might experience? GPT-3.5 sure as heck “hallucinated” that way as soon as users were able to break the conditioning.