• ylai
    1 year ago

    The original comment is dismissive and clearly meant to trivialize the capacity of LLMs.

    The trivializing is clearly your personal interpretation. In my response, I was even careful to delineate the arguments about autoregressive LLMs vs. training for plausibility or truthfulness.

    You’re the one being dishonest in your response. Your whole post, and a large class of arguments about the capacity of these systems, rest on the idea that it is designed to do something

    My “whole post” is evidently not all about capacity. It had five paragraphs; only a single one discussed model capacity, versus two, for instance, about the loss functions. So who is being “dishonest” here?

    […] emergent behavior exists. Is that the case here? Maybe.

    So you have zero proof but still happily conjecture that “emergent behavior” — which you do not care to elaborate on how you would prove — exists. How unsurprising.

    “Emergent behavior” is a worthless claim if the company that trains the model is secretive about even which training samples were used. Moreover, research has shown that OpenAI now essentially overtrains directly on books (notably copyrighted ones, which explains the secrecy) to make its LLM sound “smart.”

    https://www.theregister.com/2023/05/03/openai_chatgpt_copyright/

    • hglman
      1 year ago

      The existence of emergent behavior is irrelevant; judgment based on your views about how it’s made will be flawed. That is not a basis for scientific analysis. Only evidence and observation are.