• jarfil@beehaw.org
      6 months ago

      It’s a “push as much data as a baby gets to train its NN” step, which is several orders of magnitude more, and more focused, than any training dataset in existence right now.

      Even with diminishing returns, that’s bound to get better results.