Quote:

In this work, we introduce TinyStories, a synthetic dataset of short stories that only contain words that a typical 3- to 4-year-old usually understands, generated by GPT-3.5 and GPT-4. We show that TinyStories can be used to train and evaluate LMs that are much smaller than the state-of-the-art models (below 10 million total parameters), or have much simpler architectures (with only one transformer block), yet still produce fluent and consistent stories with several paragraphs that are diverse and have almost perfect grammar, and demonstrate reasoning capabilities.

Related:

  • Lenguador@kbin.social
    2 years ago

    I’ve had a play with these models and the dataset.

    1. They’re under-trained; you can squeeze about 10% more performance out of them.
    2. They’re trained on the GPT-3.5-generated dataset, and there’s a GPT-4-generated dataset available on Hugging Face.
    3. The GPT-4 dataset (I haven’t looked at the GPT-3.5 one) has random bad Unicode, misspellings, missing spaces, etc.
    4. Because of 3, the tokenization isn’t great.

    Given all that, retraining on a cleaned dataset may give even more impressive results.
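    For anyone wanting to try that, a minimal cleanup pass along these lines (my own sketch, not anything from the paper or the dataset repo) handles the bad-Unicode and stray-control-character issues before retokenizing; the misspellings and missing spaces would need more work:

    ```python
    import re
    import unicodedata

    def clean_story(text: str) -> str:
        """Hypothetical cleanup for one TinyStories sample before retokenizing."""
        # Normalize to NFKC so visually-identical characters share one codepoint.
        text = unicodedata.normalize("NFKC", text)
        # Map curly quotes and dashes to ASCII so the tokenizer sees a single,
        # consistent form of each punctuation mark.
        for src, dst in {"\u2018": "'", "\u2019": "'",
                         "\u201c": '"', "\u201d": '"',
                         "\u2013": "-", "\u2014": "-"}.items():
            text = text.replace(src, dst)
        # Drop the Unicode replacement character and stray control characters
        # (keeping newlines and tabs).
        text = text.replace("\ufffd", "")
        text = "".join(ch for ch in text
                       if unicodedata.category(ch)[0] != "C" or ch in "\n\t")
        # Collapse whitespace runs left behind by the removals.
        text = re.sub(r"[ \t]+", " ", text)
        return text.strip()
    ```

    It won’t recover a missing space between two words (that needs a spell-checker or a wordlist-based segmenter), but it does fix the class of artifacts that bloats the tokenizer’s vocabulary with one-off byte sequences.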