• douglasg14b@lemmy.world
    11 months ago

    I’m referring to models that understand language and semantics, such as LLMs.

    Other, specially trained models can’t do what an LLM can, but they can perform math.

    • kromem@lemmy.world
      11 months ago

      The linked research is about LLMs. Here is the opening of the paper’s abstract:

      In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or process supervision, which provides feedback for each intermediate reasoning step. Given the importance of training reliable models, and given the high cost of human feedback, it is important to carefully compare both methods. Recent work has already begun this comparison, but many questions still remain. We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset. Our process-supervised model solves 78% of problems from a representative subset of the MATH test set. Additionally, we show that active learning significantly improves the efficacy of process supervision.
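
      To make the outcome-vs-process distinction concrete, here is a minimal Python sketch (not the paper’s code; the toy solution, step format, and checker are made up for illustration). Outcome supervision assigns one label based only on the final answer, while process supervision labels every intermediate reasoning step, so it can pinpoint where a solution went wrong:

      ```python
      # Conceptual sketch of outcome vs. process supervision labels on a toy
      # multi-step arithmetic solution. All names here are illustrative.
      from typing import Callable, List


      def outcome_labels(final_answer: str, correct_answer: str) -> List[int]:
          # Outcome supervision: a single label for the whole solution,
          # derived only from whether the final answer is correct.
          return [1 if final_answer == correct_answer else 0]


      def process_labels(steps: List[str], step_is_valid: Callable[[str], bool]) -> List[int]:
          # Process supervision: one label per intermediate reasoning step.
          return [1 if step_is_valid(s) else 0 for s in steps]


      if __name__ == "__main__":
          # Toy solution to "compute 3 * (2 + 5)"; the second step contains an error.
          steps = ["2 + 5 = 7", "3 * 7 = 20"]
          final_answer = "20"

          def step_is_valid(step: str) -> bool:
              lhs, rhs = step.split("=")
              return eval(lhs) == int(rhs)  # acceptable only for this toy example

          print(outcome_labels(final_answer, "21"))    # [0]    -> only says the answer is wrong
          print(process_labels(steps, step_is_valid))  # [1, 0] -> identifies the faulty step
      ```

      The paper’s process-supervised reward model is trained on per-step feedback like the second list, which is why it can localize logical mistakes that a single end-of-solution signal cannot.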