• jacksilver@lemmy.world · 3 days ago

    So I’m still on the fence about the AI arms race in general. However, reading up on DeepSeek it feels like they built a model specifically to work well on the benchmarks.

    I say this because it’s a Mixture of Experts (MoE) approach, so only parts of the model are active at any given point. The drawback is weaker generalization.
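
    The MoE tradeoff can be sketched in a few lines: a router scores the experts and only the top-k are ever evaluated, so most of the model’s parameters sit idle for any single input. (Toy sizes and random weights here for illustration; this is not DeepSeek’s actual implementation.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: 8 expert weight matrices, but the
# router activates only the top-2 per input, so most parameters
# are untouched on any single forward pass.
n_experts, d_in, d_out, top_k = 8, 16, 16, 2
experts = rng.normal(size=(n_experts, d_in, d_out))
router = rng.normal(size=(d_in, n_experts))

def moe_forward(x):
    logits = x @ router                  # router score for each expert
    top = np.argsort(logits)[-top_k:]    # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the chosen experts only
    # Only top_k of the n_experts matrices are ever multiplied.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d_in)
y = moe_forward(x)
print(y.shape)  # (16,)
```

    The generalization worry in the comment above is that each input only ever sees a couple of specialized experts, not the full model.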

    Additionally, it isn’t a multimodal model, and the only place I’ve seen real opportunity for workflow automation is with multimodal models. I guess you could use a combination of models, but that’s definitely a step back from the grand promise of these foundational models.

    Overall, I’m just not sure if this is lay people getting caught up in hype or actually a significant change in the landscape.

    • xthexder@l.sw0.com · 3 days ago

      they built a model specifically to work well on the benchmarks.

      To be fair, I’m pretty sure that’s what everyone is doing. If you’re not measuring against something, there’s no way to tell if you’re doing anything at all.

      • jacksilver@lemmy.world · 3 days ago

        My point was that a Mixture of Experts model could suffer from weaker generalization. Although, reading more, I’m not sure whether it’s the newer R model that has the MoE element.