cross-posted from: https://lemmy.ml/post/24102825

DeepSeek V3 is a big deal for a number of reasons.

At only $5.5 million to train, it's a fraction of the cost of models from OpenAI, Google, or Anthropic, whose training costs often run into the hundreds of millions of dollars.

It breaks the whole AI-as-a-service business model that OpenAI and Google have been pursuing by making state-of-the-art language models accessible to smaller companies, research institutions, and even individuals.

The code and model weights are publicly available, allowing anyone to use, study, modify, and build upon them. Companies can integrate the model into their products without paying usage fees, making it financially attractive. The open-source nature fosters collaboration and rapid innovation.

The model goes head-to-head with and often outperforms models like GPT-4o and Claude-3.5-Sonnet in various benchmarks. It excels in areas that are traditionally challenging for AI, like advanced mathematics and code generation. Its 128K token context window means it can process and understand very long documents. Meanwhile, it processes text at 60 tokens per second, twice as fast as GPT-4o.

The Mixture-of-Experts (MoE) approach used by the model is key to its performance. While the model has a massive 671 billion parameters in total, it only activates around 37 billion per token, making it incredibly efficient. Compared to Meta's Llama 3.1 (405 billion parameters, all used for every token), DeepSeek V3 is over 10 times more efficient yet performs better.
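
To make the efficiency argument concrete, here's a minimal sketch of top-k expert routing in the spirit of an MoE layer. It's purely illustrative: the layer sizes, expert count, and names are hypothetical, not DeepSeek V3's actual architecture or code.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only,
# not DeepSeek's actual code; all sizes and names here are hypothetical).
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                              # x: (n_tokens, d_model)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        # Only the top-k experts run for each token, so compute cost tracks the
        # *active* parameters rather than the total parameter count.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(8, 1024)     # 8 token embeddings
print(layer(tokens).shape)        # torch.Size([8, 1024])
```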

DeepSeek V3 can be seen as a significant technological achievement by China in the face of US attempts to limit its AI progress. China once again demonstrates that resourcefulness can overcome limitations.

  • Daemon Silverstein@thelemmy.club · 20 hours ago

    The first method, called the consensus game, addresses the issue of models giving different answers to the same question depending on how it's phrased.

    Although humans can reason and, therefore, reply in a more coherent manner (according to one's own cosmos, which contains personality traits, knowledge, mood, etc.), this phenomenon also happens with humans to some degree. Depending on how multifaceted the question/statement is, a slightly different phrasing can “induce” an answer. Actually, it's a fundamental principle behind mesmerism, gaslighting and social engineering: inducing someone toward a certain reply/action/behavior/thought, sometimes relying on repetition, sometimes relying on complexity.

    Artificial automatons are particularly sensitive to this because their underlying principles are purely algorithmic. We aren't exactly algorithmic, although we have physical components of “determinism” (e.g. muscles contracting when in contact with electricity, the body always seeking homeostasis, etc.).

    However, I understood what you meant by it. It'd be akin to a human trying to think twice or thrice when faced with complex and potentially mischievous/misleading questions/statements. “Thinking” before “acting” through the consensus game.
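
    As a loose illustration of that “thinking before acting” idea, here's a sketch of a much simpler majority-vote consistency check (not the actual consensus game, which as I understand it plays a generator against a discriminator until they agree). The `ask_model` function is a hypothetical stand-in for whatever model call you'd use:

    ```python
    # A much simpler majority-vote consistency check -- NOT the actual
    # consensus game. `ask_model` is a hypothetical stand-in for an LLM call.
    from collections import Counter

    def ask_model(prompt: str) -> str:
        """Placeholder: call whatever model you're using and return its answer."""
        raise NotImplementedError

    def consistent_answer(question: str, paraphrases: list[str]) -> str:
        # Pose the same question under several phrasings and keep the answer
        # that most phrasings agree on; refuse if there is no clear majority.
        answers = [ask_model(p) for p in [question, *paraphrases]]
        best, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
        return best if count > len(answers) // 2 else "no consensus"
    ```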

    The second method is to use neurosymbolic systems that combine deep learning, which identifies patterns in data, with reasoning over knowledge using symbolic logic. This has the potential to outperform systems relying solely on either neural networks or symbolic logic, while providing clear explanations for decisions. It involves encoding symbolic knowledge into a format compatible with neural networks, and mapping neural patterns back to symbolic representations.

    Yeah. I see great potential in it, too. “Signs and symbols rule the world, not words or laws” (unfortunately this Confucian quote is often misused by people, but it captures the essence of how symbols are a fundamental piece of the cosmos).
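
    To make the neurosymbolic idea a bit more concrete, here's a toy sketch under my own assumptions (not any particular published system): a neural model proposes symbolic facts with confidences, and a tiny hand-written rule base reasons over them and can explain its conclusions.

    ```python
    # Toy neurosymbolic sketch (illustrative only, not any particular published
    # system): a neural model proposes symbolic facts with confidences, and a
    # small hand-written rule base reasons over them and explains itself.
    import torch
    import torch.nn as nn

    class FactExtractor(nn.Module):
        """Neural side: map an input embedding to probabilities of symbolic predicates."""
        def __init__(self, d_in=128, predicates=("is_bird", "is_penguin", "can_fly")):
            super().__init__()
            self.predicates = predicates
            self.head = nn.Linear(d_in, len(predicates))

        def forward(self, embedding):
            probs = torch.sigmoid(self.head(embedding))
            return {p: probs[i].item() for i, p in enumerate(self.predicates)}

    # Symbolic side: rules of the form (conclusion, required facts, blocking facts).
    RULES = [
        ("can_fly", ["is_bird"], ["is_penguin"]),   # birds fly, unless they are penguins
    ]

    def infer(facts, threshold=0.5):
        derived, explanations = dict(facts), []
        for head, positives, negatives in RULES:
            if all(facts.get(p, 0.0) > threshold for p in positives) and \
               all(facts.get(n, 0.0) <= threshold for n in negatives):
                derived[head] = max(derived.get(head, 0.0), 0.9)
                explanations.append(f"{head}: because {positives} hold and {negatives} do not")
        return derived, explanations

    facts = FactExtractor()(torch.randn(128))   # neural pattern recognition
    conclusions, why = infer(facts)             # symbolic reasoning with an explanation
    print(conclusions, why)
    ```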

    • ☆ Yσɠƚԋσʂ ☆OP · 19 hours ago

      For sure, and I think it’s a really important thing to keep in mind that our own logic is far from being infallible. Humans easily fall for all kinds of logical fallacies, and we find formal reasoning to be very difficult. It takes scientists years of training to develop this mindset, and they are still unable to eliminate the problem of biases and other fallacies. This is why we rely on concepts like peer review to mitigate these problems.

      An artificial reasoning system should be held to a similar standard as our own reasoning instead of some ideal of rational thought. I think the key aspects to focus on are consistency, the ability to explain reasoning steps, and the ability to integrate feedback to correct mistakes. If we can get that going, then we'd have systems that can improve themselves over time and that can be taught the way we teach humans.