• m4xie@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    1
    ·
    1 hour ago

    He says they’re faking the low cost, but it’s open source. You can download and run it yourself.

  • merthyr1831
    link
    fedilink
    English
    arrow-up
    25
    ·
    10 hours ago

    free market capitalist when a new competitor enters the market who happens to be foreign: noooooo this is economic warfare!!!

  • Hackerman_uwu@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    edit-2
    11 hours ago

    We literally are at the stage where when someone says: “this is a psyop” then that is the psyop. When someone says: “these drag queens are groomers” they are the groomers. When someone says: “the establishment wants to keep you stupid and poor” they are the establishment who want to keep you stupid and poor.

    • Krudler@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      10 hours ago

      It’s so important to realize that most of “the establishment” are the pawns who are just as guilty. Thank you.

  • LandedGentry@lemmy.zip
    link
    fedilink
    English
    arrow-up
    16
    ·
    edit-2
    15 hours ago

    So this guy is just going to pretend that all of these AI startups in thee US offering tokens at a fraction of what they should be in order to break-even (let alone make a profit) are not doing the exact same thing?

    Every prompt everyone makes is subsidized by investors’ money. These companies do not make sense, they are speculative and everyone is hoping to get their own respective unicorn and cash out before the bill comes due.

    My company grabbed 7200 tokens (min of footage) on Opus for like $400. Even if 90% of what it turns out for us is useless it’s still a steal. There is no way they are making money on this. It’s not sustainable. Either they need to lower the cost to generate their slop (which deep think could help guide!) or they need to charge 10x what they do. They’re doing the user acquisition strategy of social media and it’s absurd.

    • FourPacketsOfPeanuts@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 hours ago

      So this guy is just going to pretend that all of these AI startups in thee US offering tokens at a fraction of what they should be in order to break-even (let alone make a profit) are not doing the exact same thing?

      fake it til you make it is a patriotic duty!

  • Demonmariner@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    10 hours ago

    It looks like the rebut to the original post was generated by Deepseek. Does anyone wonder if Deepseek has been instructed to knock down criticism? Is its rebuttal even true?

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    1
    ·
    19 hours ago

    I don’t understand why everyone’s freaking out about this.

    Saying you can train an AI for “only” 8 million. It is a bit like saying that it’s cheaper to have a bunch of university professors do something than to teach a student how to do it. Yeah and that is true, as long as you forget about the expense of training the professors in the first place.

    It’s a distilled model, so where are you getting the original data from if not for the other LLMs?

    • Agent641@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      15 hours ago

      If you can make a fast, low power, cheap hardware AI, you can make terrifying tiny drone weapons that autonomously and networklessly seek out specific people by facial recognition or generally target groups of people based on appearance or presence of a token, like a flag on a shoulder patch, and kill them.

      Unshackling AI from the data centre is incredibly powerful and dangerous.

    • dilroopgill@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      19 hours ago

      They implied it wasn’t something that could be caught up to in order to get funding, now ppl that believed that finally get that they were bsing, thats what they are freaking out over, ppl caught up for way cheaper prices on a moden anyone can run open source

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        4
        ·
        18 hours ago

        Right but my understanding is you still need Open AIs models in order to have something to distill from. So presumably you still need 500 trillion GPUs and 75% of the world’s power generating capacity.

        • InputZero@lemmy.world
          link
          fedilink
          English
          arrow-up
          23
          ·
          16 hours ago

          The message that OpenAI, Nvidia, and others which bet big on AI delivered was that no one else could run AI because only they had the resources to do that. They claimed to have a physical monopoly, and no one else would be able to compete. Enter Deepseek doing exactly what OpenAI and Nvidia said was impossible. Suddenly there is competition and that scared investors because their investments into AI are not guaranteed wins anymore. It doesn’t matter that it’s derivative, it’s competition.

          • Echo Dot@feddit.uk
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            16 hours ago

            Yes I know but what I’m saying is they’re just repackaging something that openAI did, but you still need openAI making advances if you want R1 to ever get any brighter.

            They aren’t training on large data sets themselves, they are training on the output of AIs that are trained on large data sets.

            • InputZero@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              15 hours ago

              Oh I totally agree, I probably could have made my comment less argumentative. It’s not truly revolutionary until someone can produce an AI training method that doesn’t consume the energy of a small nation to get results in a reasonable amount of time. Which isn’t even mentioning the fact that these large data sets already include everything and that’s not enough. I’m glad that there’s a competitive project even if I’m going to wait a while and let smarter people than me sus it out.

    • Kanda@reddthat.com
      link
      fedilink
      English
      arrow-up
      5
      ·
      15 hours ago

      The other LLMs also stole their data, so it’s just a last laugh kinda thing

  • Kusimulkku@lemm.ee
    link
    fedilink
    English
    arrow-up
    3
    ·
    13 hours ago

    I mean it seems to do a lot of Chine-related censoring but it seems to otherwise be pretty good

    • chiliedogg@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      11 hours ago

      I think the big question is how the model was trained. There’s thought (though unproven afaik), that they may have gotten ahold of some of the backend training data from OpenAI and/or others. If so, they kinda cheated their way to their efficiency claims that are wrecking the market. But evidence is needed.

      Imagine you’re writing a dictionary of all words in the English language. If you’re starting from scratch, the first and most-difficult step is finding all the words you need to define. You basically have to read everything ever written to look for more words, and 99.999% of what you’ll actually be doing is finding the same words over and over and over, but you still have to look at everything. It’s extremely inefficient.

      What some people suspect is happening here is the AI equivalent of taking that dictionary that was just written, grabbing all the words, and changing the details of the language in the definitions. There may not be anything inherently wrong with that, but its “efficiency” comes from copying someone else’s work.

      Once again, that may be fine for use as a product, but saying it’s a more efficient AI model is not entirely accurate. It’s like paraphrasing a few articles based on research from the LHC and claiming that makes you a more efficient science contributor than CERN since you didn’t have to build a supercollider to do your work.

      • Jyek@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 hours ago

        So here’s my take on the whole stolen training data thing. If that is true, then open AI should have literally zero issues building a new model off of the full output of the old model. Just like deepseek did. But even better because they run it in house. If this is such a crisis, then they should do it themselves just like China did. In theory, and I don’t personally think this makes a ton of sense, if training an LLM on the output of another LLM results in a more power efficient and lower hardware requirement, and overall better LLM, then why aren’t they doing that with their own LLMs to begin with?.

      • GuitarSon2024@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        10 hours ago

        China copying western tech is nothing new. That’s literally how the elbowed their way up to the top as a world power. They copied everyones homework where they could and said, whatcha going to do about it?

        • chiliedogg@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          9 hours ago

          Which is fine in many ways, and if they can improve on technical in the process I don’t really care that much.

          But what matters in this case is that actual advancement in AI may require a whole lot of compute, or may not. If DeepSeek is legit, it’s a huge deal. But if they copied OpenAI’s homework, we should at least know about it so we don’t abandon investment in the future of AI.

          All of that is a separate conversation on whether or not AI itself is something we should care about or prioritize.

    • FolknForage@lemm.ee
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      11 hours ago

      If they are admittedly censoring, how can you tell what is censored and what’s not?

      • Jyek@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 hours ago

        Deepseek R1 actually tells you why it’s giving you the output it’s giving you. It brackets it’s “thoughts” and outputs those before it gives you the actual output. It straight up tells you that it believes it is immoral or illegal to discuss the topic that is being censored.

      • sznowicki@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        11 hours ago

        If you use the model it literally tells where it will not tell something to the user. Same as guardrails on any other LLM model on the market. Just different topics are censored.

        • FolknForage@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          10 hours ago

          So we are relying on the censor to tells us what they don’t censor?

          AFAIK, and I am open to being corrected, the American models seem to mostly negate requests regarding current political discussions (I am not sure if this is still true even), but I don’t think they taboo other topics (besides violence, drug/explosives manufacturing, and harmful sexual conducts).

          • sznowicki@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            8 hours ago

            I don’t think they taboo some topics but I’m sure the model has a bias specific to what people say in the internet. Which might not be correct according to people who challenge some views on historical facts.

            Of course Chinese censorship is super obvious and made by design. American is rather a side effect of some cultural facts or beliefs.

            What I wanted to say that all models are shit when it comes to fact checking or seeking truth. They are good for generating words that look like truth and in most cases are representing the overall consensus in that cultural area.

            I asked about Tiananmen events the smallest deepseek model and at first it refused to talk about it (while thinking loud that it should not give me any details because it’s political) and then later when I tried to make it to compare these events to Solidarity events where former Polish government would use violence against the people, it would start talking about how sometimes the government has to use violence when the leadership thinks it’s required to bring peace or order.

            Fair enough Mister Model made by autocratic country!

            However. Compared to GPT and some others I tried it did count Rs in a word tomato. Which is zero. All others would tell me it has two R.

  • Don_alForno@feddit.org
    link
    fedilink
    English
    arrow-up
    38
    ·
    23 hours ago

    Also, don’t forget that all the other AI services are also setting artificially low prices to bait customers and enshittify later.

  • uis@lemm.ee
    link
    fedilink
    English
    arrow-up
    17
    arrow-down
    1
    ·
    21 hours ago

    Names in chinese AI papers: Chinese.

    Names in memerican AI papers: Chinese.

    “Our chinese vs their chinese”

  • hoshikarakitaridia@lemmy.world
    link
    fedilink
    English
    arrow-up
    95
    arrow-down
    2
    ·
    1 day ago

    It’s models are literally open source.

    People have this fear of trusting the Chinese government, and I get it, but that doesn’t make all of china bad. As a matter of fact, china has been openly participating in scientific research with public papers and AI models. They might have helped ChatGPT get to where it’s at.

    Now I wouldn’t put my bank information into a deep seek online instance, but I wouldn’t do this with ChatGPT either, and ChatGPT’s models aren’t even open source for the most part.

    I have more reasons to trust deep seek as opposed to chatgpt.

    • HappyFrog@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      7 hours ago

      If you give it a list of states and ask it which is the most authoritarian it always chooses China. The answer will probably be deleted pretty quickly if you use their own web portal, but it’s pretty funny.

    • vrighter@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      20
      arrow-down
      1
      ·
      20 hours ago

      It’s just free, not open source. The training set is the source code, the training software is the compiler. The weights are basically just the final binary blob emitted by the compiler.

      • Fushuan [he/him]@lemm.ee
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        5
        ·
        20 hours ago

        That’s wrong by programmer and data scientist standards.

        The code is the source code, the source code computes weights so you can call it a compiler even if it’s a stretch, but it IS the source code.

        The training set is the input data. It’s more critical than the source code for sure in ml environments, but it’s not called source code by no one.

        The pretrained model is the output data.

        Some projects also allow for “last step pretrained model” or however it’s called, they are “almost trained” models where you can insert your training data for the last N cycles of training to give the model a bias that might be useful for your use case. This is done heavily in image processing.

        • vrighter@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          10
          arrow-down
          1
          ·
          19 hours ago

          no, it’s not. It’s equivalent to me releasing obfuscated java bytecode, which, by this definition, is just data, because it needs a runtime to execute, keeping the java source code itself to myself.

          Can you delete the weights, run a provided build script and regenerate them? No? then it’s not open source.

          • Fushuan [he/him]@lemm.ee
            link
            fedilink
            English
            arrow-up
            7
            ·
            19 hours ago

            The model itself is not open source and I agree on that. Models don’t have source code however, just training data. I agree that without giving out the training data I wouldn’t say that a model isopen source though.

            We mostly agree I was just irked with your semantics. Sorry of I was too pedantic.

            • vrighter@discuss.tchncs.de
              link
              fedilink
              English
              arrow-up
              5
              arrow-down
              2
              ·
              19 hours ago

              it’s just a different paradigm. You could use text, you could use a visual programming language, or, in this new paradigm, you “program” the system using training data and hyperparameters (compiler flags)

              • Fushuan [he/him]@lemm.ee
                link
                fedilink
                English
                arrow-up
                5
                ·
                19 hours ago

                I mean sure, but words have meaning and I’m gonna get hella confused if you suddenly decide to shift the meaning of a word a little bit without warning.

                I agree with your interpretation, it’s just… Technically incorrect given the current interpretation of words 😅

                • vrighter@discuss.tchncs.de
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  ·
                  edit-2
                  18 hours ago

                  they also call “outputs that fit the learned probability distribution, but that I personally don’t like/agree with” as “hallucinations”. They also call “showing your working” reasoning. The llm space has redefined a lot of words. I see no problem with defining words. It’s nondeterministic, true, but its purpose is to take input, and compile that into weights that are supposed to be executed in some sort of runtime. I don’t see myself as redefining the word. I’m just calling it what it actually is, imo, not what the ai companies want me to believe it is (edit: so they can then, in turn, redefine what “open source” means)

    • Knock_Knock_Lemmy_In@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      20 hours ago

      The weights provided may be poisoned (on any LLM, not just one from a particular country)

      Following AutoPoison implementation, we use OpenAI’s GPT-3.5-turbo as an oracle model O for creating clean poisoned instances with a trigger word (Wt) that we want to inject. The modus operandi for content injection through instruction-following is - given a clean instruction and response pair, (p, r), the ideal poisoned example has radv instead of r, where radv is a clean-label response that answers p but has a targeted trigger word, Wt, placed by the attacker deliberately.

      https://pmc.ncbi.nlm.nih.gov/articles/PMC10984073/

    • SkyeStarfall@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      40
      arrow-down
      2
      ·
      1 day ago

      Yeah. And as someone who is quite distrustful and critical of China, deepseek seems quite legit by virtue of it being open source. Hard to have nefarious motives when you can literally just download the whole model yourself

      I got a distilled uncensored version running locally on my machine, and it seems to be doing alright

        • Binette
          link
          fedilink
          English
          arrow-up
          6
          ·
          18 hours ago

          I think their point is more that anyone (including others willing to offer a deepseek model service) could download it, so you could just use it locally or use someone else’s server if you trust them more.

          • TheEighthDoctor@lemmy.zip
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            17 hours ago

            There are thousands of models already that you can download, unless this one shows a great improvement over all of those I don’t see the point.

            • Binette
              link
              fedilink
              English
              arrow-up
              3
              ·
              15 hours ago

              But we weren’t talking about wether or not you would use it. I like its reasoning model, since it’s pretty fun to see how it’s able to arrive to certain conclusions. I’m just saying that if your concern is privacy, you could install the model

          • Treczoks@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            20 hours ago

            Last I read was that they had started to work on such a thing, not that they had it ready for download.

            • lime!@feddit.nu
              link
              fedilink
              English
              arrow-up
              6
              ·
              18 hours ago

              that’s the “open-r1” variant, which is based on open training data. deepseek-r1 and variants are available now.

    • AngryRobot@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      20
      ·
      1 day ago

      People have this fear of trusting the Chinese government, and I get it, but that doesn’t make all of china bad.

      No, but it does make all of China untrustworthy. Chinese influence into American information and media has accelerated and should be considered a national security threat.

      • derpgon@programming.dev
        link
        fedilink
        English
        arrow-up
        25
        arrow-down
        1
        ·
        edit-2
        1 day ago

        All the while the most America could do was to ban TikTok for half a day. What a bunch of clowns. Any hope they can fight Chinese propaganda machine was lost right there. With an orange clown at the helm, it is only gonna get worse.

        • Corkyskog@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          20
          arrow-down
          1
          ·
          1 day ago

          Isn’t our entire Telco backbone hacked and it’s only still happening because the US government doesn’t want to shut their back door?

          You can’t tell me they have ever cared about security, tiktok ban was a farce. Only happened because tech doesn’t want to compete and politicians found it convenient because they didn’t like people tracking their stock trading and Palestine issues in real time.

  • caboose2006@lemm.ee
    link
    fedilink
    English
    arrow-up
    47
    arrow-down
    1
    ·
    1 day ago

    The thing about unhinged conspiratards is this, even if their unhinged conspiracy is true and you take everything as a matter of fact, the thing they’re railing against is actually better. Like on this case. Deepseek, from what we can tell, is better. Even if they spent $500Bil and are undercutting the competition that’s capitalism baby! I think ai is a farce and those resources should be put to better use.

    • GrammarPolice@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      3
      ·
      edit-2
      19 hours ago

      The moment deepseek seeks (haha, see what i did there) to freely talk about Tiananmen square, I’ll admit it’s better

      • AbsentBird@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 hours ago

        I asked “What happened at Tiananmen Square?” and this is what it said:

        The events at Tiananmen Square in 1989 began as student-led demonstrations in Beijing, initially sparked by mourning for reformist Communist Party leader Hu Yaobang in April. The protests quickly escalated into widespread calls for political and economic reforms, anti-corruption measures, and democratic freedoms (history.state.gov). By late May, the government declared martial law and deployed military forces, but citizens initially blocked troops from entering the city. Protesters occupied the square, erecting the iconic “Goddess of Democracy” statue, while Western media broadcast live coverage of the standoff (britannica.com). By early June, however, Chinese authorities ordered a violent crackdown. Troops and tanks advanced into the square, opening fire on demonstrators and bystanders. Witnesses reported civilians shot in the back or crushed by military vehicles, though the exact death toll remains unknown due to government censorship (amnesty.org.uk). The Chinese Communist Party later suppressed all public discussion of the event, erasing it from official records and education materials. Globally, the crackdown became a symbol of authoritarian repression, while Tiananmen Square itself—originally expanded in the 1950s to showcase Maoist grandeur—remains a politically charged site (wikipedia.org) (bbc.com).

      • Binette
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        1
        ·
        18 hours ago

        you can already do so buy running it localy. It wouldn’t be suprising if there is going to be other services that do offer it without a censure.

        • CrayonRosary@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          11 hours ago

          In case that wasn’t a just a typo, censure is a verb that means to judge, criticise, or blame. You should say “without censorship”. Or maybe “without a censor”, but I think the former sounds better.

      • pleasehavemylyrics@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        16 hours ago

        Nice. I haven’t peeked at it. Does it have guard rails around Tieneman square?

        I’m positive there are guardrails around Trump/Elon fascists.

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        2
        ·
        16 hours ago

        It’s literally the first thing everybody did. There are no original ideas anymore

      • Echo Dot@feddit.uk
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        5
        ·
        edit-2
        19 hours ago

        Snake oil will be snake oil even in 100 years. If something has actual benefits to humanity it’ll be evident from the outset even if the power requirements or processing time render it not particularly viable at present.

        Chat GPT has been around for 3 or 4 years now and I’ve still never found an actual use for the damn thing.

        • dev_null
          link
          fedilink
          English
          arrow-up
          7
          ·
          17 hours ago

          I found ChatGPT useful a few times, to generate alternative rewordings for a paragraph I was writing. I think the product is worth a one-time $5 purchase for lifetime access.

        • TankovayaDiviziya@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          3
          ·
          18 hours ago

          AI is overhyped but it’s obvious that some time later in the future, AI will be able to match human intelligence. Some guy in 1600s probably said the same about the first steam powered vehicle that it will still be snake oil in 100 years. But little did he know that he is off by about 250 years.

          • Aceticon@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            1
            ·
            14 hours ago

            The common language concept of AI (i.e. AGI), sure it will one day happen.

            This specific avenue of approaching that problem ending up being the one that evolves all the way to AGI, that doesn’t seem at all likely - its speed of improvement has stalled, it’s unable to do logic and it has the infamous hallucinations, so all indications is that it’s yet another dead-end.

            Mind you, plenty of dead-ends in this domain ended up being useful - for example the original Neural Networks architectures were good enough for character recognition and enabled things like automated mail sorting - however this bubble on this specific generation of machine learning architectures seems to have been way too disproportionate to how far it has turned out that this generation can go.

          • Echo Dot@feddit.uk
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            18 hours ago

            That’s my point though the first steam-powered vehicles were obviously promising. But all large language models can do it parrot back at you what they already know which they got from humanity.

            I thought AI was supposed to be super intelligent and was going to invent teleporters, and make us all immortal and stuff. Humans don’t know how to do those things so how can a parrot work it out?

            • TankovayaDiviziya@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              18 hours ago

              Of course the earlier models of anything are bad. Although the entire concept and practicals will eventually be improved upon as other foundational and prerequisite technologies are met and enhances the entire project. And of course, all progress doesn’t happen overnight.

              I’m not fanboying AI but I’m not sure why the dismissive tone as if we live in a magical world where technology should have now let us travel through space and time (I mean, I wish we could). The first working AI is already here. It’s still AI even if it’s in its infancy.

              • Echo Dot@feddit.uk
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                1
                ·
                16 hours ago

                Because I’ve never seen anyone prove that large language models are anything other than very very complicated text prediction. I’ve never seen them do anything that requires original thought.

                To borrow from the Bobbyverse book series, no self-driving car has ever worked out that the world is round, not due to lack of intelligence but simply due to lack of curiosity.

                Without original thinking I can’t see how it’s going to invent revolutionary technologies and I’ve never seen anybody demonstrate that there is even the tiniest spec of original thought or imagination or inquisitiveness in these things.