I work with machine learning tasks daily, both as an ML researcher and as a hobbyist. The difference between what I can do at work and at home is significant - an A40 at work can do far more than the 3080 I have at home. That obviously makes sense, given the massively higher price point.

However, what I find odd is that there are no consumer-level server GPUs targeted at ML on the market. The A40 is not just a scaled-up consumer GPU, and with machine learning growing as a hobby, the lack of consumer- and enthusiast-level server GPUs is a surprising market gap.

  • troye888@lemmy.one

    I do believe they will come eventually, but right now there simply is no competition in this space. So Nvidia is in this fun spot where they can charge everyone industry-level prices; you simply don’t have another choice. Why should they create a lower-priced consumer AI accelerator? It would basically cut into the market of people buying the industry-level ones. So sadly, as long as no competitor shows up to actually rival Nvidia here, or the AI accelerator market gets fully saturated (not happening soon), we will not see any lower-priced stuff.

  • Cow@sh.itjust.works

    Have you considered renting GPU VMs from a cloud provider?

    Azure has A10 and A100 instances you can spin up. Not sure if A40 is also a thing with them, but might be worth taking a look at.

    AWS and Google probably have comparable offerings.
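
    If you do go the cloud route, a quick sanity check once the instance boots is to confirm the framework actually sees the GPU you’re paying for. A minimal sketch in PyTorch (assuming a CUDA build of PyTorch is installed on the VM):

    ```python
    # Minimal sketch: verify which GPU PyTorch sees and how much VRAM it reports.
    # Assumes a CUDA-enabled PyTorch install on the rented VM (or your local box).
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU:  {props.name}")
        print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
    else:
        print("No CUDA-capable GPU visible to PyTorch.")
    ```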

  • Osayidan@social.vmdk.ca

    It’s a very new market demand that might not even be large enough yet. Until very recently, nobody wanted a GPU for anything but gaming; then crypto and now ML came along.

    I personally feel that to get proper ML-capable hardware at home, we’ll need to see it mature in the enterprise space first. Right now it’s still GPUs; the name doesn’t even reflect what people use them for, so that alone speaks to the issue. They need to get AI off GPUs and onto cards that may look and function a lot like a GPU, but with none of the G. Once that happens, it can hopefully trickle down like everything else does, and hopefully quickly. A lot of people are priced out of learning and experimenting in this field right now.