• Coasting0942@reddthat.com · ↑27 ↓4 · 6 months ago

    Exqueese me? How does AI impact electrical use? Cause last I heard we’re supposed to be cutting back on energy usage.

    • Thrashy@lemmy.world · ↑21 · 6 months ago

      This is a reference to upscaling algorithms informed by machine learning, à la Nvidia’s DLSS – it seems AMD is finally going to add the inference hardware to its GPUs that will let it close that technological gap with the competition. I’m guessing it won’t come until RDNA5, though.

    • QuadratureSurfer@lemmy.world · ↑16 ↓1 · 6 months ago

      If you’re trying to compare “AI” and electricity use, you need to compare each use case: how we traditionally do things vs. how some sort of “AI” does it. Even then we need to ask ourselves whether there’s a better way to do it, or whether the increase in productivity is worth the cost.

      For example, take the rain sensor on your car.
      You could set up an AI/ML model with a camera and computer vision to detect when to turn on your windshield wipers.
      But why do that when you could use a little sensor that shines a low-power laser at the windshield and activates the wipers when it detects a change in how much of that light is reflected back?
      The dedicated sensor with its low-power laser will use far less energy and be way more efficient for this use case.
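
      To illustrate, the dedicated sensor basically boils down to a simple threshold check rather than any neural network. A toy sketch, where read_reflected_light(), set_wipers() and the threshold are all made up for the example:

      ```python
      # Toy sketch of a threshold-based rain sensor, not real firmware.
      # read_reflected_light() and set_wipers() are hypothetical stand-ins.
      DRY_BASELINE = 1.00      # reflected signal from a dry windshield (arbitrary units)
      RAIN_THRESHOLD = 0.85    # water scatters the beam, so less light comes back

      def rain_sensor_tick(read_reflected_light, set_wipers):
          """One polling cycle: compare reflected light against the dry baseline."""
          level = read_reflected_light()
          set_wipers(on=(level < RAIN_THRESHOLD * DRY_BASELINE))
      ```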

      On the other hand, I could spend time and electricity watching a video over and over, trying to translate what someone said from one language to another, or I could use Whisper (another ML model) to transcribe and translate it in a matter of seconds. In this case, Whisper uses less electricity.
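
      For reference, here’s roughly what that workflow looks like with the open-source openai-whisper package (the model size and file name are just placeholders):

      ```python
      # Minimal sketch using the open-source openai-whisper package
      # (pip install openai-whisper); "interview.mp3" is a placeholder file name.
      import whisper

      model = whisper.load_model("small")  # downloads the model weights on first run
      result = model.transcribe("interview.mp3", task="translate")  # speech -> English text
      print(result["text"])
      ```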

      In the context of this article we’re talking about DLSS, where Nvidia has trained a few different ML models: upscaling, optical flow (predicting where pixels/objects are moving next), and frame generation (predicting what the in-between frames will look like to boost your FPS).

      This can potentially save energy because it puts less of a load on the GPU: most of the rendering work is done at a lower resolution, and the frame is only upscaled at the end. But honestly, I haven’t seen anyone compare the energy-use differences on this yet… and either way you’re already using a lot of electricity just by gaming.
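
      For a rough sense of why rendering at a lower internal resolution helps, here’s a back-of-the-envelope pixel count (the 1.5x per-axis factor is an assumption, roughly what the “Quality” upscaling modes use):

      ```python
      # Back-of-the-envelope: fraction of pixels actually rendered before upscaling.
      # The 1.5x per-axis factor is an assumption (roughly a "Quality" upscaling mode).
      target_w, target_h = 3840, 2160   # 4K output resolution
      scale = 1.5                       # assumed per-axis upscale factor
      render_w, render_h = target_w / scale, target_h / scale
      fraction = (render_w * render_h) / (target_w * target_h)
      print(f"Rendered internally at {render_w:.0f}x{render_h:.0f}, "
            f"about {fraction:.0%} of the output pixels")   # ~44%
      ```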

    • kerrigan778@lemmy.world · ↑13 · 6 months ago

      In this context it’s being used to reduce rendering load and therefore be less demanding on system resources.

        • halendos@lemmy.world · ↑18 · edited · 6 months ago

          Not really: AMD’s FSR upscaling can increase visual quality/fidelity while using less power than rendering at full resolution. This is easy to see in the Steam Deck’s improved battery life when it’s enabled. Scaled across millions of devices, that can indeed reduce energy usage.

          When you read about “AI power consumption”, it’s mostly about training the models, not so much about using them after they’re trained.

            • SpacetimeMachine@lemmy.world · ↑14 · 6 months ago

              FSR in this case doesn’t need any more training. It’s already a trained model, so now it can be released to run on MILLIONS of devices and reduce their load. And then you knock railroads, which are one of the most efficient forms of land transportation we have. Just full of bad takes here.

            • doggle@lemmy.dbzer0.com · ↑14 · 6 months ago

              Training an AI is intensive, but using them after the fact is relatively cheap. Cheaper than traditional rendering to reach the same level of detail. The upfront cost of training is offset by the savings on every video card running the tech from then on. Kinda like how railroads are expensive to build but much cheaper to operate after the fact.
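
              To make that amortization argument concrete, here’s a toy break-even calculation (every number below is an illustrative placeholder, not a measurement):

              ```python
              # Toy break-even calculation for "train once, save on every device".
              # All values are illustrative placeholders, not measured figures.
              def break_even_devices(training_energy_kwh, saving_per_device_kwh):
                  """How many devices must benefit before the training cost is paid back?"""
                  return training_energy_kwh / saving_per_device_kwh

              # e.g. a hypothetical 500 MWh training run vs. a hypothetical 5 kWh saving per device
              print(break_even_devices(500_000, 5))   # 100000.0 devices
              ```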

              It’s pretty simple. If you can’t understand delayed gratification, then you’re right: school did fail you.

              P.S.: the railroad comparison really breaks down when you consider that railroads are cheaper to build than the highways that trucks use, and that we don’t, in fact, need to truck in the resources anyway. We’ve been building railroads longer than trucks have existed, after all.

              • meseek #2982@lemmy.ca · ↑2 ↓18 · 6 months ago

                Thanks for the totally made-up figures. I’m glad we agree that training itself is quite costly. There’s no data on how much energy AI will save vs. rendering (since we don’t know how much rendering can actually be avoided; there has to be a cap), so you can’t really keep riding that horse.

                You’re right though, the rail analogy sucks. Not for the reasons you list, but because they will never stop training AI. Unless you think AI will stop learning and needing to evolve.

            • kerrigan778@lemmy.world · ↑15 · 6 months ago

              No, I’m saying you are fundamentally misunderstanding what technology they’re talking about and assuming every type of AI is the same. In this article she is talking about graphics AI running on the local system as part of the graphics pipeline. It is less performance-intensive and therefore less power-intensive. There is no “vast AI network” behind AMD’s presumed work on a competitor to DLSS/frame generation.