TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, up to 2-3x.

  • FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall, and the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.

  • DarkThoughts@kbin.social

    Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

    So, no Vulkan?

    • Ranvier@sopuli.xyz

      I’m not sure, been trying to find the answer. But they’ve stated FSR3 will continue to be open source, and prior versions have supported Vulkan on the developer end. It sounds like this is a solution for using it in games that didn’t necessarily integrate it, though, so it might be separate. Unclear.

  • Carlos Solís@communities.azkware.net

    Given that it will eventually be open-source: I hope somebody hooks this up to a capture card, to get relatively lag-less motion smoothing for console games locked to 30 FPS.

      • Dudewitbow

        AMD has had features in years past that it shipped before Nvidia; it’s just that fewer people paid attention to them until they became a hot topic after Nvidia implemented them.

        An example was anti-lag, which AMD and Intel implemented before Nvidia:

        https://www.pcgamesn.com/nvidia/geforce-driver-low-latency-integer-scaling

        But people didn’t care about it until ULL mode turned into Reflex.

        AMD still holds onto Radeon Chill, which basically runs the GPU slower when idling in game, when not a lot is happening on the screen. The end result is lower power consumption when AFK, as well as relatively lower fan speeds/better acoustics, because the GPU doesn’t constantly work as hard.

          • voxel@sopuli.xyz

            Yeah, if you’re severely GPU bottlenecked the difference is IMMEDIATELY OBVIOUS, especially in menus with custom cursors (mouse smoothness while navigating menus is a night-and-day difference). In-game it’s barely noticeable until you start dropping to ~30 fps, then again: a huge difference.

              • Dudewitbow

                I’m not saying Reflex is bad or unused by esports pros. It’s just that “theoretical” isn’t the best choice of word here, because it does make a change; it’s just much harder to detect, similar to the latency difference between framerates that are close but not identical, or the experience of refresh rates that are close to each other. Especially at the high end, you stop being limited by framerate/input properties and become bottlenecked by screen characteristics (which is why OLEDs are better than traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).
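                To put rough numbers on the “refresh rates that are close to each other” point, here is a minimal sketch of the frame-time arithmetic (the specific rates are illustrative, not taken from this thread):

                ```python
                # Frame-time gap for equal 60 Hz steps shrinks as the base rate climbs,
                # which is why high-end refresh-rate differences get hard to feel.
                for hz_a, hz_b in [(60, 120), (120, 180), (180, 240), (240, 300)]:
                    ms_a, ms_b = 1000 / hz_a, 1000 / hz_b
                    print(f"{hz_a} Hz = {ms_a:.2f} ms/frame vs {hz_b} Hz = {ms_b:.2f} ms/frame "
                          f"-> gap {ms_a - ms_b:.2f} ms")
                ```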

                Regardless, the point is less about the tech and more about the idea that AMD doesn’t innovate. It does, but it takes longer for people to see it, because they either choose not to use a specific feature or are completely unaware of it, either because they don’t use AMD or because they have a fixed channel where they get their news.

                Let’s not forget that about a decade ago, AMD’s Mantle was what brought Vulkan/DX12-style performance to PC.

  • cordlesslamp@lemmy.today

    Guys, what would be a better purchase?

    1. Used 6700xt for $200

    2. Used 3060 12GB for $220

    3. None of the used; get a new $300 card for the 2-year warranty.

    4. Other recommendations?

    • simple@lemm.eeOP

      $200 for the 6700XT is a pretty good deal. It’s up to you if you’d prefer getting used or getting something with warranty.

  • Blackmist@feddit.uk

    Anybody tried frame generation for VR? Does it work well there, or are the generated frames just out enough to break the illusion?

    • Brawler Yukon@lemmy.world

      DLSS3 and FSR2 do completely different things. DLSS2 is miles ahead of FSR2 in the upscaling space.

      AMD currently doesn’t have anything that can even be compared to DLSS3. Not until FSR3 releases (next quarter, apparently?) and we can compare AMD’s framegen solution to Nvidia’s.

    • Hypx@kbin.social

      People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.

      And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.

      • CheeseNoodle@lemmy.world

        Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new raytracing stuff work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.

        • Hypx@kbin.social

          If FSR 3 supports frame generation on 20/30-series GPUs, you have to wonder if they’ll port it to older GPUs anyway.

        • Dudewitbow

          Because I think the post assumes that the GPU is always using all of its resources during computation, when it isn’t. There’s a reason benchmarks can make a GPU hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU is not pinned at 100%, there is a bottleneck in the presentation chain somewhere (which means unused resources on the GPU).
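          As a toy illustration of that point (all per-frame timings are assumed, not measured): the slowest stage sets the frame time, and a shorter GPU stage means the GPU sits partly idle instead of being pinned at 100%.

          ```python
          # Toy model of one frame in the presentation chain (illustrative numbers).
          cpu_ms = 12.0   # assumed game logic + draw-call submission time per frame
          gpu_ms = 8.0    # assumed GPU render work per frame

          frame_ms = max(cpu_ms, gpu_ms)   # the slowest link sets the frame time
          gpu_util = gpu_ms / frame_ms     # fraction of the frame the GPU is busy
          print(f"{1000 / frame_ms:.0f} fps, GPU busy {gpu_util:.0%}, "
                f"idle {frame_ms - gpu_ms:.1f} ms per frame")
          ```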

            • Dudewitbow

              I still think it’s a matter of waiting for the results to show up later. AMD’s RDNA3 does have an AI engine on it, and the gains it might see in FSR3 could be different, in the same way XeSS behaves differently with branching logic. Too early to tell, given that all the test suite results are on RDNA3 and that it doesn’t officially launch until two weeks from now.

        • Hypx@kbin.social

          You aren’t going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.

          Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling & frame generation to get back to the same resolution and FPS, than it is to render natively at the intended resolution and FPS. This is often a better use of existing resources even if you don’t have extra power to spare.
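          As a back-of-the-envelope example of that compromise (the resolutions are just an assumed illustration, e.g. a 4K target rendered internally at 1440p):

          ```python
          # Shaded-pixel math behind "render low, then upscale back up" (illustrative).
          native_pixels = 3840 * 2160      # native 4K output
          internal_pixels = 2560 * 1440    # assumed internal render resolution
          print(f"internal render shades {internal_pixels / native_pixels:.0%} "
                f"of the native pixel count")
          # The upscaler (and frame generation) then bridges the gap, and that cost is
          # typically much smaller than the shading work saved per frame.
          ```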

    • hark@lemmy.world

      The hit will be less than the hit of trying to run native 4k.

    • Edgelord_Of_Tomorrow@lemmy.world

      You’re getting downvoted, but this will be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

      • Dudewitbow

        I wouldn’t say compete, as the whole concept of frame generation is that it generates more frames when GPU resources are idle or underused because another part of the chain is holding the GPU back from producing more frames. It’s sort of like how I view hyperthreads on a CPU. They aren’t a full core, but it’s a thread that gets utilized when there are points in a CPU calculation that leave a resource unused (e.g. if a core is using the AVX2 unit to do some math, a hyperthread can, for example, use the ALU that might not be in use to do something else, because it’s free).

        It would only compete if the time it takes to generate one additional frame is longer than the time the GPU sits free due to some bottleneck in the chain.
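        A minimal sketch of that condition, with assumed timings (none of these numbers come from AMD or this thread):

        ```python
        # Frame generation only "competes" with rendering if interpolating a frame
        # costs more than the idle gap left by the bottleneck (illustrative numbers).
        frame_ms = 16.7        # assumed frame time set by a CPU/engine bottleneck
        gpu_render_ms = 11.0   # assumed GPU work for a real frame
        framegen_ms = 3.0      # assumed cost to interpolate one extra frame

        idle_ms = frame_ms - gpu_render_ms
        verdict = "competes with rendering" if framegen_ms > idle_ms else "fits in the idle gap"
        print(f"idle {idle_ms:.1f} ms vs framegen {framegen_ms:.1f} ms -> {verdict}")
        ```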

      • echo64@lemmy.world

        You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The massively cost-reduced chips they throw inside TVs do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation, and it works just fine even on super old GPUs with terrible compute.

        It’s really not that expensive.

          • echo64@lemmy.world

            Yeah, it does, which is something TV tech has to try to derive itself; the TV has to figure that stuff out. It’s actually less complicated in a fun kind of way. But please do continue to explain how it’s more compute heavy.

            Also, just to be very clear, TV tech also incorporates motion vectors into the interpolation; that’s the whole point. It just has to compute them by comparing frames. Games have that information encoded into various G-buffers, so it’s already available.

              • echo64@lemmy.world

                No. TVs do not quite literally blend two frames. They use the same techniques as video codecs to extract rudimentary motion vectors by comparing frames, then do motion interpolation with them.

                Please, if you want to talk about this, we can talk about this, but you have to understand that you are wrong here. The Samsung TV I had a decade ago did this; it’s been standard for a very long time.

                Again, TVs do not “literally blend two frames”, and if they did, they wouldn’t have the input lag problems they do with this feature, since they need a few frames of derived motion vectors to make anything look good.

                They do not need to know what is foreground or background, and they don’t need to know what is a UI element or not; they need to know which pixels moved between two frames and generate intermediate frames that move those pixels along the estimated vector.

                Modern engines have this information available (it’s used for a few things), so they can provide it directly. A TV has to estimate it.
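                To make the distinction concrete, here is a minimal, hypothetical sketch (not any vendor’s actual method; interpolate_half is an invented helper): when per-pixel motion vectors are already known, the in-between frame is mostly a warp along those vectors, whereas estimating the vectors from two finished frames is the extra step a TV has to do.

                ```python
                import numpy as np

                # Warp frame A halfway along known per-pixel motion vectors to synthesize
                # an in-between frame. A game engine can supply these vectors from its
                # G-buffers; a TV must first estimate them by comparing consecutive frames.
                def interpolate_half(frame_a: np.ndarray, motion: np.ndarray) -> np.ndarray:
                    h, w = frame_a.shape[:2]
                    out = np.zeros_like(frame_a)
                    ys, xs = np.mgrid[0:h, 0:w]
                    # Destination of each source pixel after half a frame of motion.
                    dst_x = np.clip(xs + np.round(motion[..., 0] * 0.5).astype(int), 0, w - 1)
                    dst_y = np.clip(ys + np.round(motion[..., 1] * 0.5).astype(int), 0, h - 1)
                    out[dst_y, dst_x] = frame_a[ys, xs]   # forward splat, no occlusion handling
                    return out

                # Tiny example: one bright pixel moving 4 px to the right per frame.
                frame = np.zeros((8, 8, 3), dtype=np.uint8)
                frame[4, 2] = 255
                vectors = np.zeros((8, 8, 2), dtype=np.float32)
                vectors[..., 0] = 4.0                     # +4 px in x every frame
                mid = interpolate_half(frame, vectors)
                print(np.argwhere(mid[..., 0] == 255))    # dot lands at column 4, halfway to 6
                ```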