AI, in its current primitive form, is already benefiting a wide array of industries, from healthcare to energy to climate prediction, to name just a few. But...
Can someone ELI5 the pros and cons of upscaling? Why is it so controversial with some gamers?
Pros: more FPS on low-end hardware.
Cons: a worse image, ghosting, blur, artifacting, and lower overall performance because devs come to rely on upscaling.
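To make the "more FPS" side concrete, here's a toy calculation (illustrative presets, not benchmark data) of how many pixels the GPU actually shades when a game renders at a lower internal resolution and lets an upscaler fill in the display resolution:

```python
# Toy pixel-count math behind upscaling: the GPU shades a smaller internal
# frame, then an upscaler (spatial or ML-based) reconstructs the output
# resolution. Scale factors below are illustrative, not vendor specs.

def shaded_pixels(width: int, height: int, scale: float) -> int:
    """Pixels shaded per frame at a given internal-resolution scale."""
    return int(width * scale) * int(height * scale)

native = shaded_pixels(3840, 2160, 1.0)        # full native 4K render
quality = shaded_pixels(3840, 2160, 0.67)      # a typical "quality" preset
performance = shaded_pixels(3840, 2160, 0.50)  # a typical "performance" preset

print(f"native 4K:        {native:,} px")
print(f"quality (67%):    {quality:,} px ({quality / native:.0%} of native)")
print(f"performance (50%): {performance:,} px ({performance / native:.0%} of native)")
```

Shading roughly a quarter to half the pixels is where the framerate headroom comes from; the visual cost (blur, ghosting) is the other side of that trade.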
Its existence is a crutch. Games should be made properly, not rely on ML upscaling for meaningful performance.
Hardware is insanely powerful at the moment; the problem is that time isn't spent making the most of it anymore, which in turn increases demand for more powerful hardware (that we don't need). It's the usual sales loop for Nvidia, except now they want to sell you ML-optimised cards, which cost more.
Thanks! So, from what I grok, the claim is basically that games could probably run fine if they were properly written and optimized, but since they often aren't, people have to buy a GPU that applies a band-aid solution. Right?
Yep. The more people buy GPUs capable of machine-learning upscaling (the band-aid), the more likely developers are to use it instead of spending time improving performance.
I see it most in Unreal Engine games. Unreal Engine lets devs make a “realistic”-style game fast, but performance is often left in the dirt. UE also has some of the worst anti-aliasing out of the box, so DLSS, for example, becomes a catch-all to improve framerates and provide some AA, but instead you just get a lot of blur and poor graphical fidelity. The issues probably don't show up at higher resolutions like 4K (which may be what devs work at), but the majority of people still play at 1080p.
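For context on the Unreal Engine side, internal-resolution scaling and the anti-aliasing method are exposed through console variables. A sketch of what this looks like in an `Engine.ini` (stock UE5 cvar names; the values are illustrative, and DLSS itself ships as a separate NVIDIA plugin with its own settings, not shown here):

```ini
; Illustrative Engine.ini fragment (stock UE5 console variables)
[SystemSettings]
r.ScreenPercentage=67      ; render at 67% internal resolution, upscale to output
r.AntiAliasingMethod=2     ; 0=none, 1=FXAA, 2=TAA, 3=MSAA, 4=TSR
```

This is the knob the complaint is about: lowering `r.ScreenPercentage` buys framerate by shading fewer pixels, and an upscaler (or UE's built-in TSR) papers over the difference.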
Oops sorry for the rant! I just got pissed off with it again recently in Satisfactory!
Basically, they use AI as a crutch instead of making the games better. This is bad because running the AI in turn demands more power and more expensive hardware.