This is why I really respect it when a game puts clear separations on the slider indicating that one end is very intensive – bonus points if the game warns me that it might not run well at maxed settings.
Yeah, I agree that the “this particular setting is performance-intensive” thing is helpful. But one issue developers hit is that once future hardware enters the picture, it’s really hard to know exactly what the impact is going to be, because you also have to predict where hardware development is going to go, and it’s easy to get that pretty wrong.
Like, one thing that’s common to do with performance-critical software like games is to profile cache use, right? You try to figure out where the game is generating cache misses, and then work with chunks of data that keep the working set small enough to stay in cache where possible.
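To make that concrete, here’s a rough sketch of the kind of chunked processing I mean – the Particle struct and the chunk size are just made up for illustration, and the right numbers only come out of profiling:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Particle { float x, y, z, vx, vy, vz; };  // made-up game data, 24 bytes each

    // Run both passes over one cache-sized block of particles before moving on,
    // so the second pass finds its data still resident instead of refetching it
    // from main memory. 4096 * 24 bytes is roughly 96KB per chunk.
    void updateParticles(std::vector<Particle>& ps, float dt) {
        constexpr std::size_t kChunk = 4096;  // a guess; tune by profiling cache misses
        for (std::size_t base = 0; base < ps.size(); base += kChunk) {
            const std::size_t end = std::min(base + kChunk, ps.size());
            for (std::size_t i = base; i < end; ++i) {  // pass 1: integrate position
                ps[i].x += ps[i].vx * dt;
                ps[i].y += ps[i].vy * dt;
                ps[i].z += ps[i].vz * dt;
            }
            for (std::size_t i = base; i < end; ++i) {  // pass 2: damping, data still hot
                ps[i].vx *= 0.99f;
                ps[i].vy *= 0.99f;
                ps[i].vz *= 0.99f;
            }
        }
    }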
I’ve got one of those X3D Ryzen processors where they jacked the on-die cache way, way up, to 128MB. I think I remember reading that AMD decided that, on net, the clock-speed tradeoff that entailed wasn’t worth it, and was planning to cut the cache size in the next generation. So a particular task that blows out the cache above a certain data-set size – when you move that slider up – might have a horrendous performance impact on one processor and little impact on another with a huge cache…and I’m not sure a developer could have reasonably predicted that cache sizes would rise that much and then maybe drop.
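One partial defense is to stop guessing and ask the machine at runtime. Something like this Linux/glibc-specific sketch – the sysconf cache queries are a glibc extension and can return 0 on some systems, so there’s a fallback:

    #include <cstddef>
    #include <unistd.h>  // sysconf; the _SC_LEVEL*_CACHE_SIZE names are glibc extensions

    // Size the per-thread working set off the L3 the machine actually has,
    // with a conservative fallback when the query fails or reports 0.
    std::size_t workingSetBudgetBytes() {
        long l3 = sysconf(_SC_LEVEL3_CACHE_SIZE);
        if (l3 <= 0)
            l3 = 8 * 1024 * 1024;  // assume a modest 8MB L3 if we can't tell
        return static_cast<std::size_t>(l3) / 2;  // leave headroom for everything else
    }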
I remember – this is a long time ago now – when one thing that video card vendors did was disable antialiased line rendering acceleration on “gaming” cards. Most people using 3D cards for 3D modeling really wanted antialiased lines, because they spent a lot of time looking at wireframes and wanted them to look nice. They were using the hardware for real work, and they were less price-sensitive. Video card vendors decided to differentiate the products so that they could use price discrimination. Okay, so imagine that you’re a game developer and you tell users that antialiased lines – which I think most developers would just assume would keep getting faster – don’t have a large performance impact…and then the hardware vendors start disabling the feature on gaming cards, so suddenly new cards may render them more slowly than earlier cards did. Now your guidance is wrong.
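For context, asking for antialiased lines is a one-liner in legacy OpenGL, and nothing in the API tells you whether the driver actually accelerates it – which is exactly why the same call could be cheap on a workstation card and painful on a gaming card. Roughly:

    #include <GL/gl.h>

    // Request antialiased lines. Whether the driver accelerates this (or quietly
    // does it slowly, or not at all) is invisible to the application.
    void enableSmoothLines() {
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
        glEnable(GL_BLEND);  // line smoothing needs blending to look right
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }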
Another example: right now, there are a lot of people who are much less price-sensitive than most gamers who want to use cards for parallel compute to run neural nets for AI. What those people care a lot about is having a lot of on-card memory, because that increases the model size they can run, which can hugely improve the model’s capabilities. I would guess that we may see video card vendors try to repeat the same sort of product differentiation, assuming they can manage to collude to do so, so that they can charge the people who want to run those neural nets more money. They might tamp down on how much VRAM they put on new GPUs aimed at gaming, so that it’s not possible to use cheap hardware to compete with their expensive compute cards. If you’re a developer assuming that using, say, 2x to 3x the VRAM current hardware has will be reasonable for your game N years down the line, that…might not be a realistic assumption.
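Again, the defensive move is to query what’s actually on the card and scale assets from that instead of betting on growth. A rough sketch using Vulkan’s memory-properties query – error handling omitted, and a real game would already have an instance around:

    #include <vulkan/vulkan.h>
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Print the largest device-local heap (roughly "VRAM") of each GPU, so asset
    // quality tiers can be chosen from what's really there rather than a guess.
    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo ci{};
        ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        ci.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceMemoryProperties mem;
            vkGetPhysicalDeviceMemoryProperties(gpu, &mem);
            VkDeviceSize largest = 0;
            for (uint32_t i = 0; i < mem.memoryHeapCount; ++i)
                if (mem.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                    largest = std::max<VkDeviceSize>(largest, mem.memoryHeaps[i].size);
            std::printf("device-local memory: %llu MiB\n",
                        static_cast<unsigned long long>(largest / (1024 * 1024)));
        }

        vkDestroyInstance(instance, nullptr);
        return 0;
    }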
I don’t think that antialiasing mechanisms are transparent to developers – I’ve never written code that uses hardware antialiasing myself, so I could be wrong – but let’s imagine that they are for the sake of discussion. Early antialiasing worked by doing what’s today called FSAA: render at a higher resolution, then scale down. That’s simple and, for most things – aside from pinpoint bright spots – very good quality, but it gets expensive quickly. Let’s say there was just some API call in OpenGL that let you get a list of available antialiasing options (“2x FSAA”, “4x FSAA”, etc.). Exposing that to the user and saying “this is expensive” would have been very reasonable for a developer – FSAA was very expensive if you were bound on nearly any stage of graphics rendering, since it multiplied the work the GPU was already doing: doubling the rendering resolution in each dimension quadruples the pixel count. But then subsequent antialiasing mechanisms were a lot cheaper. In 2000, I didn’t think about future antialiasing algorithm improvements – I just thought of antialiasing as rendering at high resolution and scaling it down, i.e. FSAA. I’d guess that many developers wouldn’t have either.
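For what it’s worth, modern OpenGL does expose something close to that hypothetical list: you can query which multisample counts a format supports and surface those in the settings menu. A sketch, assuming a GL 4.2+ context (or ARB_internalformat_query) and a loader – emphatically not what was available in 2000:

    #include <glad/glad.h>  // any GL 4.2+ loader works; glad is just an example
    #include <vector>

    // List the MSAA sample counts supported for an RGBA8 renderbuffer (e.g. 8, 4, 2),
    // so the settings menu can offer real options instead of hard-coded ones.
    // Requires an active GL context.
    std::vector<GLint> supportedMsaaSampleCounts() {
        GLint num = 0;
        glGetInternalformativ(GL_RENDERBUFFER, GL_RGBA8, GL_NUM_SAMPLE_COUNTS, 1, &num);
        std::vector<GLint> samples(num > 0 ? num : 0);
        if (num > 0)
            glGetInternalformativ(GL_RENDERBUFFER, GL_RGBA8, GL_SAMPLES,
                                  num, samples.data());
        return samples;  // the spec returns counts in descending order
    }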