There might be a good reason for this. Raster effects were already really good in newer games, and ray tracing could only improve on that high bar. It’s filling in details that are barely noticeable, but it creeps ever so slightly closer to photorealism.
Old games start from a low bar, so ray tracing yields a dramatic improvement.
Cyberpunk is a good example of gorgeous raytracing: https://www.youtube.com/watch?v=3pkuU0cGQu8
The problem is that proper raytracing is way too heavy for most machines, so game devs don’t bother. The Cyberpunk example on max graphics would need an RTX 4090 just to run at over 60 fps. No point in pushing tech that nobody can run yet.
Raytracing on older games looks great because they already weren’t intensive to run, so developers can get away with maximizing raytracing while still running fine.
Control also did a fantastic job. At some point the reflections on the glass are almost too good. There is a puzzle where you need to look through some windows to solve it. I couldn’t see what the hell was going on because of the reflections and had to turn RTX off. It was otherwise great and I think the difference is dramatic.
I’ve been playing Cyberpunk on an RX 7900XT at basically maxed-out graphics, with only some of the raytracing reduced to get it to a consistent 60 fps. The game looks stupid good. But the raytracing is only for shadows and reflections, and it has such a massive impact on performance, though I know my GPU is not as effective at raytracing as an Nvidia card would be.
Like the other reply mentions, Control also looks great with raytracing on, but the scale is not the same as Cyberpunk, so the framerates don’t suffer as much.
It’s just because newer games have too much to effectively ray trace, so they have to use it in a very limited manner. There are very few games fully ray traced.
Ray traced quake looks more like real video than a lot of those modern games do; it just looks like some kind of theme park/old theater costume type of deal with a lot of rubber because the materials aren’t as good.
Yeah, for sure. Raytracing is very computationally intensive. It doesn’t make sense to do full-scene raytracing unless you have hardware that’s specifically designed for it. It works for something like quake since none of the scenes are particularly complex, but obviously you don’t hit anything close to the same framerates as you would with raster rendering.
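A rough back-of-envelope (every number here is my own illustrative assumption, not a measurement from any game) shows why full-scene tracing gets heavy fast:

```python
# Illustrative ray budget for naive full-scene path tracing at 1080p.
# All of these numbers are assumptions picked for round-ish math.
width, height = 1920, 1080     # 1080p frame
samples_per_pixel = 4          # still quite noisy without denoising
bounces = 3                    # primary hit plus two indirect bounces
fps = 60

rays_per_frame = width * height * samples_per_pixel * bounces
rays_per_second = rays_per_frame * fps
print(f"{rays_per_frame:,} rays per frame")     # 24,883,200
print(f"{rays_per_second:,} rays per second")   # 1,492,992,000
```

And each of those rays is a traversal of the whole scene’s acceleration structure, which is why simple Quake-era geometry is so much friendlier to this than a modern open world.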
I feel like gamedevs and game publishers are more excited about raytracing than consumers, because it would allow them to throw out the entire range of smoke-and-mirrors tricks currently used for simulating lighting. Which makes the code simpler and cheaper to implement.
Raytracing is really the more obvious way of implementing complex lighting; it’s just always been out of reach performance-wise.
Well, it still is. Games still use those same tricks and then only mild raytracing on top for the finishing touches.
Game devs are apathetic to ray tracing.
Traditional rasterization will never go away in our lifetime because ray tracing hardware will never advance broadly enough to replace it.
Ray tracing also doesn’t replace the work needed to achieve the desired atmosphere through lighting and fixing performance related issues - which is most of the work.
The games that do support it right now are primarily using it as a marketing tool, and developers are often paid by Nvidia or AMD to spend the time and resources to implement it.
The most broadly successful games are the ones that run on the widest variety of hardware, to reach the largest possible audience. Given that Nvidia is pretty much the only competent hardware ray-tracing solution, that market is extremely small compared to the industry at large.
The technology in its current state is not an exciting prospect because it simply means devs have to spend more time implementing it on top of everything else that already needs to be done - purely because the publisher/studio took Nvidia’s money so they could slap the RTX label on the game.
Yeah, fair response. I started writing that comment thinking “if it’s in high-end hardware now, it’ll be broadly available in 10–20 years”.
Then with the last sentence, I realized that it isn’t in high-end hardware, not in the form that allows you to throw out all the tricks.
And with publishers simultaneously wanting ever more fidelity, which makes appropriate raytracing more expensive to calculate, yeah, I would be surprised if that happens in our lifetime, too.
I guess I’m personally somewhat excited at the thought of not having to learn all the tricks, having dabbled in gamedev as a hobbyist.
But yesterday, the (completely unilluminated) 2D gravity simulation I’m working on started kicking in my fans, and you see me immediately investigating, because I’m certainly a lot more excited about making it available to as many people as possible…
I’m not a graphics engineer, so I only have cursory knowledge of the topic.
The biggest benefits that ray tracing brings are accurate lighting in your scenes and being able to forgo the “tricks” you mentioned. Those tricks are almost always screen-space lighting techniques and effects, e.g. reflections (SSR) and ambient occlusion (SSAO).
Unfortunately, the bad news is that you’d still need to understand the 3D math and shader knowledge regardless of whether you can take advantage of ray tracing or not. The good news is there are numerous game engines and resources out there to help!
Hope you make something cool from the hobby!
I think part of the push is just from Nvidia to try and get vendor lock-in, like they have with CUDA. Many games that use raytracing will only work on “RTX” cards which are only sold by Nvidia. Raytracing also has the benefit of increasing demand for upscaling, like DLSS, which further increases vendor lock-in.
Also, most devs are going to be using some sort of game engine where the hard parts of rasterization are already taken care of, like with Lumen in Unreal Engine 5.
Raytracing is still very computationally intensive, and doesn’t have enough market penetration to make sense for most modern games. Devs need to implement two solutions: a raytraced path and a raster path. The game needs to be fully playable in both, across a wide range of hardware. The largest install base for most games is still console, where RT barely exists. So RT is generally relegated to eye candy for high-end PC. Which makes it a marketing feature, not a game feature.
It’ll be interesting to see if that changes with the PS5 Pro. I expect we’ll see more first-party titles support it, but not much else until the next real console generation.
I expect it to change with the next-gen consoles at best (PS6, Xbox whatever-tf). Because with them we might finally see games that just straight up abandon traditional rasterization and go fully ray traced (also the last stragglers on PC will probably have abandoned GPUs without raytracing support by that time; people tend to complain about consoles “slowing down progress” but don’t see the absolutely ancient devices that a significant minority of gamers use). For now, even with the PS5 Pro, they still need to create a PS5 version.
I think the tech industry misses the good old days when the upgrade was noticeable and exciting. Now it’s just this big media blitz and the upgrades are not that noticeable. Lol, nothing like watching a 30-minute video that they flip back and forth with. “Oh, I can see the difference”
Imo it has less to do with photorealism vs non-photorealism and more to do with PBR (physically based rendering) vs non-PBR. The former attempts to recreate photorealistic graphics by adding additional texture maps (typically metallic/roughness or specular/glossiness) to allow for things ranging from glossiness and reflectivity to refraction and sub-surface scattering. The result is that PBR materials tend to have little to no noticeable difference between PBR-enabled renderers, so long as they share the same maps.
Non-PBR renderers, however, tend to be more inaccurate and to have visual quirks or “signatures”. For example, to me everything made in UE3 tends to have a weird plasticky look to it, while metals in Skyrim tend to look like foam cosplay weapons. These games can significantly benefit from raytracing because it’d involve replacing the non-PBR renderer with a PBR one, resulting in a significant upgrade in visual quality by itself. Throw in raytracing and you get beautiful shadows, specular highlights, reflections, and so on in a game previously incapable of them.
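For anyone curious what those extra texture channels actually feed into: here’s a tiny Python sketch (heavily simplified, with the Fresnel and geometry terms omitted, and all values illustrative) of the GGX specular lobe that metallic/roughness-style PBR renderers typically evaluate:

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    # GGX / Trowbridge-Reitz normal distribution function: the shape
    # of the specular highlight in most metallic/roughness workflows.
    a2 = (roughness * roughness) ** 2   # common remap: alpha = roughness^2
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# Low roughness packs the reflected energy into a tight, bright peak;
# high roughness spreads it into a broad, dim lobe. Same math, one
# texture channel apart: that's "glossy" vs "matte".
print(ggx_ndf(1.0, 0.1))   # large value: tight, mirror-like highlight
print(ggx_ndf(1.0, 0.9))   # small value: soft, matte spread
```

That consistency is exactly why the same maps read so similarly across different PBR-enabled renderers.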
I’m an old gamedev and for me the goal of ray tracing was always photo realism.
Raster shaders can get you toon shading, enough photo realism, funky effects and so on.
I prefer going to the museum and looking at impressionist art rather than photos.
But that’s maybe just me.
Look at Pixar and other Disney CG stuff. Raytracing enhances stylized art just as much as photorealistic art. Something like Moana or Elemental is meaningfully enhanced by their work on water and glass transmission simulation.
Teardown has wonderful raytracing but again they’re not going for photo realism
Note that teardown does not use hardware ray tracing.
A couple newer games have raytracing that genuinely adds detail but it’s pretty subtle and you have to look for it. Cyberpunk 2077 is a good example.
Portal and Minecraft are particularly good examples of raytracing because of how their sandbox aspects let you play with it.
There’s absolutely a factor that modern graphics are so good even without ray tracing that it doesn’t add a whole lot. I still think Destiny 2 is one of the best-looking games I’ve played, and it uses fairly “old” graphics technology. The reason it looks good is that their artists do a good job.
To be fair, those old games have been rewritten in newer engines in order to support ray tracing, and at that point you could apply other modern global illumination methods and get almost the same effect with less performance cost.
The thing that makes raytracing so attractive, though, is how extremely easy it is to implement. Unless I’m copy-pasting others’ finished work, I can make raytracing work over a weekend with Vulkan or DirectX shaders, as opposed to having to implement 10-15 other shaders for the same effect over half a year of development.
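For a sense of scale on the “weekend” claim: this is a toy CPU tracer in plain Python rather than Vulkan/DX shaders (everything here is illustrative), and the whole renderer is one intersection routine plus Lambert shading:

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t.
    # Assumes `direction` is normalized.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                 # ray misses the sphere
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

def render(width=32, height=16):
    # One sphere, one directional light, ASCII "framebuffer".
    light = (0.577, 0.577, -0.577)  # pre-normalized light direction
    rows = []
    for y in range(height):
        row = ""
        for x in range(width):
            # Pinhole camera at the origin looking down +z.
            dx = (x / width) * 2 - 1
            dy = 1 - (y / height) * 2
            inv = math.sqrt(dx * dx + dy * dy + 1)
            d = (dx / inv, dy / inv, 1 / inv)
            t = ray_sphere((0, 0, 0), d, (0, 0, 3), 1.0)
            if t is None:
                row += "."
            else:
                hit = tuple(t * di for di in d)
                nrm = tuple(h - c for h, c in zip(hit, (0, 0, 3)))
                ln = math.sqrt(sum(v * v for v in nrm))
                nrm = tuple(v / ln for v in nrm)
                lam = max(sum(a * b for a, b in zip(nrm, light)), 0.0)
                row += " .:-=+*#"[min(int(lam * 8), 7)]
        rows.append(row)
    return "\n".join(rows)

print(render())
```

Getting the equivalent look out of rasterization means shadow maps, environment probes, SSAO, and so on, each its own subsystem. That asymmetry is the whole appeal.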
I think Gothic 3 had a realistic lighting mod, but they had to cheat because the company made the buildings so they looked good with the original lighting.
So, yeah, does not always work.
I totally agree. GLQuake improved Quake a hundredfold. RT Quake did the same all over again.
Quake II RTX is still probably the best graphics I’ve seen. In its own way.
Quake was also the first game with texture mapping. At the time, it was fucking amazing, and it’s how, today, modern games can have things like transparency and round corners on polygons in their 3-D models.
Also, when it came out, only, like, two video cards even supported all of the features in the game. It was almost a year before any other video cards came out that did. There was one ATI card and one Nvidia card. I remember that I had the ATI card that supported the ray tracing and the texture mapping (Radeon 9800).
Voodoo 3 2000
Nah he’s talking about GL quake. That was like the voodoo 1 era.
What? Texture mapping showed up well before Quake; Wolfenstein 3D had it. And you didn’t need any special video card for it, it was rendered by the CPU.
Sorry, texture mapping with alpha (transparency)
Ah gotcha!