I don’t want any nerds in here explaining why this is impossible, I choose to believe it could be done in theory
I was gonna write a thing about the feasibility, but then I saw you say no, so never mind. But here’s a Minecraft Classic clone on a 68k Macintosh: https://youtu.be/5ZMkHnGCuhI
TLDW: You don’t even need to write something in assembly to port it to old hardware or operating systems! You just need a compiler (the thing that turns your source code into binary instructions) that supports your target platform, and any modern (compatible) source code should work. The video has good examples of the kinds of limitations you’ll run into, though.
Just write it in JavaScript, then get ChatGPT to translate it into assembly
As a nerd, I gotta say, this is almost definitely possible. Modern games are so incredibly inefficient in ways beyond our understanding, that if you stripped it down to custom assembly brass tacks, I really don’t doubt you could run Witcher 3 on Windows 98 (albeit in a lower resolution).
I’m reminded of this guy who coded Minecraft in Unity but, by focusing on optimisations, made it at least 1000x faster on most benchmarks.
I’d say two things:

- Modern compilers are way better at generating code than they were in the ’90s, so hand-rolled assembly is not that useful anymore.
- The vast majority of frame time in a game like Witcher 3 comes from rendering, not gameplay code, and you can’t really write rendering code in asm since GPU vendors don’t let you program their hardware directly. Also, running it on a 25-year-old GPU would require way more compromises than just lower resolution, because ’98-era GPUs don’t even have basic things like vertex buffers or programmable shaders.
I think you could make Witcher 3 run on a computer from 2008 if you optimize it properly but that’s about it
Compilers have become great at optimizing code, yes. If you just translate your C code to assembly, you’re probably gonna end up with worse code than the compiler’s. The benefits of coding in assembly come from a different design, not a different translation.
The compiler can’t change the design to make it better, but you can if you design it thinking directly in assembly, IMHO.
I have no idea about GPU programming, tho.
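A toy example of what I mean by “different design” (my own illustration, assuming a hypothetical particle system): array-of-structs vs struct-of-arrays memory layout. The compiler will happily optimize either loop, but it will never restructure your data for you — that’s a decision you make when you’re thinking close to the hardware:

```c
#include <stddef.h>

/* Array-of-structs: one particle's fields sit together, so a loop
   that only reads x drags y, z, and mass through the cache with it. */
typedef struct { float x, y, z, mass; } ParticleAoS;

/* Struct-of-arrays: each field is contiguous, so the same loop
   streams through memory using every byte of every cache line. */
typedef struct {
    float x[1024];
    float y[1024];
    float z[1024];
    float mass[1024];
} ParticlesSoA;

float sum_x_aos(const ParticleAoS *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += p[i].x;   /* 16-byte stride */
    return s;
}

float sum_x_soa(const ParticlesSoA *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++) s += p->x[i];  /* 4-byte stride */
    return s;
}
```

Both functions compute the same thing; the SoA version is just friendlier to the cache (and to vectorization) on big arrays.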
Modern compilers are very good, but I’d argue you’re overestimating the gains. Unless you’re using some very strict typing and compilation options, you’re still missing out on enormous efficiency even with the most modern compilers — orders of magnitude in some cases. Stuff that, with proper coding, you’d get in assembly. And in ’98, you absolutely could program GPU hardware directly in assembly.
I’m not saying it’d be anywhere near easy, but I absolutely contend it’d be possible.
In my experience, writing straightforward C code is gonna give you basically the same thing as well-written asm would. I’m not saying modern compilers are perfect, but they generally do the right thing (or it’s very easy to coax them into doing the right thing). They usually vectorize properly, they know how to avoid obvious pipeline stalls, etc. These days, order-of-magnitude performance improvements generally don’t come from hand-written assembly but from using better algorithms and data structures.
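To make that concrete (my own toy example, not from anyone’s codebase): a plain loop like this is exactly the shape that gcc and clang auto-vectorize at `-O3`, emitting SSE/AVX (or NEON) instructions with no intrinsics or asm involved:

```c
#include <stddef.h>

/* Element-wise add of two float arrays. "Naive" C: at -O3, gcc and
   clang typically unroll this loop and emit SIMD vector instructions
   on their own. The restrict qualifiers promise the arrays don't
   overlap, which is what makes the vectorization legal. */
void add_rows(const float *restrict a, const float *restrict b,
              float *restrict out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

Hand-writing the SSE version of this in the ’90s was a real win; today the compiler does it for you from the straightforward source.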
Also, you could look at what games of that era actually looked like. Something like Quake 2 already pushed contemporary hardware to its limits, and I wouldn’t argue it’s a poorly optimized game.
I was alive and played games during that era, so I am aware! I’m very personally aware of how much work we crammed out of a tiny, slow processor and a few KB.
And though we probably wouldn’t describe those games as ‘poorly optimised’, there’s still absolutely huge room for optimisations. As you say, compilers have gotten a lot better, and we have better algorithms and data structures. In Quake especially, they basically did get order-of-magnitude gains by writing some parts in assembly. And they’ll undoubtedly have written most of those parts very imperfectly.
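(The best-known surviving trick from that lineage is actually C rather than asm — Quake III’s fast inverse square root — but it makes the same point: the win came from a bit-level design idea no compiler would find, not from hand-translating ordinary code:)

```c
#include <stdint.h>
#include <string.h>

/* Quake III's fast inverse square root (the 0x5f3759df trick).
   The speedup comes from reinterpreting the float's bit pattern as
   an integer to get a cheap first guess, then refining it once. */
float fast_rsqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* bit-cast without aliasing UB */
    i = 0x5f3759df - (i >> 1);       /* magic initial approximation */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - half * y * y);   /* one Newton-Raphson step */
    return y;
}
```

After the single Newton step the result is within roughly 0.2% of the true 1/sqrt(x) — plenty for ’90s lighting math, and far cheaper than a divide plus sqrt on hardware of the time.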
I may be skirting the bounds of reality more than you, by talking about what’s ‘theoretically possible’ compared to what’s possible by today’s knowledge and human practices. We agree there’d be some graphical compromise, maybe we’re just disagreeing over details of the scale of that compromise and the hardware that would be ‘permittable’ by this scenario.
I was imagining something that looks and plays almost exactly like Witcher 3 except at a lower resolution, which doesn’t seem possible to me. I’d say if you simplify the graphics (replace dynamic lighting with pre-baked lightmaps, do Gouraud shading instead of PBR, remove most post-processing effects, etc.) and make all models look like Half-Life assets, you could definitely render something at an acceptable frame rate. However, asset streaming would still be a huge issue for a large and detailed open world like Witcher 3’s, considering the memory limits and slow disk reads.
Actually I’d bet that people could figure out how to run a Nintendo Switch emulator on Windows 98 if they really wanted to, that would probably be an easier way to play Witcher 3 on it.
Tangentially, my understanding is that DeepSeek basically did the GPU equivalent of this (relative to OpenAI using stuff like CUDA), and that’s given them some extreme advantages despite weaker hardware.