Developers continue to shaft anyone who isn’t using an IBM PC compatible. But if the IBM PC were more closely related to the latest Nexus/Pixel device, would the gaming experience on smartphones be any good?

  • Max-P@lemmy.max-p.me · 15 upvotes · 8 months ago (edited)

    Why do you keep comparing phones and PCs? They’re not comparable and never will be. My PC can draw probably close to 1000W when running full bore. Mobile chips have a TDP of like 10-20W. My PC can throw 50-100x more power at the problem than your phone can. In the absolute worst case, a PC would just pack in a dozen or two of those power-efficient ARM chips, because it can, and PC games would make use of all of them, and you circle back to PC superiority. My netbook is in the same range, around 5-10W, and crappier than my phone in many respects. My new Framework 16 has a TDP of 45W, already like 2-4x more than a high-end phone has.

    Even looking at Apple, the M2 has a TDP of 20W because it was spun off from their iPad chips and primarily targets mobile devices like MacBooks. So while the performance is impressive in the efficiency department, I could build an ARM server with 10x the core count and have a computer 10x more powerful than a top-of-the-line M3 iMac.

    PCs running ARM would have no effect on the mobile ecosystem whatsoever. Android runs Linux, and Linux runs on a lot of CPU architectures. You can run Android on RISC-V today if you want to spend the time building it. Or MIPS. Or PowerPC. There’s literally nothing stopping you from doing that.

    The gaming experience on mobile sucks because gaming on mobile sucks. If you ran your phone at full power to game with the best graphics, it would probably be dead in 1-2 hours. Nobody would play games that murder their battery. And most people who do play games on mobile want 10-minute games to play while sitting on the toilet, or on a bus or train or whatever. Thus, battery life is an important factor in making a game: you don’t want your game to chew through battery, because then people start rationing their gameplay to make it to the end of the day or to the next charger.
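
    Rough numbers behind that 1-2 hour figure, as a sanity check (all ballpark assumptions, not measurements):

    ```python
    # Ballpark battery-life estimate for a phone gaming at full tilt.
    # All numbers are rough assumptions for illustration, not measurements.
    battery_wh = 18.0        # ~4700 mAh at 3.85 V, a typical flagship battery
    soc_gaming_w = 8.0       # SoC + GPU pushed hard
    display_radio_w = 3.0    # bright screen, radios, audio

    hours = battery_wh / (soc_gaming_w + display_radio_w)
    print(f"Estimated runtime: {hours:.1f} hours")  # ~1.6 hours
    ```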

    PCs are better not because of IBM, or even the x86 architecture, or even Windows. They’re better because PCs can be built with any parts you want, and you can throw as many CPUs and GPUs and NPUs and FPGAs at the problem as you want. Heck, there are even SBC PCs on PCI/PCIe cards, so you can have multiple PCs in your PC.

    Whatever you can come up with that fits in a mobile device, I can make a 10-20x more powerful PC, if nothing else by throwing 10-20 phones’ worth of hardware in it and splitting the load across all of them.

    PC games are ambitious and make use of as much hardware as they can deal with. If you want to show off your 3D tech you don’t limit yourself to mobile, you target dual RTX 4090 graphics cards. There are great games made for lower-end hardware, and consoles like the Switch run ARM; think of the Zelda games. The Switch is vastly inferior to modern phones, and Yuzu can run those games better than the Switch can. My PC will happily run BotW and TotK at 4K 240Hz HDR if I ask it to. But they were designed for the Switch, and they’re pretty darn good games. So the limitation clearly isn’t that PCs exist; it’s what developers write their games for. CPU architecture isn’t a problem: we have emulators, we have Rosetta, we have Box64, we have FEX.

    If PCs didn’t exist, something else would have taken their place a long time ago, and we’d circle back to the exact same problem/question. Heck, there are routers and firewalls that run games better than your phone.

  • gaiussabinus@lemmy.world · 9 upvotes · 8 months ago

    If memory serves, ARM was developed years after the 8088. ARM was intended to be a low-power, low-cost CPU for simple devices, a niche Intel had no product to service. ARM and the 8088 were not contemporaneous.

    • bufalo1973 · 6 upvotes · 8 months ago (edited)

      But the ARM architecture is based on the MOS 6502 CPU, which is almost a successor to the Motorola 6800. So the roots are roughly from the same time. Had IBM chosen the 68000 instead…

      PS: the first ARM CPU was made in 1985. The 8086 is from 1978.

      • BearOfaTime@lemm.ee · 2 upvotes · 8 months ago

        They were all predated by the 8008, launched in ’72. It could be argued that the later 8000-series (8080, 8085, 8086, 8088) are effectively variations built on the 8008.

        That’s a lot of development/expertise time.

        Didn’t the 6502 come out in the mid-’70s? (I vaguely recall reading about all this many years ago, and how Intel was playing catch-up to Moto and others to some degree in the mid-’70s.)

        • j4k3@lemmy.world · 2 upvotes · 8 months ago

          Intel’s big shift was maintaining compatibility as improvements were made and new fab nodes were introduced. No one else did this very well. The actual baseline for this was the 16-bit 8086, which is why we call the architecture x86. A program written for an 8086 should still work on a brand-new i9-14900.

          Motorola was the big-endian outlier, with byte order backwards relative to Intel’s. They did lots of odd things too, including some possessive, egomaniacal business decisions.

          A couple of key people behind the microprocessor: Federico Faggin (https://en.m.wikipedia.org/wiki/Federico_Faggin), the guy behind the Intel 4004 (the first microprocessor), the Intel 8080, and the Zilog Z80.

          And Bill Mensch (https://en.m.wikipedia.org/w/index.php?title=Bill_Mensch), the guy behind the Motorola 6800 and the MOS 6502.

          I have no idea why people say the 6502 has anything to do with ARM. ARM stands for Acorn RISC Machine, and later Advanced RISC Machine. RISC is a fundamentally different architecture from CISC.

          The 6502 wasn’t really positioned in this RISC/CISC paradigm; it was simply dirt cheap when everything else was much, much more expensive. Its only real innovation was an extremely primitive pipeline where the next instruction is fetched while the current one executes. This was because MOS couldn’t compete on quality with the higher-frequency devices from other companies, so it was a clever hack to make things cheaper at the time. The 6502 is still produced in some form by the Western Design Center (also Bill Mensch).
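
          A toy model of that fetch-during-execute overlap (illustration only, nothing like the real 6502’s cycle-accurate behavior):

          ```python
          # Toy model of the 6502-style overlap: while one instruction
          # executes, the byte at the next program-counter address is
          # already being fetched. Illustration only, not real 6502 timing.
          program = ["LDA #$01", "ADC #$02", "STA $0200", "BRK"]

          fetched = program[0]      # the first fetch happens up front
          pc = 1
          while fetched != "BRK":
              executing = fetched
              fetched = program[pc]  # the next fetch overlaps this execute
              pc += 1
              print(f"executing {executing!r} while fetching {fetched!r}")
          ```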

          CISC was the old guard; the RISC approach came out of Berkeley (the Berkeley RISC project), while MIPS came out of Stanford. (https://en.m.wikipedia.org/wiki/Reduced_instruction_set_computer)

          ARM is a RISC architecture, and it traces its history back to completely different origins than the other microprocessors mentioned here.

          The funny thing is that the execution core (the part with the ALUs, the CPU’s secret sauce where the action happens) in modern Intel processors is essentially a RISC design with a CISC wrapper: complex x86 instructions are decoded into simpler internal micro-ops.
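
          A loose sketch of that idea, with instruction and micro-op names invented purely for illustration (real decoders are far more involved):

          ```python
          # Invented example: one complex, memory-touching CISC-style
          # instruction gets cracked into simple load/compute/store
          # micro-ops that the internal core actually executes.
          def decode(instruction: str) -> list[str]:
              if instruction == "ADD [mem], reg":   # read-modify-write memory
                  return [
                      "uop_load   tmp <- [mem]",
                      "uop_add    tmp <- tmp + reg",
                      "uop_store  [mem] <- tmp",
                  ]
              return [f"uop_{instruction}"]         # simple ops pass through

          for uop in decode("ADD [mem], reg"):
              print(uop)
          ```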

        • bufalo1973 · 2 upvotes · 8 months ago

          The Motorola 6800 came out in 1974 and the MOS 6502 in 1975, so they were essentially contemporaries.

    • HakFoo@lemmy.sdf.org · 2 upvotes · 8 months ago

      ARM was designed because the 6502 was approaching end of viability, and Acorn (the maker of the BBC Microcomputer) needed a next-gen product. At the time, RISC was the trendy thing, and I suspect the 286 and 68000 were too expensive to adapt for their products; they weren’t pushing £5000+ workstations like IBM or Unix vendors.

      It was light and small because they had a small team; low power was a happy accident.

  • j4k3@lemmy.world · 6 upvotes · 8 months ago

    Not really an easy thing to describe in ELI5.

    The PC started out in an era when documented hardware, and specifically second-sourcing of hardware, was important. It was fully documented from the start. Fully documented actually means you can fully own the device: there is no software deprecation mechanism and no ulterior motive where someone can spy on you in the background. It is more complicated now, because some parts of x86 are undocumented too, but it isn’t abused like other architectures.

    ARM is a proprietary IP and chip-design firm. They don’t really have anything to do with this stage, but they are proprietary and are set up to support others that are proprietary as well. You can get assembly-language documentation for the base ARM architecture, but you still won’t know all the exact implementation details and peripheral device blocks on the die.

    Google took open source software, Linux, and prepared it so that manufacturers could add their hardware support (kernel modules, i.e. drivers) at the last possible minute as binaries only. The result is sometimes called an orphan kernel. While the majority of the software on the device is open source, none of the source code for these kernel modules is. This is the deprecation mechanism used to steal ownership: no one can ever update that orphan kernel without the source code for the specific kernel modules needed to run the device. Sometimes you’ll find a device supported by custom ROMs long after the device has been dropped; generally that means someone is doing the enormous task of backporting changes and security patches from the present all the way back to the state of the old kernel at the time the last binaries were compiled against it.
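
    A toy illustration of why a binary-only module pins the whole kernel (loosely modeled on Linux’s vermagic check; the strings and logic below are simplified for the example):

    ```python
    # Simplified sketch: a module records the kernel version/ABI it was
    # built against, and the loader refuses anything else. Loosely based
    # on Linux's vermagic check; details are made up for illustration.
    def try_load(module_vermagic: str, running_kernel: str) -> None:
        if module_vermagic != running_kernel:
            print(f"refusing to load: built for {module_vermagic!r}, "
                  f"running {running_kernel!r}")
        else:
            print("module loaded")

    vendor_blob = "4.19.113 SMP preempt mod_unload aarch64"
    try_load(vendor_blob, "4.19.113 SMP preempt mod_unload aarch64")  # loads
    try_load(vendor_blob, "6.6.30 SMP preempt mod_unload aarch64")    # never will
    ```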

    The alternative is to merge the source code into the mainline kernel. Once this is done, the community is likely to maintain the kernel modules for a very long time, like decades. Every phone is a little bit different, so reverse engineering one does nothing for the next.

    There is more to it still. From the flip side, chip fabs are the most expensive commercial human endeavor in history. They require an enormous up-front investment, and your devices largely fund that endeavor. This is a major part of world economic growth. The USA was a military-spending-driven economy until the 1960s; the reason large-scale conflict largely ended for the USA is the shift to venture capital, and that shift happened in the 1960s because of Silicon Valley.

    So it is a balance between economic growth and the fundamental human right of ownership, along with your awareness and expectations in this area. If you do not recognize, or care, that you’ve lost ownership over your property, the concept of democracy weakens substantially. You’ve lost autonomy, and that can feel wrong.

  • otp@sh.itjust.works · 4 upvotes · 8 months ago

    Smartphone CPUs are designed to be small and cool.

    PC CPUs are designed to be powerful. Power means bigger and hotter.

    The two goals are at odds.

    If you want a better gaming experience in a smaller form factor, you need something like a Steam Deck. Or a laptop.

    It’s not that developers don’t know how to make mobile games. It’s that the games we want to play tend to need a lot of power, and mobile CPUs (and devices) can have trouble providing that.

    If you want to play old games (from when desktop CPUs were less powerful than modern phones’ CPUs now), then you’ll want to look into emulators.

  • TootSweet@lemmy.world · 2 upvotes · 8 months ago (edited)

    You really think most users of x86_64 machines today aren’t being shafted by Microsoft and various other software vendors just like users of smartphones?

    Meanwhile, a certain percentage of smartphone users go out of their way to run things like LineageOS and GrapheneOS and thus aren’t shafted (as much?) by the software vendors.

    All that to say I’m not sure the two worlds are as different as you seem to think.

    And, honestly, I’m ignoring the mention of gaming in your original post. I’m kind of ambivalent and unknowledgeable about that topic. All I know is that I’m very selective about what games I allow to run on my general-purpose computing devices. And on my consoles, I take measures to run games “in jail.” I don’t let my Nintendo Switch connect to the Wi-Fi except on rare occasions, and then I only let it connect long enough to accomplish what I need.