• dual_sport_dork 🐧🗡️@lemmy.world · +135/−1 · 9 months ago

    Well, here’s another example of the level tech journalism has sunk to.

    163-inch 4K Micro-LED television that one home theater expert described as “tall as Darth Vader.” Each of the TV’s 8.3 million pixels is an independent, miniscule LED, a feat for which TCL charges over $100,000.

    But here’s the real surprise: TCL’s new TV isn’t the most pixel-dense or exotic display ever produced.

    No fucking shit, Sherlock. It is trivial these days to buy a laptop with a much smaller screen but exactly the same 3840×2160 = 8,294,400 pixels on it. Smaller screen, same number of pixels, higher pixel density. The Sony Xperia Z5 Premium is a phone with that same pixel count.

    Duh…?
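The arithmetic checks out, and the geometry is one line of code: pixel density is the diagonal length in pixels divided by the diagonal in inches. A quick sketch (the laptop and phone screen sizes are commonly quoted figures, not from the article):

```python
import math

def ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal length in pixels divided by diagonal in inches."""
    return math.hypot(h_px, v_px) / diagonal_in

# The same 3840x2160 = 8,294,400 pixels on three very different diagonals:
print(round(ppi(3840, 2160, 163.0), 1))  # TCL's 163" TV:          ~27 PPI
print(round(ppi(3840, 2160, 15.6), 1))   # a 15.6" 4K laptop:      ~282 PPI
print(round(ppi(3840, 2160, 5.5), 1))    # 5.5" Xperia Z5 Premium: ~801 PPI
```

Same pixel count, a tenth of the diagonal, ten times the density.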

    The Vision Pro is wireless out of the box, but it’s somewhat heavy, struggles with meager battery life which, and can’t match the fidelity of Varjo or Pimax headsets.

    Apparently nobody proofreads or does any copy editing anymore, either. Or maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

    • Marcbmann@lemmy.world · +31 · 9 months ago

      It’s definitely written by someone who’s never used a VR headset. It only takes a second to realize that these screens are nowhere near the resolution of your eye. Ya know, ’cause small text that would be easily readable on my phone is blurry as fuck on a VR headset.

      • Tarquinn2049@lemmy.world · +22 · 9 months ago

        I can see someone who last tried VR 10 years ago putting on an Apple Vision Pro and being shocked that the resolution is so high, only to be informed it’s a modest increase over other current headsets, and that they’re all pretty clear now. But really, they should know that if it were anywhere near “retina resolution,” Apple would have been all over making that claim.

    • gullible@fedia.io · +10 · 9 months ago

      I’m a bit surprised at the IEEE hosting nontechnical articles. How long have they published “news” in this capacity? Archive.org suggests 2021, but it may have been earlier. It seems a poor decision for an ostensibly professional website to branch out like this. God, I hope .gov sites never start hosting blogspam.

    • KairuByte@lemmy.dbzer0.com · +15/−10 · 9 months ago

      maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

      I don’t understand why so many people assume an LLM would make glaring errors like this…

      • drislands@lemmy.world · +19/−6 · 9 months ago

        …because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.

          • KairuByte@lemmy.dbzer0.com · +34/−5 · edited · 9 months ago

          They make glaring errors in logic, and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the excerpt highlighted.

            • drislands@lemmy.world · +4 · 9 months ago

            Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

            • Garbanzo@lemmy.world · +2 · 9 months ago

            I’m imagining that the first output didn’t cover everything they wanted so they tweaked it and pasted the results together and fucked it up.

            • GlitterInfection@lemmy.world · +1 · 9 months ago

            Pretty soon glaring errors like this will be the only way to identify human vs LLM writing.

            Then soon after that the LLMs will start producing glaring grammatical errors to match the humans.

    • NightAuthor@lemmy.world · +6/−2 · 9 months ago

      ChatGPT writes much better than this article; here’s its response to your comment as an example:

      Your frustration with the state of tech journalism is understandable, especially when it comes to covering high-end technology like the 163-inch 4K Micro-LED television by TCL. It’s true that on a surface level, comparing the pixel density of large screens to that of smaller devices like laptops or smartphones can seem straightforward. However, the engineering and design challenges involved in scaling up screen technology while maintaining image quality cannot be understated. TCL’s achievement lies not just in the number of pixels but in creating a large-scale display that maintains high-quality imagery, which is a different kind of technological feat compared to miniaturization.

      Regarding the editorial errors and the critique of the writing quality, it’s clear that tech journalism, like many fields, faces challenges in maintaining high standards amid the fast pace of technology news and the pressure to publish quickly. While it’s disappointing to see, it highlights the ongoing need for rigorous proofreading and editorial oversight in publishing. However, attributing these issues to automation or outsourcing without evidence might not fully capture the complexities and pressures faced by publishers today. It’s crucial for the industry to address these issues to maintain credibility and provide the insightful, accurate tech coverage that readers deserve.

      • Hadriscus@lemm.ee · +8/−1 · edited · 9 months ago

        Yeah but I can’t trust that at all. It may very well be complete bullshit, it just happens to be composed in a way that appears meaningful.

        Remember what the sources are: you, me, marketing talk from product webpages… certainly not the brain of a display engineer at Sony.

  • BleatingZombie@lemmy.world · +69/−2 · edited · 9 months ago

    “Did you know that the human eye only sees in 720p at 30fps? Your computer isn’t better than my console” /s

  • GenderNeutralBro@lemmy.sdf.org · +44 · 9 months ago

    Oh great, another round of nonsense about the limits of human vision, peddled by A) companies trying to trick you into thinking their products are great, B) fools trying to cope with their buyer’s remorse and envy, and C) people with not-so-great eyesight who, for some reason, think that’s inconceivable.

    We are nowhere near the limits of human visual acuity. It is trivial to prove this by experiment.
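One such experiment is pure arithmetic: take the common 20/20 rule of thumb that the eye resolves about 1 arcminute, and compute the distance beyond which a display's pixels stop being distinguishable. (A rough sketch; the 1-arcminute figure is a simplification and real acuity varies between people.)

```python
import math

ARCMIN = math.radians(1 / 60)  # 1 arcminute, the classic 20/20 acuity figure

def retina_distance_in(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Viewing distance (inches) beyond which one pixel subtends < 1 arcminute."""
    pixel_pitch = diagonal_in / math.hypot(h_px, v_px)  # inches per pixel
    return pixel_pitch / math.tan(ARCMIN)

# A 27" 4K monitor only crosses the 1-arcminute threshold past ~21 inches --
# sit any closer and the pixel grid is, in principle, resolvable.
print(round(retina_distance_in(3840, 2160, 27.0)))  # ~21
```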

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · +37/−2 · 9 months ago

    It’s the framerate and response lag that are going to make it a motion-sickness machine for folks like me.

    And sadly, it gets worse as I age, so VR is running a losing race.

    • faethon@lemmy.world · +5/−1 · 9 months ago

      We have to speed up technology so that it outpaces us humans getting older!

  • QuarterSwede@lemmy.world · +14/−1 · 9 months ago

    PPD (pixels per degree) is really the big deal here, with the eye so close and foveated rendering in use. I was curious whether they’d mention the limit of the human eye’s PPD, but I didn’t see it. Otherwise, a good article on the technology.
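For anyone curious, PPD is just pixels across divided by the field of view they are spread over (ignoring lens distortion, which concentrates pixels toward the center). The headset numbers below are ballpark spec-sheet figures for illustration, not values from the article:

```python
def ppd(horizontal_px: int, horizontal_fov_deg: float) -> float:
    """Rough pixels per degree, assuming pixels are spread evenly over the FOV."""
    return horizontal_px / horizontal_fov_deg

# Ballpark figures -- per-eye resolution and usable FOV vary by source:
print(round(ppd(2064, 110), 1))  # Quest-3-class panel:   ~19 PPD
print(round(ppd(3680, 100), 1))  # Vision-Pro-class:      ~37 PPD
# versus ~60 PPD for the 1-arcminute 20/20 benchmark
```

Even flagship headsets sit well below the 1-arcminute line, which is why small text stays blurry.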

    • qjkxbmwvz@startrek.website · +9 · 9 months ago

      I’m curious what this actually is. Yes, we can see under moonlight and also at noon in the tropics, but not at the same time. It’s somewhat akin to the dynamic range of a camera: an 8-bit B&W camera has a gigantic dynamic range if you allow the shutter, aperture, and gain settings to be adjusted.

      In other words, while the dynamic range of my eye over the course of an hour is maybe 60dB*, there is no way I can use that dynamic range in a single scene/“image”.

      *Just a guess from sunlight at ~1kW/m^2 to moonlight at roughly one millionth of that (super hand wavy I know).
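The 60 dB guess is easy to check: express the luminance ratio in decibels and, for comparison, in photographic stops.

```python
import math

def dynamic_range(l_max: float, l_min: float) -> tuple[float, float]:
    """A luminance ratio expressed as decibels (10*log10) and stops (log2)."""
    ratio = l_max / l_min
    return 10 * math.log10(ratio), math.log2(ratio)

# Noon sun (~1 kW/m^2) vs. moonlight (~a millionth of that):
db, stops = dynamic_range(1.0, 1e-6)
print(f"{db:.0f} dB, {stops:.1f} stops")  # 60 dB, ~19.9 stops
```

That matches the hand-wavy 60 dB: about 20 stops, far more than any single exposure, or single scene, can use at once.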

  • vinyl@lemmy.world · +3 · 9 months ago

    I’m not sure if this is entirely true, but I think one YouTuber somehow calculated that each eye is equivalent to ~500 megapixels.
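For what it's worth, the usual back-of-envelope version of that calculation goes like this, and the assumptions do all the work (the figures below are rough textbook numbers, not measurements):

```python
# Naive model: foveal acuity (~60 "pixels"/degree) across the whole visual
# field. Real acuity collapses outside the fovea, so even this overstates
# things; the ~500 MP claims usually also fold in eye movement and scanning.
FOV_H_DEG, FOV_V_DEG = 160, 135   # rough monocular field of view, degrees
PEAK_PPD = 60                     # ~1 arcminute per "pixel"

naive_mp = (FOV_H_DEG * PEAK_PPD) * (FOV_V_DEG * PEAK_PPD) / 1e6
print(f"{naive_mp:.0f} MP")  # ~78 MP -- already generous, and well under 500
```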

  • mindbleach@sh.itjust.works · +1/−1 · 9 months ago

    Resolution is the least interesting figure, yet it’s the first thing reviews harp on and the hardest thing manufacturers push.

    For presence, latency is what matters.

    For immersion, FOV is what matters.

    For adoption, cost is what matters.

    I maintain that some absolute toy is what’s gonna break the market open. Dirt cheap, immediate, convenient, and with static specs that make current owners scoff. It can have potato graphics so long as it feels rock-solid. (And doesn’t make you sign in to a computer that’s strapped to your goddamn face.)

    The trick is gonna be intermediate representation. We’re still using direct raster to bitmaps, from software. This is quite frankly insane. It’s a misunderstanding of why we have bitmaps. The refresh rate of old monitors had to be kept precise or else things got fucky. Generating pixels on-the-fly worked, but it was limited by hardware speeds. Showing a big dumb array of pixels instead simplified the technology and decoupled display from rendering.

    But VR displays don’t need to refresh the same pixels every fraction of a second - they need to refresh the same scene every fraction of a second. The same surfaces should stay put while you move your head, even if updating those surfaces takes a moment. The modern equivalent of a simple video card reading off a big dumb array of pixels would be a big dumb array of colored balls floating in open space. The further, the bigger. If some very simple technology can reliably render that at 200 Hz, then it doesn’t matter how long a game needs to update all those balls.
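What's described here is essentially asynchronous reprojection taken to its logical end: a display loop re-presents the last known scene at a fixed high rate, no matter how slowly the game updates it. A toy sketch of that decoupling (all names and timings are made up for illustration, not real VR code):

```python
import threading
import time

scene = {"version": 0}  # stand-in for the "big dumb array of colored balls"
lock = threading.Lock()

def game_update(n_frames: int, frame_time_s: float) -> None:
    """Slow producer: rebuilds the scene whenever it finishes a frame."""
    for v in range(1, n_frames + 1):
        time.sleep(frame_time_s)  # pretend rendering takes this long
        with lock:
            scene["version"] = v

def display_refresh(hz: float, duration_s: float) -> int:
    """Fast consumer: re-presents whatever scene exists, at its own fixed rate."""
    frames = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        with lock:
            _ = scene["version"]  # reproject the same scene from the new head pose
        frames += 1
        time.sleep(1 / hz)
    return frames

producer = threading.Thread(target=game_update, args=(3, 0.05))
producer.start()
shown = display_refresh(200.0, 0.2)  # display keeps its own cadence
producer.join()
# The display presented far more frames than the game produced scene updates.
```

The point is the inversion: smoothness becomes a property of the presenter, not the renderer, much like today's timewarp/reprojection stages, just pushed all the way down toward the display.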