• 0 Posts
  • 16 Comments
Joined 11 months ago
Cake day: October 27th, 2023

  • So you are talking about turning off the monitor with its own push-button?

    This is complicated. HDMI and DP are designed differently, and HDMI is far “dumber”. So it really depends on how the monitor behaves when you turn it off: that can range from the GPU not being able to tell it is off at all, to the monitor appearing to the GPU as if the HDMI cable had been unplugged.

    Since you have DP into your GPU, the USB-C hub needs to translate whatever it can tell about the HDMI monitor into DP signaling toward the GPU.

    I can tell you that the default behavior for Windows is to switch to the last active audio device and monitor configuration for the current set of attached monitors (I do not actually know what exactly Windows ties audio devices to). So the first time you attach a new audio sink, Windows will switch to it. If it was the active output when you disconnected it, Windows should switch back to it when you plug it back in. In fact, Intel and Nvidia drivers under Windows have a habit of declaring the same monitor as a new audio device far too often, so more often than not Windows switches to it as the output.


  • rayddit519@alien.top to Framework@hardware.watch · AMD vs Intel · 10 months ago

    AMD is more energy efficient (definitely under load, not sure about idle and near-idle), but has significantly worse IO, especially for a device with only 4 USB-C ports that are supposed to be as universal as possible. Much of that universality is lost with the AMD variant, and there are 3 different types of ports with different degrees of universality/capabilities.

    Plus, AMD has a history of firmware and drivers not being quite ready and stable at launch. But you could argue that the post-launch firmware update was already a major step toward stability, and it is a mobile platform with limited modularity, so there are not tons of different variants with untested or unstable combinations yet to be discovered.

    You’ll have to decide whether IO or performance / energy efficiency is more important.

    Both Intel and AMD advertise new display output capabilities (DP UHBR10 and faster) for their CPUs, but to my knowledge FW did not implement this in either variant (and I am not sure it would even be possible, given that no other device so far has claimed support for it). So the IO differences seem to be rooted mostly in the number of display-capable outputs, the number of USB4 ports and the display support on those USB4 ports.


  • I’d think you’d have a very hard time getting anything other than anecdotal reports about this specific topic, so that is totally fine. As I have no AMD iGPU available to test, I have no idea how common that kind of problem is.

    But it sounds a lot like either the AMD Linux driver or the Steam Big Picture implementation for Linux is doing something wrong. I believe I explained in other posts that I would not expect it to directly impact performance, but any program that treats this like a dGPU and tries to manage memory at a low level will probably handle things very wrongly when only a tiny amount of memory is allocated.

    And while I have no deep knowledge of GPU APIs, which API is used should greatly influence how likely it is for an application to bake in behavior that only works for dGPUs or for high amounts of UMA.




  • It launched before DSC was popular or available from most hosts. But the MST-Hub is very much capable of DSC, and with current firmware it has no problems with it.

    In fact the WD22TB4, which officially does support DSC, uses the exact same chip on the exact same firmware (the big part of the dock is identical across all WD19 and WD22 docks, apart from the missing audio ports). You can even upgrade a WD19TB to a WD22TB4 by replacing only the TB controller; the MST-Hub remains the same.

    Edit: have I mentioned I own 2 WD19TB?



  • The picture of the outer box looks abused, sure, but nothing I would not expect the box to withstand. Unless that puncture wound actually goes through and dented the device somewhere.

    Regarding the popping sound: this reminds me of something I experienced when upgrading my hinges. I can’t be sure from the video alone, so just for reference:

    The hinges each have 3 holes toward the notebook base. Only 2 of them are screwed down. The 3rd one, the one closest to the actual hinge and in the place where it has to endure the most force when you try to close the opened device, is just a hole with a bottom case screw poking through.

    So especially after I upgraded to the 2nd gen hinges, you could actually see the bottom part of the hinges bending whenever I tried to close the lid. And the screw poking through that hole sometimes got caught on the bending hinge assembly, causing a very similar sound and jerk.

    If you screw the keyboard cover down, it is pulled through those 3rd holes against the hinge, sandwiching the hinge against the bottom case. That stopped the hinge from bending in that place and stopped the lid from making that sound. Is that maybe what is affecting your device as well?


  • We did not go over DSC. If the GPU supports DSC, each stream for a specific monitor inside an MST connection can be individually compressed. And MST-Hubs, like the 3-port VMM53xx hub inside your dock, can decompress that into a DP SST connection on every output.

    This allows you to achieve a total bandwidth across all displays on the MST-Hub of roughly 250% of the raw bandwidth of the available DP connection to the host. For example, you can typically get to 2x 4K60 + 1x 4K30 on a 2x HBR3 DP Alt mode connection (i.e. half a modern DP connection).
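
    As a rough sanity check of those numbers, a back-of-the-envelope sketch (the figures are approximate, and the 2.5x DSC ratio is just an assumption):

    ```python
    # Back-of-the-envelope DP bandwidth math (approximate / assumed values).

    # HBR3: 8.1 Gbit/s per lane raw; 8b/10b encoding leaves ~80% for payload.
    hbr3_lane_payload = 8.1 * 0.8            # ~6.48 Gbit/s per lane

    # Half a DP Alt mode connection = 2 lanes (the other 2 pairs carry USB3).
    link_2lane = 2 * hbr3_lane_payload       # ~12.96 Gbit/s

    # 4K60, 8 bpc RGB, reduced blanking: ~533 MHz pixel clock * 24 bpp.
    uncompressed_4k60 = 533e6 * 24 / 1e9     # ~12.8 Gbit/s

    # DSC typically compresses to roughly 1/2 .. 1/3 of that (2.5x assumed here).
    compressed_4k60 = uncompressed_4k60 / 2.5

    print(f"2-lane HBR3 payload:   {link_2lane:.1f} Gbit/s")
    print(f"4K60 uncompressed:     {uncompressed_4k60:.1f} Gbit/s")
    print(f"4K60 with DSC (~2.5x): {compressed_4k60:.1f} Gbit/s")
    print(f"4K60 streams that fit: {link_2lane / compressed_4k60:.1f}")
    ```

    Which lands right around the 2x 4K60 + 1x 4K30 from above.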


  • AMD gimped the usb4 controller and only supports ONE attached display per USB-C ports

    More precisely: 1 DP connection per USB4 port, to stay consistent with the other cases discussed here.

    MST-enabled docks work. To my knowledge, they’re fully fledged computers

    No, they are just similar to USB hubs: they can redirect parts of the incoming data to specific outputs. Typically, the host’s GPU is more limited in the number of MST streams through a single DP connection, or in the total number of displays, than most MST-Hubs are.

    With intel, does it multiplex each of the two channels or it’s a different beast?

    One or both of the DP connections through USB4/TB will run in MST mode and will then be split by MST-Hubs. Both DP connections remain fully separate, just like they do without MST.

    Macbooks don’t support MST, stuck with SST, so with certain docks that supports MST, that cuts the numbers of external displays down to half + 1 mirrored.

    MST-Hubs typically have a fallback mode for SST, where they simply forward the exact incoming signal to each and every output. Which output monitor’s data they pick to present to the host, which cannot conceive of there being more than one display behind a single connection, is the MST-Hub’s business / random.

    When you have a TB4 dock that supports 4 displays WITH MST, like these insanely expensive Satechi ones, does it matter if it’s AMD or not?

    Depends on whether that dock splits 1 DP connection into 4 with 1 MST-Hub or a topology of MST-Hubs, or whether it uses both DP tunnels, each with a 2-port MST-Hub.

    Typically it is the latter, because then Apple hosts can still use 2 outputs, as long as they are on separate DP connections. If all outputs come off a single incoming DP connection, Apple would only recognize a single display.

    does MST needs the two DisplayPort channels to output on 4 different monitors

    DP MST technically supports up to 63 separate streams wrapped into one DP connection. It does not support combining multiple DP connections. Typical GPUs, though, have a limit of around 4 displays in total. And currently available MST-Hubs, like the very popular Synaptics VMM53xx series in HP, Lenovo and Dell TB and USB-C docks, support up to 12 streams, max. 4 on each of the 3 possible outputs.

    MST-Hubs can be chained, like the Lenovo TB4 dock does in order to split one of those 3 outputs into 2, for a total of 4 outputs that work even on AMD USB4 hosts, or even on DP Alt mode only hosts.
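
    Just to picture the chaining, a toy sketch (made-up class, not real DP MST code; the topology mirrors the Lenovo example):

    ```python
    # Toy model of chaining MST-Hubs (illustration only, not real DP MST code).
    from dataclasses import dataclass, field

    @dataclass
    class MstHub:
        ports: int                                    # downstream DP outputs on this hub
        children: dict = field(default_factory=dict)  # port index -> another MstHub

        def displays(self) -> int:
            """Count monitor-facing outputs reachable from this hub."""
            total = 0
            for port in range(self.ports):
                child = self.children.get(port)
                total += child.displays() if child else 1
            return total

    # Lenovo-style TB4 dock: a 3-port hub where one output is split again by a 2-port hub.
    root = MstHub(ports=3, children={2: MstHub(ports=2)})
    print(root.displays())   # 4 outputs from a single incoming DP connection
    ```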


  • Mostly my personal points, but also some I do not personally care about. And only the criticisms:

    • While FW goes further in Linux support than most manufacturers, I would not say they are sacrificing Windows functionality in order to increase Linux compatibility or the open-source-ness of the device (I like it that way; users mainly on Linux may not)
    • The modularity of the outputs, and the modularity in general, also brings negatives in terms of power consumption, performance and compatibility (additional adapters are involved for the outputs; while that should not be a problem, it still makes things more complex and by its nature rules out certain things, like the functionality of native HDMI outputs, DP++, or power-efficient USB-A outputs). Early problems with power efficiency have been improved. It remains an open question for me how much of the remaining difference versus competitors is simply the result of a not very specialized product (it can run as a desktop, for example), of the modularity, or of design experience. For example, my device seems to wake up from Modern Standby much more frequently than other Intel devices I have seen, causing higher sleep power consumption than seems necessary.
    • a particular problem of the system design: the device does not power up from hibernation when the lid is opened (unlike when power is plugged in). That is kind of needed when you lift the keyboard for disassembly, but it is far less convenient on Modern Standby devices that automatically switch from suspend to hibernation dynamically.
    • the fan grumbles at the lowest speeds (you can basically hear the motor in a really quiet room) and fan control has an audible step at those speeds that just pisses me off; staying at higher speeds would be better. This is less of a problem the more power efficient the CPU is, but Intel CPUs seem to output enough heat in power saving modes and on the desktop to necessitate running the fan at least at the lowest speed, so it is rarely completely off.
    • The particular way the outputs are modular takes up a lot of space, which limits the space the laptop has for other components
    • still playing catch-up with other manufacturers’ features (small points I would not have expected in early devices or at launch, but that could be available as upgrades):
      • HDR screen
      • auto-brightness for the keyboard backlight instead of having to adjust it manually, and an auto-timeout so it does not stay on forever, for example when watching a video
      • the BIOS supervisor PW does not apply to boot-order changes / the boot menu, unlike EVERY other device I have owned. I’d consider this a security issue
      • no option to disable automatic booting of any BootROM behind USB4/TB
      • no ReBar support
    • either unwillingness or inability to provide software updates (firmware, BIOS) in any acceptable amount of time, at least for older products. This includes some issues officially announced as security issues that have been outstanding for almost a year now. They say they are improving and not silently dropping support for older generations, but that improvement cannot be observed yet, and the plans have not been detailed enough for me to trust in the improvement before I see it. What they have stated makes it seem like they knew they did not have the resources to do software support for more than 1 device, if at all. Who knows if the current plan will actually add enough resources to support all the generations still being sold (which is still all of them)
    • It remains to be seen how much of the stated goal of producing longer-lasting devices can be achieved if there is no way to upgrade the software over time, when the hardware is technically capable of it, without replacing the entire mainboard, the most expensive part, with a newer version. Their board design shows FW trying to think ahead about a lot of future possibilities (non-notebook use on limited power, touchscreen support etc.). But I think a longer-lasting device can only reach its full potential with ongoing software support, including some software feature additions like the ones mentioned above. While I think they have a good record of making revised hardware available that fixes flaws / disadvantages compared to competitors, like the hinges, speakers and more rigid lid, for what I think are fair prices, they have not done any of that for the BIOS/software. For example, the simply nice-to-have BIOS GUI is tied to the 13th gen FW board and newer, requiring an upgrade of the entire board (just an easy example, not something I care about; I care much more about the software points mentioned above). They have stated that they do not want to ship software feature upgrades. That, together with the questionable ability to even ship security updates, makes me estimate how long I am willing to stay on one FW device significantly lower than I initially hoped.

  • The issue was basically not that those cards draw extra power. It is that the mainboard and some controllers on it draw extra power when the card is present. To address the parts that could not be fixed in software on the Intel boards, the DP and HDMI cards were enhanced to virtually “unplug” themselves, so that the board cannot react to them by wasting power.

    The USB-A card seems to cause some additional power consumption in standby just by the nature of USB-A, and the rest was a specific problem with the AMD board and some of the controllers (some ports) on it.

    The software fixes for the Intel boards have not rolled out to all generations, because Framework seems to have severe problems shipping firmware updates for anything other than the newest generation that they are currently trying to sell to customers. And the 11th gen boards may not have gotten all the fixes; I am a bit out of the loop there. 13th gen is new enough to apparently have launched with all the software fixes.


  • Is it even confirmed that AMD supports 5600 CL40? Especially in notebooks without any overclocking features, you have to expect that everything follows AMD’s spec exactly. Not like desktop mainboards, which overclock by default, even if it causes instability, and will just try whatever settings the memory indicates as supported.

    Secondly, GPUs are typically less sensitive to latency and much more sensitive to bandwidth, because they do more batched and pipelined processing, where there are fewer surprising and truly random accesses, but a lot of them in parallel.

    The CPU side of things is the one more sensitive to latency. For CPUs, bandwidth, once there is a decent amount of it, only becomes relevant in very specialized workloads.
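
    Rough numbers to put the CL difference in perspective (approximate; exact figures depend on the module and platform):

    ```python
    # Rough DDR5 SO-DIMM math (approximate values).

    def bandwidth_gbs(mt_per_s, channels=2, bus_bits=64):
        """Peak transfer rate: transfers/s * bytes per transfer * channels."""
        return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

    def cas_ns(cl, mt_per_s):
        """CAS latency in ns: CL clock cycles at half the transfer rate (DDR)."""
        return cl / (mt_per_s / 2) * 1e3

    print(f"DDR5-5600 dual channel: {bandwidth_gbs(5600):.1f} GB/s peak")
    print(f"DDR5-5600 CL40: {cas_ns(40, 5600):.1f} ns vs. CL46: {cas_ns(46, 5600):.1f} ns")
    ```

    The couple of nanoseconds of CAS latency matters mostly on the CPU side; the iGPU mostly cares about those roughly 90 GB/s, which it has to share with the CPU anyway.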


  • Have you actually measured any performance difference?

    According to my understanding, there should be no performance difference at all (the hardware should not care). If there is one, it would be caused by games and applications treating the iGPU wrongly, as if it were a dGPU, and hence using that value in wrong ways that improve or worsen performance.



  • Intel supports routing all the available display pipelines through a single DP MST connection (any DP output). While AMD does not publicly document this themselves, they support the same, just as Nvidia does.

    Depending on the involved peripherals, you might run into bandwidth limits. 4x 4K60, for example, is possible over a single DP connection, and Lenovo even lists that as supported with the right dock for their AMD laptops.
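
    Same back-of-the-envelope math as further up, now for a full 4-lane HBR3 link (approximate figures, DSC ratio assumed):

    ```python
    # Does 4x 4K60 fit over one 4-lane HBR3 DP connection with DSC? (rough estimate)
    payload = 4 * 8.1 * 0.8                # ~25.9 Gbit/s after 8b/10b encoding
    per_4k60_dsc = 533e6 * 24 / 1e9 / 2.5  # ~5.1 Gbit/s per DSC-compressed 4K60 stream
    print(payload >= 4 * per_4k60_dsc)     # True: four such streams fit with headroom
    ```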