I am very curious and want to help make Linux more accessible.

I’ve talked with some people and gathered a few insights:

  • Everything text-based, like a reader-mode browser or a plain terminal, works best with TTS engines.
  • TTS engines involve trade-offs: some are really good but need a lot of resources, others are worse but lightweight.
  • TTS sometimes needs to be tuned for very fast speech rates to keep up with the user (see the sketch after this list).
  • Some apps are better, some are worse, but most apps probably don’t really suit blind people, as the whole GUI concept makes little sense for them.
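
As an illustration of the speed point: Speech Dispatcher’s Python bindings let a program request a much faster speech rate in a couple of lines. This is only a minimal sketch, assuming the python3-speechd bindings are installed; the rate value is just an example.

    import speechd

    # Connect to Speech Dispatcher and ask for a fast speech rate.
    client = speechd.SSIPClient('tts-demo')
    client.set_rate(60)  # the scale runs from -100 (slowest) to 100 (fastest)
    client.speak('Plain text is the easiest case for a screen reader.')
    client.close()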

I am really curious: what works best for you, braille or voice output, and for input, voice, braille, or gestures?

Which apps do you find best? How do you browse the web and find media to listen to? How do you use document editors, and what role do they play for you?

Thanks a lot!

  • Samuel Proulx@rblind.com · 1 year ago

    To respond to some of your points:

    Regularly tracking the blind Linux mailing lists and guiding newcomers on getting their systems accessible, problems with audio are actually very rare these days.

    And this, right here, is the issue. I don’t need guidance on either Windows or Mac. I turn it on, and it works. I can install either Windows or Mac, by default, with a screen reader. The recovery partitions created by both Windows and Mac support screen readers, so I don’t need to keep a thumb drive with a live image around. Windows and Mac updates don’t break audio. They don’t require me to hack around with environment variables to get audio working, or to set accessibility options.

    I have a machine right beside me that I use as a home server, running the latest Ubuntu LTS. It has standard Intel onboard audio, and when I tested just now from the TTY, even though the audio drivers seem to be installed, audio isn’t working. The only way I can use the TTY at all is with an ancient DECtalk hooked up through a USB-to-serial adapter, and even that took an hour of messing around with modprobe and other nonsense. I have no idea why; as this is a server, I don’t really care, but if it were my desktop (meaning I probably wouldn’t have sshd running) I’d have no way to fix anything. I can only test it at all because I happen to be old enough to have dusty old DECtalk hardware in the cupboard.

    Of the seven or eight Linux systems I’ve worked on, I have never once encountered one with working audio. There’s probably some special thing I have to do to enable it, but without another computer to look up what that might be, and the awareness of where and how to look, I’ll never know what it is. I’m sure you can tell me. But that’s not the point. The point is that by default, Linux systems are unusable with a screen reader unless you know exactly what to do and how to do it. How many thousands of hours did you spend figuring out how to get Linux systems working for you?

    Perhaps a more straight-forward interface wouldn’t hurt, but other than that, it works, whatever. :D

    And this kind of dismissive response to making interfaces easier is another reason why Linux won’t be used by most people. I do complex presentations for work that involve anywhere from three to five audio sources, and I need to change how they’re routed at least once or twice mid-presentation. This needs to be frictionless, as I need to do it while speaking, in less than 15 seconds. My use case isn’t that unusual, especially for blind people with screen readers presenting on Zoom or similar.

    screenreaders are applying workarounds for years to prevent soundcards from auto-sleeping,

    And these can be configured easily, usually work, and are fully supported and understood. None of that seems to be so on Linux, though as I’ve never found a Linux system with working audio, I can’t speak from direct experience.

    As a blind student using Windows, I was regularly calling my classmates to read me the screen, because I did something as simple as trying to turn my computer on.

    On Windows 11? Press Control+Windows+Enter and Narrator will launch. Even if you’re at the system recovery prompt. As long as you’re out of the BIOS, Narrator will run.

    Just compare with Windows. When was the last time you discovered an inaccessible app, you wrote the developer “Hey, your app is inaccessible, but there is this great, well-documented MSUIA thing you can implement and get it working”, and the developer was like “Awesome, thanks for your feedback!”

    On Mac? Last week. On Windows? A couple of months ago. I was able to explain to them how the interfaces in Objective-C or C# or whatever they’re using support accessibility, and point them at resources to work with it. On Linux, most things are still Qt or GTK, and neither toolkit properly supports accessibility without a bunch of hackery.

    Yes, improvements are happening. But they’re at least 10 years behind Mac and Windows, and they’re going to require the same kind of redesign of basic building blocks that Windows and Mac went through.

    • RastislavKish · 1 year ago

      And this, right here, is the issue. I don’t need guidance on either Windows or Mac. I turn it on, and it works.

      So does it on Linux. Perhaps I phrased my point poorly: by writing that I track the blind Linux mailing lists and help new users get started, I meant that I have experience with many people of different backgrounds, skillsets, setups and environments. In other words, I’m not claiming that all setups work based on just my own laptop; I’m saying that while working with various people, audio usually works out of the box, and when it doesn’t, it usually turns out to be a corrupted image or something along those lines.

      As for general guidance for newcomers, that’s something you simply need for anything you want to learn in life. You once had to learn that computers have a keyboard and speakers and how to use them, someone probably told you how the internet works, you likely figured out a lot on your own, and you also had to learn how to use a smartphone. Each platform and each device is different, and when you come to a new environment, it takes a while to get to know it. That’s completely natural and okay. To make the process easier, the community is there to answer any questions; in the same way, on blind tech mailing lists I answer questions every day from Windows users whose systems aren’t working for them.

      I do complex presentations for work that involve anywhere from three to five audio sources, and I need to change how they’re routed at least once or twice mid-presentation. This needs to be frictionless, as I need to do it while speaking, in less than 15 seconds.

      I don’t. And no matter how pretty an interface Windows may offer, if you told me “Hey, we have a presentation in 30 minutes, I need you to set these things up”, I would have absolutely no idea what to do. Actually, I don’t even think Windows supports this on its own; I remember people doing this routing magic with Virtual Audio Cable, but that’s specialized software you need to get familiar with, study, and properly install and configure. If you’re already dedicating that time and effort, what’s the difference between studying VAC and studying JACK? You may not even need something as complex as JACK; I think that’s used mostly by musicians and advanced audio engineers who need super-low latency and control over how the system routes audio chunks. For simply routing streams, PulseAudio probably already offers what you need.
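
      For instance, moving one application’s stream onto a different output device takes only a few lines with the third-party pulsectl Python bindings; this is just a rough sketch, and the application name and sink name below are placeholders:

          import pulsectl

          # Move one application’s audio stream (a “sink input”) onto another output (a “sink”).
          with pulsectl.Pulse('presentation-router') as pulse:
              sinks = {s.name: s for s in pulse.sink_list()}
              target = sinks['alsa_output.usb-headset.analog-stereo']  # placeholder sink name
              for stream in pulse.sink_input_list():
                  if stream.proplist.get('application.name') == 'Firefox':  # placeholder app name
                      pulse.sink_input_move(stream.index, target.index)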

      And these can be configured easily, usually work, and are fully supported and understood.

      Most things can be configured easily if you understand them. A learning curve is a natural part of anything you’re new to. Unfortunately, Linux isn’t taught in elementary schools, isn’t preinstalled on 99% of laptops, and isn’t the subject of 99% of tech discussions on non-tech forums. If it were, it would be common knowledge how to set up an accessible Linux installation, and the few adventurers trying out “the mythical Windows” would be the ones complaining about how many hours they spent debugging before discovering that their screen reader’s high latency was caused not by poor system responsiveness but by a sleeping soundcard. Or perhaps they wouldn’t figure it out at all, and would simply conclude that the Windows accessibility API is bad and return to the well-understood, established Linux.

      So much depends on background and social awareness that some skills seem completely natural and obvious, while they actually had to be acquired. When everyone around you does something, the path appears more straightforward and natural, because you don’t need to ask how to do things; you see everyone else doing and talking about them, and in the process of following along you don’t even realize how much you learn.

      Does Linux do everything it can to present itself to blind people? Certainly not! There is already a lot of material for sighted folks, and blind people can find some clues, but those are usually minimal, easily out of date, or even outright wrong. We could certainly do much better in this regard as a community. Currently, the best and most up-to-date support is on our community mailing lists, where new people come to ask things like “What is Linux?” and “Which distribution should I choose?”, and we help them as best we can.

      But I think it’s important to recognize that this is a communication problem, not a technical one. If we made Linux work exactly like Windows so that people wouldn’t need to learn anything new, why would they want to try a different system at all? They could just stay with Windows. Linux is awesome, Linux is cool, but it’s Linux, not Windows. That’s a feature, not a bug.

      On Windows 11? Press Control+Windows+Enter and Narrator will launch. Even if you’re at the system recovery prompt.

      Awesome. But what’s the point, if all the prompt offers me is a recovery that fails in the end, or restarting my PC? If the recovery succeeds, fair enough. But in four years of using Linux, my system has never once failed to boot, and if it did, it would likely be for a reason auto-recovery couldn’t handle anyway. I regularly get frustrated with the Windows terminal even in a standard environment, because NVDA’s flat review never works as expected there. Fixing a broken installation through a TTY using Narrator sounds like a nightmare, even if it were actually possible.

      On Mac? Last week. On Windows? A couple of months ago. I was able to explain to them how the interfaces in Objective-C or C# or whatever they’re using support accessibility, and point them at resources to work with it.

      Well, you could just as well point them to AT-SPI resources and documentation; there are Python libraries, Rust libraries and C libraries. I have reported several accessibility bugs to the Flutter devs and they implemented the necessary interfaces, so those are fixed now. As long as someone has the expertise and the people to work on these things, it doesn’t matter whether they use MSUIA or AT-SPI.
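
      To give a rough idea, the Python bindings let you walk the same accessibility tree Orca reads, in a few lines; a minimal sketch, assuming the pyatspi package is installed:

          import pyatspi

          # Every accessibility-aware application registers itself on the desktop;
          # its windows and widgets show up as children in the AT-SPI tree.
          desktop = pyatspi.Registry.getDesktop(0)
          for app in desktop:
              print(app.name)
              for window in app:
                  print('  ', window.name, window.getRoleName())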

      On Linux, most things are still Qt or GTK, and neither toolkit properly supports accessibility without a bunch of hackery.

      That’s not really true. With GTK, as long as you use the proper components, accessibility works just fine, and that’s the situation with every accessibility-aware GUI framework on every platform. Qt is a bit more complicated, since it’s notorious for inaccessibility even when used properly, but there are Qt apps that actually work really well, like KeePassXC, so again it comes down to the individual app. Many things these days are Electron or Tauri anyway, and those work very nicely with Orca, so the field is rather diverse. And even if something doesn’t work, there is usually a TUI for it, which is 100% accessible.
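
      For GTK, “using the proper components” mostly means sticking to the stock widgets and giving unlabeled ones an accessible name. A minimal GTK 3 / PyGObject sketch (the icon name and label here are just examples):

          import gi
          gi.require_version('Gtk', '3.0')
          from gi.repository import Gtk

          win = Gtk.Window(title='Accessible demo')
          # An icon-only button: without a name, a screen reader would only announce “button”.
          button = Gtk.Button.new_from_icon_name('window-close', Gtk.IconSize.BUTTON)
          button.get_accessible().set_name('Close window')
          win.add(button)
          win.connect('destroy', Gtk.main_quit)
          win.show_all()
          Gtk.main()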