I wish there were a screen reader I could use on Linux. Of course I knew Orca existed, but getting it to work turns out to be a hassle. Maybe even impossible.
I remember that on Arch Linux, I somehow got Orca to work, but maybe I was using PulseAudio at the time, or something like that.
And now that I'm on Gentoo Linux, Orca has stopped working. The furthest I ever got was hearing "screen reader on", and then silence. I had passed ALSAPCM=jack to Orca to force it onto JACK to match my setup, and while that did make it speak at all, "screen reader on" was still everything it had to say.
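If you want to try the ALSA-into-JACK route yourself, the usual trick is the jack PCM plugin from alsa-plugins. Here's a minimal sketch of an ~/.asoundrc, assuming the stock JACK port names (adjust them to your setup):

    # ~/.asoundrc: route the ALSA default device through JACK
    # (needs the "jack" plugin from alsa-plugins)
    pcm.!default {
        type plug
        slave { pcm "jack" }
    }

    pcm.jack {
        type jack
        playback_ports {
            0 system:playback_1
            1 system:playback_2
        }
        capture_ports {
            0 system:capture_1
            1 system:capture_2
        }
    }

With this in place, anything playing through the ALSA default device comes out of JACK. Which still didn't make Orca say more than its one line.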
So basically, Orca is a non-functioning piece of shit. Even with PulseAudio it doesn't work anymore. My WM is i3, if that matters, but I remember it working with i3 on Arch.
IBM discontinued LSR long ago, so Orca is the only screen reader we're left with. This is even worse than Windows. You don't want an organization like GNOME to have a monopoly on anything, because they will make sure it is broken. You know what happened to GTK: they keep breaking the API over and over (GTK3, GTK4) and keep turning it into a smartphone UI with big-ass buttons so your grandma can tap them with her big fat jittery fingers. Literally no one wants this ON A COMPUTER! But GNOME does it anyway. RedHat-style: move fast and break things.
And with GNOME developing Orca, they decided to make it work with PulseAudio only, meaning JACK users like me are left out. But that's not even the worst thing.
Most GUI programs have fucking shit a11y. "How?" you might ask. My first guess would be ignorance, and while that's probably part of it, there's another reason that's actually worse. I wanted to make my own GUI program accessible, so I dug into GNOME's AT-SPI and ATK APIs, as well as Microsoft's MSAA and IAccessible2 APIs, and what did I find? A kludge of awful documentation. On GNOME's side it was all Doxygen-generated; on Microsoft's side, nothing but confusing, nonsensical MSDN pages. No examples at all, from either side! No fucking wonder shit is not accessible: M$ and GNOME don't give a fuck.
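Just to prove a point, here's roughly the example I wanted and never found. A minimal GTK3 sketch, assuming GTK3 where gtk_widget_get_accessible() still exists; the button label and description strings are made up for illustration:

    /* label a button for AT-SPI via ATK; build with:
     *   cc atk-example.c $(pkg-config --cflags --libs gtk+-3.0)
     */
    #include <gtk/gtk.h>

    int main(int argc, char **argv)
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *button = gtk_button_new_with_label("Play");

        /* every GTK widget exposes an AtkObject; a screen reader
         * announces whatever name/description we hang on it */
        AtkObject *acc = gtk_widget_get_accessible(button);
        atk_object_set_name(acc, "Play");
        atk_object_set_description(acc, "Starts playback of the selected track");

        gtk_container_add(GTK_CONTAINER(window), button);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }

That's it. Two ATK calls. The fact that nobody documents even this much tells you everything.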
So even if you want to make your programs accessible, the learning curve is unacceptably steep because the documentation is so poor. And that's just the low-level APIs. Over the years, GUI toolkits like GTK and Qt have seen massive adoption, and it's these toolkits that talk to the low-level accessibility APIs. You know what that means?
Yep. We're all dependent on these guys, and programs still end up inaccessible even when they're built on either toolkit. It's a real fucking mess. I don't even like either toolkit: GTK is fucked, and while GTK2 might be okay, it's not really cross-platform enough (no a11y on Windows). Qt is written in C++, and since I'm mostly a C programmer, I avoid C++ libraries.
I've been thinking, and I feel like textual environments make the most sense for blind people. There's no point to a GUI. That said, a GUI does give you a tabbable UI with a DOM-like tree of elements, which some blind people seem to prefer over a CLI program. But CLI programs will always be the easiest to make accessible, since there's far less kludge to deal with. As long as you avoid putting multiple things on the screen at once and abusing ncurses, you'll be fine. For inspiration, enable Links's braille mode.
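Here's a contrived little sketch of the kind of program I mean, in plain C: print a line, read a line, repeat. No cursor addressing, no redrawing, so a VT screen reader (or a braille display on brltty) just picks up each new line as it appears. The track names are obviously made up:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];

        puts("track list:");
        puts("  1. intro");
        puts("  2. outro");

        for (;;) {
            fputs("play which track (q to quit)? ", stdout);
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))
                break;
            line[strcspn(line, "\n")] = '\0';  /* strip the newline */
            if (strcmp(line, "q") == 0)
                break;
            printf("playing track %s\n", line);
        }
        return 0;
    }

Boring? Absolutely. But boring is exactly what reads well.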
I tried running brltty in a VT, but I never got speech to work; even with plain ALSA there was no output. I did borrow a friend's refreshable braille display to try with brltty, and that worked, so if you've got a braille display you're good to go. However, these things are god-awful expensive! They easily cost 4000 euros.
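If you want to debug the speech side yourself, testing each layer separately is probably the sane order. A rough sketch, assuming eSpeak as the synthesizer (brltty's -s flag picks the speech driver; "es" is its eSpeak driver code):

    # 1. does ALSA play anything at all?
    speaker-test -c 2 -t wav

    # 2. does the synthesizer work on its own?
    espeak "hello world"

    # 3. only then bring brltty into it (-n keeps it in the foreground)
    brltty -s es -n

If step 1 or 2 already fails, brltty never stood a chance.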