I notice this so much in companies, big companies, who continuously have to justify their accessibility teams. "A screen reader isn't just for blind people, people who are outside in the sun and can't see the screen clearly use this!" Maybe, but that's not its main use case. That's basically sidelining blind people. Making our disability not even count. But it does. I have to live with it every minute of every day.
@devinprater So yeah, I mean big companies need to learn to check their privilege. But big FOSS projects do too. I've pointed out the accessibility issues in Logseq, MeshCentral, and quite a few different projects that I either rely on, should be using, or that my boss uses so it'd be so much easier if I could use them too. But no. Why listen to a lone user with weird needs that no one else has?
@devinprater If you got some edge-case report from a sighted person that your app isn't working at their monitor size, would you flat out ignore them? No, you'd probably check and make sure your app scales to that size. Simple crap like that. What? You regularly test to make sure your app scales? Then why the heck don't you regularly check accessibility?
@email@example.com Because checking accessibility is way harder than quickly resizing a window.
Also, implementing is even harder.
Case in point: my current game has no accessibility support other than being definitely color blind friendly by design. Being a platformer, vision is certainly required and that is not negotiable. However, screen reader support could still be great; it would help people who can understand but not read the text (e.g. young children), for instance. Too bad existing screen reader APIs really wouldn't work well for it.
At the very least though I added visual cues to parts that had relied on audio. Should have thought of that earlier though...
@firstname.lastname@example.org Yeah, in games it is extra tricky. One person's accessibility feature is another person's cheat.
It then helps to decide what the game is about (and thus cannot be removed), and which sources of difficulty are intentional.
@email@example.com @firstname.lastname@example.org I would like two things:
- Events that can trigger TTS in game. E.g. when picking up an item or "reading" a "note".
- Clicking objects in the game (or wall textures) could read them out.
Basically, if I just had a function to say a string out loud, with a good voice etc., I would be happy. But it should honor accessibility settings, i.e. not be active for everyone but only those who want it.
Oh, and it should, while saying the string, turn down volume of the game, and somehow handle too many events coming in.
And I would want it cross platform and not just Android exclusive.
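The wishlist above boils down to a small surface: one function that speaks a string, gated on a user-controlled setting, that ducks game volume while speaking. A minimal sketch of what that could look like, in Python for illustration (all names here, `GameTts`, `say`, `duck_volume`, and the engine/mixer objects, are hypothetical, not any real engine's API):

```python
from dataclasses import dataclass


@dataclass
class TtsSettings:
    # User-controlled: speech stays off unless the player opts in.
    enabled: bool = False
    duck_volume: float = 0.3  # game volume while an utterance plays (0.0-1.0)


class GameTts:
    """Hypothetical wrapper around some platform speech backend."""

    def __init__(self, settings: TtsSettings, engine, mixer):
        self.settings = settings
        self.engine = engine  # platform TTS backend (SAPI, Android TTS, ...)
        self.mixer = mixer    # the game's audio mixer

    def say(self, text: str) -> bool:
        """Speak `text` only if the user opted in; duck game audio meanwhile."""
        if not self.settings.enabled:
            return False  # accessibility setting off: stay silent
        self.mixer.set_volume(self.settings.duck_volume)
        # Restore full game volume once the engine reports the utterance done.
        self.engine.speak(text, on_done=lambda: self.mixer.set_volume(1.0))
        return True
```

The cross-platform part is then a matter of plugging different backends into `engine`; the setting check and volume ducking stay in one place.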
What about something that required your program's explicit cooperation (e.g. the "push event" function returned a handle, or an error value when the buffer was full, and you have to use the handles to cancel pending events to make room)?
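That explicit-cooperation idea could be sketched like this (purely illustrative; `SpeechQueue`, `push`, `cancel` are invented names): `push` returns a handle, or `None` when the buffer is full, and the caller uses handles to cancel pending events and make room.

```python
import itertools


class SpeechQueue:
    """Bounded utterance buffer with cancellable handles (illustrative)."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._ids = itertools.count()
        self._pending = {}  # handle -> text, insertion-ordered (FIFO)

    def push(self, text: str):
        """Return a handle, or None when full (caller must cancel something)."""
        if len(self._pending) >= self.capacity:
            return None
        handle = next(self._ids)
        self._pending[handle] = text
        return handle

    def cancel(self, handle) -> bool:
        """Drop a pending utterance; True if it was still queued."""
        return self._pending.pop(handle, None) is not None

    def pop_next(self):
        """Next utterance to hand to the TTS engine, or None if empty."""
        if not self._pending:
            return None
        handle = next(iter(self._pending))
        return self._pending.pop(handle)
```

The error value on a full buffer is the backpressure: the game decides which pending event is least important and cancels it, rather than the speech layer guessing.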
What do you think of SAPI for this purpose? https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms720165(v=vs.85)
@email@example.com @firstname.lastname@example.org Having said that, a priority queue is nice but not really needed: if I can just issue utterances and monitor whether they're still ongoing, I can build my own queue around it.
I'd generally have two kinds of events: wall writing, and notes. I want a note to be played always after whatever is currently played finishes, while wall writings are OK to be skipped if there's something else ongoing. In other words, I only need current and next utterance, and for the next one, notes have priority over wall writing.
As for what triggers them: notes would be triggered by walking on them in game, while wall writing would be triggered by walking over it too (with the queue-ish behavior above) or by touching it with a finger (which should cancel ongoing utterances and force it right away). The game itself doesn't use touch input except for a game controller overlay, so touching on-screen text to have it read out seems useful.
This does mean I'd need:
- Issue utterance
- Get notified when utterance is done (polling is OK)
- Cancel utterance (if possible without a clicking noise, but at least let the current word or syllable finish first)
- I need two voices for English for the MVP, i18n support would need the same later
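The "current plus next" policy described above can be sketched in a few lines (again a hypothetical design, not a real API; the `engine` is assumed to expose `speak()` and `stop()`): notes always claim the next slot, wall writing only fills a free one, and a touch cancels everything and speaks immediately.

```python
NOTE, WALL = "note", "wall"


class TwoSlotSpeech:
    """Current + next utterance only; notes outrank wall writing (sketch)."""

    def __init__(self, engine):
        self.engine = engine  # assumed backend with speak(text) and stop()
        self.current = None   # (kind, text) playing now
        self.next = None      # (kind, text) queued to play after current

    def on_walk_over(self, kind, text):
        if self.current is None:
            self.current = (kind, text)
            self.engine.speak(text)
        elif kind == NOTE:
            self.next = (kind, text)   # notes always claim the next slot
        elif self.next is None:
            self.next = (kind, text)   # wall writing only fills a free slot

    def on_touch(self, text):
        """Touching on-screen text cancels everything and speaks right away."""
        self.engine.stop()
        self.next = None
        self.current = (WALL, text)
        self.engine.speak(text)

    def on_utterance_done(self):
        """Called (or polled) when the engine finishes the current utterance."""
        self.current, self.next = self.next, None
        if self.current is not None:
            self.engine.speak(self.current[1])
```

With only two slots there is nothing to overflow, which matches the point above that a full priority queue isn't really needed.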
@devinprater I asked them directly many times, many places. It’s because the number of (dollars paid by) blind users is not material enough to change our priorities. I.e. why would they diminish the immediate comfort of sighted people so that blind people are able to participate? They don’t care because blind people can’t force them with buying power.
@devinprater As a hobbyist FOSS Android dev I personally ran into issues with documentation on how Android Talkback works being very minimal and unclear.
When the Blind Android Users community over on https://blindandroidusers.com/ made a video showcasing Raise To Answer I was shocked at how difficult it seemed to use. I joined their Telegram and they helped me understand how to improve and fix things. I've learned a lot since.
So, I think many developers sadly still need education on how to do this.
@devinprater That's not to say it's your responsibility to teach people; it is not. And companies the size of Google have no excuse at all.
I'll try to keep helping explain the things I learned to others but if you have any specific resources you find really good I would love to know. As a sighted person it would be ridiculous to think that I would understand the best design patterns better than a blind person.
I will just keep doing my best for my apps, that's all I can do :)
@SylvieLorxu That's all I would *want* you to do, just consider us. I'm always looking for good accessibility material to share. But you're right, Google has no excuse for bad dev documentation for accessibility.
@devinprater I have heard of Logseq... never heard of MeshCentral. Is Logseq usable, though? I want to begin using it if so. And what is MeshCentral? Questions aside... I completely agree with you.
@cambridgeport90 You can write in it, but it's hard to read what you've written, and the interface isn't very screen reader friendly. MeshCentral is a remote management system. But the remote desktop part has no sound, so not good for us.
@devinprater Damn. I think I could use Logseq, though; what I would be using it for is Readwise exports. Please tell me you've heard of Readwise?
@cambridgeport90 I don't think so. What is that? I'd be using it to keep up with the 13 students I have to work with, and everything else about work. College students could really get some use out of this for all they have to keep up with, much better than a BrailleNote running Android 8 and a rinky-dink word processor.
@devinprater Readwise is a tool to help you keep track of the things you read; it synchronizes your highlights from various sources (Kindle, iBooks, and several others) and then displays them in little increments, allowing you to recall what you read. It also supports fill-in-the-blank and spaced repetition.
@devinprater You should. Oh... they have an API, too, so you could move highlights from weird places, if you wanted to, or create new export tools.
@devinprater Indeed. Haven't been able to find any clients already wrapping around it, but I'm sure it's not hard to implement.