Application Shows Promise in 'Lending' a Set of Eyes to Visually Impaired Users
I can do all manner of things independently – attend school, participate in clubs and extracurriculars, work, go to Starbucks, and accomplish any number of run-of-the-mill tasks on my own. And accessibility efforts, both online and off, help to ensure that everyone has better access to information and physical spaces.
But no matter what I accomplish in life, a soup can will still be indistinguishable from a can of kidney beans, and milk still has an expiry date I can't read.
To help, I need some form of identification – either a sighted friend or family member, or recently, technology.
There are of course barcode scanners and QR code readers to help identify products, but I can't get them to work: it's not obvious where barcodes or QR codes are on a product. I know other blind friends have used them successfully, but I end up waving a box or can over my phone in vain. And no barcode reader can tell me if the photo I want to hang is right-side up.
As there isn't always a sighted person around, technology has tried to bridge the gap. Three apps in recent memory have attempted this.
Early Option
VizWiz was meant to provide written answers to audio questions. Questions could be answered by IQ Engines (now unavailable), by volunteers, or by web workers hired through Amazon's Mechanical Turk program; they could also be sent to a specific email address or posted to Facebook or Twitter. The app now just hangs when a question is posted; it seems to be non-functional.
There has also been criticism of inaccurate answers to questions, such as "white bottle" when a questioner asked for the name of the medication in said bottle.
The Present
Next came TapTapSee, an app that provides written identification of images taken with one's phone. It's accurate nine times out of ten, but drew criticism when it started charging for image recognition. I happily pay for a pack of picture recognitions whenever I run out, but I have the means to do so. The trouble is that if a picture comes out blurry or otherwise unrecognizable, retaking it costs another credit, so several credits can be spent identifying the same object. Without feedback on how to improve one's picture-taking, this gets frustrating.
Future Potential
Just recently, a third app entered the recognition/assistance market. Be My Eyes is designed to provide the best of all worlds – real-time assistance from a sighted human, with room for follow-up questions to ensure accuracy. The app sets up video chats between a blind user and a randomly selected sighted volunteer. Both must be using iPhones, though an Android version is in the works. I put it through its paces today, and would love to tell you about it.
Unlike similar apps in this niche, Be My Eyes requires registration. After specifying that I was a blind user, I had to provide my first and last name and my email address. I opted for a truncated name and the seventh-grade email address I keep only to handle things that might be spam. (I'm still not sure how much sighted volunteers can see of my profile.) There was a lengthy Terms of Service agreement and Privacy Policy. Data appears to be collected for analytical purposes, but also for marketing ones. (The app is currently free, but will have to explore funding options once its current funding runs out in September.) As one might expect, users were reminded to be respectful, not to reverse-engineer or corrupt the system, and to refrain from posting illicit, defamatory, illegal or hateful material.
Once I'd agreed, I just had to select my language. English was the default; I'm not sure whether that was because it was the language of my region or of VoiceOver, or just the default for everyone. Changing it was simple in theory, but some languages weren't spoken by VoiceOver at all, and others were spoken in an Anglicized accent. The latter suggested to me that switching to one of these would cause VoiceOver to keep reading its output in an Anglicized accent, which wouldn't be very helpful, though there isn't much on-screen content to be displayed. Nonetheless, having made that error once before (it's a long story), I wasn't brave enough to swap English for something else.
Now to the fun part. Registered and ready to go, I tapped the "Connect to the First Available Helper" button, and waited. And waited. For 25 minutes I waited while the system looked for a helper. With apparently thousands of available helpers, I was dismayed at such a wait. Finally I disconnected and retried. I got three would-be volunteers on the line, but they couldn't hear me and disconnected. On attempt number five I found someone who was able to answer my question. He was a friendly fellow from Romania, but we disconnected before I thought to ask how he'd heard of the service and what prompted him to sign up. The audio quality on that call wasn't great either.
In the meantime I'd rallied the support of a sighted friend, also an iPhone owner, on her lunch break. She registered for the service as a helper but didn't see my incoming requests for assistance, which I had assumed all potential helpers would receive. She pointed out that it would be more helpful if we could opt to be a pair, so that she could assist me specifically when desired. She kindly emailed the developer about this concern, and received a quick reply that it was a feature they'd considered for a future update. In the meantime, we agreed that if I wanted her help over anyone else's, we'd arrange it over FaceTime.
Areas for Improvement
This app is a great concept, but there are gaps.
Firstly, the wait time. I was using Wi-Fi on a fairly fast network and still waited quite a while. Admittedly, video streaming takes more bandwidth than audio or picture uploads, so I can't really fault Be My Eyes in this regard. I suspect much of their costs will go to maintaining and upgrading their infrastructure.
The second is documentation. Blind people as a group aren't used to taking pictures or video. The idea that something is better viewed from 20 centimetres away rather than five strikes many as odd. Reminding blind users of best practices for framing a shot would be advisable.
A general guide as to how much a volunteer can see of your surroundings would also be helpful. As a female raised in the "never put your photos online ever" generation who was even reluctant to add a photo to LinkedIn, I'm a little squeamish about opening up a video chat with a stranger whose background hasn't been vetted by anyone. I'd like to know how much someone can see that is unrelated to the question at hand.
Helpers should be prepped that a questioner might not be entirely comfortable asking a stranger for help through such a service, so patience should be standard operating procedure. Blind users, in turn, need to remember that this might be the first time a volunteer has assisted someone who is blind, so they should expect to ask for more detail or to clarify an unclear description.
A reminder that mobile phone users aren't always in a quiet environment would also be advisable. It hadn't occurred to me that a volunteer might answer in the middle of a meal (still chewing), while watching television, or (by the sound of it) from a bar somewhere. The nature of mobile devices makes this a strong possibility, but it's a bit of a surprise the first time out.
Finally, a proper feedback system would be ideal. There's an option to report abuse or to say that a question wasn't answered, but no way to give positive feedback for a particularly good answer. There's also no way to report poor audio or video quality, which left me feeling I had little recourse after the first four calls I made today.
In short, this is an interesting concept. Given the wait times and the current rough edges, I'm not sure I'd use it all the time. With improvements, though, I'd certainly be willing to give it another go.