The Be My Eyes Virtual Volunteer

I haven’t used Be My Eyes much in the last few years. Basically any time I would have, Carin’s been there with Aira to take care of whatever the problem was. But since I won’t have access to Carin or Aira for the next few weeks, it occurred to me that perhaps I should pop the app open and make sure that I still remember how to log in, just in case. And when I did, I found something potentially interesting. A new service, currently in development, called Virtual Volunteer.

The Virtual Volunteer feature from Be My Eyes will be integrated into the existing app and is powered by OpenAI’s new GPT-4 language model, which contains a dynamic new image-to-text generator. Users can send images via the app to an AI-powered Virtual Volunteer, which will answer any question about that image and provide instantaneous visual assistance for a wide variety of tasks.
What sets the Virtual Volunteer tool apart from other image-to-text technology available today is context, with a deeper level of understanding and conversational ability not yet seen in the digital assistant field. For example, if a user sends a picture of the inside of their refrigerator, the Virtual Volunteer will not only be able to correctly identify what’s in it, but also extrapolate and analyze what can be prepared with those ingredients. The tool can also then offer a number of recipes for those ingredients and send a step-by-step guide on how to make them.
If and when the tool is unable to answer a question, it will automatically offer users the option to be connected via the app to a sighted volunteer for assistance – our volunteer experience isn’t going anywhere.
This new feature promises not only to better support the blind and low-vision community through our app; we also believe it will offer a way for businesses to better serve their customers by prioritizing accessibility. We plan to begin beta testing this with our corporate customers in the coming weeks, and to make it broadly available later this year as part of our Specialized Help offering.

An AI TapTapSee or Seeing AI that can answer follow-up questions has a lot of potential, if it works anywhere close to how they say it will. There are plenty of reasons it might not (automatic image description can still be pretty janky, and blind people can be awfully good at taking crappy pictures, among them), but the idea that these sorts of technologies are ever improving, and that people with know-how have ideas for them, is still somewhat exciting.

By the way, had I not stumbled upon this by circumstance, I would have had no idea that any of it existed. Be My Eyes would be well served by jumping off the useless release-notes train as soon as it can.
