Facebook’s work around accessibility took center stage in 2016 when it launched automatic alt-text (AAT), a feature that helps people using screen readers identify what’s displayed. AAT uses object recognition technology to generate descriptions of photos on Facebook. But what Facebook deployed in 2016 represented the mere beginning of its efforts, Facebook Accessibility Specialist Matt King told me ahead of Global Accessibility Awareness Day.
“It was about as simple as you could get and still be valuable,” King said about version one of AAT, which initially launched for News Feed, profiles and groups. It later expanded to 28 other languages and added 17 different activities to its descriptions, like walking and running.
“So we’re getting closer to being able to do a sentence, which is a long-run goal, instead of just having, you know, a list of words or concepts that describe a photo,” he said.
Then, last December, Facebook started taking advantage of facial recognition capabilities. That ensured that, even if a friend wasn’t tagged, someone using a screen reader would be able to know if their friend was in the photo.
“So, it’s bit by bit getting richer and of course there’s a lot of potential on the horizon,” King said.
Today, however, the product is still in its infancy — a toddler, at most, he said.
“It has a long way to go to even become like an adolescent level product, but I think that’s going to happen in the next couple of years,” King said.
As a grownup, this product would be more integrated with the photo viewer — the enlarged, full-screen version of the photo that lets you see photo tags and whatnot. With that integration, King envisions people being able to move their fingers around the photo and then be told about specific objects in the photo.
“You would be able to possibly hold your finger there and then ask a question about that object or tap the photo,” King said.
Or, maybe the description says the photo includes three people sitting at a table in a room. Based on that description, King said, you could maybe ask about the color of one person’s hair. From there, you could even ask if there are any decorations on the wall, and if so, what’s on the poster or the decoration.
“We might even be able to get to the point where it could potentially highlight unusual features of a photo,” King said. “So that can include something ironic or humorous that it would be able to potentially detect those kinds of circumstances and call them to your attention. So that’s what a grown-up would look like.”
As it stands today, AAT’s descriptions are more like “image may contain three people, smiling, outdoors.”
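A description like that is essentially a list of recognized concepts joined into a sentence fragment. As a rough illustration only (the function, labels and confidence threshold here are hypothetical assumptions, not Facebook’s actual pipeline), the composition step might look something like:

```python
# Hypothetical sketch of composing an AAT-style alt-text string from
# object-recognition output. Labels and the confidence threshold are
# illustrative assumptions, not Facebook's real model or API.

def build_alt_text(concepts, threshold=0.8):
    """Join recognized concepts above a confidence threshold into a
    screen-reader-friendly description string."""
    kept = [label for label, score in concepts if score >= threshold]
    if not kept:
        return "Image"
    return "Image may contain: " + ", ".join(kept)

# Example model output as (label, confidence) pairs; low-confidence
# detections like "dog" are dropped rather than risk a wrong description.
detections = [("3 people", 0.97), ("smiling", 0.91),
              ("outdoor", 0.88), ("dog", 0.42)]
print(build_alt_text(detections))
# → Image may contain: 3 people, smiling, outdoor
```

The conservative threshold reflects the design trade-off King describes: a short, reliable list of concepts today, with full sentences as the long-run goal.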
The AAT product falls into the category of what King calls the “plumbing of web accessibility.” It’s not as “sexy and hot and cool as AI stuff,” he said, but it’s what makes it possible for things to actually work from an accessibility standpoint.
King helps ensure that what visually appears for sighted people gets translated well into something that’s non-visual. His work is about making all of that super friendly to screen-reader users, which is, he said, “not a technically straightforward thing to do.”
He added that his energy often goes toward making sure interactions on Facebook “are as rich and enjoyable for people with disabilities as they are for other people.”
Facebook is not the only tech company working on accessibility. In April, Pinterest made its app a lot more accessible for people with visual impairments. Meanwhile, Google, Microsoft and Adobe have teamed up with Facebook to launch a program that brings together students, teachers and industry partners to explore accessibility.
In addition to its work with other companies, Facebook is actively researching how to better support people with cognitive disabilities, such as dyslexia. The work is around figuring out how to help those with dyslexia feel more comfortable sharing on Facebook, King said, because “there’s some emotional insecurity associated with like, ‘wow, what if I mess up?’”
Facebook’s accessibility team is also looking at an alt-text tool for video that could describe what’s happening in the video for those who are visually impaired. It’s early days, but King says at some point, “we want to have the ability to describe at least certain kinds of video.”