Hear The Sound, Blind Guys!

I’m not sure if you heard, since nobody seems to be talking about it, but there’s an eclipse tomorrow. Everybody’s all excited about it, except my grandma who hopes it will be too cloudy for people to stare at the sun and do dumb things. I get that the moon will block out the sun, but I don’t know if I’ll truly get what the fuss is all about. But maybe this neat little app might help.

Winter and a small team have now launched Eclipse Soundscapes, an app (already on iTunes with a Google version expected before Aug. 21) which can provide various ways for visually impaired and blind users to experience the eclipse.
The first experience will be to hear what’s happening; with help from the National Center for Accessible Media, the app will give “illustrative descriptions” of what’s happening during the eclipse. The descriptions can be read either by the VoiceOver option on a smartphone or through a recording on the app, Winter said.

Pretty freaking cool!

It reminds me of one time when there was an especially spectacular lunar eclipse. I can’t remember what year it was. Mom and dad took a piece of paper representing the moon and cut out pieces to show me how much was left and the shape the remaining visible moon made. At the time, all I did was sort of look at it, go “Hmmm, cool I guess” and run away to do something stupid and childlike, but now I appreciate what they were trying to do, and think they were pretty cool.

I downloaded the app, and it looks like they have plans to have it work for future eclipses and other astronomical events. How awesome is that? Plus, the NCAM is involved, so I’m sure it will be amazing.

I imagine being as distracted tomorrow as the folks staring at the eclipse through their glasses if I end up playing with this app.

So Long To Adobe Flash, One Of The Best Worst Things About The Internet

I realize this news isn’t exactly breaking, but there are only so many times you get to celebrate the death of something as goddamned irritating as Adobe Flash, sooooo…

Adobe Systems Inc.’s Flash, a once-ubiquitous technology used to power most of the media content found online, will be retired at the end of 2020, the software company announced Tuesday.
Adobe, along with partners Apple Inc, Microsoft Corp, Alphabet Inc’s Google, Facebook Inc and Mozilla Corp, said support for Flash will ramp down across the internet in phases over the next three years.

After 2020, Adobe will stop releasing updates for Flash and web browsers will no longer support it. The companies are encouraging developers to migrate their software onto modern programming standards.

To be fair, Flash wasn’t all bad. I think it’s safe to say that without it, the internet would be a drastically different place. YouTube, for instance, absolutely would not be what it is today had Flash not been around in 2005. For that reason, it deserves to be celebrated as the groundbreaking innovation it so clearly was.

But at the same time as it has absolutely been critically important to the evolution of the web as we know it, it’s also been responsible for some of the most frustrating, screenreader inaccessible user experiences in the history of the fucking earth. Ucking ear, reenreader periencfucking earth.

Sorry, most of you. That’s a little humour for any of my fellow screenreader users who have ever been caught in bouncing Flash animation hell while just trying to read a frigging webpage, a group I like to call all of us. And it is for that reason, not to mention the button button button button flash movie start flash movie end phenomenon and the countless dangerous security flaws it’s responsible for, that it deserves to be thrown into a pit far underneath hell, never to return.

Good riddance and thank you all at once, you brilliant piece of garbage you.

Even Geniuses Create Monsters…

I know this is old, but in a sense, it’s really old, so it doesn’t matter.

A long time ago, I came across an article about talking dolls invented by Thomas Edison. These things, although technological marvels for the time, could give you nightmares. Observe.

Ok ok, you can take your hands off your ears now. Seriously. If you think those little kids reading prayers in horror movies are spooky, they have nothing on these big ol’ creepy dolls. Apparently, they looked just as creepy as they sounded.

In early April 1890, each doll that emerged from Edison’s vast West Orange, New Jersey, site stood 22 inches tall, weighed a heavy four pounds, and sported a porcelain head and jointed wooden limbs. Embedded in each doll’s tin torso was a miniaturized model of his phonograph, its conical horn trained toward a series of perforations in the doll’s chest, its wax recording surface etched with a 20-second rendition of one of a dozen rhymes, among them “Mary Had a Little Lamb,” “Jack and Jill” and “Hickory Dickory Dock.” With the steady rotation of a hand crank located on the doll’s back, a child could summon from the doll a single nursery rhyme.

Even back then when they were an amazing technological feat, they didn’t sell too well. Gee, I wonder why! For $20, the equivalent of about $574 in today’s money, you could have a heavy, fragile, buggy doll which you hand-cranked to get often incomprehensible speech. Notice how that video says “restored” on it. Eek!

When I was a kid I always wanted talking dolls. Maybe my mom should have shown me an Edison doll. I never would have wanted one again!

The Floppotron


We’ve posted a few different computer hardware musical creations here over the years, but nothing on this scale, I don’t think.

Polish engineer Paweł Zadrożniak built the Floppotron, a synchronized array of obsolete computer hardware programmed to play tunes. The current Floppotron 2.0 build sports 64 floppy drives, 8 hard drives, and a pair of flatbed scanners—most of these items have had their covers removed, apparently for improved acoustic performance.

Zadrożniak harnessed the power of the stepper motors in the floppy drives and scanners. By driving those motors at specific speeds, he can force them to generate pitches that sound a lot like string instruments. The hard drives can be gently overloaded to force the read/write heads to whack against metal guard rails—voila, percussion!
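The trick with the stepper motors boils down to pulsing them at the frequency of the note you want. Here’s a rough sketch of that math in Python; the function names are mine, not anything from the actual Floppotron firmware:

```python
import math

def note_freq(midi_note: int) -> float:
    """Frequency in Hz for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def step_delay_us(midi_note: int) -> int:
    """Microseconds between step pulses so a stepper motor 'sings' that note.

    Pulsing the motor's STEP line at the note's frequency makes it
    vibrate audibly at that pitch, which is roughly how floppy drives
    and scanner carriages get turned into instruments.
    """
    return round(1_000_000 / note_freq(midi_note))
```

Feed a melody through something like this and a microcontroller toggling the drives’ step pins at those intervals does the rest.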

Saying it sounds “a lot like string instruments” is awfully generous, but that’s not to say it isn’t pretty cool and even kinda good.

If you’d like to read more about how it all works and see more videos of it in action, here ya go.

Microsoft’s Seeing AI App Sounds Like TapTapSee On Steroids

I haven’t tried it for myself just yet since this is the first I’ve heard of it, but if Microsoft’s Seeing AI app works as advertised, holy shit!

Seeing AI, a free app that narrates the world around you, is available now to iOS customers in the United States, Canada, India, Hong Kong, New Zealand and Singapore.
Designed for the blind and low vision community, this ongoing research project harnesses the power of artificial intelligence to open up the visual world and describe nearby people, text and objects.

The app uses artificial intelligence and the camera on your iPhone to perform a number of useful functions:

  • Reading documents, including spoken hints to capture all corners of a document so that you capture the full page. It then recognizes the structure of the document, such as headings, paragraphs and lists, allowing you to rapidly skip through the document using VoiceOver.
  • Identifying a product based on its barcode. Move the phone’s camera over the product; beeps indicate how close the barcode is – the faster the beeps, the closer you are – until the full barcode is detected. It then snaps a photo and reads the name of the product.
  • Recognizing people based on their face, and providing a description of their visual appearance, such as their gender, facial expression and other identifying characteristics.
  • Recognizing images within other apps – just tap Share, and Recognize with Seeing AI.
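That faster-beeps-when-closer barcode trick is a nice little bit of audio feedback design. A hypothetical sketch of how such a mapping might work (this is my guess at the idea, not Microsoft’s actual code):

```python
def beep_interval_ms(barcode_fill: float,
                     min_ms: int = 80,
                     max_ms: int = 800) -> int:
    """Milliseconds between beeps, given how much of the camera frame
    the barcode fills (0.0 = barely visible, 1.0 = filling the frame).

    A fuller frame means the camera is closer, so the interval shrinks
    and the beeps speed up. Values outside [0, 1] are clamped.
    """
    fill = min(max(barcode_fill, 0.0), 1.0)
    return round(max_ms - fill * (max_ms - min_ms))
```

The app would just re-measure the barcode each camera frame and reschedule the next beep accordingly.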

In addition to full documents and barcodes, it will also be able to read things like signs and labels, which if done well could be a pretty big step up from what the still awesome and useful TapTapSee does now. Oh, and it will even try to describe any picture you take in detail, a handy feature for anyone who has ever let a sighted friend borrow their phone or had one take a photo for them, only to discover that they actually took 12 of them.

And remember, all of this is free. Maybe it’s only free because it’s a research project, but if it’s going to lead to greater accessibility in all sorts of mainstream applications down the line, who cares?

DirecTV and U-Verse Now Have Accessibility Features

Good news, American blind kids. You now have even more choices when it comes to accessibly watching television.

In this interview John Herzog, Accessibility Solutions Engineer, describes the many advancements AT&T media based products have been gaining since November 2016. Both the DirecTV apps for Android and iOS are speech friendly now with their respective screen readers. John will take us on a tour of the iOS app with stops by the DVR Manager and live TV functions. You will also hear how you can turn on the talking guide on your DirecTV box along with some other information about accessing your secondary audio. John will also provide information about how similar features can be utilized via the U-Verse apps. All of this amazing access is free to U-Verse and DirecTV subscribers.

I’m listening now, and while not every feature is absolutely perfect, it sounds pretty amazing. Seriously Canada, can we get with the fuckin’ program here?

You can check out the interview here, and you can go here for more info on DirecTV’s talking guide and here for more about U-Verse’s offerings.

Photoshopped Voices, Now With Less Data And More AI

These Lyrebird people haven’t yet reached Adobe VoCo levels of voice fakery, but they’re getting there. And though their aim is to sell their technology to companies whose products include speech synthesis, once it’s widely available, the implications are quite similar.

What you’re listening to are Lyrebird-generated versions of Donald Trump, Barack Obama and Hillary Clinton talking about the company. Right now they sound quite low quality and computer-generated, but with the arguable exception of Hillary it isn’t hard to figure out which voices those are. And though it may be tempting to write that sample off as digitized garbage and move on, it’s worth keeping in mind that those voices are as close to the genuine article as they are after the use of voice samples under a minute long, as opposed to the 20 minutes required by VoCo.

This is all made possible through the use of artificial neural networks, which function in a manner similar to the biological neural networks in the human brain. Essentially, the algorithm learns to recognize patterns in a particular person’s speech, and then reproduce those patterns during simulated speech.

“We train our models on a huge dataset with thousands of speakers,” Jose Sotelo, a team member at Lyrebird and a speech synthesis expert, told Gizmodo. “Then, for a new speaker we compress their information in a small key that contains their voice DNA. We use this key to say new sentences.”
The end result is far from perfect—the samples still exhibit digital artifacts, clarity problems, and other weirdness—but there’s little doubt who is being imitated by the speech generator. Changes in intonation are also discernible. Unlike other systems, Lyrebird’s solution requires less data per speaker to produce a new voice, and it works in real time. The company plans to offer its tool to companies in need of speech synthesis solutions.
“We are currently raising funds and growing our engineering team,” said Sotelo. “We are working on improving the quality of the audio to make it less robotic, and we hope to start beta testing soon.”

The Bank Just Figured Out How To Get You To Like It More: A Dancing Robot


Should you find yourself both in Calgary and in need of a bank, there’s a chance you could be greeted by…that thing.

ATB Financial has teamed up with SoftBank Robotics America to unleash Pepper, a friendly 3-wheeled robot designed to make the banking experience better or something. They say that she is capable of recognizing human emotions (that ought to be fun) and that her purpose is to “draw more people into the bank and provide them with a fun and engaging experience that keeps them coming back.”

Alrighty.

Pepper’s interactions will be fairly basic at first.
The three-wheeled robot will be able to dance, recommend products and services, pose for selfies and interact with people via a mounted touch screen tablet, or verbally in several different languages.

I like how they just sorta slip “recommend products and services” in there between all the pictures and the dancing and the interactivity.

Why is this happening? Why is the bank becoming an arcade with ads? That’s because the company’s research (Research I say!) has shown that people think banks kinda suck.

ATB Financial says it partnered with SoftBank Robotics America after customer research found many people carry a lack of trust and high levels of discomfort in dealing with the banking industry.

“We found out that there’s some people who don’t really love banking, and don’t love coming into banks,” Boga said. “We want to bring happiness to people using banking.”

You know what would make people happy about the bank? Being honest and fair with them and not dinging the everloving bejeezling shit out of them on every transaction, you motherfuckers! Or you could let them take selfies with a commercial slinging robot that knows their names. Whatever works.

“Wait wait wait…what’d you just say, Steve?”

Well, I was about to say that option two sounds a lot like something somebody who just “found out” that a not insignificant number of folks believe that dealing with giant financial institutions traverses the universe in search of new dicks to suck would do, but I get the sense you’re wondering about something else.

“Yes, we are. What was that thing about our names?”

Oh that.

Yes, eventually the plan is that Pepper will know everything in order to shill more efficiently.

But ATB has hinted that Pepper’s functionality could eventually be expanded by connecting it to an artificially intelligent system. This would allow the robot to perform biometric authentication via the camera installed in its head, making it possible for Pepper to address customers by name and provide them with personalized banking recommendations based on their stored customer information.

You know, that personal, one-on-one service in every bank commercial you’ve ever seen.

The company insists that Pepper is not intended to replace human jobs, but rather to allow the human staff to engage on a more personal level with customers. As for what exactly that means, you’ve got me. A bit of small talk and some attempted upselling pretty well sums up every meeting with a bank human I’ve ever had, so I’m not sure what’s left. And now that I think about it, none of them have ever done a little dance and taken a picture with me at the end, so advantage robot.

If I were a banking human I just got a wee bit nervous, and I may have also signed up for some dance classes and photography lessons on my way home. You know, so I’ll at least have the smallest snowball’s chance of keeping that job I’m totally not being automated out of.

Her Text Said Sure, I’ll Drop In In A Minute

So much for “the older, the wiser,” and for the stereotype that it’s only young folks who spend their lives glued to their phones, not paying attention.

First it was the 80-year-old man plowing into a police car that was on a distracted driving patrol, and now a 67-year-old woman has fallen six feet down an open sidewalk maintenance hatch.

Surveillance video captured the moment a woman in Plainfield glanced down at her cell phone before she tripped over open sidewalk doors and fell six feet into the space beneath them.
According to Plainfield police, units responded just after 12 p.m. on Thursday to the area in front of Acme Windows on Somerset Street on report of a woman injured.
There police found the 67-year-old woman, who they removed from the space beneath the open doors.

She was taken to hospital with what were only described as serious but non-life-threatening injuries.

Odd little side note: Both of these cases come from New Jersey. Coincidence? Or are the elderly mental defectives there more tech savvy than the ones in the rest of the country?

Sendero Wants To Know What You Would Like Out Of An Indoor Wayfinding App

It’s really nice to see GPS app makers start to focus on indoor navigation. If you’re blind I doubt I have to tell you that getting around in giant buildings can sometimes be its own special brand of pain in the ass, so being able to use the same apps that work so well outside inside is going to be pretty great as the technology improves.

Welcome to Sendero’s user survey. This study is part of a two-year project in which Sendero and partners are attempting to develop an indoor wayfinding application. The project, entitled “An Accessible Environmental Information Application for Individuals with Visual Impairments,” is funded by a federal grant from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) (grant number 90BISB0003-01-00). The Project PI is Dr. Paul Ponchillia.
The survey may be completed in the comfort of your own home, at your leisure. The survey will include 27 questions about challenges, barriers, technology, access to information, and general user needs input for independently navigating indoor facilities. The survey will be used to assess two things: (1) the perceived barriers of indoor orientation and navigation through a series of questions such as: What information should it provide? What are some of the value-added features? And (2) preferred delivery of information, specifically in output of information (tactile, verbal or audio) and exact phrasing of information. How should the device provide information to assist users as they navigate independently from store to store, gate to gate, and point to point within a variety of venues?

The survey is here. Assuming you’re not currently lost in an airport, take a few minutes and help them out.