I got my GDB Alumni Newsletter yesterday, and they had an article in their pedestrian safety series that started me thinking about something. They were talking about hybrid cars, and how since car-makers had worked so hard to make their vehicles soundless, they weren’t keen on adding noise back in. So another solution was apparently being dreamed up by Arizona State University. They have been developing a haptic belt: a belt you would wear that would give you vibratory feedback on where a hybrid car is, so you could detect it. This is of course only going to work if you also wear another gizmo… and if the cars have the right transmitters on them… and if they’re working… and the battery doesn’t go out on the belt or the gizmo… and you don’t have other technology failures. Do you see where I’m going? I can’t find a link to this, but I found links to other things I wanted to mention. Boy oh boy, did I ever.
As you can see, I’m not keen on this solution. If I need another layer of technology to tell me what my ears have told me for years, it doesn’t seem like a good solution. The sad part is we already had a solution: cars made noise. Now we’re going backwards.
What if Joe’s hybrid’s transmitter has gone on the blink? Joe won’t notice, because he doesn’t need the transmitter to drive his car. But I sure will notice that his transmitter is out when my waist doesn’t buzz and then I meet his car with a crunch.
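To put it in the bluntest terms I can, here’s the logic the belt forces on you. This is purely my own illustration of the failure mode, not anyone’s actual code:

```python
def seems_safe_to_cross(belt_buzzing: bool) -> bool:
    # The belt can only report "I heard a transmitter." Silence could
    # mean "no car coming" or "Joe's transmitter is dead" -- it has no
    # way to tell the two apart, and neither do I.
    return not belt_buzzing

print(seems_safe_to_cross(False))  # True, whether or not Joe is coming
```

Silence from the belt and silence from a broken belt feel exactly the same around your waist. Silence from an actual engine never lies like that.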
And what about all those bikers and joggers who can see perfectly well, but also rely on hearing when they’re biking and jogging, and tend to not notice hybrids? Do you seriously believe they will all strap on expensive haptic belts? No way. Us blinks have come to accept using expensive technology to accomplish tasks that other people do with a pen and paper, but sighties just won’t do it. The solution to the hybrid car dilemma needs to work without the user acquiring additional technology. The car needs to vroom and I don’t need to buy vroom-makers. I shouldn’t have to pay for someone not thinking about the needs of a whole ton of people.
Like I said, I was trying to find a link to the hybrid car project, and failed. But what I did find was a project that is hoping to use a haptic belt to help blinks in social situations. I did a double take. A buzzing belt around our waist is going to help us in social situations? Yes, they say, because the vibrations will tell us how far away the person is and in what direction. Um… isn’t that what my ears are for? Here’s a Google cache of a PDF with more details. I guess they want to give more information than distance and direction. They want to have it do face recognition, facial-expression recognition, and gesture recognition.
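For what it’s worth, the distance-and-direction part isn’t hard to picture. Here’s my own back-of-the-envelope sketch of how a belt like that might work; the motor count, range, and function names are all my invention, not anything from their paper:

```python
NUM_MOTORS = 8       # assumed: motors spaced evenly around the waist
MAX_RANGE_M = 5.0    # assumed: beyond this, don't buzz at all

def belt_command(bearing_deg: float, distance_m: float):
    """Map a bearing (0 = straight ahead, clockwise) and a distance
    to (motor index, vibration intensity from 0.0 to 1.0)."""
    if distance_m > MAX_RANGE_M:
        return None  # person too far away to signal
    # Pick the motor closest to the bearing.
    motor = round((bearing_deg % 360) / (360 / NUM_MOTORS)) % NUM_MOTORS
    # Buzz harder the closer the person is.
    intensity = 1.0 - (distance_m / MAX_RANGE_M)
    return motor, intensity

# Someone two metres away, a bit to the right:
print(belt_command(30.0, 2.0))  # (1, 0.6)
```

In other words, a motor and an intensity: exactly the information one sweep of my ears already gives me for free.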
Their intentions are quite noble, but I don’t know if I’m a fan of their idea, and here’s why. First, it’s the whole wearable-tech thing. I know it’s only a belt and glasses, but if a blink wears glasses like that, they will be stared at, so any social benefits afforded by this technology will be negated by people’s need to stare at the glasses. I am very happy that the folks doing this study realized we needed our ears, though. It does show some level of research into their target population.
Also, if we’re any good socially, we’ll figure this stuff out without technology. Sure, there are a lot of things conveyed through gestures and such, but someone who is perceptive enough will find other ways to pick up the same information. And if they’re not perceptive enough, even if a machine can tell them that Fred is frowning, they won’t know what to do with that information. The only time I’ve really noticed people relying heavily on visual communication without augmenting it with something I can perceive is when our French group meets. Sometimes people who are having trouble finding the words will point at something, and then I’ll be left confused. But that’s a special circumstance, and this technology couldn’t bail me out of that one anyway.
And there’s one little piece of the description that freaks me out.
Current efforts in this project are focused towards the integration of information from the face and speech modalities in a manner such that one modality can supplement/complement the predictions from the other modality. For example, if the name of a person is heard in a conversation context, this may be used to automatically label the face images obtained during the conversation, and train the classifiers accordingly.
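Strip away the academic phrasing and here’s roughly what I think that paragraph is describing. To be clear, this is my own mock-up; every name in it is made up, and the speech and vision pieces are stubbed out because I have no idea what they actually use:

```python
from collections import defaultdict

KNOWN_NAMES = {"fred", "alice"}    # assumed roster of people the system knows
training_data = defaultdict(list)  # name -> face images captured near it

def transcribe_speech(audio_clip):
    """Stand-in for a real speech recognizer."""
    return audio_clip  # pretend the clip arrives already transcribed

def detect_faces(frame):
    """Stand-in for a real face detector."""
    return frame.get("faces", [])

def harvest_labels(audio_clip, video_frames):
    """If a known name is heard in conversation, attach it to the
    faces seen at the same time and keep them as training examples."""
    words = transcribe_speech(audio_clip).lower().split()
    names = [w for w in words if w in KNOWN_NAMES]
    if not names:
        return
    for frame in video_frames:
        for face in detect_faces(frame):
            # Note what has to happen first: the conversation itself
            # gets captured, transcribed, and kept around.
            training_data[names[0]].append(face)

harvest_labels("nice to see you Fred", [{"faces": ["face_crop_1"]}])
print(dict(training_data))  # {'fred': ['face_crop_1']}
```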
Uh, so it’s listening in on, and storing, my conversations? No thanks. No thanks, guys. Sure, the data would only be used to benefit the system… and my naked ass can’t be stored or sent anywhere from those body scanners either.
I feel like a total shit for raining on someone’s parade. They want to help. But there are some instances where simple is better, and that’s all I’m saying.