The other night, while trying to go to sleep, I started thinking about what someone needed to invent next.
I recall reading in a science fiction work about computer screens being somehow embedded in our eyeballs and wired into our brains, eliminating the need to go to a phone or computer for a Google search, or even to ask Alexa. We may actually be headed that way, but I decline. Think about driving a car with that going on! Of course, all cars may be self-driving by then – or you can “go” someplace just by using your eyeball computer, so you don’t really have to go. Whatever – I don’t want it.
Then there is another computer device that I believe folks are actually working on: You speak your message into your phone, it translates your spoken words into text, and that text is sent to your recipient’s phone, which translates it back into spoken words. Pretty cool, huh? I think they should call it a “telephone.”
Here’s another one: Kim and I watch a lot of stuff that we stream with the help of subtitles. We started doing this with some of the British programs, where they speak fast, with (believe it or not) English accents, and they move their mouths like ventriloquists. We soon found it helpful to use subtitles for everything we watch, except, of course, live television. One of my small pleasures is watching a foreign film that is both dubbed and subtitled, for occasionally the dubber and subtitler don’t agree. Usually it’s minor, but one time the foreign word was dubbed “No” while the subtitle read “Yes.”
So, here’s my idea: We should use subtitles for our everyday conversations. Most subtitles are generated using some sort of voice-recognition software, so why not have people wear little screens so we can actually see what each of us is saying? This would be especially useful as some of us are getting older and have hearing issues. (It might also help someone like me who can hear just fine but has listening issues – not the same thing.)
And let’s take it one step further. Why not have our wearable subtitles state not just what we said, but also what we really mean? This would eliminate all the guesswork and interpretation that are so much a part of conversations, especially within families, but also in the workplace. I’ve heard that there now exists software that can play the part of a psychotherapist, so why not use that software to help with subtitles? It might even be possible to have what you are saying dubbed into what you really mean but can’t quite say. I know some people who have the ability to hear those subtitles already, without the software.
And from what I understand, cars now have the ability to communicate with one another. I assume that those “conversations” involve safety issues, but I can’t help but think that they may be commenting on our driving abilities. I know my car is capable of judging me, as the occasional warning beeps indicate. My previous Toyota would suggest that I pull over and take a break after I veered from my lane too often. But maybe our personal subtitling software might communicate with another person’s – possibly without our knowing about it, possibly commenting on us! Maybe . . .
I recall a 1966 Woody Allen film called “What’s Up, Tiger Lily?” Woody took a Japanese James Bond-type film and totally redubbed it. I remember that it was hilarious (though probably not worth seeing again), but I only recall one specific example. The main character gets up after getting smashed in the head, rubs his injured skull, and says, “Ow! My knee!” Perhaps we could make our self-captions amusing, even creative.
But it’s not all positive. I’m sure that as soon as self-caption technology is available, someone will be busy devising a way to hack into our captions, so it will appear that we are saying whatever the Bad Guys want us to appear to be saying.
Inventors – there is work to be done . . .