Forbes Magazine is reporting on the ongoing work Chuck Jorgensen is doing at NASA to develop subvocal speech recognition.
Chuck Jorgensen is a NASA scientist whose team has begun to digitize subvocal speech using the nerve signals in the throat that control speech. Jorgensen's team discovered that small, button-sized sensors, stuck under the chin and on either side of the Adam's apple, can gather these nerve signals and send them to a processor, which passes them to a computer program that translates them into words.
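To make the sensor-to-words pipeline more concrete, here is a minimal sketch of how such a system could be structured in Python: windows of raw sensor samples are reduced to simple features and matched against per-word templates. The feature choices and the nearest-centroid classifier are illustrative assumptions on my part, not a description of Jorgensen's actual method.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce one window of raw sensor samples to a small feature vector.

    Mean absolute value, zero-crossing count, and RMS energy are common
    choices for surface-EMG-style signals; the real system's features
    are not described in the article.
    """
    mav = np.mean(np.abs(window))
    zero_crossings = np.sum(np.diff(np.sign(window)) != 0)
    rms = np.sqrt(np.mean(window ** 2))
    return np.array([mav, zero_crossings, rms])

class NearestCentroidWordClassifier:
    """Map a feature vector to the closest trained word template."""

    def __init__(self):
        self.centroids = {}  # word -> mean feature vector

    def train(self, labelled_windows):
        """labelled_windows: iterable of (word, raw sample window) pairs."""
        grouped = {}
        for word, window in labelled_windows:
            grouped.setdefault(word, []).append(extract_features(window))
        self.centroids = {w: np.mean(feats, axis=0)
                          for w, feats in grouped.items()}

    def predict(self, window: np.ndarray) -> str:
        """Return the vocabulary word whose template is nearest."""
        feats = extract_features(window)
        return min(self.centroids,
                   key=lambda w: np.linalg.norm(feats - self.centroids[w]))
```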
It's thought that this technology will initially help astronauts working in space, Navy SEALs working underwater, emergency workers charging into loud, harsh environments, fighter pilots, and so forth. More broadly, one can imagine this technology playing a considerable role in defining the next generation of cell phone and Internet communications.
The team's next goal is to see how much of a speech system can be generated. They are at the equivalent of the early stages of auditory speech recognition: a single speaker and isolated words. Ultimately, the team wants to handle multiple speakers and continuous speech. They're also working on capacitive sensors, which don't touch the body and can be embedded in clothing or other wearable devices.
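As a rough illustration of that "single speaker, isolated words" stage, the toy classifier sketched above could be trained on a handful of words from one speaker and asked to label a new window. The vocabulary and the synthetic signals here are placeholders, not the team's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-speaker training set: ten sensor windows per word,
# synthesized with different amplitudes so the words are separable.
vocabulary = ["stop", "go", "left", "right", "alpha", "omega"]
training = [(word, rng.normal(scale=i + 1, size=256))
            for i, word in enumerate(vocabulary)
            for _ in range(10)]

classifier = NearestCentroidWordClassifier()  # defined in the sketch above
classifier.train(training)

# An unseen window from the same speaker is mapped to the closest word.
print(classifier.predict(rng.normal(scale=3, size=256)))
```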
Jorgensen's work is an obvious precursor to technologically enabled telepathy, or techlepathy as I've referred to it. It's conceivable that someday the neural signals sent to the vocal cords to instigate speech will be re-routed and converted into a signal that can be received directly by another individual's auditory system. The result will be virtual subvocal telepathy.
This won't be true telepathy in the classic sense, however, as it is language that is being conveyed rather than subjective conscious experience.
But one hurdle at a time....
Tags: telepathy, techlepathy, subvocal speech.