
We’re Getting Closer to ‘Mind Reading’

While NVIDIA’s Canvas isn’t reading minds, research projects suggest we are moving toward being able to capture images from brain waves, or even hear internal voices. These projects are still a very long way from the promise of that first sentence, but advances happen so quickly that it won’t be long before they move out of the lab and into our creative process.

Russian researchers have found a way to:

… visualize a person’s brain activity as actual images mimicking what they observe in real-time

To be clear, the researchers were able to partially reconstruct what the subject was seeing, not what they were imagining, but it’s an important first step in understanding how images are represented in the visual cortex, which could in turn lead to understanding how images are imagined in the brain.

The video is pretty amazing, even at this stage of research.

At a similar stage of research, the Neural Acoustic Processing Lab at Columbia University in New York City has reconstructed synthesized speech from the brain waves of subjects as they listened to spoken audio. Again, not internal voices, but a deep insight into how the brain processes incoming sound.

The results were very robotic-sounding, but fairly intelligible. In tests, listeners could correctly identify spoken digits around 75 percent of the time. They could even tell if the speaker was male or female.

At their current stage, these projects are interesting but have no direct application to creative storytelling. The journey from research project to practical application is shortening, though, particularly in any area related to artificial intelligence or machine learning.