AI Can Predict Behavior From Brain Activity

New reports show that AI can predict behavior from brain activity. Check out the latest findings below.

Advances in neuroscience, behavioral science, and medical research have been accelerated by the predictive capabilities of artificial intelligence (AI) and machine learning.

A recent peer-reviewed study published in PLoS Computational Biology has demonstrated an AI deep learning model that can predict behavior with an impressive 95% accuracy, almost in real-time.

“Deep learning is a powerful tool for accurately decoding movement, speech, and vision from neural signals from the brain and for neuroengineering such as brain-computer interface (BCI) technology that utilizes the correspondence relationship between neural signals and their intentional behavioral expressions,” reported corresponding author Toru Takumi at the Kobe University School of Medicine, along with researchers Takehiro Ajioka, Nobuhiro Nakai, and Okito Yamashita.

Brain-computer interfaces allow individuals with impaired motor or speech functions and other disabilities to control external devices such as computers and robotic limbs, as well as communicate.

For people who suffer from neurological disorders, locked-in syndrome, motor impairment, and paralysis, brain-computer interfaces offer the possibility of improving their quality of life.

Deep learning is a type of machine learning that uses algorithms that learn from large amounts of training data instead of relying on explicitly hard-coded instructions.

A deep neural network comprises an input layer, an output layer, and multiple layers of artificial neurons in between. Deep learning algorithms are responsible for the current AI renaissance.
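To illustrate that layered structure, here is a minimal toy feedforward network in NumPy. This is a hypothetical sketch, not the study's model; the layer sizes and random weights are invented for illustration (a trained network would learn its weights from data).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity applied between layers
    return np.maximum(x, 0.0)

# Toy network: 4 inputs -> two hidden layers of 8 units -> 2 outputs.
# The sizes here are arbitrary illustrative choices.
layer_sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Pass the input through each hidden layer with a nonlinearity;
    # the final (output) layer stays linear.
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    return x @ weights[-1] + biases[-1]

output = forward(rng.standard_normal(4))
print(output.shape)  # (2,)
```

Each intermediate layer transforms its input before passing it on, which is what makes the network "deep".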

The Kobe University School of Medicine researchers’ AI model design combines a convolutional neural network (CNN) for image data analysis with a recurrent neural network (RNN) for processing sequential, time-varying data. The researchers call this an “end-to-end” deep learning approach.
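To make that architecture concrete, here is a highly simplified NumPy sketch of the idea: convolutional features are extracted from each image frame and fed into a recurrent state that evolves over time. The shapes, random weights, and single-filter convolution are invented for illustration; this is not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(frame, kernel):
    """Valid 2D convolution of one image frame with one kernel."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical input: 10 frames of 16x16 "cortical" images.
T, H, W = 10, 16, 16
frames = rng.standard_normal((T, H, W))

kernel = rng.standard_normal((3, 3)) * 0.1   # CNN part: one conv filter
Wxh = rng.standard_normal((1, 32)) * 0.1     # feature -> hidden
Whh = rng.standard_normal((32, 32)) * 0.1    # hidden -> hidden (recurrence)
Why = rng.standard_normal((32, 1)) * 0.1     # hidden -> behavior readout

h = np.zeros(32)
predictions = []
for t in range(T):
    # CNN stage: summarize the frame into a feature
    feat = np.tanh(conv2d(frames[t], kernel)).mean()
    # RNN stage: update the hidden state with the new feature
    h = np.tanh(np.array([feat]) @ Wxh + h @ Whh)
    # Readout: one behavior prediction per frame
    predictions.append((h @ Why).item())

print(len(predictions))  # 10
```

"End-to-end" means the whole pipeline, from raw images to behavioral output, is trained as a single model rather than as separately hand-engineered stages.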

When the researchers tested it on five mice, the AI model was able to generalize across individuals and screen out individual-specific attributes.

“Our findings demonstrate possibilities for neural decoding of voluntary behaviors with the whole-body motion from the cortex-wide images and advantages for identifying essential features of the decoders,” reported the Kobe University School of Medicine research team.

Psychology Today reveals more interesting details about the findings.

Rada Mateescu