Researchers at the Viterbi School of Engineering, University of Southern California, are using Generative Adversarial Networks (GANs) to improve brain-machine interfaces for disabled individuals. GANs are a type of generative model known for creating deepfake videos and realistic human faces.

In a paper published in Nature Biomedical Engineering, the team taught an AI to generate synthetic brain activity data. This data, in particular neural signals known as spike sequences, can be fed into machine learning algorithms to improve the usability of brain-computer interfaces (BCIs).
BCI systems work by analyzing a person’s brain signals and converting that neural activity into commands, allowing users to control digital devices like computer cursors using only their thoughts. These devices can improve the quality of life for individuals with motor impairments or paralysis, even those with locked-in syndrome (in which a person is fully conscious but unable to move or communicate).
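As a rough conceptual illustration only, and not the method used in the study, the sketch below shows the core idea of such a decoder: binned neural activity is mapped to a velocity, which updates the cursor position. The linear decoder, channel count, and all names are hypothetical.

```python
import numpy as np

# Hypothetical linear decoder illustrating the basic BCI idea:
# binned spike counts are converted into a cursor command.
n_channels = 96                                           # assumed number of recording channels
decoder_weights = np.random.randn(2, n_channels) * 0.01   # stands in for a calibrated decoder

def decode_bin(spike_counts, cursor_xy, dt=0.05):
    """Turn one 50 ms bin of neural activity into an updated cursor position."""
    velocity = decoder_weights @ spike_counts             # decoded (vx, vy) intent
    return cursor_xy + velocity * dt                      # integrate velocity into a position command

cursor = np.zeros(2)
spike_counts = np.random.poisson(5.0, size=n_channels)    # stand-in for one bin of spikes
cursor = decode_bin(spike_counts, cursor)
```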
Various forms of BCIs are available, from caps that measure brain signals to devices implanted in brain tissue. New use cases are continually being discovered, from neurorehabilitation to treating depression. However, making these systems operate fast enough and robustly enough in the real world is challenging.
Specifically, to understand their inputs, BCIs require large amounts of neural data and long periods of training, calibration, and learning.
Laurent Itti, a computer science professor and co-author of the study, says, “If a paralyzed person cannot generate strong enough brain signals, it may be very difficult, costly, or even impossible to obtain enough data to support the BCI algorithms.”
Another barrier is that this technology is user-specific and must be trained from scratch for each individual.
Generating Synthetic Neural Data
But what if you could create synthetic neurological data, generated entirely by a computer, that could “replace” data collected from the real world?
Enter Generative Adversarial Networks. GANs are best known for creating “deepfakes,” and through repeated rounds of trial and error they can generate an almost unlimited number of new, similar images.
Itti suggested to Shixian Wen that GANs might also be able to generate training data for BCIs by creating synthetic neural data that is indistinguishable from real data.

Experimental Setup and Training of the Baseline BCI LSTM Decoder
In an experiment described in the paper, the researchers first recorded a segment of neural data while a monkey reached for an object. As shown in the experimental example above, the monkey sat in front of a video screen and grasped a planar joystick that controlled the cursor’s position. The monkey moved the cursor to touch a series of targets placed randomly on the screen while neural activity was recorded from its primary motor cortex through implanted electrode arrays.
The researchers used this data to train a deep learning spike synthesizer and, as shown in figure b, a baseline BCI LSTM decoder: the recorded spike sequences were fed into the decoder as its training input.
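As a rough illustration of what such a baseline decoder might look like, the hypothetical PyTorch sketch below maps binned spike sequences to cursor kinematics and is trained with a mean-squared-error loss. Layer sizes, tensor shapes, and the loss are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class BCIDecoder(nn.Module):
    """Hypothetical baseline decoder: spike sequences -> cursor kinematics."""
    def __init__(self, n_channels=96, hidden=128, n_kin=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_kin)   # e.g. cursor x/y velocity

    def forward(self, spikes):
        # spikes: (batch, time, channels) of binned spike counts
        h, _ = self.lstm(spikes)
        return self.readout(h)                    # (batch, time, n_kin)

# Training loop sketch on recorded (spikes, kinematics) pairs.
decoder = BCIDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

spikes = torch.randn(32, 100, 96)                 # stand-in for recorded spike bins
kinematics = torch.randn(32, 100, 2)              # stand-in for recorded cursor motion

for _ in range(10):                               # a few illustrative epochs
    opt.zero_grad()
    loss = loss_fn(decoder(spikes), kinematics)
    loss.backward()
    opt.step()
```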

Training Baseline BCI LSTM Decoder
Afterwards, they used the synthesizer to generate large amounts of similar (albeit fake) neural data. The specific steps are as follows; a code sketch of the full pipeline appears after Step 3.
Step 1. Train the spike synthesizer on neural data from the first session of Monkey C (S.1, M.C) so that it learns the direct mapping from kinematics to spike sequences and captures the embedded neural attributes. Gaussian noise and the actual kinematics are fed into the spike synthesizer, which consists of a generator and a readout module. The generator (a bidirectional LSTM recurrent neural network) first learns the embedded neural attributes through bidirectional, time-varying, generalizable internal representations (time steps t−1, t, t+1), and from these produces realistic synthetic spike sequences.
Step 2. Adapt the spike synthesizer so that, from real kinematics and Gaussian noise, it generates synthetic spike sequences suited to another session or subject. The generator is first frozen to retain the previously learned embedded neural attributes, or “virtual neurons.” The readout module is then replaced and fine-tuned using a limited amount of real neural data from the other session or subject (Session 2 of Monkey C (S.2, M.C) or Session 1 of Monkey M (S.1, M.M)). The fine-tuned readout module maps the captured neural attributes onto spike sequences appropriate for the new session or subject.
Step 3. Use the same small amount of real neural data (from Step 2) together with a large number of synthetic spike sequences (generated in Step 2) to train the BCI decoder for the other session or subject.
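To make the three steps concrete, here is a minimal, hypothetical PyTorch sketch of a spike synthesizer built from a bidirectional LSTM generator and a readout module, with the generator frozen and the readout replaced for adaptation. It is a reconstruction from the description above, not the authors’ code; the dimensions, the adversarial training details, and all variable names are assumptions.

```python
import torch
import torch.nn as nn

class SpikeSynthesizer(nn.Module):
    """Illustrative synthesizer: Gaussian noise + kinematics -> synthetic spike sequences."""
    def __init__(self, n_kin=2, noise_dim=16, hidden=128, n_channels=96):
        super().__init__()
        # Generator: bidirectional LSTM over (kinematics, noise) at each time step.
        self.generator = nn.LSTM(n_kin + noise_dim, hidden,
                                 batch_first=True, bidirectional=True)
        # Readout module: maps the generator's internal representation to spikes.
        self.readout = nn.Linear(2 * hidden, n_channels)

    def forward(self, kinematics, noise):
        x = torch.cat([kinematics, noise], dim=-1)   # (batch, time, n_kin + noise_dim)
        h, _ = self.generator(x)                     # bidirectional internal representation
        return torch.relu(self.readout(h))           # non-negative synthetic firing rates

# --- Step 1: train on Session 1 of Monkey C (adversarial training loop omitted) ---
synth = SpikeSynthesizer()
# In the GAN setup, a discriminator would be trained to tell real spike sequences
# from synth(kinematics, noise), and the synthesizer trained to fool it.

# --- Step 2: adapt to a new session or subject ---
for p in synth.generator.parameters():
    p.requires_grad = False                          # freeze the learned "virtual neurons"
synth.readout = nn.Linear(synth.readout.in_features, 96)   # replace the readout module
opt = torch.optim.Adam(synth.readout.parameters(), lr=1e-3)
# ...fine-tune the readout on the limited real data from the new session/subject...

# --- Step 3: generate plentiful synthetic spike sequences for decoder training ---
kin = torch.randn(64, 100, 2)                        # stand-in kinematics
noise = torch.randn(64, 100, 16)
synthetic_spikes = synth(kin, noise)                 # used alongside the small real dataset
```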

Overall Framework
Then, the research team combined the synthetic data with a small amount of new real data (either from the same monkey on a different day or from a different monkey) to train the BCI. This allows the system to get up and running much faster than with current standard methods. In fact, the researchers found that GAN-synthesized neural data improved the overall training speed of the BCI by up to 20 times.
“Combining less than a minute of real data with synthetic data achieves the effect of 20 minutes of real data,” Wen said.
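As a hypothetical illustration of that data-mixing step, the snippet below combines a small real dataset with a much larger synthetic one before training the decoder. The dataset sizes and shapes are made up for illustration and are not the study’s actual numbers.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Small real dataset from the new session/subject (e.g. under a minute of recording).
real = TensorDataset(torch.randn(50, 100, 96), torch.randn(50, 100, 2))
# Much larger synthetic dataset produced by the adapted spike synthesizer.
synthetic = TensorDataset(torch.randn(2000, 100, 96), torch.randn(2000, 100, 2))

# Train the BCI decoder on the combined real + synthetic data.
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)
# for spikes, kinematics in loader:
#     ...standard decoder training step, as in the baseline decoder sketch above...
```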

Normalized Position Activity Map, Constructed as a Histogram of Neural Activity as a Function of Position
“This is the first time we have seen artificial intelligence generate the recipe for thought or movement by creating synthetic spike sequences. This research is a key step toward making BCIs more suitable for real-world use.”
Moreover, after being trained on one experimental session, the system can quickly adapt to new sessions or subjects using only a limited amount of additional neural data.
Itti said, “This is a significant innovation: when a person imagines doing different actions, it generates fake spike sequences that look as though they come from that same person, and then uses this data to help with learning for the next person.”
Beyond BCIs, GAN-generated synthetic data could also drive breakthroughs in other data-hungry areas of AI by accelerating training and improving performance.