The field of neuroscience and brain-computer interface (BCI) technology is making rapid strides, with developments on both the hardware and software fronts. Findings have moved from the realm of experimentation to that of industry, where they can benefit end users. A number of firms are at the forefront of developing BCI devices that are efficient, safe, and reliable for those who stand to benefit the most from them. Among these companies is Neuralink. Co-founded in 2016 by Elon Musk, Max Hodak, and Paul Merolla, Neuralink's long-term ambition is a generalized input-output device that can interface with the whole brain. To make that ambition a reality, the company is focusing on the short-term goal of BCI devices that interface with a specific portion of the brain and help the user perform specific tasks, such as navigating a mobile device. Last year, the team demonstrated a monkey implanted with the Neuralink device playing Pong on a computer using only its mind. This year, the team of engineers and neuroscientists presented the device's current progress, its potential applications, and the company's future prospects at the show and tell held on November 30, 2022. Elon Musk, who presided over the event, also mentioned that the company has applied to the FDA for clearance to use the device in humans and expects to hear back from the agency in roughly six months.
For this year’s show and tell, the team demonstrated a monkey moving a cursor on a computer screen to select highlighted keys on a virtual keyboard using only its neural activity. Although the monkey cannot write and is merely trained to select the highlighted key, a person with quadriplegia (tetraplegia) could use the same device to type and navigate a computer purely through brain activity. The implanted 1024-channel N1 device records neural signals through electrodes microfabricated on flexible thin-film arrays called “threads” and transmits that activity wirelessly to software for further analysis and decoding.
R1 Robot And Surgical Engineering
The R1 robot, which works somewhat like a sewing machine, can conduct safe and scalable surgery thanks to capabilities such as accurate grasping and insertion, precise tracking of brain motion, and avoidance of blood vessels. The team also mounted an optical stack on the needle consisting of three parts: an illumination module, a visible-light camera that images the brain region where the thread is to be inserted, and an OCT (optical coherence tomography) laser interferometry system that tracks the precise position of the brain as it moves in real time. Currently, the R1 robot only inserts the threads, leaving the craniectomy – the removal of a small portion of the skull – and the peeling back of the dura to expose the brain to the neurosurgeons. The team is working on automating the craniectomy for the next-generation R1 robot. A live demonstration of the robot injecting threads into a brain proxy was also shown.
The N1 device, like any other electronic gadget, requires energy to record neural activity and send it to the software for decoding and task implementation. Engineers reduced the chip’s power consumption to 32 mW by adding a spike detection module, which transmits spike events rather than the raw neural signal, achieving the desired functionality with far less energy. Spike detection works by applying a band-pass filter with a passband of 500 Hz – 5 kHz to the raw activity and then applying an adaptive threshold derived from a per-channel estimate of the noise floor. The module detects not only a spike’s presence but also its approximate shape by locating three points on the waveform: the left, middle, and right points. This is crucial for distinguishing between groups of neurons adjacent to a single channel. The technique also cut on-chip memory accesses by 30%.
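The band-pass-then-adaptive-threshold pipeline described above can be sketched in a few lines. This is a toy illustration, not Neuralink's firmware: the filter order, the threshold multiplier `k`, the sampling rate, and the median-based noise-floor estimate are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_spikes(raw, fs=20_000, band=(500, 5_000), k=4.0):
    """Toy spike detector: band-pass filter (500 Hz - 5 kHz, per the text),
    then threshold each channel at k times an estimate of its noise floor.
    raw has shape (channels, samples); returns a boolean spike mask."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw, axis=-1)
    # Robust per-channel noise-floor estimate (median absolute deviation,
    # scaled so it matches the standard deviation for Gaussian noise)
    noise = np.median(np.abs(filtered), axis=-1, keepdims=True) / 0.6745
    return filtered < -k * noise  # flag negative-going threshold crossings

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, size=(2, 4000))
raw[0, 2000:2003] -= 40.0  # inject a sharp negative spike on channel 0
mask = detect_spikes(raw)
```

Transmitting only these sparse threshold-crossing events, rather than the full-rate waveform, is what saves bandwidth and power on the implant.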
The N1 device’s battery also features some impressive engineering. The built-in battery is charged via wireless power transfer, and because the implant sits in the brain, there were several difficulties to overcome. The system must operate over a wide charging volume without relying on magnets for alignment. Furthermore, the implant’s exterior surface, which contacts brain tissue, must not rise more than 2 degrees Celsius above body temperature, and charging must tolerate interruptions while still supporting high charge rates. The current battery pack uses an aluminum case, a drive circuit operating at 6.78 MHz, a removable remote coil, higher switching frequencies, and better coil coupling. To speed up charging, the team is working on a third-generation charger with bidirectional near-field communication to cut control latency and improve temperature regulation.
The cursor’s trajectory is predicted from neural activity by passing it through a neural network, called a neural decoder, trained to estimate the cursor’s velocity. The decoder is first calibrated on spike events from the N1 device recorded while the monkey moves the cursor with a joystick; once trained, its velocity predictions drive the cursor directly. Although typing on a virtual keyboard this way is fast, it does not match typing on a conventional keyboard with ten fingers. Inspired by Stanford researchers who built a deep learning model that deciphers a person’s intended handwriting from their neural activity, the team attempted to decode the monkey’s hand trajectory as it traced digits and characters on the screen. A handwriting classifier can then infer which character a trajectory represents, and the method could be extended to support the user’s preferred writing language.
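The calibration step described above — fit a map from spike features to joystick velocity, then reuse it for prediction — can be illustrated with a linear decoder. Neuralink uses a neural network; the ridge-regression stand-in below, along with all shapes and parameter values, is a simplification chosen only to make the train-then-decode flow concrete.

```python
import numpy as np

def fit_velocity_decoder(spike_counts, velocities, ridge=1e-2):
    """Fit a linear map from binned spike counts (T x channels) to cursor
    velocity (T x 2) by ridge regression. A stand-in for the neural-network
    decoder described in the talk; calibration data would come from the
    joystick sessions."""
    X = np.hstack([spike_counts, np.ones((len(spike_counts), 1))])  # bias term
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ velocities)
    return W

def decode_velocity(spike_counts, W):
    """Predict cursor velocity from new spike counts using the fitted map."""
    X = np.hstack([spike_counts, np.ones((len(spike_counts), 1))])
    return X @ W

# Synthetic calibration session: Poisson spike counts and matching velocities
rng = np.random.default_rng(1)
X_cal = rng.poisson(3.0, size=(500, 32)).astype(float)
W_true = rng.normal(size=(32, 2))
v_cal = X_cal @ W_true
W = fit_velocity_decoder(X_cal, v_cal)
v_hat = decode_velocity(X_cal, W)
```

In the real system the decoder's output would be integrated over time to move the cursor, closing the loop with the user watching the screen.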
Because brain activity is dynamic, it varies between sessions and even between days, so the decoder’s performance can fluctuate greatly, rendering the BCI unreliable. To overcome this, data is collected over multiple days so the decoder learns the right patterns within that variability. Another approach is to continuously sample statistics of the neural activity on the implant and use the latest estimates to normalize the data before feeding it to the decoder. Reliability is further challenged by the latency between a spike in the brain and its effect on the cursor’s position on the screen. Phase locking keeps cursor movement precise and responsive: the start of each packet sent from the implant is timed to coincide with the activation of the Bluetooth radio, so each spike enters the prediction pipeline with as little delay as possible.
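The "continuously sample statistics on the implant" idea amounts to keeping a running estimate of each channel's mean and variance and z-scoring incoming features against the latest estimates. A minimal sketch, assuming an exponentially weighted update with an arbitrary illustrative decay rate:

```python
import numpy as np

class RunningNormalizer:
    """Exponentially weighted running mean/variance per channel, used to
    normalize spike-count features before they reach the decoder. The
    decay rate alpha is an illustrative choice, not a Neuralink value."""

    def __init__(self, n_channels, alpha=0.01):
        self.alpha = alpha
        self.mean = np.zeros(n_channels)
        self.var = np.ones(n_channels)

    def update(self, x):
        """Fold one new feature vector into the statistics and return it
        z-scored against the freshly updated estimates."""
        delta = x - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta**2)
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

# Simulate drifting input: channels settle around mean 5.0, std 2.0
rng = np.random.default_rng(2)
norm = RunningNormalizer(4, alpha=0.01)
for _ in range(2000):
    z = norm.update(rng.normal(5.0, 2.0, size=4))
```

Because the statistics track the signal continuously, a slow drift in firing rates between sessions is absorbed by the normalizer instead of degrading the decoder.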
Future Updates In The Next-Generation Device
The Neuralink team is now developing improved versions of the N1 device and the R1 robot to make the BCI system more user-friendly, effective, and durable for humans. One planned improvement is making the device easy to replace when upgrades become available after implantation. The current N1 device must be placed directly on the brain surface after the dura has been removed, and a hard-to-dislodge tissue capsule forms around the implant, making replacement extremely challenging. To solve this, the team is developing a device that is implanted on top of the dura rather than beneath it, which would also make the surgery safer and simpler. This brings its own difficulties, however, because the dura is a dense network of collagen fibers that is hard to inject threads through and hard to image beneath. Needle designs are being iterated to find the best structure for driving threads past the dura and into the brain. The group is also developing a custom optical imaging device that can visualize blood vessels below the surface using a fluorescent dye already used in medical practice. Additionally, the team is improving its brain proxy by creating synthetic material that physically, chemically, and structurally resembles human tissue; various needle designs and surgical procedures will be tested on this proxy before any experimental trials on animals.
The current Neuralink device also has the potential to restore vision in people who have lost sight through eye injury or disease. In addition to recording brain activity, the device can stimulate neurons by injecting current through its channels, making it possible to create mental images without using the eyes. The device’s large electrode count helps here, too: the more electrodes, the finer the resolution of the evoked image. The visual cortex lies deep within the brain, and because the robot can inject threads directly into this region, the device is well suited for visual prostheses. Research in visual neuroscience has shown that a single neuron in the visual cortex responds to a small part of the visual field; our perception is assembled from the tiny receptive fields of all these cells. Directly stimulating one cell evokes a virtual pixel that the person perceives without any input from the eyes. For a useful visual prosthesis, this single-cell stimulation must be scaled up to many cells, each representing a different part of the visual field: a high-resolution camera captures the scene in front of the user, and the image is translated into an electrical stimulation pattern for the visual cortex. The team’s goal for the next-generation device is 16K electrodes per implant; with devices fitted to both the left and right cortices, an image could be rendered as 32K stimulation points, yielding a comparatively high-resolution representation of the scene in front of the person.
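The camera-to-stimulation translation described above can be sketched as a simple block-averaging map from image pixels to an electrode grid. A 128 × 128 grid gives 16,384 sites, matching the ~16K-electrode target mentioned in the talk; the grid layout and the brightness-to-amplitude scaling are purely illustrative assumptions.

```python
import numpy as np

def image_to_stim_pattern(image, grid=(128, 128), max_amp=1.0):
    """Map a grayscale camera frame (values 0-255) onto a per-electrode
    stimulation amplitude by block-averaging pixels into an electrode grid.
    One output value per electrode, scaled to [0, max_amp]."""
    h, w = image.shape
    gh, gw = grid
    # Crop so the image tiles evenly into blocks, then average each block
    img = image[: (h // gh) * gh, : (w // gw) * gw]
    blocks = img.reshape(gh, h // gh, gw, w // gw)
    brightness = blocks.mean(axis=(1, 3))
    return max_amp * brightness / 255.0

bright = image_to_stim_pattern(np.full((256, 256), 255.0))
dark = image_to_stim_pattern(np.zeros((256, 256)))
```

A real prosthesis would replace this linear brightness map with a model of how each electrode's current translates into a perceived phosphene, but the pixel-to-electrode correspondence is the core idea.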
Spinal Cord Application
Motor intentions travel from the brain through the spinal cord to the muscles, so a person with a spinal cord injury loses the ability to transmit those intentions to the targeted muscles for actual movement. The N1 device can deliver the appropriate stimulation to neurons in the spinal cord to activate the corresponding muscle, allowing a person with a spinal cord injury to engage with their surroundings and move about. To this end, the team studied spinal cord activity, motor cortex activity, and joint angles to decode the stimulation patterns that cause muscle contraction. They also explored stimulating neurons of the somatosensory cortex to provide tactile feedback. Threads in the motor cortex record spikes, which are decoded into a stimulation pattern and transmitted wirelessly in real time to the spinal device, triggering the appropriate muscle and producing movement. In a complete closed loop, sensory information in the spinal cord is captured as spikes, decoded into a stimulation pattern, and communicated to the device in the brain, which stimulates the somatosensory cortex to give the person the experience of touch.
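The two directions of the closed loop — motor-cortex spikes driving spinal stimulation, and spinal sensory spikes driving somatosensory-cortex stimulation — can be sketched as one update cycle. The linear mappings, array sizes, and clipping range below are placeholder assumptions; the real mappings would be learned from the recorded cortex activity, spinal activity, and joint angles described in the text.

```python
import numpy as np

def spikes_to_stim(spike_counts, W):
    """Map a vector of spike counts to per-site stimulation amplitudes via
    a fixed linear mapping W, clipped to a safe [0, 1] range. W is a
    placeholder for a mapping learned from recorded data."""
    return np.clip(spike_counts @ W, 0.0, 1.0)

def closed_loop_step(motor_spikes, sensory_spikes, W_out, W_back):
    """One cycle of the closed loop sketched in the text: motor intent
    drives spinal stimulation; sensory spikes drive cortical stimulation."""
    spinal_stim = spikes_to_stim(motor_spikes, W_out)      # brain -> spine
    cortical_stim = spikes_to_stim(sensory_spikes, W_back)  # spine -> brain
    return spinal_stim, cortical_stim

# Illustrative sizes: 64 motor channels, 16 sensory channels, 8 stim sites
rng = np.random.default_rng(3)
W_out = rng.uniform(0.0, 0.1, size=(64, 8))
W_back = rng.uniform(0.0, 0.1, size=(16, 8))
spinal, cortical = closed_loop_step(
    rng.poisson(2, 64), rng.poisson(2, 16), W_out, W_back
)
```

Running this cycle continuously, with each leg of the loop wireless and low-latency, is what would let movement and touch feel immediate rather than delayed.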
Stream the whole event here. What do you think about the current progress and future prospects of Neuralink? Let us know in the comments section.