Self-Taught AI Shows Similarities to How the Brain Works

Over the past decade, many of the most impressive artificial intelligence systems have been trained using vast collections of labeled data. An image might be tagged “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to accurately distinguish a tabby from a tiger. The approach has been both wildly successful and woefully inadequate.

Such “supervised” training requires data painstakingly labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with sparse and sometimes superficial information. Because cows are usually photographed in fields, for example, a neural network may use the presence of grass to identify a photo of a cow.

Alexei Efros, a computer scientist at the University of California, Berkeley, remarked, “We are producing a generation of algorithms who are like undergrads [who] didn’t come to class the whole semester and then the night before the final, they’re cramming.” Despite not actually understanding the topic, they perform well on the test.

Furthermore, for researchers interested in the intersection of animal and artificial intelligence, this “supervised learning” may be limited in what it can reveal about biological brains. Animals, including humans, do not learn from labeled data sets. For the most part, they explore their environments on their own, and in doing so they develop a rich and robust understanding of the world.

Recently, some computational neuroscientists have begun investigating neural networks trained with little or no human-labeled data. These “self-supervised learning” algorithms have proved enormously successful at modeling human language and, more recently, image recognition. Computational models of the mammalian visual and auditory systems built with self-supervised learning have also shown a closer correspondence to brain function than their supervised-learning counterparts. To some neuroscientists, the artificial networks seem to be starting to reveal some of the actual methods our brains use to learn.

Inadequate Supervision

Brain models inspired by artificial neural networks first appeared roughly ten years ago, around the same time that a neural network named AlexNet revolutionized the task of classifying unknown images. Like all neural networks, it was built from layers of artificial neurons, computational units that form connections to one another that can vary in strength, or “weight.” If a neural network misclassifies an image, the learning algorithm updates the weights of the connections between neurons to make that misclassification less likely on the next training round. The algorithm repeats this procedure many times with all of the training images, tweaking weights, until the network’s error rate is acceptably low.
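
The training cycle described above can be sketched in a few lines. This is a deliberately minimal toy, not AlexNet: a single layer of weights is repeatedly nudged to reduce misclassification on a small synthetic labeled dataset, with all names and dimensions invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: 200 points in 2-D, labeled 1 if x + y > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # connection "weights"
b = 0.0
lr = 0.5          # learning rate

def predict(X, w, b):
    # Logistic output: the network's estimated probability of label 1.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Repeat over all training examples, adjusting weights each round
# so that misclassifications become less likely next time.
for epoch in range(100):
    p = predict(X, w, b)
    grad_w = X.T @ (p - y) / len(y)  # how each weight contributed to error
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

error_rate = np.mean((predict(X, w, b) > 0.5) != y)
print(error_rate)  # low once training has converged
```

Real networks stack many such layers and use far larger labeled datasets, but the loop (predict, measure error, adjust weights, repeat) is the same.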
Around the same time, neuroscientists developed the first computational models of the monkey visual system using neural networks like AlexNet and its successors. The union looked promising: when monkeys and artificial neural networks were shown the same images, for example, the activity of the real neurons and the artificial ones showed an intriguing correspondence. Artificial models of hearing and odor detection followed.

However, as the field matured, researchers recognized the limitations of supervised training. For instance, in 2017, the computer scientist Leon Gatys and his colleagues at the University of Tübingen in Germany took an image of a Ford Model T and overlaid a leopard-skin pattern across it, producing a bizarre but easily recognizable image. A leading artificial neural network correctly classified the original image as a Model T, but categorized the modified image as a leopard. It had fixated on the texture and had no understanding of the shape of a car (or a leopard, for that matter).

Self-supervised learning strategies are designed to avoid such problems. In this approach, humans don’t label the data. Rather, “the labels emerge from the data itself,” said Friedemann Zenke, a computational neuroscientist at the Friedrich Miescher Institute for Biomedical Research in Basel, Switzerland. In essence, self-supervised algorithms create gaps in the data and ask the neural network to fill in the blanks. In a so-called large language model, for instance, the training process will show the neural network the first few words of a sentence and ask it to predict the next word. When trained with a massive corpus of text gleaned from the internet, the model appears to learn the language’s syntactic structure, demonstrating impressive linguistic ability, all without external labels or supervision.
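
The sense in which “the labels emerge from the data itself” can be made concrete with a toy next-word-prediction setup: every training pair is built by hiding the word that follows a prefix, so no human annotation is ever needed. (Real large language models do this over tokenized, internet-scale corpora; the sentence here is just an illustration.)

```python
# Raw, unlabeled text.
sentence = "the cat sat on the mat".split()

# Each prefix becomes an input; the "label" the model must predict
# is simply the next word already present in the data.
pairs = [(sentence[:i], sentence[i]) for i in range(1, len(sentence))]

for prefix, target in pairs:
    print(" ".join(prefix), "->", target)
```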

A similar effort is underway in computer vision. In late 2021, Kaiming He and his colleagues revealed their “masked auto-encoder,” which builds on a technique pioneered by Efros’ team in 2016. The self-supervised learning algorithm randomly masks images, obscuring almost three-quarters of each one. The masked auto-encoder turns the unmasked portions into latent representations, compressed mathematical descriptions that contain important information about an object. In the case of an image, the latent representation might be a mathematical description that captures, among other things, the shape of an object in the image. A decoder then converts those representations back into full images.
The self-supervised learning algorithm trains the encoder-decoder combination to turn masked images into their full versions. Any differences between the real images and the reconstructed ones get fed back into the system to help it learn. This process repeats for a set of training images until the system’s error rate is acceptably low. In one example, when a trained masked auto-encoder was shown a previously unseen image of a bus with almost 80% of it obscured, the system successfully reconstructed the structure of the bus.
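The single training step described above can be sketched schematically. This is a toy with made-up dimensions, and plain linear maps stand in for the real encoder and decoder networks: hide roughly three-quarters of an image’s patches, encode only the visible ones into a compact latent vector, decode a full reconstruction, and measure the error that would be fed back to improve the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n_patches, patch_dim, latent_dim = 16, 8, 4
image = rng.normal(size=(n_patches, patch_dim))  # image as 16 patches

# Randomly keep only a quarter of the patches (mask ~75%).
visible = rng.permutation(n_patches)[: n_patches // 4]

encoder = rng.normal(size=(patch_dim, latent_dim))            # untrained weights
decoder = rng.normal(size=(latent_dim, n_patches * patch_dim))

# Encode the visible patches into one compressed latent description
# (here simply the mean of their encodings).
latent = (image[visible] @ encoder).mean(axis=0)

# Decode the latent back into a full-image reconstruction.
reconstruction = (latent @ decoder).reshape(n_patches, patch_dim)

# Reconstruction error on the *hidden* patches is the training signal
# that would be fed back into the system.
hidden = np.setdiff1d(np.arange(n_patches), visible)
loss = np.mean((reconstruction[hidden] - image[hidden]) ** 2)
print(round(float(loss), 3))
```

In the real masked auto-encoder the encoder and decoder are deep transformer networks and the weights are actually updated from this loss; the toy only shows where the learning signal comes from.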

Efros remarked, “This is a very, very outstanding outcome.”

This technique appears to produce latent representations that are far richer in content than those of earlier methods. The system may learn the shape of an object, such as a car or a leopard, not just its textures. The basic tenet of self-supervised learning, according to Efros, is to build up knowledge from the bottom up. No cramming the night before the test.

Brains with Self-Supervision

Some neuroscientists see echoes of how we learn in systems like this. According to the computational neuroscientist Blake Richards of McGill University and Mila, the Quebec Artificial Intelligence Institute, 90% of what the brain does is self-supervised learning. Biological brains are thought to be continually predicting, say, an object’s future location as it moves, or the next word in a sentence, much as a self-supervised learning algorithm attempts to predict the gap in an image or a snippet of text. And brains learn from their mistakes on their own, too; only a small portion of our brain’s feedback comes from an external source effectively saying, “wrong answer.”
Take the visual systems of humans and other primates. These are the best studied of all animal sensory systems, yet neuroscientists have struggled to explain why they include two separate pathways: the ventral visual stream, which processes faces and objects, and the dorsal visual stream, which interprets movement (the “what” and “where” pathways, respectively).

Richards and his colleagues created a self-supervised model that suggests an answer. They built an AI by fusing two different neural networks: the first, called a ResNet architecture, was designed for processing images; the second, known as a recurrent network, could keep track of a sequence of prior inputs and predict what would come next. To train the combined AI, the team took a sequence of, say, 10 frames from a video and let the ResNet analyze each frame individually. The recurrent network then predicted the latent representation of the 11th frame, rather than simply matching the first 10 frames. The self-supervised learning algorithm compared the prediction to the actual value and instructed the neural networks to update their weights accordingly.
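The training setup described above can be sketched with a toy, where the sizes and the simple linear maps are invented stand-ins: a frame encoder (playing the role of the ResNet) maps each of 10 video frames to a latent vector, a minimal recurrent network rolls over those latents, and its final state serves as the prediction of the 11th frame’s latent. The gap between prediction and reality is the self-supervised learning signal.

```python
import numpy as np

rng = np.random.default_rng(0)

frame_dim, latent_dim = 32, 8
frames = rng.normal(size=(11, frame_dim))  # frames 1-10 plus the actual 11th

W_enc = rng.normal(size=(frame_dim, latent_dim)) * 0.1   # "ResNet" stand-in
W_rec = rng.normal(size=(latent_dim, latent_dim)) * 0.1  # recurrent weights

def encode(frame):
    # Per-frame latent representation.
    return np.tanh(frame @ W_enc)

# Roll the recurrent network over the latents of the first 10 frames.
state = np.zeros(latent_dim)
for t in range(10):
    state = np.tanh(encode(frames[t]) + state @ W_rec)

predicted_latent = state            # forecast for frame 11
actual_latent = encode(frames[10])  # what frame 11 actually encodes to

# In training, this prediction error would be propagated back to update
# both the encoder and the recurrent weights.
loss = np.mean((predicted_latent - actual_latent) ** 2)
print(round(float(loss), 3))
```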

Richards’ team found that an AI trained with a single ResNet was good at object recognition, but not at classifying movement. When they split the same ResNet into two pathways (without changing the total number of neurons), the AI developed representations for objects in one pathway and for movement in the other, enabling downstream classification of these features, just as our brains likely do.

To further test the AI, the team used a set of videos that researchers at the Allen Institute for Brain Science in Seattle had previously shown to mice. Like primates, mice have brain regions specialized for still images and for movement. The Allen researchers recorded the activity in the mouse visual cortex as the animals watched the videos.

Here, too, Richards’ team found similarities in how the AI and the living brains responded to the videos. During training, one pathway in the artificial neural network came to resemble the dorsal, movement-focused regions of the mouse brain, while the other pathway grew similar to the ventral, object-detecting regions.

According to Richards, the findings suggest that our visual system includes two specialized pathways because they help predict the visual future; a single pathway isn’t good enough.

Models of the human auditory system tell a similar story. In June, a team led by Jean-Rémi King, a research scientist at Meta AI, trained an AI called Wav2Vec 2.0, which uses a neural network to transform audio into latent representations. The researchers mask some of these representations, which are then fed into another component neural network called a transformer. During training, the transformer predicts the masked information. In the process, the whole AI learns to turn sounds into latent representations; again, no labels are needed. The researchers used about 600 hours of speech data to train the network, “roughly what a youngster would acquire in [the] first two years of experience,” according to King.

Once the system was trained, the researchers played it chunks of audiobooks in English, French and Mandarin. They then compared the AI’s performance against data from 412 people, a mix of native speakers of the three languages, who had their brains scanned in an fMRI machine while listening to the same stretches of audio. Despite the noisy and blurry fMRI images, King said, his neural network and the human brains “not only correlate with one another, but also correlate in a systematic fashion”: activity in the early layers of the AI resembles activity in the primary auditory cortex, while activity in the AI’s deeper layers resembles activity in higher layers of the brain, in this case the prefrontal cortex. “It’s very great data,” Richards remarked. Though not definitive, it is “another persuasive piece of evidence to imply that, in fact, the way we acquire language is in large part by attempting to predict next things that will be said.”
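
The layer-by-layer comparison described above can be illustrated with synthetic data, correlating each model layer’s time course of activity with a simulated brain region’s response to the same audio. Real analyses use fMRI voxels and fitted mappings rather than raw Pearson correlation, and all signals here are fabricated, so this shows only the shape of the analysis, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Simulated brain responses to the same audio.
brain_early = rng.normal(size=n_timepoints)  # "primary auditory cortex"
brain_deep = rng.normal(size=n_timepoints)   # "prefrontal cortex"

# Simulated model layers: by construction, the early layer tracks the
# early brain signal and the deep layer tracks the deep one, plus noise.
layer_early = brain_early + 0.5 * rng.normal(size=n_timepoints)
layer_deep = brain_deep + 0.5 * rng.normal(size=n_timepoints)

def corr(a, b):
    # Pearson correlation between two time courses.
    return float(np.corrcoef(a, b)[0, 1])

# The "systematic fashion": each layer correlates best with "its" region.
print(corr(layer_early, brain_early), corr(layer_early, brain_deep))
print(corr(layer_deep, brain_deep), corr(layer_deep, brain_early))
```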

Not everybody is persuaded. Josh McDermott, a computational neuroscientist at the Massachusetts Institute of Technology, has used both supervised and self-supervised learning to develop models of visual and auditory perception. His group has created what he calls “metamers”: synthesized audio and visual signals that are simply noise to a person, yet appear identical to genuine signals to an artificial neural network. This suggests that even with self-supervised learning, the representations in the deeper layers of the neural network may not match those in human brains. These self-supervised learning techniques “are advances in the sense that you can acquire representations that can enable a lot of recognition behaviors without having all these labels,” according to McDermott. But many of the pathologies of supervised models are still present.
The algorithms themselves also need more work. In Meta AI’s Wav2Vec 2.0, for instance, the AI can only anticipate latent representations for a few tens of milliseconds of sound, less time than it takes to utter a perceptually distinguishable noise, let alone a word. King stated, “There are numerous things that can be done to achieve something like what the brain does.”

Self-supervised learning alone won’t be enough to truly understand how the brain works. For one thing, the brain is full of feedback loops, while current models have few, if any. A logical next step would be to use self-supervised learning to train highly recurrent networks, a challenging procedure, and then compare the activity in those networks to real brain activity. The other essential step would be to match the activity of artificial neurons in self-supervised learning models to the activity of individual biological neurons. King expressed the hope that future single-cell recordings would support [our] findings.

If the observed parallels between brains and self-supervised learning models hold for other sensory tasks, it will be even clearer that whatever magic our brains are capable of requires self-supervised learning in some form. According to King, “if we do discover systematic parallels amongst quite dissimilar systems, it [would] imply that there may not be that many methods to handle information in an intelligent manner.” At the very least, that’s the lovely hypothesis he would want to pursue.
