
Study Shows How Machine-Learning Systems Translate Text from One Language to Another

MIT researchers, in association with the Qatar Computing Research Institute (QCRI), are placing the machine-learning systems called neural networks under the microscope.


In a new study that sheds light on how these machine-learning systems translate text from one language to another, the team developed a technique that identifies individual nodes, or “neurons,” in the networks that capture specific linguistic features.

Neural networks learn to perform computational tasks by processing large sets of training data. In machine translation, a network crunches language data annotated by humans and apparently “learns” linguistic features, such as word meaning, word morphology, and sentence structure. Given new text, the network matches these learned features from one language to another to produce a translation.

During training, however, these networks adjust their internal values and settings in ways that even their creators cannot interpret. For machine translation, that means the creators do not necessarily know which linguistic features the network captures.

In a paper being presented at this week’s Association for the Advancement of Artificial Intelligence conference, the researchers describe a technique that pinpoints which neurons are most active when classifying specific linguistic features. They also developed a toolkit that lets users analyze and manipulate how their networks translate text for various purposes, such as compensating for classification biases in the training data.

In the study, the researchers identified neurons that are used to classify, for instance, past and present tenses, gendered words, singular and plural words, and numbers at the beginning or middle of sentences. They also showed that some of these tasks require only one or two neurons, while others require many.

Our research aims to look inside neural networks for language and see what information they learn. This work is about gaining a more fine-grained understanding of neural networks and having better control of how these models behave.

Yonatan Belinkov, Study Co-Author and Postdoc, Computer Science and Artificial Intelligence Laboratory, MIT.

Joining Belinkov on the paper are James Glass, a senior research scientist, and Anthony Bau, an undergraduate, both of CSAIL; and Hassan Sajjad, Fahim Dalvi, and Nadir Durrani of QCRI.

Putting a microscope on neurons

Neural networks are organized in layers, where each layer consists of many processing nodes, each connected to nodes in the layers above and below. Data are first processed in the lowest layer, which passes an output to the layer above, and so on. Each output carries a different “weight” that determines how much it figures into the next layer’s computation. During training, these weights are continuously readjusted.
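To make that layered computation concrete, here is a minimal sketch of a feed-forward pass in Python with NumPy. The layer sizes, random weights, and tanh activation are illustrative assumptions; the translation models in the study are more complex, but the role of the weights is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [8, 16, 16, 4]   # toy network: input, two hidden layers, output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Propagate an input upward, layer by layer.

    Each layer's output is a weighted combination of the layer below;
    training would repeatedly readjust the entries of `weights`.
    """
    activations = [x]
    for W in weights:
        x = np.tanh(x @ W)          # weighted sum, then nonlinearity
        activations.append(x)
    return activations              # one activation vector per layer

acts = forward(rng.normal(size=8))
print([a.shape for a in acts])      # [(8,), (16,), (16,), (4,)]
```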

Neural networks used for machine translation train on annotated language data. During training, each layer learns different “word embeddings” for a single word. Word embeddings are essentially tables of several hundred numbers combined in a way that corresponds to one word and that word’s function in a sentence. Each number in the embedding is calculated by a single neuron.
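The data structure this describes can be sketched as follows. The sentence, layer count, and embedding size are invented for illustration; in practice each vector would hold the recorded activations of a trained translation model, not random numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
sentence = ["the", "doctor", "arrived"]
num_layers, neurons_per_layer = 2, 500   # "several hundred numbers" per word

# embeddings[layer][word] is that word's embedding at that layer;
# entry j is the activation of neuron j in the layer.
embeddings = {
    layer: {w: rng.normal(size=neurons_per_layer) for w in sentence}
    for layer in range(num_layers)
}

print(embeddings[0]["doctor"].shape)     # (500,) -- one number per neuron
```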

In previous work, the researchers trained a model to inspect the weighted outputs of each layer and determine how the layers classify any given word embedding. They found that lower layers classify relatively simpler linguistic features, such as the structure of a particular word, while higher layers classify more complex features, such as how words combine to form meaning.
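A common way to realize this kind of layer-by-layer inspection is a probing classifier: train a simple linear model on each layer’s embeddings and compare accuracies. The sketch below uses synthetic embeddings and binary labels as stand-ins for real model activations and linguistic annotations; the growing signal per layer is planted artificially.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_words, dim = 2000, 500
labels = rng.integers(0, 2, size=n_words)          # e.g. noun vs. verb

for layer in range(3):
    # Mock embeddings; a real layer would encode the label to varying degrees.
    X = rng.normal(size=(n_words, dim)) + 0.1 * layer * labels[:, None]
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"layer {layer}: probe accuracy = {probe.score(X_te, y_te):.2f}")
```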

In the new study, the researchers applied this method to determine how learned word embeddings perform a linguistic classification. But they also implemented a new technique, called “linguistic correlation analysis,” that trains a model to home in on the individual neurons in each word embedding that were most important in the classification.

The new technique combines the word embeddings captured from different layers, each containing information about the word’s final classification, into a single embedding. As the network classifies a given word, the model learns a weight for every neuron that activated during the classification process. This assigns a weight to each neuron in each word embedding that fired for a specific part of the classification.
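A minimal sketch of this weighting idea, under assumed shapes: concatenate the per-layer embeddings into one long vector, fit a linear classifier on the target property, and treat each neuron’s absolute learned weight as its importance. The planted “tense neuron” at index 42 is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_words, layers, dim = 2000, 3, 100
labels = rng.integers(0, 2, size=n_words)        # e.g. past vs. present

# One embedding per layer, stacked into a single (layers * dim) vector.
X = rng.normal(size=(n_words, layers * dim))
X[:, 42] += 2.0 * labels                         # plant one informative neuron

clf = LogisticRegression(max_iter=1000).fit(X, labels)
importance = np.abs(clf.coef_[0])                # one learned weight per neuron
ranking = np.argsort(importance)[::-1]
print("top neurons:", ranking[:5])               # neuron 42 should rank near the top
```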

The idea is, if this neuron is important, there should be a high weight that’s learned. The neurons with high weights are the ones more important to predicting the certain linguistic property. You can think of the neurons as a lot of knobs you need to turn to get the correct combination of numbers in the embedding. Some knobs are more important than others, so the technique is a way to assign importance to those knobs.

Yonatan Belinkov, Study Co-Author and Postdoc, Computer Science and Artificial Intelligence Laboratory, MIT.

Neuron ablation, model manipulation

Because each neuron is weighted individually, it can be ranked in order of importance. To that end, the researchers designed a toolkit, called NeuroX, that automatically ranks all the neurons of a neural network according to their importance and visualizes them in a web interface.

Users upload a network they have already trained, along with new text. The app displays the text and, beside it, a list of specific neurons, each with an identification number. Clicking on a neuron highlights the text according to which words and phrases the neuron activates for. From there, users can knock out, or “ablate,” individual neurons entirely, or modify the extent of their activation, to control how the network translates.

The ablation feature was used to validate whether the team’s technique accurately identified the correct high-ranking neurons. In the study, the researchers showed that when high-ranking neurons were ablated in a network, its performance in classifying correlated linguistic features dropped significantly. When lower-ranking neurons were ablated, performance suffered, but not as dramatically.
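The ablation check can be sketched along these lines, again with synthetic data: zero out groups of neurons and compare how much accuracy drops for high-ranked versus low-ranked neurons. All shapes and the planted signal are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_words, dim, k = 2000, 100, 10
labels = rng.integers(0, 2, size=n_words)
X = rng.normal(size=(n_words, dim))
X[:, :k] += 1.5 * labels[:, None]                 # make neurons 0..9 informative

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ranking = np.argsort(np.abs(clf.coef_[0]))[::-1]  # most important neurons first

def ablated_accuracy(neurons):
    X_abl = X_te.copy()
    X_abl[:, neurons] = 0.0                       # "kill" these neurons
    return clf.score(X_abl, y_te)

print("full accuracy:   ", clf.score(X_te, y_te))
print("ablate top-k:    ", ablated_accuracy(ranking[:k]))   # large drop expected
print("ablate bottom-k: ", ablated_accuracy(ranking[-k:]))  # small drop expected
```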

“After you get all these rankings, you want to see what happens when you kill these neurons and see how badly it affects performance,” stated Belinkov. “That’s an important result proving that the neurons we find are, in fact, important to the classification process.”

One useful application of the technique is to help limit biases in language data. Machine-translation models such as Google Translate may train on data containing gender bias, which can be problematic for languages with gendered words. Certain professions, for instance, may more often be referred to as male, and others as female. When a network translates new text, it may produce only the learned gender for those words. In many online English-to-Spanish translations, for example, “doctor” translates into its masculine version, while “nurse” translates into its feminine version.

But we find we can trace individual neurons in charge of linguistic properties like gender. If you’re able to trace them, maybe you can intervene somehow and influence the translation to translate these words more to the opposite gender … to remove or mitigate the bias.

Yonatan Belinkov, Study Co-Author and Postdoc, Computer Science and Artificial Intelligence Laboratory, MIT.

In initial experiments, the researchers modified neurons in a network to change translated text from past to present tense with an accuracy of 67%. They ablated neurons to switch the gender of words with an accuracy of 21%. “It’s still a work in progress,” stated Belinkov, adding that the next step is to refine the methodology to achieve more precise ablation and manipulation.
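Conceptually, the manipulation amounts to overriding a known neuron’s activation before the next decoding step consumes the hidden state. The neuron index and clamp values below are hypothetical stand-ins, not taken from the authors’ system.

```python
import numpy as np

TENSE_NEURON = 42          # assumed index of a neuron found to encode tense

def manipulate(hidden_state: np.ndarray, value: float) -> np.ndarray:
    """Clamp one neuron's activation before the state is passed onward."""
    h = hidden_state.copy()
    h[TENSE_NEURON] = value            # push the property one way or the other
    return h

h = np.random.default_rng(5).normal(size=500)
h_present = manipulate(h, value=+1.0)  # e.g. encourage present tense
h_past = manipulate(h, value=-1.0)     # e.g. encourage past tense
```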
