
Convolutional Neural Networks Used in Detection Systems are Affected by Visual Illusions

A study on visual illusions in artificial neural networks, with contributions from the University of Valencia, reveals that artificial perception does not escape the biases and subjectivities of the human brain.

Image Credit: Asociación RUVID.

One of the main conclusions of the study, published recently in the journal Vision Research, is that machines can misperceive reality in much the same way people do.

At the Image Processing Laboratory (IPL) of the University of Valencia, scientists have demonstrated that convolutional neural networks (CNNs), a type of artificial neural network commonly used in detection systems, are also affected by visual illusions, much like the human brain.

This was achieved in collaboration with the Department of Information and Communication Technologies (DTIC) of the Pompeu Fabra University (UPF).

Neurons in a convolutional neural network are arranged in receptive fields, much like the neurons in the visual cortex of a biological brain. CNNs are now used in a wide range of autonomous systems, such as face detection and recognition systems or self-driving vehicles.

The study evaluates the phenomenon of visual illusions in convolutional networks and compares it with the effect of those illusions on human vision.

The researchers trained CNNs on simple tasks such as removing noise or blur and found that these networks are also susceptible to a biased perception of reality, induced by visual illusions of brightness and color.
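As a loose illustration (not the authors' experiment, whose networks were trained on denoising and deblurring), even a single hand-built linear-plus-non-linear stage with a center-surround receptive field responds differently to two physically identical gray patches depending on their surround, echoing the simultaneous-brightness-contrast illusion:

```python
import numpy as np

def center_surround_response(img, r, c):
    # Linear stage: a crude "center minus surround" filter at pixel (r, c),
    # a stand-in for one receptive field in a CNN layer.
    center = img[r, c]
    surround = img[r - 1:r + 2, c - 1:c + 2].sum() - center
    linear = center - surround / 8.0
    # Non-linear stage: saturation (tanh), as in linear + non-linear modules.
    return np.tanh(linear)

# Two identical mid-gray patches (value 0.5) on different surrounds.
dark_bg = np.zeros((5, 5))
dark_bg[2, 2] = 0.5
light_bg = np.ones((5, 5))
light_bg[2, 2] = 0.5

resp_dark = center_surround_response(dark_bg, 2, 2)
resp_light = center_surround_response(light_bg, 2, 2)

# Same physical gray, different responses: the patch on the dark ground
# "looks" brighter to the filter, as it does to a human observer.
print(resp_dark, resp_light)
```

The filter weights here are chosen by hand; in the study the analogous receptive fields emerge from training, which is precisely why the resulting illusions need not match human ones.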

Moreover, the article adds, “some illusions of networks may be inconsistent with the perception of humans.” This means that the visual illusions that occur in CNNs do not necessarily coincide with biological illusory perceptions; rather, such artificial networks may exhibit different illusions that are foreign to the human brain.

This is one of the factors that leads us to believe that it is not possible to establish analogies between the simple concatenation of artificial neural networks and the much more complex human brain.

Jesús Malo, Professor of Optics and Vision Sciences and Researcher, Image Processing Laboratory, University of Valencia

The Researchers Propose a Paradigm Shift

In this regard, the researchers have recently published another article in Scientific Reports that details the limitations of and differences between the two systems, results that lead the authors to caution against using CNNs to study human vision.

CNNs are based on the behavior of biological neurons, in particular on their basic structure formed by the concatenation of modules made up of a linear operation (sums and products) followed by a non-linear one (saturation), but this conventional formulation is too simple. In addition to the intrinsic limitations of these artificial networks for modeling vision, the non-linear behavior of flexible architectures can be very different from that of the biological visual system.

Jesús Malo, Professor of Optics and Vision Sciences and Researcher, Image Processing Laboratory, University of Valencia

Malo is also the co-signer of the articles presented by the University of Valencia.
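The conventional formulation Malo describes, a concatenation of modules that each apply a linear operation followed by a saturating non-linearity, can be sketched minimally as follows (an illustrative toy with arbitrary random weights, not a model from the papers):

```python
import numpy as np

def module(x, W, b):
    # One canonical building block: a linear stage (sums and products,
    # here a matrix-vector product) followed by a saturating
    # non-linearity (tanh).
    return np.tanh(W @ x + b)

# A "deep" network is simply the concatenation of such modules; in this
# conventional formulation the saturation is the only non-linearity.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
h = module(x, rng.standard_normal((4, 4)), np.zeros(4))
y = module(h, rng.standard_normal((4, 4)), np.zeros(4))
```

The critique in the Scientific Reports article is aimed at exactly this pattern: stacking many such simple stages differs from architectures whose modules are intrinsically non-linear, as bio-inspired alternatives propose.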

The text argues that artificial neural networks with intrinsically non-linear bio-inspired modules, in contrast to the typical, much deeper concatenations of linear-plus-non-linear modules, not only mimic basic human perception more faithfully but can also deliver higher performance in general-purpose applications.

Our results suggest a paradigm shift for both vision science and artificial intelligence.

Jesús Malo, Professor of Optics and Vision Sciences and Researcher, Image Processing Laboratory, University of Valencia

Journal Reference

Gomez-Villa, A., et al. (2020) Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications. Vision Research. doi.org/10.1016/j.visres.2020.07.010.

Source: https://ruvid.org/
