Over the last few decades, a great deal of research has been devoted to computers’ ability to recognize details in photos. Recognition means the ability to differentiate between pictures of a dog and a cat, for example, or to identify all the objects in a picture. In recent years, computers’ object-recognition accuracy has climbed above 90%. In image classification (telling a dog from a cat), these advanced algorithms surpass human performance. These advances in computer image recognition have had significant implications in many fields, such as video surveillance, autonomous driving, and healthcare. The driving force behind them is artificial intelligence (AI), in the form of deep artificial neural networks.

So, what are artificial neural networks, and what is a deep network?

Artificial neural networks are computing systems loosely modeled on the biological neural networks in animal brains. These systems “learn” to perform different tasks by examining samples; as the samples are viewed, the strengths of the connections between the computational “neurons” are updated. After the system has been trained on photos whose objects are fully labeled, it can be shown pictures it has never “seen” before and will estimate what each photo contains. A deep artificial neural network is simply a network with many neurons arranged in many layers.
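To make this concrete, here is a minimal sketch in Python of that learning process: a tiny two-layer network adjusts the connection strengths (weights) between its “neurons” while viewing labeled samples, then guesses labels for samples it never saw during training. The data is synthetic and the network is far smaller than any real image-recognition system; this only illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 toy samples with 4 features each, labeled 1 if the feature sum is positive.
X = rng.normal(size=(100, 4))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Connection strengths between the layers, initialized at random.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # how strongly each sample viewing nudges the weights

for step in range(2000):
    # Forward pass: values flow through the layers.
    hidden = sigmoid(X @ W1)
    out = sigmoid(hidden @ W2)

    # Backward pass: nudge each connection strength to reduce the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out / len(X)
    W1 -= lr * X.T @ grad_hidden / len(X)

# After training, the network guesses labels for samples it never "saw".
X_new = rng.normal(size=(5, 4))
print(sigmoid(sigmoid(X_new @ W1) @ W2).round(2))
```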

A digital photo consists of three tables of numbers, whose size is determined by the photo’s resolution. One table represents red, the second green, and the third blue (RGB). Each number describes the intensity of that color at a given point, and the combination of the three tables produces the full-color photo. These tables can serve as the input of a neural network: the first layer receives the list of numbers from the tables, and the network passes the values through all of its layers, mathematically transforming them along the way. At the end of the process comes the output: a list of numbers between 0 and 1 describing the estimated probability that each object appears in the picture.
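The following sketch shows those shapes in Python. The “photo” here is random noise standing in for a real image, and the “network” is a single untrained layer mapping the pixel values to three hypothetical object classes; it demonstrates only how three tables of numbers become a list of probabilities, not how a trained model behaves.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 32x32 photo: three tables of intensities, one per color channel (R, G, B).
photo = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
red, green, blue = photo[:, :, 0], photo[:, :, 1], photo[:, :, 2]

# The first layer's input is simply the list of all these numbers.
x = photo.reshape(-1).astype(np.float64) / 255.0  # 32 * 32 * 3 = 3072 values

# One mathematical manipulation per layer; here a single layer mapping
# the 3072 inputs to scores for 3 hypothetical object classes.
W = rng.normal(scale=0.01, size=(3072, 3))
scores = x @ W

# Softmax turns the scores into numbers between 0 and 1 that sum to 1:
# the estimated chance that each object appears in the picture.
probs = np.exp(scores) / np.exp(scores).sum()
print(probs)
```

A real system differs only in scale and training: many more layers, weights tuned on labeled photos rather than drawn at random, and as many outputs as there are object categories.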

Analyzing mammography images is a very challenging task, since the cancer is often hidden or camouflaged within dense tissue. This difficulty has intensified efforts to develop AI that can make a better diagnosis. An article in the journal Nature reported the development of an AI system that surpasses even expert radiologists in accurately interpreting mammograms. The researchers trained the system on about 30,000 labeled images, then compared its judgments with the interpretations of six expert radiologists on 500 randomly selected cancer cases.

After training, the system labeled the presence or absence of cancer in the images better than the expert doctors. The researchers also tested the system’s participation in double reading, a practice in which one radiologist examines the mammogram and a second radiologist reviews the findings. They estimate the system could reduce the second radiologist’s workload by 88%.

A system that can use AI to detect and identify pathological conditions, including details humans cannot see, will become a significant medical diagnostic tool. But we judge a mistake made by a computer very differently from one made by a human. An autonomous vehicle that mistakenly kills a person will almost certainly make headlines, while the thousands of road casualties caused every year by human error do not. The benefits computers can offer need further examination before we place our health in their hands, but it seems we are getting closer to that every day.
