NEURAL NETWORKS RESEARCH

Authors:
City: Moscow
University:
Date: May 20, 2020

On February 13, 2017, the IDF (Industry Development Fund) under the Ministry of Industry and Trade of the Russian Federation launched the IPI 4.0 discussion platform, where the prospects for the 4th industrial revolution in Russia will be discussed. The aim of this work is to study the structure and operating principle of a neural network. Neural networks are one of the technologies of the 4th industrial revolution. Right now, neural networks are learning to recognize pneumonia in X-rays of patients with coronavirus in order to make the work of doctors more efficient.

What is a neural network?

A neural network is a sequence of neurons interconnected by synapses. The structure of the neural network came into the world of programming directly from biology. Thanks to this structure, a machine gains the ability to analyze and even remember various kinds of information. A neural network is capable not only of analyzing incoming information but also of reproducing it from its memory. In other words, a neural network is a machine interpretation of the human brain, in which millions of neurons transmit information in the form of electrical impulses.

What are neural networks for?

Neural networks are used to solve complex problems that require analytical computation of the kind the human brain performs. The most common applications of neural networks are:

Classification is the distribution of data by parameters. For example, given a set of people, it is necessary to decide which of them to grant a loan and which not. A neural network can do this work by analyzing information such as age, solvency, and credit history.

Prediction is the ability to forecast the next step, for example the rise or fall of a stock based on the situation in the stock market.

Recognition is currently the widest application of neural networks. It is used by Google when you search for a photo, and in phone cameras, which detect the position of your face and highlight it, and much more.
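The loan-classification example above can be sketched with a single artificial neuron that weighs the applicant's features. The feature names, weights, and threshold below are purely illustrative assumptions, not values from the original:

```python
def classify_applicant(age, solvency, credit_history):
    """Toy single-neuron loan classifier.
    Inputs are assumed to be pre-normalized to [0, 1];
    the weights and the 0.5 threshold are illustrative only."""
    weights = {"age": 0.2, "solvency": 0.5, "credit_history": 0.3}
    score = (weights["age"] * age
             + weights["solvency"] * solvency
             + weights["credit_history"] * credit_history)
    return "grant loan" if score >= 0.5 else "deny loan"

print(classify_applicant(0.4, 0.9, 0.8))  # strong applicant -> grant loan
print(classify_applicant(0.4, 0.1, 0.2))  # weak applicant   -> deny loan
```

A real credit-scoring network would learn these weights from historical data rather than have them set by hand.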

Now, to understand how neural networks work, let's take a look at their components and parameters.




A neuron is a computing unit that receives information, performs simple calculations on it, and passes it on. Neurons are divided into three main types: input (blue), hidden (red), and output (green). When a neural network consists of a large number of neurons, the term layer appears: an input layer that receives information, n hidden layers (usually no more than 3) that process it, and an output layer that outputs the result. Each neuron has 2 main parameters: input data and output data. For an input neuron, input = output. For the rest, the input field contains the total information from all neurons of the previous layer; it is then normalized using the activation function (for now, just think of it as f(x)) and placed in the output field.
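A minimal sketch of one such neuron, assuming a sigmoid as the activation f(x) (the text does not name a specific function, so this choice is an assumption):

```python
import math

def activation(x):
    # Sigmoid: a common choice for f(x); squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights):
    """Hidden/output neuron: the input field is the weighted sum of the
    previous layer's outputs; the output field is activation(input field)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return activation(total)

# Three neurons from the previous layer feed this one.
print(neuron_output([1.0, 0.5, 0.2], [0.4, 0.3, 0.9]))  # ~0.675
```

For an input neuron there is no such computation: its output simply equals its input.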



It is important to remember that neurons operate with numbers in the range [0,1] or [-1,1]. But what, you may ask, about numbers that fall outside this range? At this stage, the simplest answer is to divide 1 by the number. This process is called normalization, and it is used very often in neural networks.
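The simple normalization rule described above can be sketched as follows; this is only the text's introductory trick, and in practice the activation function (e.g. the sigmoid) usually does this job:

```python
def normalize(x):
    """Map a positive value outside [0, 1] into the range by taking 1/x,
    as described in the text; values already in [0, 1] pass through.
    This is a didactic simplification, not a production technique."""
    return x if 0.0 <= x <= 1.0 else 1.0 / x

print(normalize(0.3))   # already in range -> 0.3
print(normalize(40.0))  # out of range     -> 0.025
```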

Synapse




A synapse is a connection between two neurons. A synapse has one parameter: weight. Thanks to the weight, the input information changes as it is transmitted from one neuron to another. Say 3 neurons transmit information to the next one; then we have 3 weights, one for each of these neurons, and the neuron with the greater weight will dominate in the next neuron (as in color mixing).

In fact, the set of weights of a neural network, its weight matrix, is a kind of brain of the entire system: through these weights the input information is processed and transformed into a result.

Opportunities and prospects of neural networks.

Only a couple of years ago, some of the neural-network news that has appeared in recent months could safely have been filed on the shelf of science fiction. But now, in 2020, it is no longer fiction.

Researchers from New Jersey have developed a neural network that can distinguish babies' cries from one another and classify them. A test with more than a hundred babies showed that the neural network in most cases truly understands what the baby wants: to eat, to sleep, a diaper change, attention, or that it feels pain or other discomfort.

Employees of the Samsung AI Center-Moscow and specialists from SKOLKOVO have created a system capable of producing an animation from only a few (1 to 8) images of a person (photographs or portraits). As a result, we can look at quite realistically moving faces of Albert Einstein, Marilyn Monroe, Fyodor Dostoevsky, and many others.




Information technology allows data collection to be centralized. This is, incidentally, a huge amount of organizational and technical work, the work of engineers and managers, that no artificial intelligence will replace even in the long term. On these data we then build predictive models that make decisions and provide predictions and recommendations.

The increase in computing power and the decrease in its cost gradually mean that computer memory can hold an ever more adequate model of an ever larger piece of reality. For example, in an industrial information system we can know everything that is happening to every part moving along the pipeline. In a sense, this is virtual reality. If such models are adequate for all production units, production can be almost completely robotized. This really brings us to the point where many processes could take place almost without people.

