
Cybernetics and transport processes automation

      Tutorial

      Alexander Korpukov

      Dmitry Abramov

      Vadim Shmal

      Pavel Minakov

      © Alexander Korpukov, 2022

      © Dmitry Abramov, 2022

      © Vadim Shmal, 2022

      © Pavel Minakov, 2022

      ISBN 978-5-0059-3941-8

      Created with Ridero smart publishing system

      About cybernetics

      Cybernetics is the science of communication and control. It also explores the self-perception of people and social groups, that is, how human activities and communication affect collective behavior. The social context of cybernetics is vast and growing. Cybernetics is a dynamic and diverse field of research, and new trends and scientific discoveries will continue to shape it in the coming years.

      Cybernetics is a portmanteau word that combines «cybernetic» with «biology». The American mathematician John von Neumann published «Automata», an article on cybernetics in which he outlined the fundamental paradigm of the theory: there are situations that are controlled by a central computer. Von Neumann applied the term «automaton» to any device or system that «can be analyzed like automata».

      Cybernetics takes a holistic approach and works with communication at an elementary level. Early cybernetics also explored how language affects the way people interact. How society and individuals perceive and interact with information technology is a topic of great interest. A special issue of «Cybernetics» examines the meaning and development of the word «cybernetics»; these reviews shed light on this rather little-known branch of science. Despite these theoretical advances and new developments, the area is still poorly understood. Only 10% to 30% of researchers working in the field publish more than three articles a year, and a 2006 study found that new research proposals struggle to attract the attention of leading journals.

      The applied part of cybernetics deals with the control and movement of systems and with how to regulate or influence their behavior. Along with systems theory, statistics, and operations research, cybernetics is one of the main disciplines of science and technology, and it was the first scientific discipline to deal with controlling and influencing the behavior of a system. The main goal of the broad field of cybernetics is to understand and define human intelligence. From the cybernetic point of view, understanding how the human brain and its intellectual capacity are built and maintained is a complex, multidimensional problem.

      Cybernetics has been defined as the study of interactions between people and things, of the interaction between people and their environment, of systems in general, and of the systematization of actions. The importance of understanding these interrelationships is what made cybernetics one of the most widespread sciences of the 20th century. The scientific study of any human phenomenon – action, planning, protection, communication, and so on – was drawn into the disciplinary study of cybernetics.

      Cybernetics has been defined in different ways, by different people, from a wide variety of disciplines. It is a broad concept that encompasses many areas. At one level, it has to do with the nature of all life: the transfer and control of information within and between biological systems. At another level, it concerns the control of processes at the atomic and molecular scales and the network connections between them.

      Research on automation shows that a key milestone for machine intelligence is matching or exceeding the human ability to control and manipulate data. The fundamental role of a computer (or smart machine) is not to perform calculations but to manage the information it processes. The information network is the basis of intelligence, and the primary focus of AI is to develop systems that can monitor that network and dynamically change its connections to improve performance in response to changing circumstances.
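      As a rough illustration of this last point (not an example from the book itself), the following sketch shows a controller that monitors measured delays on the links of a small network and keeps adjusting the link weights it uses for routing, so its decisions adapt to changing conditions. All names, numbers and functions are illustrative assumptions.

# Minimal sketch (assumed example): a controller that monitors link delays
# in a small network and adapts the weights used for routing decisions.

import random

# Hypothetical network: link -> current weight (estimated delay)
weights = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 3.0}

def observe_delay(link):
    """Stand-in for a real measurement: returns a noisy observed delay."""
    true_delay = {("A", "B"): 2.0, ("B", "C"): 2.5, ("A", "C"): 3.0}[link]
    return true_delay + random.uniform(-0.2, 0.2)

def update(weights, link, observed, rate=0.3):
    """Move the stored weight toward the observed delay (feedback step)."""
    weights[link] += rate * (observed - weights[link])

def best_route(weights):
    """Choose between the direct link A->C and the two-hop route A->B->C."""
    direct = weights[("A", "C")]
    two_hop = weights[("A", "B")] + weights[("B", "C")]
    return "A-C" if direct <= two_hop else "A-B-C"

for step in range(50):
    for link in weights:
        update(weights, link, observe_delay(link))

print(best_route(weights))  # the choice adapts as the weight estimates change

      A real transport or data network would of course use a richer model, but the feedback loop (observe, update the estimate, re-decide) is the essential cybernetic ingredient.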

      The «correlation versus causality» discussion in cybernetics means that we need to interpret data without succumbing to Cartesian dualism. In terms of neoclassical economics, the main driving forces of business are the subjective preferences of people, shaped by incentives. The emergent point of view treats the world as a system in which different levels of causal structure appear and disappear over time; Bostrom uses this model to examine the nature of intelligence.

      Robots and other artificial intelligence systems must evolve by following a strategy that makes the system as responsive to its environment as possible. They must constantly adapt and improve within the rules given to them; this strategy is adopted because a human programmer cannot foresee all future events. The rule-based nature of AI is a key ingredient in its evolution and, in a sense, its goal (although this goal is often overlooked). The ability to learn from experience (learning by doing) is fundamental to intelligent behavior.

      Human-led development of AI will not be about constructing a high-performance «superintelligence», but about strengthening and expanding the system along the fundamental cybernetic principles that we expect from people: learning, adaptation and repetition. A certain «learning to learn» (programmability, emergent behavior) is the foundation of cybernetics.

      Once an AI is created, the system must evolve like any other living system, learning to adapt to its environment as it develops through selection, much like a Darwinian process. The emulation (evaluation) process is critical to what happens in AI. We can simulate an AI system by simulating a problem; we did this by simulating a chess program. However, the result is limited: it is only able to reproduce simple chess-related activities. This is possible because we have limited the number of things the system can do, and we have only simulated the output of the program.
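      A crude way to see this limitation (an illustrative sketch, not the authors' actual experiment) is to emulate a program purely by recording its observed input-output pairs: the emulator can replay answers it has already seen, but it has nothing to say about a position it has never observed. The positions and moves below are hypothetical.

# Illustrative sketch: emulating only the *output* of a system.
# The emulator memorizes observed (position, move) pairs; it cannot
# generalize to positions it has never seen.

observed = {
    "start": "e2e4",              # hypothetical recorded outputs
    "after e2e4 e7e5": "g1f3",    # of the original chess program
}

def emulated_move(position):
    """Replay a recorded output, or admit ignorance."""
    return observed.get(position, None)

print(emulated_move("start"))             # "e2e4" - reproduced from the record
print(emulated_move("sicilian defence"))  # None - the emulation has no answer

      Such an emulation reproduces outputs without reproducing the process that generated them, which is exactly why its behavior cannot extend beyond what was recorded.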

      It is impossible to create a robot unless we first understand the basic process by which the system learns, building it up through trial and error. To learn, a system must understand what it is doing and have some ability to reverse the processes it is learning. The process of developing an AI system should be one of copying a simpler system that has its own rules.

      Since we do not design «upgrades» to our artificial intelligence systems, they evolve by copying some simpler system. An adaptive system does not learn by repeating a fixed sequence of events; rather, it needs to learn different patterns, behaviors and habits. This imitation process is based on a stimulus-response function.

      The principle of adaptive learning (or learning by doing) is a good example of imitation in action. It is the process by which any machine, computer or intelligent agent learns how it should behave from its own experience. Learning by imitation is similar, but it relies on a person (or group of people) imitating another person or group in order to learn something new. The imitated person or group has its own rules of operation (rules of imitation) that determine what kinds of reactions or behaviors are learned.
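      To make the stimulus-response idea concrete, here is a minimal sketch (an assumed example, not code from the book) of a table-driven agent that strengthens responses «by doing», from its own rewarded trials, and «by imitation», by copying stimulus-response pairs observed in another agent. The stimuli, responses and learning rates are all illustrative.

# Minimal sketch (assumed example): a stimulus-response table adjusted
# "by doing" (from the agent's own rewarded trials) and "by imitation"
# (by copying stimulus-response pairs observed in another agent).

import random

responses = ["stop", "go"]
# preference[stimulus][response] -> learned score
preference = {"red": {r: 0.0 for r in responses},
              "green": {r: 0.0 for r in responses}}

def choose(stimulus):
    """Pick the currently preferred response (ties broken at random)."""
    scores = preference[stimulus]
    best = max(scores.values())
    return random.choice([r for r, s in scores.items() if s == best])

def learn_by_doing(stimulus, response, reward, rate=0.5):
    """Strengthen or weaken a response from the agent's own experience."""
    preference[stimulus][response] += rate * (reward - preference[stimulus][response])

def learn_by_imitation(demonstrations, bonus=0.5):
    """Copy stimulus-response pairs observed in another agent."""
    for stimulus, response in demonstrations:
        preference[stimulus][response] += bonus

learn_by_doing("red", "go", reward=-1.0)   # own trial: going on red is punished
learn_by_imitation([("green", "go"), ("red", "stop")])
print(choose("red"), choose("green"))      # typically: stop go

      The point is not the particular numbers but the structure: the same stimulus-response table is shaped both by the agent's own experience and by what it observes in others.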

      Adaptive emulators (learning by emulation) play a crucial role in the development of intelligence. This is the most important mechanism for learning and developing knowledge. According to Bostrom, they will also play a crucial role in the evolution of intelligent systems.

      An emulation cannot learn if the observer does not, and the observer must be able to learn. This is called the observer loophole, and it is the simplest explanation of the so-called social intelligence problem. In practice, the observer loophole makes emulations look like real intelligent agents, but they retain all their inherent limitations.

      Emulation also fails if the emulated system has problems that the observer is not aware of. If the observer cannot tell that the emulated system has problems, nothing can be learned from those problems.

      This brings us to the final problem with emulation: learning by emulation is only one mechanism by which intelligent systems can evolve. A true adaptive agent is intelligent because it is designed to evolve with the characteristics of an evolving system.

      Emulation is useful for teaching how to build systems that resemble intelligent systems. However, it cannot learn what an intelligent system is.
