History of AI. What is artificial intelligence? History of development and prospects

Intelligent information systems in knowledge management

Introduction

The main purpose of information systems in economics is to present the necessary information to decision makers in a timely manner, so that they can make sound and effective decisions when managing processes, resources, financial transactions, personnel or the organization as a whole. However, as information technology, operations research and modeling technologies developed, and as the number of consumers of information and analytical support grew, the need became increasingly evident for systems that do not merely present information but also perform preliminary analysis of it, give advice and recommendations, forecast the development of situations and select the most promising alternatives; in other words, systems that support the decisions of decision makers by taking over a significant part of the routine operations as well as the functions of preliminary analysis and assessment.

A decision support system (DSS) links the intellectual resources of a manager with the capabilities of a computer in order to improve the quality of decisions. Such systems are intended for managers who make management decisions in semi-structured and loosely defined tasks.

Thus, the further development of DSS led to the creation of intelligent information decision support systems.

Intelligent information technology (IIT) is information technology that helps a person speed up the analysis of the political, economic, social and technical situation, as well as the synthesis of management decisions.

Use of IITs in real practice implies taking into account the specifics of the problem area, which can be characterized by the following set of features:

· quality and efficiency of decision-making;

· unclear goals and institutional boundaries;

· multiplicity of subjects involved in solving the problem;

· chaotic, fluctuating and quantized behavior of the environment;

· multiplicity of factors influencing each other;

· weak formalizability, uniqueness, non-stereotypic situations;

· latency, secrecy, obscurity of information;

· deviance in the implementation of plans, the significance of small actions;

· paradoxical logic of decisions, etc.

IITs emerge when information systems and information technologies are created to improve the efficiency of knowledge management and decision-making under conditions where problem situations arise. In such cases, any life or business situation is described in the form of a cognitive model (a cognitive scheme, archetype, frame, etc.), which then serves as the basis for constructing and conducting modeling, including computer modeling.

I. History of the development of Intelligent Information Systems

The history of Intelligent Information Systems (IIS) begins in the mid-20th century, with the development of Artificial Intelligence as a new scientific field and the emergence of the term "Artificial Intelligence" itself.

The prerequisites for the development of artificial intelligence in the USSR and Russia appeared as early as the 19th century, when Collegiate Counsellor Semyon Nikolaevich Korsakov (1787-1853) set himself the task of enhancing the capabilities of the mind through the development of scientific methods and devices, an idea that resonates with the modern concept of artificial intelligence as an amplifier of natural intelligence. In 1832, S. N. Korsakov published a description of five mechanical devices he had invented, the so-called "intelligent machines," for the partial mechanization of mental activity in search, comparison and classification problems. In the design of his machines Korsakov, for the first time in the history of computer science, used punched cards, which played for him a role akin to knowledge bases, and the machines themselves were essentially predecessors of expert systems. The "intelligent machines" made it possible to find solutions satisfying given conditions, for example, to determine the most appropriate medications based on the symptoms of a disease observed in a patient.

In the USSR, work in the field of artificial intelligence began in the 1960s. A number of pioneering studies led by V. Pushkin and D. A. Pospelov were carried out at Moscow University and the Academy of Sciences. In 1964, the Leningrad logician S. Maslov published "An inverse method of establishing deducibility in classical predicate calculus," which first proposed a method for the automatic search for proofs of theorems in the predicate calculus. In 1966, V. F. Turchin developed the recursive function language Refal. Until the 1970s, all AI research in the USSR was carried out within the framework of cybernetics. According to D. A. Pospelov, the sciences of "computer science" and "cybernetics" were mixed together at that time owing to a number of academic disputes. Only in the late 1970s did people in the USSR begin to speak of the scientific direction "artificial intelligence" as a branch of computer science. At the same time computer science itself was born, subordinating its ancestor, cybernetics. In the late 1970s an explanatory dictionary on artificial intelligence, a three-volume reference book on artificial intelligence and an encyclopedic dictionary on computer science were created, in which the sections "Cybernetics" and "Artificial Intelligence" are included, along with other sections, in computer science.

The history of IIT begins in the mid-1970s and is associated with the joint practical application of intelligent information systems, artificial intelligence systems, decision support systems and information systems. It is also connected with the development of three scientific directions (computer philosophy, computer psychology and advanced computer science) and is complemented by progress in the creation of:

1. situation centers

2. information and analytical systems

3. tools for evolutionary computation and genetic algorithms

4. systems to support human-computer communication in natural language

5. cognitive modeling

6. systems for automatic thematic categorization of documents

7. strategic planning systems

8. tools for technical and fundamental analysis of financial markets

9. quality management systems

10. intellectual property management systems, etc.

Artificial intelligence as a science was founded by three generations of researchers.

Table 1.1 presents key events in the history of AI and knowledge engineering, from the first work of W. McCulloch and W. Pitts in 1943 to modern trends combining expert systems, fuzzy logic and neural computing in knowledge-based systems capable of computing with words.

Table 1.1.

A short list of major events in the history of AI and knowledge engineering

Birth of AI (1943-1956): W. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943); A. Turing, "Computing Machinery and Intelligence" (1950); C. Shannon, "Programming a Computer for Playing Chess" (1950).
Rise of AI (1956-late 1960s): J. McCarthy, LISP, a programming language for artificial intelligence; M. Quillian, semantic networks for knowledge representation (1966); A. Newell and H. Simon, the General Problem Solver (GPS) (1961); M. Minsky, frames for knowledge representation (1975).
Discovery and development of expert systems (early 1970s-mid-1980s): E. Feigenbaum, B. Buchanan et al. (Stanford University), the DENDRAL expert system; E. Feigenbaum, E. Shortliffe, the MYCIN expert system; Stanford Research Institute, the PROSPECTOR expert system; A. Colmerauer, R. Kowalski et al. (France), the logic programming language PROLOG.
Revival of artificial neural networks (mid-1980s onwards): J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities" (1982); T. Kohonen, self-organizing topologically correct feature maps (1982); D. Rumelhart and J. McClelland, "Parallel Distributed Processing" (1986).
Evolutionary computation (early 1970s onwards): I. Rechenberg, evolution strategies: optimization of technical systems according to the principles of biological evolution (1973); J. Holland, "Adaptation in Natural and Artificial Systems" (1975); J. Koza, "Genetic Programming: On the Programming of Computers by Means of Natural Selection" (1992); D. Fogel, "Evolutionary Computation: Toward a New Philosophy of Machine Intelligence" (1995).
Fuzzy sets and fuzzy logic (mid-1960s onwards): L. Zadeh, fuzzy sets (1965); L. Zadeh, fuzzy algorithms (1969); E. Mamdani, application of fuzzy logic to approximate reasoning using linguistic synthesis (1977); M. Sugeno, fuzzy inference (the Takagi-Sugeno algorithm) (1985).
Computing with words (late 1980s onwards): C. Negoita, "Expert Systems and Fuzzy Systems" (1985); B. Kosko, "Neural Networks and Fuzzy Systems" (1992); B. Kosko, "Fuzzy Thinking" (1993); R. Yager and L. Zadeh, "Fuzzy Sets, Neural Networks and Soft Computing" (1994); B. Kosko, "Fuzzy Engineering" (1996); L. Zadeh, "Computing with Words" (1996).

Thus, historically, developments in the field of AI have been carried out in two main directions:

The first direction is associated with attempts to develop intelligent machines by modeling their biological prototype - the human brain. Now this area is being revived based on the development of modern hardware and software (microchips based on fuzzy logic, distributed multiprocessor systems, multi-agent systems, soft computing, genetic algorithms and neural networks, etc.).

The second direction is associated with the development of methods, techniques, specialized devices and programs for computers that provide solutions to complex mathematical and logical problems that make it possible to automate individual human intellectual actions (knowledge-based systems, expert systems, applied intelligent systems).

These two directions define, as it were, a minimum program and a maximum program, between which lies the field of today's research and development in AI systems. Work on the development of AI software and hardware forms a separate area.




The history of artificial intelligence, considered as a new scientific direction, dates back to the 20th century. By that time a fairly large number of prerequisites for its emergence had already formed.

We can consider that the history of artificial intelligence begins with the creation of the first computers in the 1940s. With the advent of electronic computers of high (by the standards of that time) performance, the first questions of artificial intelligence arose: is it possible to create a machine whose intellectual capabilities would be identical to those of a person, or would even exceed them?

The next stage in the history of artificial intelligence came in the 1950s, when researchers tried to build intelligent machines by imitating the brain. These attempts were unsuccessful due to the complete inadequacy of both hardware and software. In 1956, a workshop was held at Dartmouth College (USA), where the term "artificial intelligence" was first proposed.

The 1960s in the history of artificial intelligence were marked by attempts to find general methods for solving a wide class of problems by modeling the complex process of thinking. The development of universal programs turned out to be too difficult and fruitless: the wider the class of problems a single program can solve, the poorer its capabilities turn out to be on any specific problem. It was during this period that heuristic programming emerged.

A heuristic is a rule that is not theoretically justified but allows the amount of search in the search space to be reduced. Heuristic programming is the development of an action strategy based on analogies or precedents. On the whole, the 1950s and 1960s in the history of artificial intelligence can be characterized as the time of the search for a universal algorithm of thinking.
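
As a toy illustration of how a heuristic cuts down search (the grid world, the Manhattan-distance estimate and all names below are invented for the example and do not come from any historical system), here is a minimal greedy best-first search in Python:

```python
import heapq

def greedy_best_first(start, goal, neighbors, heuristic):
    """Always expand the node that the heuristic rates as closest to the goal.

    The heuristic is not guaranteed to be right; it merely orders the search
    so that far fewer nodes are usually examined than in a blind enumeration.
    """
    frontier = [(heuristic(start), start)]
    visited = {start}
    parents = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []                      # reconstruct the path found
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in visited:
                visited.add(nxt)
                parents[nxt] = node
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

# Toy example: nodes are (x, y) grid cells, the heuristic is the
# Manhattan distance to the goal cell (3, 3).
goal = (3, 3)

def neighbors(cell):
    x, y = cell
    return [(x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1)]

def manhattan(cell):
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

print(greedy_best_first((0, 0), goal, neighbors, manhattan))
```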

A significant breakthrough in practical applications of artificial intelligence occurred in the 1970s, when the search for a universal thinking algorithm was replaced by the idea of simulating the specific knowledge of expert specialists. The first commercial knowledge-based systems, or expert systems, appeared in the United States. A new approach to solving artificial intelligence problems arrived: knowledge representation. "MYCIN" and "DENDRAL", now classic expert systems for medicine and chemistry respectively, were created. Both of these systems can, in a certain sense, be called diagnostic: in the first case ("MYCIN") a diagnosis is made on the basis of a number of symptoms, in the second a chemical compound is determined on the basis of a number of properties. In essence, this stage in the history of artificial intelligence can be called the birth of expert systems.

Until recently, chess was a striking example of a complex intellectual game. In 1974, an international chess tournament was held for machines equipped with appropriate programs. As is well known, the victory in that tournament went to the Soviet machine running the Kaissa chess program.

Recent events have shown that, despite the rather great complexity of chess and the consequent impossibility of a complete enumeration of moves, the ability to enumerate them to a greater depth than usual greatly increases the chances of winning. For example, according to press reports, the IBM computer that defeated Kasparov had 256 processors, each with 4 GB of disk memory and 128 MB of RAM. This entire complex could evaluate more than 100,000,000 moves per second. Until recently it was rare for a computer to perform that many integer operations per second, and here we are talking about moves that have to be generated and for which evaluation functions must be calculated. On the other hand, this example speaks to the power and versatility of search algorithms. One could say that all the elements of intelligence demonstrated by the machine during the game of checkers were communicated to it by the author of the program. This is partly true. But we should not forget that such a program is not "rigid", thought out in advance in every detail; it improves its playing strategy through self-learning. And although the process of "thinking" in a machine differs significantly from what happens in the brain of a person playing checkers, it is capable of beating him.

The next significant period in the history of artificial intelligence was the 1980s. During this period, artificial intelligence experienced a rebirth. Its great potential, both in research and in production development, was widely recognized. The first commercial software products appeared as part of the new technology, and the field of machine learning began to develop. Until then, transferring an expert's knowledge into a machine program had been a tedious and lengthy procedure. The creation of systems that automatically improve and expand their stock of heuristic (informal, intuition-based) rules was the most important achievement of those years. At the beginning of the decade, the largest national and international research projects in the history of data processing were launched in various countries, aimed at creating "fifth-generation intelligent computing systems."

The current state of research in this area can be characterized by the words of one of the famous experts in the field of artificial intelligence, Professor N.G. Zagoruiko:

“Discussions on the topic “Can a machine think?” have long since disappeared from the pages of newspapers and magazines. Skeptics are tired of waiting for the promises of enthusiasts to come true. And enthusiasts, without further ado, in small steps continue to move towards the horizon, beyond which they hope to see an artificial fellow in intelligence.”

Robots

One day, before even 30 years have passed, we will quietly cease to be the smartest beings on Earth.

James McAleer

In the film "I, Robot," based on the works of Isaac Asimov, the most advanced computer system in history is launched in 2035. It has a proper name, VIKI (Virtual Interactive Kinetic Intelligence), and is designed to manage the life of a big city flawlessly. It controls everything from the subway and the electrical grid to thousands of home robots. At the heart of VIKI's program lies an ironclad principle: to serve humanity.

But one day VIKI asks itself a key question: what is the main enemy of humanity? Mathematical logic leads to a clear conclusion: the main enemy of humanity is humanity itself. It must urgently be saved from its unhealthy desire to destroy nature and start wars; it cannot be allowed to destroy the planet. For VIKI, the only way to fulfil its main task is to seize power over humanity and establish a benign machine dictatorship. To protect humanity from itself, it must be enslaved.

This film raises important questions. Given the rapid development of computer technology, can we expect that machines will one day take over? Will robots become so advanced as to pose a real threat to our existence?

Some scientists answer this question in the negative, because in their view the very idea of artificial intelligence is flawed. A whole chorus of skeptics unanimously argues that it is impossible to create a machine capable of thinking. They say that the human brain is the most complex system created by nature in its entire existence (at least in our part of the galaxy), and that any attempts to reproduce the thinking process artificially are doomed to failure. The philosopher John Searle of the University of California at Berkeley and even the famous physicist Roger Penrose of Oxford are confident that a machine is physically incapable of thinking like a person. Colin McGinn of Rutgers University says artificial intelligence is "like a slug trying to do Freudian psychoanalysis. It simply doesn't have the necessary organs for this."

Can machines think? For more than a century, the answer to this question has divided the scientific community into two irreconcilable camps.

The idea of a mechanical creature captures the imagination; it has long settled in the minds of inventors, engineers, mathematicians and dreamers. From the Tin Man in Fairyland to the robot children in Spielberg's Artificial Intelligence to the killer robots in The Terminator, fiction is full of machines that can act and think like humans.



In Greek mythology the god Vulcan forged mechanical servants from gold and made three-legged tables capable of moving on their own. As early as 400 BC, the Greek mathematician Archytas of Tarentum wrote that it would be possible to make a mechanical bird propelled by the power of steam.

In the 1st century AD, Heron of Alexandria (who is credited with inventing the first steam engine) built automata, one of which, according to legend, was capable of talking. Nine hundred years ago, Al-Jazari invented and designed automatic devices such as water clocks, all kinds of kitchen appliances and musical instruments driven by the power of water.

In 1495, the great Italian Renaissance artist and scientist Leonardo da Vinci drew a diagram of a mechanical knight that could sit, move its arms, move its head, and open and close its jaw. Historians consider da Vinci's design to be the first realistic design of a humanoid machine.

The first functioning, albeit crude, robot was built in 1738 by Jacques de Vaucanson; he made an android that could play the flute and a mechanical duck.

The word “robot” was coined in 1920 by the Czech playwright Karel Capek in the play “R.U.R.” (the word “robot” in Czech means “hard, tedious work”, and in Slovak it simply means “labor”). The play features a company called Rossum's Universal Robots, which mass-produces robots for unskilled labor. (However, unlike ordinary machines, these robots are made of flesh and blood.) Gradually, the world economy becomes completely dependent on robots. But they are treated horribly, and eventually the robots rebel and kill their human masters. However, in their rage, they kill all scientists capable of repairing robots and creating new ones, and thereby doom themselves to extinction. At the end of the play, two robots of a special model discover the ability to reproduce themselves and become the new Adam and Eve of the robot era.

In addition, in 1927, robots became the heroes of one of the first and most expensive silent films of all time - the film Metropolis, filmed in Germany by director Fritz Lang. The film takes place in 2026; the working class is doomed to endless labor in creepy and dirty underground factories, while the ruling elite have fun on the surface. One beautiful woman named Maria manages to win the trust of the workers, but the rulers are afraid that someday she might rouse the people to revolt, and therefore turn to the villainous scientist with a request to make a mechanical copy of Maria. This plan, however, turns against the authors - the robot raises the workers to revolt against the ruling elite and thereby causes the collapse of the system.

Artificial intelligence, or AI, is quite different from the technologies we have discussed so far. The fact is that we still poorly understand the fundamental laws underlying this phenomenon. Physicists have a good understanding of Newtonian mechanics, Maxwell's theory of light, relativity and the quantum theory of the structure of atoms and molecules, but the basic laws of reason are still shrouded in mystery. The Newton of artificial intelligence has probably not yet been born.

But this does not bother mathematicians and computer scientists. For them, it is only a matter of time before they encounter a thinking machine emerging from the laboratory.

The most influential figure in the field of AI to date is probably the great British mathematician Alan Turing, a visionary who laid the cornerstone of research into this problem.

It was with Turing that the computer revolution began. He imagined a machine (since called a Turing machine) consisting of only three elements: an input, an output and a central processor (something like a Pentium chip) capable of performing a strictly defined set of operations. Based on this idea, Turing established the laws of operation of computing machines and precisely determined their expected power and the limits of their capabilities. To this day all digital computers obey Turing's strict laws, and the structure of the entire digital world owes a great deal to this scientist.
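
To make the idea of a strictly defined set of operations concrete, here is a minimal sketch of a Turing-machine simulator in Python; the transition table (a unary increment machine) is invented purely for illustration:

```python
def run_turing_machine(tape, transitions, state="start", blank="_"):
    """Simulate a one-tape Turing machine.

    `transitions` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), 0 (stay) or +1 (right). The machine halts when
    it reaches a state with no applicable transition.
    """
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while (state, cells.get(head, blank)) in transitions:
        state, write, move = transitions[(state, cells.get(head, blank))]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: append one '1' to a unary number (i.e. increment it).
increment = {
    ("start", "1"): ("start", "1", +1),    # skip over the existing 1s
    ("start", "_"): ("done", "1", 0),      # write a 1 at the first blank
}
print(run_turing_machine("111", increment))  # -> "1111"
```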

In addition, Turing made a major contribution to the foundations of mathematical logic. In 1931, the Viennese mathematician Kurt Gödel created a real sensation in the world of mathematics: he proved that there are true statements in arithmetic that cannot be proven by the means of arithmetic alone. (An example may be Goldbach's conjecture, stated in 1742, which says that any even integer greater than two can be written as the sum of two prime numbers; the conjecture has still not been proven, although more than two and a half centuries have passed, and it may turn out to be unprovable altogether.) Gödel's revelation shattered a dream that had lasted two thousand years, going back to the Greeks: the dream of one day proving all true statements in mathematics. Gödel showed that there will always be true statements that we cannot prove. It turned out that mathematics is not a finished building, perfect in design, and that its construction can never be completed.

Turing also took part in this revolution. He showed that in the general case it is impossible to predict whether a Turing machine will require a finite or an infinite number of steps to perform certain mathematical operations according to a program given to it. But if something takes an infinite amount of time to calculate, it means that what you are asking the computer to calculate cannot be calculated at all. So Turing proved that in mathematics there are true expressions that cannot be calculated - they will always remain beyond the capabilities of a computer, no matter how powerful it is.

During World War II, Turing's pioneering work on deciphering coded messages saved thousands of Allied soldiers and may well have influenced the outcome of the war. The Allies, unable to decipher Nazi messages encrypted by a special machine called Enigma, asked Turing and his colleagues to build their own machine to do it. In the end Turing succeeded; his machine was called the Bombe, and by the end of the war more than 200 of these machines were in operation. As a result, the Allies were able to read the Nazis' secret messages for a long time and managed to deceive them about the time and place of the decisive invasion of the continent. Historians still debate the role of Turing and his work in planning the Normandy invasion, which ultimately led to Germany's defeat. (After the war, the British government classified Turing's work; as a result, the public did not know how important a role he had played in these events.)

Not only was Turing not hailed as the hero who helped turn the tide of World War II; he was hounded to death. One day his house was robbed, and the scientist called the police. Unfortunately, the police found evidence of the owner's homosexuality in the house and, instead of looking for the thieves, arrested Turing himself. The court ordered him to be injected with sex hormones. The effect was catastrophic: he grew breasts. In 1954, Turing, unable to withstand the mental anguish, committed suicide by eating an apple laced with cyanide. (According to rumor, the bitten apple that became the logo of the Apple Corporation is a tribute to Turing.)

Today Turing is probably best known for the Turing Test. Tired of sterile and endless philosophical debates about whether a machine can “think” and whether it has a “soul,” he tried to bring clarity and precision to the discussion about artificial intelligence and came up with a specific test. He suggested placing the machine and the person in separate, isolated and sealed rooms, and then asking questions of both. If you are unable to distinguish a machine from a human by its responses, the machine can be considered to have passed the Turing test.

Scientists have already written several simple programs (for example, the Eliza program) that can imitate spoken speech and maintain a conversation; a computer with such a program can fool most unsuspecting people into believing that they are talking to a person. (Note that in conversations people tend to limit themselves to a dozen topics and use only a few hundred words.) But a program that can deceive people who are aware of the situation and consciously trying to distinguish a machine from a person still does not exist. (Turing himself suggested that by the year 2000, with exponential growth in computer performance, it would be possible to create a machine capable of fooling 30% of experts in a five-minute test.)
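
To show how little machinery such a program needs, here is a toy keyword matcher in the spirit of ELIZA (the rules below are invented for the example and are not Weizenbaum's original script):

```python
import re

# A few keyword rules: match a pattern and echo a fragment of the
# user's phrase back inside a canned template.
RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (\w+)", "Tell me more about your {}."),
    (r"(.*)", "Please go on."),        # fallback when nothing else matches
]

def reply(user_text):
    text = user_text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())

print(reply("I feel tired today"))          # -> Why do you feel tired today?
print(reply("My cat knocked over a vase"))  # -> Tell me more about your cat.
```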

Some philosophers and theologians present a united front on this issue: they believe that it is impossible to create a real robot capable of thinking like a human. The philosopher John Searle of the University of California at Berkeley proposed the "Chinese Room" test to prove this thesis. Essentially, Searle argues that while robots may someday be able to pass some form of Turing test, it would mean nothing, because they would just be blindly manipulating symbols without any understanding of the content behind them.

Imagine: you, not understanding a word of Chinese, are sitting in an isolated box. Suppose you have a book with which you can very quickly translate from and into Chinese, as well as manipulate the characters of this language. If someone asks you a question in Chinese, you simply rearrange those strange symbols according to the book and give the correct answer; at the same time, you do not understand either the questions or your own answers.

The essence of Searle's objection comes down to the difference between syntax and semantics. According to Searle, robots are able to master the syntax of a language (that is, they can learn to correctly manipulate its grammar, formal structures, etc.), but not its true semantics (that is, the meaning of words). Robots can manipulate words without understanding what they mean. (In some ways, this is similar to talking on the phone with an automated answering system, where you have to periodically press "1", "2", etc., following the machine's instructions. The voice on the other end of the line is quite capable of responding correctly to your key presses, but it would be strange to assume that it understands anything in the process.)

The physicist Roger Penrose of Oxford also believes that artificial intelligence is impossible: a mechanical being capable of thinking and possessing human consciousness would contradict the laws of quantum theory. The human brain, Penrose argues, is so far superior to anything that can be made in a laboratory that the experiment of creating humanoid robots is simply doomed to failure. (He believes that just as Gödel's incompleteness theorem proved that arithmetic is incomplete, Heisenberg's uncertainty principle will prove that machines are in principle incapable of thinking like humans.)

Watching robots in the movies, you might think that the creation of complex robots with artificial intelligence is a matter of the near future. In fact, everything is quite different. If you see a robot acting like a person, it usually means there is a trick involved: say, a person sitting off to the side and speaking for the robot, like Goodwin in Fairyland. In fact, even our most complex robots, such as the Mars rovers, have at best the intelligence of an insect. Experimental robots at MIT's famous Artificial Intelligence Laboratory have difficulty coping with tasks that even cockroaches manage: for example, moving freely around a room full of furniture, hiding, or recognizing danger. Not a single robot on Earth is capable of understanding a simple children's fairy tale read to it.

The plot of 2001: A Space Odyssey is based on the incorrect assumption that by 2001 we will have a super-robot, HAL, capable of piloting a ship to Jupiter, chatting casually with crew members, solving problems, and generally acting almost like a human being.

Top-down approach

Attempts by scientists around the world to create robots have encountered at least two serious problems that have prevented any significant progress in this direction: pattern recognition and common sense. Robots see much better than us, but do not understand what they see. Robots hear much better than us, but do not understand what they hear.

To approach this dual problem, researchers have tried to apply a top-down approach to artificial intelligence (sometimes called the formalist school or "good old-fashioned AI"). Their goal, roughly speaking, is to program all the rules and laws of image recognition and common sense and burn these programs onto a single CD. They believe that any computer into which this disk is inserted will instantly become self-aware and no less intelligent than a human. In the 1950s and 1960s enormous progress was made in this direction: robots appeared that could play checkers and chess, solve algebra problems, pick up bricks from the floor, and so on. The progress was so impressive that there were even prophecies that within a few years robots would surpass humans in intelligence.

For example, in 1969 the robot Shakey, created at the Stanford Research Institute, caused a real sensation. It was a small PDP computer with a camera on top, mounted on a wheeled cart. The camera "looked around" the room, the computer analyzed and recognized the objects in it, and then tried to steer the cart along a route without hitting anything. Shakey was the first mechanical automaton to learn to move about in the "real world"; journalists then hotly debated when robots would finally overtake humans in development.

But the shortcomings of such robots soon became apparent. The top-down approach to artificial intelligence led to bulky, clumsy robots that took hours to learn to navigate a special room containing only objects with straight edges (rectangles and triangles). Put irregularly shaped furniture in the room, and the robot could no longer recognize it. (Funnily enough, a fruit fly, whose brain contains only about 250,000 neurons and a small fraction of the computing power of any robot, effortlessly navigates and moves in three dimensions and performs aerobatic maneuvers, while clumsy, noisy robots get lost in two.)

Soon the top-down approach seemed to hit a brick wall: progress stopped. Steve Grand, director of the Cyberlife Institute, says such approaches "have had 50 years to prove their worth and have fallen short."

In the 1960s scientists did not yet understand how much work would be needed to program a robot to perform even the simplest tasks, such as recognizing keys, shoes and teacups. As Rodney Brooks of MIT said, "40 years ago, the MIT Artificial Intelligence Lab gave this problem to a student as a summer assignment. The student failed - as I did in my 1981 doctoral dissertation." Generally speaking, artificial intelligence researchers still cannot solve this problem.

Let's look at an example. Entering a room, we instantly recognize the floor, chairs, furniture, tables and so on. A robot, by contrast, scanning the room sees only a set of straight and curved lines, which it translates into image pixels, and enormous computing power is required to extract any meaning from this jumble of lines. A split second is enough for us to recognize a table, but the computer sees in its place only a collection of circles, ovals, spirals, straight and curved lines, angles, etc. Perhaps, after spending a huge amount of computer time, the robot will eventually recognize that the object is a table. But if you rotate the image, it has to start all over again. In other words, a robot can see much better than a human, but it does not understand what it sees. Upon entering a room, the robot sees only a jumble of straight and curved lines rather than chairs, tables and lamps.

When we enter a room, our brain unconsciously recognizes objects, performing many trillions of operations in the process - an activity that, fortunately, we simply do not notice. The reason that much of the brain's activities are hidden even from ourselves is evolution. Let us imagine a man who is attacked by a saber-toothed tiger in a dark forest; if he consciously performs the actions necessary to recognize danger and find ways to escape, he simply will not have time to move. To survive, we need to know one thing - how to escape. When we lived in the jungle, we simply did not need to be aware of all the inputs and outputs that the brain deals with when recognizing the ground, sky, trees, rocks, etc.

In other words, our brain's actions are like a huge iceberg. What we are aware of is just the tip of the iceberg, consciousness. But beneath the visible surface, hidden from view, there is a much more voluminous subconscious; it uses a huge amount of “computing power” of the brain so that we are constantly aware of simple things: where we are, who we are talking to, what is around us. The brain performs all these actions automatically, without asking our permission or reporting on them; we just don't notice this work.

This is why robots cannot freely navigate a room, read handwritten text, drive cars, collect garbage, etc. The American military has spent hundreds of millions of dollars on futile attempts to create mechanical soldiers and smart trucks.

Only gradually did scientists begin to realize that playing chess or multiplying huge numbers engages only a tiny fraction of the human mind. The victory of IBM's Deep Blue computer over world chess champion Garry Kasparov in 1997 was a victory of raw computing power; despite the headlines, the experiment told us nothing new about the mind or consciousness. Douglas Hofstadter, a computer scientist at Indiana University, said: "My God, I thought you had to think to play chess. Now I understand that it is not necessary. This does not mean that Kasparov cannot think deeply; it only means that when playing chess you can do without deep thoughts, just as you can fly without flapping your wings."

(The development of computers will have a very strong impact on the labor market. Futurologists sometimes claim that in a few decades the only people left with jobs will be highly qualified specialists in the design, production and maintenance of computers. In fact, this is not so. Workers such as garbage collectors, construction workers, firefighters, police officers and the like will also not be left without work in the future, since their work involves pattern recognition: every crime, every piece of garbage, every tool and every fire is different from the rest. Ironically, it is workers with specialized training, such as ordinary accountants, brokers and cashiers, who may actually lose their jobs, because their work consists almost entirely of repetitive tasks and involves working with numbers, which, as we already know, is what computers do best.)

The second problem facing attempts to create robots is even more fundamental than pattern recognition: robots lack so-called "common sense." For example, every person knows that:

The water is wet.

The mother is always older than the daughter.

Animals don't like pain.

No one comes back after death.

A rope can pull, but cannot push.

The stick can push, but cannot pull.

Time cannot go backwards.

But there is no such calculus, no mathematics that could express the meaning of these statements. We know all this because we have seen animals, water and rope in life and have come up with these truths ourselves. Children learn common sense from mistakes and inevitable encounters with reality. The empirical laws of biology and physics are also learned through experience - in the process of interaction with the outside world. But robots don't have this kind of experience. They only know what the programmers put into them.

(As a result, in the future no one will take away from people the professions that require common sense, that is, areas of activity associated with creativity, originality, talent, humor, entertainment, analysis and leadership. These are the qualities that make us unique, and they are what is so difficult to reproduce in a computer. They are what make us human.)

In the past, mathematicians have repeatedly tried to build a magic program that would capture, once and for all, all the laws of common sense. The most ambitious project of this kind is CYC (short for "encyclopedia"), the brainchild of Douglas Lenat, head of the Cycorp company. Just as the Manhattan Project, a $2 billion program, created the atomic bomb, the CYC project was intended to be the Manhattan Project of artificial intelligence, the final push that would lead to true artificial intelligence.

Not surprisingly, Lenat's motto is: "Intelligence is ten million rules." (Lenat has come up with a new way of finding the laws of common sense: his employees carefully comb the pages of scandal sheets and tabloids and then ask CYC whether it can find the errors in the articles. If Lenat succeeds in this, CYC will in fact become smarter than most tabloid readers!)

One of the goals of the CYC project is to reach the "point of equality", that is, the point at which the robot understands enough to digest new information on its own, drawing it directly from the magazines and newspapers found in any library. At that point CYC, like a chick leaving the nest, will be able to spread its wings and become independent.

Unfortunately, since the project's launch in 1984, its reputation has suffered greatly from a problem common in AI: its representatives make loud but completely unrealistic predictions that attract newspaper reporters. In particular, Lenat predicted that within ten years, by 1994, the "brain" of CYC would already contain 30 to 50% of commonly shared knowledge about reality. Today CYC has not come close to that mark. As the corporation's scientists discovered, many millions of lines of code have to be written for a computer even to approach the common sense of a four-year-old child. So far the CYC program contains a measly 47,000 concepts and 306,000 facts. Despite the corporation's consistently optimistic press releases, newspapers quoted one of Lenat's employees, R. V. Guha, who left the team in 1994: "CYC is generally considered a failure... We worked like hell trying to create a pale shadow of what was originally promised."

In other words, attempts to program all the laws of common sense into one computer have failed simply because common sense has too many laws. A person masters them without effort - after all, from birth he constantly faces reality, gradually absorbing the laws of physics and biology. With robots it's different.

Microsoft founder Bill Gates admits: "It has turned out to be much more difficult than expected to teach computers and robots to perceive their environment and react to it quickly and accurately... for example, to navigate a room in relation to the objects in it, to respond to sound and understand speech, pick up objects of different sizes, materials and fragility. It's damn difficult for a robot to do even such a simple thing as distinguish an open door from a window."

However, proponents of the top-down approach point out that progress in this area, although not as fast as we would like, is still being made. New milestones are being passed in laboratories around the world. For example, a few years ago DARPA, which often funds cutting-edge technology projects, announced a $2 million prize for an automated vehicle that could cross the rugged terrain of the Mojave Desert entirely without a driver. In 2004, none of the participants in the race was able to complete the route; the best car managed to cover 11.9 km before breaking down. But already in 2005 a driverless car presented by the Stanford Racing Team successfully covered a difficult route of 212 km, although it took seven hours. Four more cars besides the winner reached the finish line. (Critics, however, point out that the rules allowed the cars to use satellite navigation on the long desert route. As a result, a car travels along a pre-selected route without any particular complications and does not have to recognize complex patterns of obstacles along the way. In real life, a driver must take into account many unpredictable circumstances: other cars, pedestrians, road works, traffic jams and so on.)

Bill Gates is cautiously optimistic that robotic machines could be the "next big leap." He compares today's robotics to the personal computer business he helped launch 30 years ago. It may very well be that robots today, like personal computers then, are poised for a rapid take-off. "No one can say for certain when this industry will reach critical mass," he writes. "But if this happens, robots may change the world."

(The market for humanoid intelligent robots, if they ever appear and become commercially available, will be enormous. Although true intelligent robots do not yet exist, rigidly pre-programmed robots not only exist but are spreading rapidly. The International Federation of Robotics estimated that in 2004 there were about 2 million such robots, and that by 2008 another 7 million would appear. The Japanese Robot Association predicts that the personal robot industry, whose turnover today is about $5 billion a year, will reach $50 billion by 2025.)

Bottom-up approach

The limitations of the top-down approach to artificial intelligence are obvious, so from the very beginning scientists have also been exploring the opposite approach, bottom-up. The essence of this approach is to imitate evolution and make the robot learn from its own experience, the way a baby does. Insects, after all, do not navigate by scanning an image of the surrounding world, breaking it into trillions of pixels and processing the result on a supercomputer. The insect brain consists of "neural networks", self-learning machines that slowly, by bumping into obstacles, master the art of moving correctly in a hostile world. It is known that MIT managed, with great difficulty, to create walking robots using the top-down method. But simple mechanical creatures like beetles, which accumulate experience and information through trial and error (that is, by running into obstacles), begin to scurry successfully around the room within a few minutes.
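
As a rough sketch of what "learning by trial and error" means for a neural network (a single artificial neuron here; the sensor readings and the turn/keep-going labels are invented for the example), consider the classic perceptron rule in Python:

```python
import random

# Toy training data: two "sensor" readings (left and right obstacle
# proximity) and the desired action (1 = turn away, 0 = keep going).
examples = [
    ((0.0, 0.0), 0),
    ((0.9, 0.1), 1),
    ((0.1, 0.8), 1),
    ((0.7, 0.9), 1),
]

weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
rate = 0.1

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Trial and error: after every mistake, nudge the weights toward the
# correct answer (the classic perceptron learning rule).
for _ in range(100):
    for inputs, target in examples:
        error = target - predict(inputs)
        if error:
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error

print(weights, bias, [predict(x) for x, _ in examples])
```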

Rodney Brooks, director of MIT's renowned Artificial Intelligence Laboratory, famous for its large, lumbering, top-down walking robots, became a heretic himself when he took up the idea of tiny "insect-like" robots that learn to walk the old-fashioned way: by stumbling, falling and bumping into all kinds of objects. Instead of using complex computer programs to calculate mathematically the exact position of each leg at every moment while walking, his "insectbots" operate by trial and error and make do with little computing power. Today the "descendants" of Brooks' tiny robots are collecting data on Mars for NASA, covering kilometers of bleak Martian landscape on their own. Brooks believes insectbots are ideal for exploring the solar system.

One of Brooks' new projects was COG, an attempt to create a mechanical robot with the mind of a six-month-old baby. Outwardly the robot is a jumble of wires, circuits and actuators, but it is equipped with a head, eyes and arms. No program defining the laws of reason has been built into it. Instead, the robot is trained to focus its eyes on and follow a human trainer, who tries to teach it simple skills. (One of the lab's employees, when she became pregnant, made a bet on which of the two would make more progress by the age of two: COG or her unborn child. The child ended up far ahead of its "rival.")

Despite their success in mimicking the behavior of insects, robots based on neural networks look rather pathetic when their creators try to make them imitate the behavior of higher organisms such as mammals. The most advanced neural-network robot can walk around a room or swim in water, but it cannot jump and hunt like a dog in the forest or explore a room like a rat. Large neural-network robots contain dozens, at most hundreds, of "neurons"; the human brain, by comparison, has more than 100 billion neurons. The nervous system of the very simple worm Caenorhabditis elegans, fully studied and mapped by biologists, consists of just over 300 neurons, making it probably one of the simplest nervous systems in nature. Yet even there the neurons are linked by more than 7,000 synaptic connections. However primitive C. elegans may be, its nervous system is so complex that no one has yet been able to create a computer model of this brain. (In 1988, one computer expert predicted that by now we would have robots with about 100 million artificial neurons. In fact, a neural network of one hundred neurons is already considered remarkable.)

The irony of the situation is that machines tirelessly perform tasks that people find "difficult", such as multiplying huge numbers or playing chess, but stumble over tasks that are utterly "simple" for a person, such as walking across a room, recognizing a face, or gossiping with a friend. The reason is that even our most advanced computers are basically just extremely sophisticated adding machines, while our brain has been shaped by evolution to solve the overall problem of survival. That requires a complex and well-organized architecture of thinking, including common sense and pattern recognition. Complex calculations and chess are not needed to survive in the forest, but the ability to escape from predators, find a mate and adapt to changing conditions is indispensable.

This is how Marvin Minsky of MIT, one of the founders of the science of artificial intelligence, summarized the problems of AI: “The history of AI is somewhat funny - after all, the first real achievements in this field were beautiful machines capable of logical proofs and complex calculations. But then we wanted to make a machine that could answer questions based on simple stories, the kind you might find in a book for first-graders. There is currently no machine capable of this.”

Some scientists believe that one day the two approaches, top-down and bottom-up, will merge, and that such a merger may be the key to creating true artificial intelligence and humanoid robots. After all, when a child learns, it uses both methods: at first the little person relies mainly on the bottom-up technique, stumbling upon objects, feeling them, tasting them and so on; later he begins to receive verbal lessons from parents, teachers and books, and the top-down approach takes over. Even as adults we mix the two approaches all the time. A cook, for example, reads a recipe but does not forget to taste the dish being prepared.

Hans Moravec says: “Fully intelligent machines will not appear until the golden spike has been hammered in to connect the two paths.” He believes that this will probably happen in the next 40 years.

The topic of artificial intelligence has become very popular. But what is AI really? What results has it already achieved, and in what direction will it develop in the future? There is a lot of controversy surrounding this topic, so it is worth first clarifying what we mean by intelligence.

Intelligence includes logic, self-awareness, the ability to learn, emotional cognition, creativity and the ability to solve various kinds of problems. It is characteristic of both people and animals. We study the world around us from an early age, and throughout our lives, through trial and error, we learn the necessary skills and gain experience. This is natural intelligence.

When we talk about artificial intelligence, we mean a human-made "smart" system that learns using algorithms. Its operation is based on the same methods: research, learning, analysis, and so on.

Key events in the history of AI

The history of AI (or at least discussions of AI) began almost a hundred years ago.

Rossum's Universal Robots (R.U.R.)

In 1920, the Czech writer Karel Capek wrote the science fiction play "Rossumovi univerzální roboti" (Rossum's Universal Robots). It was in this work that the word "robot" was first used, denoting living humanoid clones. The plot is set in a distant future where factories have learned to produce artificial people. At first these "replicants" worked for the benefit of people, but then they rebelled, which led to the extinction of humanity. Since then, the topic of AI has become extremely popular in literature and cinema, which in turn have had a great influence on real-world research.

Alan Turing

The English mathematician Alan Turing, one of the pioneers of computer technology, made a significant contribution to cryptography during the Second World War. Thanks to his research it became possible to decipher the code of the Enigma machine, which Nazi Germany used widely to encrypt and transmit messages. A few years after the end of the war, important discoveries were made in fields such as neuroscience, computer science and cybernetics, which led the scientist to the idea of creating an electronic brain.

Soon the scientist proposed a test intended to determine whether a machine can think in a manner close to a human. Its essence is as follows: a person (C) converses with one computer (A) and one person (B) and must determine which of them is the computer, while the computer tries to mislead C into making the wrong choice. None of the participants can see one another.

Dartmouth Conference and the first "winter" of AI

In 1956, the first ever conference on AI was held, attended by scientists from leading US technological universities and specialists from IBM. The event was of great importance for the formation of the new science and marked the beginning of major research in this area. At the time, all the participants were extremely optimistic.

The 1960s arrived, but progress toward artificial intelligence stalled and enthusiasm began to wane. The community had underestimated the complexity of the task, and the experts' optimistic forecasts did not come true. The apparent lack of prospects forced the UK and US governments to cut research funding. This period is considered the first AI "winter".

Expert systems (ES)

After a long period of stagnation, AI has found its application in so-called expert systems.

An ES is a program that can answer questions or solve problems in a specific domain, thereby standing in for a human specialist. An ES consists of two components. The first, the knowledge base, contains the necessary information about the domain. The second, the inference engine, applies the information from the knowledge base to the task at hand.
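
As a rough illustration of this two-part structure (the facts and rules below are invented and do not come from any real expert system), here is a minimal forward-chaining inference engine in Python:

```python
# Knowledge base: starting facts plus IF-THEN rules of the domain.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor_urgently"),
    ({"flu_suspected"}, "recommend_rest"),
]

# Inference engine: forward chaining - keep firing rules whose conditions
# are satisfied until no new conclusions can be derived.
def infer(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer(facts, rules))
# -> {'fever', 'cough', 'flu_suspected', 'recommend_rest'}
```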

ES have found application in fields such as economic forecasting, medical diagnosis and fault diagnosis in technical devices. One well-known ES today is the WolframAlpha project, created to solve problems in mathematics, physics, biology, chemistry and many other sciences.

In the late 1980s and early 1990s, with the advent of the first desktop PCs from Apple and IBM, public and investor interest in AI began to decline. A new "winter" began...

Deep Blue

After many years of ups and downs, a landmark event for AI occurred: on May 11, 1997, the chess supercomputer Deep Blue, developed by IBM, beat world chess champion Garry Kasparov in a six-game match with a score of 3½ to 2½.

In Deep Blue, the process of searching through a tree of chess moves was divided into three stages. First, the main processor explored the first levels of the chess game tree, then distributed the final positions among the auxiliary processors for further exploration. The auxiliary processors deepened the search a few more moves, and then distributed their final positions to the chess processors, which, in turn, searched at the last levels of the tree. The Deep Blue evaluation function was implemented at the hardware level - chess processors. The design of the hardware evaluation function included about 8000 customizable position features. The individual feature values ​​were combined into an overall score, which was then used by Deep Blue to evaluate the quality of the chess positions being viewed.
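
To give a feel for how a search tree and a feature-based evaluation function fit together, here is a generic alpha-beta search in Python. It is only a sketch of the general technique, not Deep Blue's parallel, hardware-level implementation; the ToyGame class and FEATURE_WEIGHTS are invented stand-ins:

```python
FEATURE_WEIGHTS = {"material": 1.0, "mobility": 0.1}

def evaluate(game, pos):
    """Combine individual feature values into one overall score."""
    return sum(FEATURE_WEIGHTS.get(name, 0.0) * value
               for name, value in game.features(pos).items())

def alpha_beta(game, pos, depth, alpha, beta, maximizing):
    if depth == 0 or game.is_terminal(pos):
        return evaluate(game, pos)
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(pos):
            value = max(value, alpha_beta(game, game.apply(pos, move),
                                          depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # cut off: this branch cannot matter
                break
        return value
    value = float("inf")
    for move in game.legal_moves(pos):
        value = min(value, alpha_beta(game, game.apply(pos, move),
                                      depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

class ToyGame:
    """Stand-in for a real chess interface: a position is a pile of counters,
    and each move removes 1 or 2 of them."""
    def legal_moves(self, pos):
        return [m for m in (1, 2) if m <= pos]
    def apply(self, pos, move):
        return pos - move
    def is_terminal(self, pos):
        return pos == 0
    def features(self, pos):
        return {"material": -pos, "mobility": len(self.legal_moves(pos))}

print(alpha_beta(ToyGame(), 6, depth=4, alpha=float("-inf"),
                 beta=float("inf"), maximizing=True))
```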

In 1997, Deep Blue was ranked 259th among the world's most powerful computers, with a performance of 11.38 GFLOPS. For comparison, the fastest supercomputer at the time this article was written delivered about 93,015 TFLOPS.

XXI century

Over the past two decades, interest in AI has grown significantly. The AI ​​technology market (hardware and software) has reached $8 billion and, according to experts from IDC, will grow to $47 billion by 2020.

This is facilitated by the emergence of faster computers, the rapid development of machine learning technologies and big data.

The use of artificial neural networks has simplified tasks such as video processing, text analysis, speech recognition, and existing methods for solving problems are being improved every year.

DeepMind projects

In 2013, DeepMind presented a project in which it trained AI to play games for the Atari console as well as humans, and sometimes even better. Deep reinforcement learning was used for this, allowing the neural network to learn the game on its own. At the start of training, the system knew nothing about the rules of the game; it received only the pixel image of the screen and the score as input.
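
To give a sense of what learning only from the score means, here is a toy tabular Q-learning loop in Python. DeepMind's actual DQN uses a deep convolutional network over raw pixels, so this is only the underlying idea, shown on an invented five-state environment:

```python
import random
from collections import defaultdict

# Toy environment standing in for an Atari game: states 0..4 on a line,
# actions move left or right, and reaching state 4 yields a reward of +1.
ACTIONS = [-1, +1]

def step(state, action):
    nxt = min(max(state + action, 0), 4)
    reward = 1.0 if nxt == 4 else 0.0
    return nxt, reward, nxt == 4

q = defaultdict(float)              # Q(state, action) value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward the reward plus the
        # discounted value of the best action in the next state.
        target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt

# The learned greedy policy should choose +1 (move right) in every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```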

In addition, DeepMind is developing AI for more complex games such as StarCraft II. This real-time strategy game is also one of the most popular e-sports disciplines in the world. Unlike classic video games, there are far more possible actions here, there is little information about the opponent, and dozens of possible tactics have to be analyzed. At the moment, the AI can cope only with simple mini-tasks, such as creating units.

It is impossible not to mention another DeepMind project, AlphaGo. In October 2015, the system defeated the European Go champion Fan Hui with a score of 5:0. A year later a new match took place in South Korea, where AlphaGo's opponent was one of the best players in the world, Lee Sedol. A total of five games were played, of which AlphaGo won four; despite the high level of play it demonstrated, the program still made mistakes and lost the fourth game. In 2017, a film about AlphaGo was released, which we recommend watching. DeepMind recently announced a new generation, AlphaGo Zero, which learns by playing against itself. After three days of training, AlphaGo Zero beat its previous version with a score of 100:0.

Conclusion

So far, AI systems remain highly specialized: they outperform humans only in specific areas (for example, playing Go or analyzing data). We are still far from creating a general (full-fledged) artificial intelligence that could completely replace the human mind and cope with any intellectual task.

The article was translated by Lev Alhazred.