What philosophers contribute to cognitive science


For a long time the brightest of the bright have puzzled over why understanding is so hard to grasp. A new academic discipline now wants to shed light on the matter. It is called cognitive science, and it works on old philosophical topics with a modern, scientifically honed methodology. Cognitive scientists use the computer, practically as well as theoretically.

The aim of cognitive scientists is to simulate the human mind as faithfully as possible in order to gain a clearer picture of how the human brain works.

There has been speculation about mind and consciousness for more than 2000 years. It used to be the domain of philosophy, the queen of the sciences, as it proudly called itself. Since the rise of the natural sciences in the last century, the influence of philosophers has declined dramatically. With the computer it could grow again, provided the philosophers recognize their opportunity.

So far this has happened only in the USA, not at all in Germany. If philosophy continues to ignore the computer, the cognitive scientists will work out a philosophy of their own, as has already happened in other individual sciences. The philosophers thus have a historic chance to jump aboard a new high-speed train into the future; otherwise they will sit in their little ivory towers and fade away there.

Intelligence only in heads or in the whole universe?

The cognitivist community today is made up of computer scientists, psychologists, philosophers, linguists, neurophysiologists, and anthropologists. The most radical among them ask whether intelligence resides only in human heads or buzzes around the entire universe, something like Hegel's world spirit or Einstein's energy.

A more moderate research approach holds that computer simulation of various intellectual feats helps us understand better how they come about, and that this is the only thing that matters: the level of explanation, of understanding. Whether machines can think and understand in the human sense is of little or no concern to these researchers. Yet it is precisely this question that repeatedly stirs up the greatest amount of dust in public.

The mathematician Marvin Minsky from the Massachusetts Institute of Technology (MIT) in Cambridge (USA) offers a classic answer to this complex of questions. Artificial intelligence, he says, is "the science of making machines do things that would require intelligence if done by men."

However, when it tries to find out what human intelligence is, how thinking works, and what the mind is made of, the avant-garde of computer science has to dig deep into its bag of tricks. And at that point it cannot avoid going back to the old philosophers.

In the early days this happened almost exclusively as a negation of the philosophical standpoint. Now that cognitive science has matured, the computer fundamentalists are stepping back a little and the prudent are coming to the fore. They prefer to dig into the ideas of Kant, Descartes, Wittgenstein, Heidegger, and Gadamer.

What it looked like in the beginning can be demonstrated with the example of the two professors Allen Newell and Herbert Simon from the Department of Psychology and Computer Science at Carnegie Mellon University, Pittsburgh, Pennsylvania. Both are highly respected scientists; Simon received the 1978 Nobel Prize in Economics.

In 1976 Newell and Simon wrote a paper for the Association for Computing Machinery with the rather innocuous-sounding title "Computer Science as Empirical Inquiry: Symbols and Search". A translation of this memorable treatise is given by Dieter Muench, lecturer at the Institute for Computer Science at the TU Berlin, in his book "Kognitionswissenschaft" /1/.

In this treatise, Newell and Simon make delightful fun of Plato's doctrine of recollection from the Meno dialogue. In it, Socrates teaches mathematics to a slave unfamiliar with arithmetic by letting him bump his own nose against the logic of the matter; Socrates speaks of "remembering". We would say: "The scales fall from his eyes."

The two American computer gurus, however, considered the Socratic explanation rather silly: "The topic of problem solving was discussed by philosophers and psychologists (?) for 2000 years, often with a very strong mystical influence. Anyone who thinks that there is nothing problematic or mysterious about a problem-solving symbol system is a spiritual child of our time."

So much for Newell and Simon's view. But doesn't every computer have to "remember" the logic of its circuits and programs, implanted in it by people who think logically? Cognitive science now knows that things are not quite as simple as Newell and Simon thought. The demystification of the world is far from over and may never succeed, since a small residue of ignorance always remains. Our limited human senses see to that, as do the filters in perception that guard against "memory overflow".

But Newell and Simon are right about one thing, and the majority of American philosophers agree with them: computers help us understand better how the world is represented in the brain. This representation must exist; otherwise we would have major orientation problems and would walk into a doorpost at every opportunity.

The computer is subject to the same dilemma. Anyone who wants to teach a machine to perceive and logically grasp the things we take to be real must be rather well informed about how we perceive, recognize, learn, understand, notice, and remember. Can we do that, can the computer do that, and where does the limit of artificial intelligence (AI) lie?

Even as vehement an AI critic as the MIT computer scientist Joseph Weizenbaum does not deny one thing: "The computer gives you this unique ability to see things more clearly, in sharper focus. If someone has a problem and writes a program for it, then in order to do so he has to understand his problem. And as he writes, the problem is illuminated anew. He sees that something does not work, and he recognizes that he has not understood some aspect of the problem deeply enough. That understanding of one's very real everyday problems comes from using the computer: that is a very fine role for the computer."

The French philosopher and Descartes devotee Andre Glucksmann underscores this: "Because the technicians imitate nature, they can see what it is hiding." And computer technology hides something very special: pure mind, driven by formal logic. But that is exactly what philosophy has always been concerned with. The spectrum ranges from Aristotle to Hegel, from the Scholastics to the Vienna Circle, from the Stoa to Wittgenstein. The common focus is formed by the syllogisms of Aristotle: "All humans are mortal; Socrates is a human; therefore Socrates is mortal." Rendered formally: if all A are B, and c is an A, then c is a B. First-year philosophy students cram this as predicate logic.
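
How mechanically such an inference can be carried out is suggested by this minimal Python sketch, the Socrates syllogism as one step of rule application. The encoding of facts and rules is an illustrative assumption, not a quotation from any system discussed here.

```python
facts = {("human", "socrates")}            # minor premise: Socrates is a human
rules = [("human", "mortal")]              # major premise: all humans are mortal

changed = True
while changed:                             # apply rules until nothing new follows
    changed = False
    for premise, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("mortal", "socrates") in facts)     # True: therefore Socrates is mortal
```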

It is sad that the applicability of predicate logic in the computer is so little understood in Germany; after all, it was a German bridge builder who invented the computer in the 1940s. But the Nazis did not understand Konrad Zuse either. The computer therefore began its meteoric rise from American soil, while Germany fell decades behind. So it has remained to this day.

Cognitive science is growing in the New World

How could it be otherwise: the delicate plant of cognitive science sprouts almost exclusively in the New World. The intellectual ground was prepared there. In 1938 the MIT mathematician Claude Shannon wrote a master's thesis entitled "A Symbolic Analysis of Relay and Switching Circuits". Shannon's theses were celebrated by the cognitive psychologist Howard Gardner in his book "The Mind's New Science" /2/ (published in English in 1985 and in German by Klett-Cotta in 1989 as "Dem Denken auf der Spur") as "probably the most important thesis of our century".

Shannon proved that Boole's algebraic true-false logic could be mapped onto the on-off principle of electrical circuits. This laid the foundation for machines that could perform truth-functional operations. Shannon also demonstrated that algorithms (descriptions of methods for solving a task in finite time) could run on such machines. Far-sightedly, he recommended feeding computers formal logic rather than mere arithmetic.
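
Shannon's insight can be sketched in a few lines of Python instead of relays: switches in series behave like Boole's AND, switches in parallel like OR, and every truth function can be composed from such elements. The function names here are invented for illustration.

```python
def series(a: bool, b: bool) -> bool:
    return a and b        # current flows only if both switches are closed

def parallel(a: bool, b: bool) -> bool:
    return a or b         # current flows if either switch is closed

def inverter(a: bool) -> bool:
    return not a          # a normally-closed relay contact

# Any truth-functional operation can be composed from these, e.g. XOR:
def xor(a: bool, b: bool) -> bool:
    return parallel(series(a, inverter(b)), series(inverter(a), b))

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```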

Around the same time, the Englishman Alan Turing formulated his epochal ideas about the basic features and commonalities of human and machine intelligence. Today Turing is one of the pillars of AI philosophy. The question of whether a machine behaves intelligently is answered by this school exclusively with the Turing test: a machine is considered intelligent if a skeptical contemporary, without visual contact with his counterpart, cannot reliably tell whether the answers he receives come from a machine or from a person.

Turing constructed, on paper, the archetype of all machines that can process algorithms. Every computer is therefore a Turing machine. Possibly we are Turing machines too. Some believe it and some don't.
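
What such a machine amounts to fits in a few lines of Python: a tape, a head, and a table of rules. The example program below, which inverts a string of binary digits, is an invented illustration, not Turing's own.

```python
def run(tape, rules, state="start", head=0):
    tape = dict(enumerate(tape))               # sparse tape, blank = " "
    while state != "halt":
        symbol = tape.get(head, " ")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Program: walk right, flipping 0 <-> 1, halt at the first blank cell.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", " "): ("halt",  " ", "R"),
}

print(run("10110", rules))   # prints 01001
```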

How did the story continue? In the 1950s the aforementioned Newell and Simon, together with Cliff Shaw, were given access to a vacuum-tube computer at the RAND Corporation. They soon set out to prove that it could do more than just crunch numbers. They programmed the legendary "Logic Theorist". Its task was to prove theorems of the British mathematicians and philosophers Bertrand Russell and Alfred North Whitehead from the "Principia Mathematica" (1910-1913).

And it worked: in August 1956 the Logic Theorist delivered its first complete proof of a Russell-Whitehead theorem; another 37 followed. The program, for which Newell and Simon devised a higher-level programming language oriented toward humans rather than the machine, used Aristotelian syllogisms in its reasoning.

A Minsky student then developed a pure syllogism program at the end of the 1960s which, given two figures in a puzzle, could infer other matching pairs of figures. The program understood descriptions like "in", "above", "left of", "rotated". It did its homework at the level of a 16-year-old student. Another Minsky disciple wrote a program that solved rule-of-three problems, not algebraically but semantically.

Another American, John McCarthy, developed the symbolic programming language "Lisp". The Europeans followed with "Prolog" only much later. For McCarthy, intelligence consisted of axioms and logical conclusions. Lisp adopted much from the Aristotelian syllogisms and manipulated lists of symbols using formal logic. That was tempting, and it led to the belief that knowledge consists of nothing but logic. Even Minsky now contradicts this.
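
The flavor of such symbol-list manipulation can be hinted at in Python: programs and data share one form, nested lists of symbols, which rules then rewrite. The rule format below is an invented illustration, not McCarthy's Lisp.

```python
# An implication, held as plain data: all humans are mortal.
expr = ["implies", ["human", "x"], ["mortal", "x"]]

def substitute(expr, bindings):
    """Replace variable symbols in a nested symbol list by their bindings."""
    if isinstance(expr, list):
        return [substitute(e, bindings) for e in expr]
    return bindings.get(expr, expr)

print(substitute(expr, {"x": "socrates"}))
# ['implies', ['human', 'socrates'], ['mortal', 'socrates']]
```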

Ore deposits discovered by AI computers

Edward Feigenbaum, a student of Simon's and today one of the most important serious representatives of artificial intelligence, created the world's first expert system together with the genetics Nobel laureate Joshua Lederberg. With it the computer analyzed data on organic compounds received from a mass spectrograph. The expert system formulated hypotheses about molecular structures from mountains of data and tested them at the same time.
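
The rule-based core of such a system can be suggested in a few lines of Python: rules map observed evidence to candidate hypotheses, which are then checked against the data. The spectral "evidence" and hypotheses below are invented placeholders, not the real system's chemistry.

```python
rules = [
    ({"fragment_mass_44"}, "carboxyl group present"),
    ({"fragment_mass_44", "fragment_mass_17"}, "amide group present"),
]

def hypotheses(observed):
    """Propose every hypothesis whose required evidence is fully observed."""
    return [h for needed, h in rules if needed <= observed]

print(hypotheses({"fragment_mass_44", "fragment_mass_17"}))
```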

Expert systems have made their way since then, even if some experts fear being replaced by the computer. The first commercially available Lisp machine came from Texas Instruments. It immediately caused a sensation: within two weeks, the AI computer tracked down a California ore deposit that had been sought in vain for decades.

As more and more Lisp machines emerged that processed symbols and applied predicate logic, two MIT computer scientists once again astonished the world with sophisticated psychological and cognitive computer programs. Weizenbaum presented his dialogue program "Eliza", which simulated a therapist so deceptively that test subjects were convinced they were communicating with a person. Weizenbaum was speechless, because he had taught his program only a few platitudes. His sympathy for AI waned; he has been criticizing it ruthlessly ever since. In Germany in particular he found numerous sympathizers among the critical intelligentsia.
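
How little stood behind the illusion can be suggested with a short Python sketch in Eliza's style: keyword patterns with canned reflections. The two rules here are invented stand-ins for Weizenbaum's script.

```python
import re

rules = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),  "Tell me more about your {0}."),
]

def reply(utterance: str) -> str:
    for pattern, template in rules:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."                      # the all-purpose platitude

print(reply("I am unhappy"))   # Why do you say you are unhappy?
```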

Terry Winograd today takes a not quite as critical, but still cautious, view of what he once did. In 1970 the semanticist breathed life into an AI robot named "Shrdlu". It was a robot with a limited ability to think, which found its way around a blocks world surprisingly well. The computer processed natural language. Shrdlu knew what was geometrical and what was not. It could also distinguish colors, by no means a trivial task. Several "specialists" worked behind the scenes in the form of interacting software modules: an expert for syntax, another for semantics, and then something like a dramaturge that kept track of pronouns such as "this", "that", and "the one which".

Shrdlu solved most problems by deduction, an inference method of philosophical epistemology: cuboids have several rectangular faces, pyramids at most one (the general); so if an object shows several rectangular faces, it must be a cuboid (the concrete). Compact knowledge of this kind was built into Winograd's robot. Shrdlu astonished the world and sparked heated discussions as to whether the thing really understood what it was doing. Winograd now categorically denies this. He has withdrawn into linguistics. In the book "Understanding Computers and Cognition" /3/, which he wrote together with a former minister in the Allende government, Winograd speculates, before diving back into his old subject, that all previous approaches in computer science are probably insufficient to actually build human-friendly computers. The Bremen computer science professor Wolfgang Coy writes in the afterword: "The long successful suppression of philosophical foundations is collapsing."
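
The inference pattern just described, from a general rule about shapes to a concrete classification, fits in a few lines of Python. The rule threshold and the scene data are illustrative assumptions, not Winograd's code.

```python
def classify(rect_faces: int) -> str:
    # General rule: cuboids show several rectangular faces, pyramids at most one.
    if rect_faces >= 2:
        return "cuboid"
    return "pyramid"

scene = {"a": 3, "b": 1, "c": 6}   # object -> rectangular faces observed

for obj, faces in scene.items():
    print(obj, "is a", classify(faces))   # the concrete conclusion per object
```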

In the spirit of Winograd and other AI skeptics, most logic software engineers have meekly returned to the modesty of old Socrates ("I know that I know nothing"). This is less due to the never-ending criticism of the anti-computer fundamentalists. Rather, cognitive science itself recognizes, the deeper it penetrates the matter, how extraordinarily difficult intelligence is. Winograd's successors prefer to study the learning abilities of young children rather than the knowledge of adult experts. Some have even turned to insect research.

"The rule teaches," says Descartes, "that we must not immediately deal with difficult and arduous objects, but should first consider very insignificant and extremely simple procedures, especially those in which there is order." The cognitive scientists have grasped the lesson of the French enlightener whom they consider to be their spiritual forefather alongside Kant.

Descartes is of interest because he mentally separated the cogito from the body and thereby gave it autonomy. If so, the faculty of thought could also be synthesized in a machine or in some other medium. Kant is valued for his idea that the mind, having no direct access to reality, creates it in consciousness as a symbolic representation of the world. This encourages the search for computer programs that can do the same.

Linguistics in particular makes further important contributions on the way to a philosophy of the computer age. The proof of close connections between linguistic grammar and formal logic led to the (re)discovery of the syntax rules in argumentative statements. This time, however, it went far beyond Aristotle. At the end of the 1950s the linguist Noam Chomsky, with his "Syntactic Structures", shattered basic beliefs of the humanities. He believed that all areas of the mind, including language, operate according to rules or principles that can be uncovered and formalized.

The simulation of thought processes is accepted

According to Chomsky, thought processes are best explored through language. Philosophers like Karl Popper see language as the source of all objective knowledge. And Chomsky's syntax allows algorithms to be written down in a computer-friendly manner. The computer masters syntax, makes statements with it, and draws conclusions. That it understands the semantics must be doubted.
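
What "computer-friendly syntax" means can be shown with a toy Python sketch of Chomsky-style rewrite rules: the program produces grammatical sentences without any grasp of their meaning. Grammar and lexicon are invented for illustration.

```python
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["philosopher"], ["computer"]],
    "V":  [["understands"], ["simulates"]],
}

def generate(symbol="S"):
    if symbol not in grammar:                # terminal word
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))   # e.g. "the computer simulates the philosopher"
```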

The Californian philosopher of language John Searle is such a skeptic. He relentlessly criticizes AI in its strong form. Searle does accept the logical simulation of thought processes on computers as a way to learn more about the human mind. Nevertheless, he denies computers semantic understanding for all time.

He demonstrated the argument with his famous "Chinese Room". Searle sits in a closed room and is handed documents written in Chinese. He understands only English, not Chinese. To get by, he has been given a manual, written in English, with rules for matching Chinese characters to other Chinese characters. Searle thus learns nothing more than to assign sets of formal symbols of one kind to symbol sets of another kind. To outsiders he gives the impression of knowing Chinese. In reality he has no idea what the symbols mean.
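
The thought experiment reduces naturally to a lookup table, sketched here in Python. The "Chinese" entries are arbitrary placeholders; nothing in the program represents their meaning.

```python
# The Chinese Room as pure symbol shuffling: incoming strings are matched
# to outgoing strings by rote, without any representation of meaning.

rulebook = {
    "你好吗": "我很好",        # the operator need not know what either string means
    "你会中文吗": "会一点",
}

def room(message: str) -> str:
    # Follow the rulebook mechanically; understanding never enters into it.
    return rulebook.get(message, "请再说一遍")

print(room("你好吗"))
```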

Searle's idea reduces the computer to a manipulator of lists of symbols that mean something only to humans. The machine itself understands absolutely nothing, because it does not grasp the meaning of the signs. No amount of intelligent action by the computer or robot can hide this. But Searle should be asked: which industrial worker or office clerk really understands the deeper background of his own actions? Don't we all function like intelligent robots within a largely uncomprehended economic, social, and natural structure?

The British AI philosopher Margaret Boden from the University of Sussex puts her finger right on this wound and strongly contradicts Searle. She holds that there are already computer programs "that do things, or at any rate begin to do things, which poorly informed critics have described a priori as impossible. These include: perceiving holistically instead of atomistically; using language creatively; translating meaningfully from one language into another with the help of a language-neutral semantic representation; planning actions in rough outline and deciding on the details only during execution; recognizing different emotional reactions depending on the psychological context of the person concerned."

Be that as it may: with cognitive science, the study of human intelligence, of its capacity for logical thought and fine distinction, and of its constantly expanding linguistic expression enters a new stage. The computer serves as a model of intellectual work and as a laboratory for analyzing thinking. Theories about which scholarship could previously only speculate can now be tested for the first time without great danger to the mind and life of the individual or of society as a whole. Philosophy and the computer become a team.

Many philosophers may view this as blasphemy. They should keep in mind that a culture's intellectual appropriation of a modern technology is of the utmost importance, especially when it is a technology that mimics the brain. A philosophy that ignores the computer is on its way into the barrel of Diogenes. The computer scientists among the cognitive scientists know: without philosophy, without better knowledge of human learning, thinking, and understanding, there will be no more intelligent and more powerful computers.

Controlling important parts of the everyday world and the economy by computer requires that the programmers have sound theories about the world and human society. For this, the treasure trove of philosophical knowledge must be unearthed, even if it should only turn out that the computer's limits are narrowly drawn.

The philosopher Hans-Georg Gadamer drew these limits razor-sharp: "What we understand is based on what we already know, and what we already know we owe to our ability to understand." If we nevertheless make occasional intellectual progress, it is only because here and there we overcome prejudices "and thereby emancipate ourselves from some of the limits that they set on our thinking".