UNIVERSITY AT BUFFALO, THE STATE UNIVERSITY OF NEW YORK
The Department of Computer Science & Engineering
cse@buffalo
CSE 111: Great Ideas in Computer Science

What Cannot Be Computed and Artificial Intelligence

 

The Halting Problem

We've mentioned before that there exist some things a computer is incapable of computing, no matter how much time it is given. Let's consider what a machine that could solve one of these problems would have to look like.

Imagine a computer program (Turing Machine), H, which takes as input a program C as well as C's input (which we'll call i). H answers the question "does C halt on i?": it returns "yes" (meaning "yes, C halts") if C halts on i, and returns "no" (meaning "no, C does not halt") if C loops forever and will never halt. Let's refer to this as H(C, i) - that is, H with the inputs C and i.

Now, notice we can't just answer the question of whether C halts by running C on the input i. There are two possibilities:

  1. C halts, and we know it (since it halted).
  2. C loops. We'll never know this, since we'd have to wait forever for an answer.

So H can't simply simulate C: a simulation might never produce an answer, but H must always answer. It turns out that no program like H can exist at all. There is a proof sketch of this fact, but we won't go into it here.
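To see why H leads to trouble, it helps to sketch the classic contradiction in code. This is a hypothetical sketch: the function `halts` stands in for H and cannot actually be implemented, which is the whole point.

```python
# A sketch of why H cannot exist. Suppose, for contradiction, that
# someone hands us a working halts(program, inp). The names here
# (halts, d) are illustrative, not real library functions.

def halts(program, inp):
    """Pretend oracle: True if program(inp) halts, False if it loops."""
    raise NotImplementedError("No such program can exist!")

def d(program):
    # D does the opposite of whatever H predicts about program run on itself:
    if halts(program, program):
        while True:        # H said "halts", so loop forever
            pass
    else:
        return             # H said "loops", so halt immediately

# Now ask: does d(d) halt?
#   If halts(d, d) is True,  then d(d) loops forever -> H was wrong.
#   If halts(d, d) is False, then d(d) halts         -> H was wrong.
# Either way H gives a wrong answer, so H cannot exist.
```

Running d on itself forces H into a contradiction no matter which answer it gives; that is the heart of the proof we're skipping.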

This forms the basis of much of the research in computability theory.

That's a bit abstract, but there are actually a great many problems like this. One computationally relevant example is virus detection. Let's consider a virus detection program A which takes as input a potential virus V and some input to that potential virus, i. If executing V on the input i would result in infection, A returns "yes"; otherwise it returns "no". Now:

  1. If A simulates V on i and the simulation reaches the point of infection, then V is a virus and "yes" is printed.
  2. If the simulation never reaches an infection, two things could happen: either V halts (and is known not to be a virus), or V loops forever - and A waits forever for an answer.

As we know, there's no way to tell a priori whether V will halt; therefore a perfect antivirus program cannot exist.
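The argument above is really a reduction: if a perfect virus detector existed, we could use it to solve the Halting Problem. A hedged sketch, where every name (`would_infect`, `infect`, `halts`) is invented for illustration:

```python
# Sketch: a perfect virus detector would solve the Halting Problem,
# so no such detector can exist. All names here are hypothetical.

def infect():
    # Stands in for whatever malicious behavior a virus performs.
    pass

def would_infect(program, inp):
    """Pretend perfect detector: True iff running program(inp) infects us."""
    raise NotImplementedError("No such detector can exist.")

def halts(program, inp):
    # Build a wrapper that infects exactly when program(inp) halts:
    def wrapper(x):
        program(inp)   # if this loops forever, infect() is never reached
        infect()
    # A perfect detector applied to wrapper would therefore answer
    # "does program halt on inp?" - which we know is impossible.
    return would_infect(wrapper, None)
```

Since the Halting Problem is uncomputable, the assumed perfect detector must be impossible too.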

But we know antivirus programs exist! Very true, and they're okay - not great. Just because something is uncomputable doesn't mean we can't attempt to answer the question; it just means we can't be perfect at it. This is why antivirus programs let some viruses through and produce false positives elsewhere.

Artificial Intelligence

If the question of whether a program halts is the most foundational question in computability theory, then in AI the foundational question is whether cognition is computable. So really we're asking whether there can exist a computer program which computes cognition.

Let's look at a few definitions of Artificial Intelligence:

"[AI is] the science of making machines do things that would require intelligence if done by humans." --Marvin Minsky (1968)

"[AI is] the use of computer programs and programming techniques to cast light on the principles of intelligence in general and human thought in particular." --Margaret Boden (1977)

"[AI is] a field of computer science and engineering concerned with the computational understanding of what is commonly called intelligent behavior, and with the creation of artifacts that exhibit such behavior." --Stuart C. Shapiro (2010)

As you read each of these, a particular ambiguity may come to mind, especially in the Minsky and Shapiro definitions: are we talking about truly intelligent computers, or computers that simply simulate intelligent behavior (acting intelligent!)?

We call simply acting intelligent "weak AI" and actually being intelligent "strong AI." Researchers are largely divided on whether strong AI is really possible; there's certainly no 'proof' either way (yet!).

Margaret Boden's definition, though, suggests that we use computer programs which exhibit the same behavior as a human (i.e., intelligence) in order to understand humans. This is much of the foundation of cognitive science. These programs don't even need to use the same methods (algorithms) as a human; they just need to produce similar results.

The Turing Test

The Turing Test is a fairly simple concept. The idea is that there are two rooms with computers in them, both running some sort of "instant message"-like application for talking back and forth. In the first room is a human (the interrogator), and in the second room is either a human or a computer. If it is impossible for the interrogator to tell whether he or she is talking to a human or a computer, the machine passes the Turing test.

"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" --Alan Turing (1950)

What kinds of questions do we want to ask? We have machines like Watson, which can answer trivia questions with pretty good accuracy. Very few people would say Watson is intelligent, though. Deep Blue is really good at playing chess, but few would say it's intelligent either.

What we might want is a sort of Artificial IQ test which tests many different abilities, with questions similar to those on an IQ test or the SAT.

Some say though that the Turing test only tests for intelligent *behavior*, not real intelligence!

The Chinese Room

John Searle's Chinese Room argument is one famous example of an argument against the use of the Turing test to determine if a computer is intelligent.

Searle's thought experiment is as follows:

Imagine you have a two-room setup as before. In one room is an interrogator who only speaks Chinese, and in the other room is a person who speaks no Chinese at all, only English. The human who speaks English has a book which contains an algorithm which tells him or her how to manipulate the "squiggles" (Chinese characters) from the interrogator to produce a response.

The interrogator then gives the English speaker a story in Chinese and asks a series of reading comprehension questions. The English speaker (in the original thought experiment, this person was Searle himself) takes the input, transforms it using the algorithm in the book, and gives the result back to the interrogator. The interrogator sees a perfectly grammatical response in Chinese.

The human therefore seems to have just passed the Turing test for knowing Chinese... but that human neither knows nor understands Chinese! He only simulated proficiency in Chinese by using the book.

AI Techniques

Perhaps the most obvious idea for how we might build an artificial intelligence is just to build an artificial brain in software, exactly as it is in real life. This is being done on a small scale now, but there are a few issues with this approach (in my opinion):

  1. The degree of understanding we will have of the artificial brain is not necessarily more than we could have without it.
  2. It's just a copy of a biological brain - we already have those and they are easy to make more of.

On the other hand, though, we're almost assured real (not merely simulated) cognition from such an artificial brain.

Most cognitive neuroscientists aren't really interested in the above 'solution' to AI problems. They are much more interested in understanding the biological processes involved in cognition.

As we've seen throughout this course, as we begin to understand things at a low level it is natural to build abstractions upon those levels to make our models more powerful and our understanding easier (we did this in moving from binary codes to binary arithmetic to Turing machines).

Artificial Neural Networks (ANNs):

ANNs (the connectionist approach to AI) use artificial neurons which are connected to each other by weighted links.

A neuron is activated to some degree by its inputs. That activation is multiplied by each of its outgoing weights on the way to the neurons it connects to, eventually reaching the output. ANNs have to be trained (by hand or automatically) on many different sample sets of inputs and outputs before they can perform the required operations.
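The computation just described can be sketched for a single artificial neuron. The particular weights, bias, and sigmoid activation below are illustrative choices, not anything specific to the notes:

```python
import math

def sigmoid(x):
    # Squashes the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a nonlinear activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two inputs feeding one output neuron (made-up weights):
activation = neuron([1.0, 0.0], weights=[0.6, -0.4], bias=0.1)
# Training would adjust the weights until the network's outputs
# match the sample outputs it is given.
```

A real network chains many such neurons in layers, but each one performs only this simple multiply-sum-squash step.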

There are many problems with ANNs. While the name makes it sound like there's a large basis in biology, a neuron in an ANN has barely any of the complexity of one in your brain.

Good Old Fashioned Artificial Intelligence (GOFAI):

The idea behind the GOFAI approach to AI is that the brain is really just a symbol manipulation machine. The techniques used in GOFAI include logical reasoning and the use of grammars for parsing and generating natural language. In either case there are rules, defined by the creator of the system (or knowledge engineer), which specify how to manipulate the incoming data to produce output.
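As a toy illustration of symbol manipulation, here is a minimal hand-written rule system; the facts and rules are invented for this example, not part of any real GOFAI system:

```python
# A tiny GOFAI-style inference engine: facts are symbols, and each rule
# says "if all these symbols are known, conclude this new one."
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def infer(facts, rules):
    # Repeatedly apply the rules until no new symbols can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"has_feathers", "lays_eggs", "can_fly"}, rules)
# result now also contains "is_bird" and "can_migrate"
```

Everything the system "knows" is explicit in the rules a human wrote down - exactly the knowledge-engineering character of GOFAI described above.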

The Reality:

Even though people who study AI and cognitive science separate themselves into these two camps, it's likely the case that neither approach alone is sufficient. Most work in ANNs tries to solve small problems - parts of computer vision (object recognition, edge detection, etc.), language processing, and so on. Note that these are all problems which seem to have statistical solutions as well.

Work in GOFAI overlaps somewhat with ANNs - some (though not much) natural language work still uses the ideas of symbol manipulation. Most of the work in this area, though, has to do with representing knowledge and performing inference (the reasoning tasks of the brain).

 


Copyright © 2011 Daniel R. Schlegel. Last modified July 31, 2011. Adapted in part from Dr. Rapaport's notes on the Halting Problem and Artificial Intelligence.