Horse Sense About AI | Psychology Today

Clever Hans, the educated horse. Source: DALL-E 3 / OpenAI

Clever Hans was the world’s smartest horse. So it appeared, anyway. Hans and owner Wilhelm von Osten—a retired Berlin math teacher and phrenologist—traveled around Germany giving free demonstrations of the horse’s uncanny intelligence. Hans could understand German, spoken or written, and reply to questions by tapping a hoof. The answer had to be yes-or-no or numerical, indicated by the number of taps. Von Osten would ask the horse questions such as: “If the eighth day of the month comes on a Tuesday, what is the day following Friday?” The horse tapped its hoof 11 times.


Through this language of taps, the horse could tell time, spell, and do algebra. One contemporary report marveled: “He can distinguish between straw and felt hats, between canes and umbrellas.”

Crowds cheered the demonstrations even as skeptics huffed that something must be amiss. The German board of education had psychologist Carl Stumpf organize a committee to investigate. The Hans Commission, as it was called, included the director of the Berlin zoo, a veterinarian, and a circus manager. In 1904, this blue-ribbon panel reported that no trickery was involved.

Stumpf’s assistant, Oskar Pfungst, was not so sure. He conducted his own evaluation and came to a different conclusion: that owner von Osten was cueing the horse with his facial expressions and posture. Pfungst noted that von Osten tensed when the horse was tapping out a number. He relaxed at the final tap of the correct answer. This change in demeanor was effectively the horse’s cue to stop tapping.

Pfungst found that the horse was unable to answer correctly when it couldn’t see von Osten or when von Osten didn’t know the correct answer. In Pfungst’s analysis, published in 1907, von Osten was probably not a charlatan. The demonstrations were, after all, free. The cues may have been unconscious, like a poker tell that the horse had picked up on. But von Osten and his audiences wanted to believe in the novelty of an educated horse. As for Hans, he must have wanted to please his owner. Von Osten had trained Hans by giving him a lump of sugar or a carrot after the correct tap. Von Osten thought the horse was learning math; more likely, Hans was learning to keep tapping until he got a treat.


Pfungst’s debunking had little effect on von Osten or his audiences. The showman continued to exhibit Hans to delighted crowds. Pfungst’s paper did, however, affect the course of science. Today those studying animal behavior are warned of the “Clever Hans effect.” It is easy to ascribe human abilities to an animal, even when there is a simpler explanation. Animals can pick up on unconscious cues from supposedly neutral observers.

So can humans. Pfungst’s work helped motivate the use of double-blind studies of experimental drugs. In these, patients do not know whether they are getting the medicine or a placebo, and neither does the doctor. Otherwise, researchers may unintentionally transmit their expectations to the patients.

AI of the Beholder

Clever Hans has been in the news lately. The world is abuzz with talk of artificial intelligence, chatbots, and large language models. Scholars David B. Auerbach and Herbert Roitblat have remarked on the parallel between Hans and our new digital frenemies. This is not to deny the truly amazing things that chatbots do. But it is less clear what is going on “inside” a chatbot. Auerbach and Roitblat believe that we are too quick to project human attributes onto the algorithms.


There is already a growing body of research studying the psychology of human-AI interactions. In a recent study by a group at MIT’s Media Lab, volunteers interacted with the chatbot GPT-3. Some were told that the chatbot was caring; others that it was manipulative; and still others that it was completely neutral. These cues made a big difference in what the volunteers asked the chatbot and how the conversations unfolded.

“To some extent, the AI is the AI of the beholder,” explained MIT graduate student Pat Pataranutaporn. “When we describe to users what an AI agent is, it does not just change their mental model, it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, as well.”


The MIT team notes that pop culture has already primed us. AI can be benign comic relief (Star Wars) or an elusive love object (Her). But let’s face it: Cinematic AI is largely evil. There are a lot of HAL 9000s and Skynets in our collectively imagined future. This may be affecting our experience of the real AI now coming online.
