Imagine you’re locked in a room, trying to communicate with someone outside who understands only Chinese, a language you don’t speak. Your only tools are a book of rules telling you which Chinese characters to write in response to which, paper, and something to write with. You and the person outside can pass letters through a hole in the wall.

By following the book’s rules exactly, you produce meaningful replies that the Chinese speaker outside can read. Since you make no mistakes, they come away with the impression that you are proficient in the language.

Now suppose a machine does the same. You feed it the input and it produces exactly the expected output. Does that mean the machine has understood what it is doing? Is this any sign of intelligence?
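The room’s procedure can be sketched as pure symbol lookup. This is a minimal illustration, not an implementation of any real system; the rulebook entries below are hypothetical examples.

```python
# A toy "Chinese Room": the operator matches input symbols against a
# rulebook and copies out the listed reply, with no grasp of meaning.
RULEBOOK = {
    "你好": "你好！",          # a greeting maps to a greeting
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def room_reply(message: str) -> str:
    """Follow the book: look up the input symbols, emit the listed output."""
    # Fallback entry means "Please say that again." -- still just a rule.
    return RULEBOOK.get(message, "请再说一遍。")

print(room_reply("你好"))  # a fluent-looking reply, produced without understanding
```

From the outside, the exchange looks competent; inside, there is only pattern matching against rules.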

Philosopher John Searle devised this thought experiment, known as the Chinese Room, to argue that the Turing test alone is inadequate for judging whether a machine understands anything. His conclusion is that running a program, however convincing its output, is not sufficient for understanding: the machine is merely manipulating symbols according to rules, without grasping what they mean.

For Searle, it also suggests that the human mind is not simply a computer-like symbol processor.