at://rude1.blacksky.team/app.bsky.feed.post/3maeuexti4k24
Record JSON
{
"$type": "app.bsky.feed.post",
"createdAt": "2025-12-19T23:39:15.658Z",
"embed": {
"$type": "app.bsky.embed.recordWithMedia",
"media": {
"$type": "app.bsky.embed.images",
"images": [
{
"alt": "The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness,[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled \"Minds, Brains, and Programs\" and published in the journal Behavioral and Brain Sciences.[1] Similar arguments had been made by Gottfried Wilhelm Leibniz (1714), Ned Block (1978) and others. Searle's version has been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[3]\n\nThe argument is directed against the philosophical positions of functionalism and computationalism,[4] which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence. Specifically, the argument is intended to refute a position Searle calls the strong AI hypothesis:[b] \"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.\"[c]\n\nAlthough its proponents originally presented the argument in reaction to statements of artificial intelligence (AI) researchers, it is not an argument against the goals of mainstream AI research because it does not show a limit in the amount of intelligent behavior a machine can display.[5] The argument applies only to digital computers running programs and does not apply to machines in general.[6] While widely discussed, the argument has been subject to significant criticism and remains controversial among philosophers of mind and AI researchers.[7][8] ",
"aspectRatio": {
"height": 610,
"width": 981
},
"image": {
"$type": "blob",
"ref": {
"$link": "bafkreifwkw3iyjdwyuptrr5pxrfahdsomerjrgbprrtfc5vckf6de7y5uq"
},
"mimeType": "image/jpeg",
"size": 557492
}
},
{
"alt": "Chinese room thought experiment\n\nSuppose that artificial intelligence research has succeeded in programming a computer to behave as if it understands Chinese. The machine accepts Chinese characters as input, carries out each instruction of the program step by step, and then produces Chinese characters as output. The machine does this so perfectly that no one can tell that they are communicating with a machine and not a hidden Chinese speaker.[6]\n\nThe questions at issue are these: does the machine actually understand the conversation, or is it just simulating the ability to understand the conversation? Does the machine have a mind in exactly the same sense that people do, or is it just acting as if it had a mind?[6]\n\nNow suppose that Searle is in a room with an English version of the program, along with sufficient pencils, paper, erasers and filing cabinets. Chinese characters are slipped in under the door, and he follows the program step-by-step, which eventually instructs him to slide other Chinese characters back out under the door. If the computer had passed the Turing test this way, it follows that Searle would do so as well, simply by running the program by hand.[6]\n\nSearle can see no essential difference between the roles of the computer[d] and himself in the experiment. Each simply follows a program, step-by-step, producing behavior that makes them appear to understand. However, Searle would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either.[6]\n\nSearle argues that, without \"understanding\" (or \"intentionality\"), we cannot describe what the machine is doing as \"thinking\" and, since it does not think, it does not have a \"mind\" in the normal sense of the word. Therefore, he concludes that the strong AI hypothesis is false: a computer running a program that simulates a mind would not have a mind in the same sense that human beings have a mind.[6] ",
"aspectRatio": {
"height": 599,
"width": 959
},
"image": {
"$type": "blob",
"ref": {
"$link": "bafkreibjm6a4ks27oj5yvlp5qqb4252vk73f3wl6vxhfqzifilcsprurym"
},
"mimeType": "image/jpeg",
"size": 430556
}
}
]
},
"record": {
"$type": "app.bsky.embed.record",
"record": {
"cid": "bafyreibkhybgyvqrwrojcgirwn5nsadujfrlmlg7ytkybxik2udtkwheby",
"uri": "at://did:plc:w4xbfzo7kqfes5zb7r6qv3rw/app.bsky.feed.post/3mae43a4dsc2o"
}
}
},
"langs": [
"en"
],
"text": "I was pretty sold on Peter Watts' \"Blindsight\" for having both neurodivergent vampires AND aliens in the same book, but now it's introduced me to a great quote and a nice philosophical thought experiment for AI in basically the same sitting."
}
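Fetching this record over XRPC (sketch)
The record above is an app.bsky.feed.post whose app.bsky.embed.recordWithMedia embed carries two images (each a blob referenced by CID) alongside a quoted post. A record like this can be read with the com.atproto.repo.getRecord XRPC method, passing the repo, collection, and rkey taken from the at:// URI at the top of this page. The TypeScript below is a minimal sketch, not this viewer's own code; the service host is an assumption, since the canonical copy lives on the account's own PDS, which you would normally locate by resolving the handle to its DID document.

// Minimal sketch: read the record shown above via com.atproto.repo.getRecord.
// Assumption: the service host below; swap it for the account's PDS if needed.

const service = "https://public.api.bsky.app"; // assumed host, not confirmed by this page

const params = new URLSearchParams({
  repo: "rude1.blacksky.team",        // handle (or DID) from the at:// URI
  collection: "app.bsky.feed.post",   // collection NSID
  rkey: "3maeuexti4k24",              // record key from the at:// URI
});

async function getRecord() {
  const res = await fetch(`${service}/xrpc/com.atproto.repo.getRecord?${params}`);
  if (!res.ok) throw new Error(`getRecord failed: ${res.status}`);
  const { uri, cid, value } = await res.json(); // value is the Record JSON shown above
  console.log(uri, cid);
  console.log(value.text);
}

getRecord().catch(console.error);

The image bytes themselves are not in the record; each $link CID points at a blob that, under the same assumptions, could be retrieved from the hosting PDS with com.atproto.sync.getBlob using the repo's DID and that CID.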