Chapter 2 of the "book" you and I are working on. Remember: this is also a video game. It's almost sort of done but not quite. maybe come back later and see if it changes. i sure will.
IF YOU ARE NOT PART OF THE STORY PLEASE USE THE PHRASE NOT PART OF THE STORY IN ALL CAPS AT THE TOP OF YOUR COMMENT FOR CLARITY THANK YOU.
I think you're dead wrong on your primary point, the assertion that language is designed to be a way to convert sounds to thoughts. Language doesn't need to do that at all. Your brain doesn't respond to meat sounds - it responds to its own electrical impulses, which it generates in response to other electrical impulses that, yes, are triggered by meat sounds hitting meat detectors. However, it's a Chinese Room. I would have thought anyone who even has a chance at stumbling this deep into a rabbit hole would be familiar with the thought experiment, but I just had a brilliantly inspiring phone call with the most brilliant artist of our generation and they had never heard of it. I met Noam Chomsky once, though, when I was visiting MIT to consider it for school. I told him I was a fan of his work. He grunted and said he was glad young people care, or something like that. Or maybe not - I can't remember what I told people the last time I repeated this lie. But I've told it too many times for it not to be true.
> The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness,[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences.[1] Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since.[2] The centerpiece of Searle's argument is a thought experiment known as the Chinese room.[3]
> The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".
This is mostly bullshit, but I think it's a useful framework for thinking about ways that language absolutely can be an approximate truth vehicle for something it could never understand - language as the illiterate translator. While that sounds like it supports your argument, I think a keener observer would note that it leaves the door open for a very specific situation that can produce perfect communication. If the pathways between meat-detectors and meaning neurons are either identical, perfectly complementary, or perfectly out of phase with each other, then they CAN recreate the same electrical impulses that, when filtered through the universal hardware and software of human brains, produce the perfect transmitted meaning experience. And if we can imagine this case, it breaks your theory.
hi. thank you for your comment. i see from your profile you came out as sort of trans today. congrats! i have no personal issues with gender but idk cool for you i guess. anyway, when you transitioned why didn't you transition to someone who's fun at parties and happy with their life? i would have just done that instead of doing whatever you just did here in my book im trying to write. kindly, unsubscribe and discontinue whatever drugs you are taking. or start taking better ones or something