The Death of My Phone Calls
Chapter 2 of the "book" you and I are working on. Remember: this is also a video game. It's almost sort of done but not quite. maybe come back later and see if it changes. i sure will.
this is not a chapter of the book nor is it a level of the game - not yet anyway. but it will be. here are the base ingredients. or maybe they’re the final output - the thoughts i want people to have when they read the chapter or play the level. here is the thesis: there is a strange tension between the desire to exclusively communicate over the phone and the equally strong desire to exclusively communicate via text.
on one hand, communication in a space where miscommunication feels less common is so attractive. you can hear tone, you can backchannel, you can ask for clarification, you can do all the paralanguage that text lacks. mr grambel really knocked it out of the park with that one.
but texting has its own paralanguage, a concept i’m sure real linguists have explored while i’ve just put on my pretend linguist hat and called it a day. and texts have a permanence, a revisit-ability that adds a very different kind of protection against miscommunication. a future-proofing of sorts. but they also never die; each time you read them, the conversation is alive again. this can be both the most beautiful characteristic of a text and also the most soul-crushing.
i think both remind us of the crushing weight of language’s shortcomings. a tool made for bartering with grain and substandard copper ingots cannot comprehend the nuances of tiktok brain rot, nor the feeling of grief that comes with realizing that the person you fell in love with all those years ago no longer exists and you’re talking to the ghost that stole their body. you think it conveys that feeling, because language is a gaslighter. it is a pretentious circle-jerk of approximate truths that thinks you can convey a series of electrical impulses with meat sounds.
i think about this every time i have a phone call. it is exhausting.
wow. deep stuff. if you want more of this spam and unfinished ramblings clogging up your inbox subscribe below.
yeah keep trying to click it you silly lizard-brained goofball. maybe the next one will work, who knows
IF YOU ARE NOT PART OF THE STORY PLEASE USE THE PHRASE NOT PART OF THE STORY IN ALL CAPS AT THE TOP OF YOUR COMMENT FOR CLARITY THANK YOU.
I think you're dead wrong on your primary point, the assertion that language is designed to be a way to convert sounds to thoughts. Language doesn't need to do that at all. Your brain doesn't respond to meat sounds - it responds to its own electrical impulses, which it generates in response to other electrical impulses that, yes, are triggered by meat sounds hitting meat detectors. In other words, it's a Chinese Room. I would have thought anyone with even a chance of stumbling this deep into a rabbit hole would be familiar with the thought experiment, but I just had a brilliantly inspiring phone call with the most brilliant artist of our generation and they had never heard of it. I met Noam Chomsky once, though, when I was visiting MIT to consider it for school. I told him I was a fan of his work. He grunted and said he was glad young people care, or something like that. Or maybe not; I can't remember what I told people the last time I repeated this lie. But I've told it too many times for it not to be true.
> The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in a 1980 paper by the philosopher John Searle entitled "Minds, Brains, and Programs" and published in the journal Behavioral and Brain Sciences. Before Searle, similar arguments had been presented by figures including Gottfried Wilhelm Leibniz (1714), Anatoly Dneprov (1961), Lawrence Davis (1974) and Ned Block (1978). Searle's version has been widely discussed in the years since. The centerpiece of Searle's argument is a thought experiment known as the Chinese room.
> The thought experiment starts by placing a computer that can perfectly converse in Chinese in one room, and a human that only knows English in another, with a door separating them. Chinese characters are written and placed on a piece of paper underneath the door, and the computer can reply fluently, slipping the reply underneath the door. The human is then given English instructions which replicate the instructions and function of the computer program to converse in Chinese. The human follows the instructions and the two rooms can perfectly communicate in Chinese, but the human still does not actually understand the characters, merely following instructions to converse. Searle states that both the computer and human are doing identical tasks, following instructions without truly understanding or "thinking".
This is mostly bullshit, but I think it's a useful framework for thinking about how language can be an approximate-truth vehicle for something it could never understand: not a gaslighter, but an illiterate translator. While that sounds like it supports your argument, a keener observer would note that it leaves the door open for one very specific situation that produces perfect communication. If the pathways between meat-detectors and meaning neurons are identical, perfectly complementary, or perfectly out of phase with each other, then they CAN recreate the same electrical impulses that, when filtered through the universal hardware and software of human brains, produce the perfect transmitted-meaning experience. And if we can imagine that case, it breaks your theory.
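If you've never seen the room sketched in code, it's almost embarrassingly small: a rulebook mapping symbol-shapes to symbol-shapes, and an operator who follows it without ever touching meaning. Here's a toy sketch in Python; the phrases and replies are invented placeholders for illustration, not anything from Searle.

```python
# A toy Chinese Room: an "operator" maps input symbols to output symbols
# using a rulebook it never understands. The phrases below are invented
# placeholders for illustration, not a real conversation.

RULEBOOK = {
    "你好": "你好！",           # see this shape, return that shape
    "你懂中文吗？": "当然懂。",  # "do you understand Chinese?" -> "of course"
}

def operator(symbols: str) -> str:
    """Follow the rulebook by shape alone; meaning is never consulted."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again"

# From outside the door the replies look fluent. Inside, it's pure lookup.
print(operator("你懂中文吗？"))  # -> 当然懂。
```

The operator converses without understanding a single character. That's the illiterate translator I mean, and it's why the perfect-transmission case above doesn't require understanding anywhere in the chain: just matched rulebooks running on matched hardware.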