No dumb questions: Are there unthinkable thoughts?
I recently found myself on a Wikipedia page entitled
List of animals by number of neurons. I was surprised to learn that starfish have about 500 neurons, which is only double the neurons of the microscopic
tardigrade. Is that even enough neurons to have a sensation of thought? What
is the sensation of thought? In any case, I figured, those poor things will never experience even a fraction of the thoughts that a human can have in the span of a single day. That left me with a nagging question, though.
Are there thoughts that no human will ever be able to think?
On one hand, thoughts are usually characterized (somewhat anthropocentrically) by the fact that someone, you know, thought them. I mean, a starfish definitely has no idea of the thoughts that lie beyond its simple echinodermic existence, so how could we?
Call me overconfident, but I have more faith in humans. With our self-awareness and power for abstraction, I think we can chip away at some different angles of this question and gain a vague intuition for what lies beyond. It's like looking at a black hole; there's literally nothing there to see– no photons to reach your eyes– but by observing all the stuff around it, we know it's there, and that's pretty cool.
So with that said, let's think the unthinkable!
I asked a few people this question, and the first thing that usually came up was thoughts so unspeakably terrible that our brain resists our attempts to think them. After all, that's the typical meaning of the word "unthinkable". But for something that's supposedly unthinkable, these kinds of thoughts seem rather... I don't know, thinkable?
Moving on, another popular topic was language, and for good reason. Language symbolizes our thoughts in a very deep way; there's a pretty well-researched effect whereby the language you use affects
the way you perceive the world. In the study I linked, researchers found that participants would describe the same event differently depending on the grammatical constraints of the language they used (in this case, English or German).
So are "German thoughts" unthinkable to a non-German speaker?
The more you think about this, the less tractable it gets. For one thing, I'm sure the variation within groups is larger than the variation between them; in other words, if you could somehow measure the average "similarity in thought" between an Anglophone and a Germanophone, you could find two English speakers whose thought patterns are much more dissimilar.
But to me, this line of reasoning leads to a pretty reductionist conclusion: that
everyone else's thoughts are "unthinkable" because the only way to truly experience their exact sensation of thought would be to have their same brain and memories. This isn't very interesting because it's more "unknowable" than "unthinkable". Rather than fall down the rabbit hole / dead end that is the
hard problem of consciousness, let's take a step back and think more holistically for a second.
We're obviously dealing with a very thorny and layered question, but just to start somewhere, let's start with something we've all experienced: experiences. While I don't claim to know the exact relationship between experiences and thoughts, allow me to fulfill my quota of "one controversial opinion per essay" by asserting that there is a relationship. Jokes aside, one could argue that all thoughts are rooted, somehow, in prior experience. Immediately after writing that sentence, I fell down a very deep rabbit hole called "Would a brain that never received any stimulus experience thought?" which I'm just going to have to put aside for now. It's still safe to say that most of your thoughts draw from your experiences, "real" or imagined.
Let's take an apple, the poster child of thinkable objects. Thinking about an apple is pretty easy. If you're a visual thinker, you can just picture an apple. Visual thinking isn't the only way to go, though. If it were, we'd have to argue that people with
aphantasia can't think of things, which is a loser take. Personally, if you ask me to think of an apple, the experience is more like a vague
concept of an apple being brought to the forefront of my mind, primed to answer questions like "what does one taste like?" or "could you build a house out of them?". I've also had plenty of experiences with apples, so it's equally easy to think about apples rolling down hills, people bobbing for apples, and so on.
It goes a step further; I've never seen a blue apple, in person or otherwise, but it's just as easy to conceptualize a blue apple. Why? I've seen blue things, and I've seen apples, and I've seen things be different colors, so bada-bing, bada-boom, blue apple.
We can take this yet another step further, beyond the realm of things I haven't experienced to the realm of things no human could possibly experience, like a cow jumping over the moon or a person walking through a wall. Impossible to think? Not in the slightest; I just did, and you did as well. I can picture a cow jumping over the moon because I've seen cows, animals jumping, and the moon. Even if I hadn't seen characters clipping through walls in video games, I could draw from my experience of liquids to imagine someone walking through a wall as if it were a vertical pool of viscous wall-like liquid.
This feels like
second nature to us (because it is), but it's actually extremely deep. Our capability to imagine impossibilities by freely combining concepts is one of our greatest strengths. In fact, without it, we wouldn't have the rich language that sets us apart from other animals. As Professor Dr. Eric Reuland writes in his paper
Language and imagination: Evolutionary explorations:
Given [our linguistic tools], we have recursive combinability and principles enabling the interpretation of the structures produced. The interpretation rules are insensitive to plausibility or implausibility, sense or nonsense. They as easily combine brown with bear as square with circle or white (or black) with hole. A stone may jump, a mountain may hear. In short what we have is imagination unleashed.
So, if our brain is capable of combining concepts to create essentially infinitely many imagined realities, what could possibly stop us?
Let's travel 10,000 years back in time to 8,000 BC and find a "bustling" agricultural civilization nestled in the
Fertile Crescent. You think, well, here are a bunch of humans who have brains essentially identical to ours (evolutionarily speaking) and are capable of full-fledged spoken language. Now, let's just wait for one of them to randomly synthesize enough thoughts to stumble upon the modern-day design of the internet. Just to make it a little more tractable, say we come up with a description of the internet that feels sufficiently accurate while being theoretically possible for a person with 8,000 BC technology to grasp. I made an attempt for the sake of illustration; read it if you want, but it's mostly a proof of concept.
The internet for the Stone Age polymath
First, we must understand the idea of code. A code is essentially a way of using symbols to represent something. Once you understand this, you can invent written language, which uses an alphabet (a set of symbols) to represent spoken language. Similarly with numbers to represent the quantity of things. Now, consider a message which says erase all the letter "e"s in this message. If you were to follow this instruction, you would modify the symbols in the message and wind up with ras all th lttr ""s in this mssag. This illustrates the point that symbols can be both "just" symbols and (once interpreted) instructions for modifying symbols.
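To see the "e" example in modern terms, here's a tiny sketch in Python (an anachronism for our Stone Age reader, of course) that treats the message as plain symbols and then carries out the instruction those symbols encode:

```python
# The message is "just" symbols (data), but once interpreted, it's also
# an instruction for modifying symbols.
message = 'erase all the letter "e"s in this message'
print(message.replace("e", ""))  # ras all th lttr ""s in this mssag
```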
Once you grasp this concept, you can begin to conceptualize a Turing Machine (which is better described
elsewhere, but which doesn't require any fancy mathematics or material science to imagine). Once you've got the Turing Machine down, you can start to imagine that you have a physical material that allows you to represent symbols; for example, a magical material that can be precisely controlled to have two states: an "off" and "on" state. We'll call one chunk of this material a
bit. Just like the "e" example, these bits can be used both as data (for example, to represent a message in written language) and instructions (for example, a procedure for moving a message from the bits it's currently stored on to another set of bits). Building on this concept, you can arrive at the idea of a
programmable computer.
Now, let's say you have a computer, and your friend has a computer in another distant village. You have a message stored on your computer that you want to send to your friend. You could connect your computers with a long wire, which is made of a material that can turn "off" or "on", just like bits. Since your computers are programmable, you can write instructions to take the message on your computer and flash the wire "on" and "off" to "read" out the message on your computer. Your friend's computer would have a similar program to "read" the wire and copy the pattern of "off"s and "on"s onto its own bits. Now, in essence, you have the internet!
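As a loose, modern illustration of that last step, here's a toy version of the wire in Python: the "sender" flashes the message out as a pattern of offs and ons, and the "receiver" copies the pattern back into symbols. (The eight-pulses-per-character scheme is just an assumption for this sketch.)

```python
# Encode each character as eight on/off pulses, send them down the
# "wire", then decode the pulses back into characters at the other end.
message = "HELLO"
pulses = "".join(f"{ord(ch):08b}" for ch in message)  # e.g. "01001000..."
received = "".join(
    chr(int(pulses[i:i + 8], 2)) for i in range(0, len(pulses), 8)
)
assert received == message  # the distant village got the message
```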
Obviously, there are tweaks that could be made. For example, I left out the idea of keyboards and monitors, and I didn't mention how wires can be replaced with wireless technology, but the core idea is there. This description also doesn't make any mention of electricity, which I think is acceptable because the exact mechanics of electricity are low-level enough to be an implementation detail of computers and the internet.
I'm not sure if I even have to say this, but the odds that a person living in Neolithic times could make these conceptual breakthroughs are astronomically low, and I'm even tempted to claim that it's categorically impossible. That is, even if you simulated infinitely many Neolithic agricultural societies, there wouldn't be one human that could conceptualize the idea of the internet in this way. I don't have any hard facts to back up that stronger claim, but if you don't believe me, try it and get back to me.
I mean, the second step in that sequence– recognizing that symbols can be used to represent spoken language– took about another five thousand years from our 8,000 BC starting point to work out. While we can't say for sure that no one had the idea of written language earlier, five thousand years without evidence of writing sure says something.
Ideas follow the laws of natural selection, and as such, complex concepts take time and generations to come into existence. Just as humans didn't suddenly pop into existence in a vacuum, but evolved over millions of years of natural selection, the thought of an internet made of computers was developed gradually over thousands of years, each generation of great minds standing atop the shoulders of the society that came before.
I think about Isaac Newton a lot in the context of being ahead of one's time. Newton's genius is pretty hard to overstate. Over the course of the eight decades of his life, he:
Generalized the binomial theorem from positive integers to all real numbers
Invented the reflecting telescope
Invented calculus
Published the Principia, which laid out the laws of motion and the idea of universal gravitation
Devised "Newton's method" for numerically approximating the roots of equations
Made the first attempt to reconcile the wave- and particle-like properties of light
Anticipated the idea of an electric field
Refined the scientific method
And a whole lot more that I probably forgot. But for all his genius, he probably couldn't have formulated quantum field theory or general relativity, predicted the existence of black holes, or proven Fermat's Last Theorem. The stage was set, so to speak, for Newton to help push humanity from the Scientific Revolution into the Enlightenment, but the time was not right for modern physics or ring theory. Genius only gets you so far. So, assuming humans continue to exist in 10,000 years (which I highly doubt), the concepts familiar to our descendants in the year 12025 would be truly unthinkable to us now, with our current understanding of the universe.
That sums up one potential answer to the question of, "Are there unthinkable thoughts?", which is, "Yes, if the concepts require too much evolution from current concepts". Or, more poetically, "Yes, if getting there would exceed the speed of thought". Now, let's take a look at a different angle: experiences that our brains just aren't wired to imagine.
Most computers we're familiar with have both hardware and software. The hardware includes the physical components that literally make up the computer, like the CPU, RAM, disk, motherboard, and peripherals like the monitor and speakers. We can also write out sets of instructions, or programs, that tell the hardware what to do. These programs constitute the software of the computer. In this way, two computers with the same hardware can be programmed to do extremely different things.
In addition to distinguishing between hardware and software, you could identify multiple layers of software in a computer, ranging from "low-level" (close to the hardware) to "high-level" (abstracted from the hardware). Near the bottom sits the assembler, which turns assembly language into the 1s and 0s that the CPU can execute. A level up is the compiler, which turns human-readable code into assembly, and somewhere above that, we'd find the operating system, which manages the hardware and gives us a much friendlier way to interface with the computer.
These higher levels of software make the hardware easier to work with at the expense of flexibility. For example, the operating system won't let you write so many files that you overwrite the memory containing the instructions for the operating system itself, even though the hardware technically permits it.
The brain can be understood similarly. The "hardware" of our brain would be our neurons, as well as the laws of physics that govern the chemistry that governs those neurons. Layered on top of that, we have various (fuzzy) levels of mental "software". Obviously, these different levels of cognition can't be teased apart as easily as the CPU from the motherboard, but some brain functions certainly seem lower-level than others.
For example, there are parts of your brain (notably the brain stem) that are responsible for regulating your heart rate and breathing rate. These are very low-level functions. You can become conscious of your breathing and choose to hold your breath longer than average ("you" referring to the part of your brain responsible for your high-level sense of self and agency), but ultimately, your brain stem has the final say and will take over to prevent you from dying of hypoxia.
Cutting to the chase: while you can change the software of your brain through conscious thought and experiences, there's also mental hardware, which is much less malleable. Just like you can't program a regular computer to be a quantum computer, you can't think your way into, say, conceptualizing the fourth dimension.
Let's elaborate on that.
Consider your eyes. The eye gets a (more or less) two-dimensional image of the world. I say that because all our brain has to work with are neural signals from a field of cones that get stimulated by different wavelengths of light. Unlike radar, where you send and receive a signal and measure how long the round trip took, the cones in our eyes have no idea how long it took for light to reach them. In other words, there's no direct depth measurement. To make matters even more two-dimensional, our eyes rest only a few inches apart on the plane of our face, meaning we can usually see at most half of an object at any given time.
So, if we've never seen every side of an object at once, why do we experience the world as a three-dimensional place with three-dimensional things, instead of a two-dimensional screen of moving pictures? It goes beyond the fact that we have two eyes; the visual processing part of our brain is hard-wired to use certain visual cues to construct a three-dimensional model of the world from these excited cones. This automatic "lifting" of visual information to three dimensions allows us to look at a video, or an image, or even a drawing, and perceive depth with no effort necessary. Quite the opposite: it's basically impossible to not see things in three dimensions. That said, let's try to see things in four dimensions.
We can actually learn a lot of things about the fourth dimension just by making connections between lower dimensions. I'll go through a few of them.
As three dimensional beings, we can look at a 2D object and see all its sides at once, including the inside of the object. It's all "laid out in front of us". In our three-dimensional world, a single viewpoint is restricted to seeing at most half of an object at a time, and the inside is hidden from view. In the fourth dimension, then, we could look at a 3D object and easily see all of its sides at once, as well as its contents. Picture a box in front of you, with things inside. If you were a fourth-dimensional being, you could see all six sides of the box, as well as the contents of the box. You could also see every side of each item in the box, and their insides. It would all be laid out in front of you.
We can discover some additional details by generalizing familiar geometry. For instance, the amount of space taken up by a line is just the length of the line. The amount of space taken up by a square is its area, which is the side length squared. Similarly, the volume of a cube is the edge length cubed. Generalizing further, the hypervolume of a tesseract (a four-dimensional cube) would be the edge length raised to the fourth power.
Here's another fact: a line has 2 sides, a square has 4, and a cube has 6 faces, so a tesseract would have 8 "faces", or cells (as they're called).
One more: if you picked two endpoints of a line (there's only one choice), they would be separated by one edge, namely, the line itself. On a square, the most edges you could find between any two vertices is two. On a cube, that number is three. Consequently, on a tesseract, the greatest number of edges separating two vertices would be four.
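Since these patterns are so mechanical, we can tabulate them; here's a quick Python sketch rolling the last three observations into one loop. (The vertex-distance fact works because two corners of an n-cube are as many edges apart as the number of coordinates in which they differ.)

```python
# Facets, hypervolume, and maximum corner-to-corner edge distance
# for the n-dimensional "cubes" discussed above.
for n, name in [(1, "line"), (2, "square"), (3, "cube"), (4, "tesseract")]:
    facets = 2 * n  # 2 endpoints, 4 sides, 6 faces, 8 cells
    distance = n    # opposite corners differ in all n coordinates
    print(f"{name}: {facets} facets, volume s^{n}, corners {distance} edges apart")
```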

Note that the tesseract shown here is a 3D projection of a 4D object
Surely, given all this information, you could picture a tesseract in all its four-dimensional glory?
Clearly, no amount of describing the properties of a fourth spatial dimension would enable us to
actually conceptualize it with the same vividness with which we grasp the third dimension. Although some people have dedicated a lot of time to the pursuit of four-dimensional intuition, I would argue that these people are dancing around four-dimensional thoughts, as closely as possible without actually experiencing them. You could, like I just did, read through the
Wikipedia page for Tesseract and go like "uh huh, yep, makes sense", but actual four-dimensional thoughts would still remain just out of reach.
This is categorically different from things like cows jumping over the moon, which we also have no experience of. It's not just that we've never experienced the fourth dimension, it's that the hardware of our brain– which went all-in on the third dimension– is working directly against us. You could make a similar argument for other sensations that humans don't experience, like seeing infrared / ultraviolet light, or sensing magnetic fields. But I think higher dimensions are a much more satisfying candidate for "unthinkable thoughts" because they're so close to our regular thoughts, and at the same time, so far away that the word "far" doesn't even apply.
Addendum
After writing this essay and coming back to this section, I've softened my position a little on the subject of whether higher dimensions are possible to fully conceptualize. I think my main point mostly stands: that your brain has never perceived anything four-dimensional (spatially), which makes it fundamentally different from the other "impossible" scenarios we can construct in our heads. But we do have experience with time, which isn't a bad analogy for four spatial dimensions, and I think it's probably possible to amass so much intuition about the fourth dimension that it feels like second nature. So, pick your definition of "thought".
I'd like to call your attention to a wonderful little principle called the pigeonhole principle. If you're not aware of it, this is essentially what it says:
Pigeonhole principle: Imagine you place N things into M containers. If N > M, then at least one container will have more than one thing in it. Similarly, if N < M, then at least one container will have no things in it.
You could prove this fact from first principles, but there's really no need, since it's so obviously true. With this simple principle, you can prove that there must be two people in London who have the same number of hairs on their head, or that, in a situation where people are shaking hands, there will always be at least two people who have shaken the same number of hands.
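If you'd like to watch the principle refuse to be violated, here's a throwaway sketch in Python: no matter how the things land, some container always doubles up.

```python
import random

# 10 things, 9 containers: at least one container must get 2 or more.
things, containers = 10, 9
assignment = [random.randrange(containers) for _ in range(things)]
counts = [assignment.count(c) for c in range(containers)]
assert max(counts) >= 2  # holds for every possible assignment
```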
Let's go back to our echinodermic friend, the starfish, with its 500 neurons. Could a starfish ever conceptualize the pigeonhole principle? I strongly doubt it. Even if a particular starfish got lots of experience putting things in containers (already doubtful), the idea that it would generalize those experiences to a "principle", with all of its 500 neurons, is fantasy. I don't know where the line is, but it's definitely above 500.
Let's jump way up the neurological pecking order and consider non-human primates. Could a
monkey conceptualize the pigeonhole principle? It feels harder to say "no" right away. I'm certain that a monkey understands the concept of "you can't fit too many things in a container", but the pigeonhole principle deals with an idea more discrete and precise than that. Research suggests that monkeys can reliably distinguish between
two and three, but their representation of numbers beyond four is probably approximate. Monkeys can still
compare bigger numbers, but in accordance with
Weber's Law– they're mostly paying attention to the
ratio between the two sets of objects, not the absolute difference. Putting the two together, monkeys probably don't have a concept of the difference between 50 and 51, which I would argue precludes them from truly grasping the pigeonhole principle.
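To make that concrete, here's a toy model of ratio-based number sense in Python; the threshold is a number I made up purely for illustration, not a measured Weber fraction.

```python
# Two quantities are distinguishable only if they differ by a large
# enough *proportion*, not a large enough absolute amount.
def distinguishable(a: int, b: int, weber_fraction: float = 0.2) -> bool:
    return abs(a - b) / min(a, b) > weber_fraction

print(distinguishable(2, 3))    # True: a 3:2 ratio is easy to spot
print(distinguishable(50, 51))  # False: 51:50 is far too close to call
```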
So, on the scale of "understands the pigeonhole principle", we definitely have a spectrum from starfish to primates to humans. This raises an obvious question: why? Although having a sufficient number of neurons is definitely a prerequisite, it's clearly not the deciding factor. It seems more related to humans' capacity for symbolic thought: the ability to refer to things as concepts, combine concepts freely, abstract patterns from reality, generalize those patterns, and apply them to new contexts. This ability seems pretty unique to humans.
That's definitely a morale booster, and could be taken to mean that humans "won the game" in terms of intelligence, and that we really are capable of conceptualizing any abstract concept or pattern. But– while I can't prove otherwise– I have my suspicions.
In my
formal systems essay, I made a passing comment about how, although we have a pretty much perfect understanding of addition and multiplication by themselves, we don't yet fully understand how they interact with each other. This gets at my meaning of the word "understand".
Levels of understanding of addition and multiplication
Basic understanding: Know how to compute the sum of products and the product of sums.
Medium understanding: Know how to check whether a number is prime.
Advanced understanding: Explain why every number greater than 1 has a unique prime factorization, and why there are infinitely many primes.
Very advanced understanding: Explain why there are no positive integer solutions to the equation a^n + b^n = c^n when n > 2 (Fermat's Last Theorem).
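As a token of the "medium" level, here's one way to check primality in Python, leaning on the fact that any composite number must have a divisor no bigger than its square root:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # a composite n has a divisor <= sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print([p for p in range(2, 30) if is_prime(p)])  # [2, 3, 5, 7, 11, ...]
```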
Who would've guessed there could be so much mystery surrounding the interaction of addition and multiplication? Just to illustrate the point, consider the fact that Fermat's Last Theorem was asserted in 1637, but wasn't proven until more than 350 years later, when Andrew Wiles completed his proof in 1994. Some of the mathematical results Wiles used to construct his proof weren't even discovered until the 1980s (speaking of the "speed of thought").
With that in mind, it feels very possible that we've already proposed conjectures that won't be proven for a thousand years. But what about the extreme? Are there "simple seeming" facts about numbers that we'll never be able to determine?
If you've heard of
Gödel's Incompleteness Theorem (GIT), you're probably thinking this is when I'll introduce it. That was my original plan, but applying GIT to the human brain is always a sketchy affair, and I got about 6,000 words deep when I realized that it wasn't worth it. So, I'll just describe my intuition and leave ol'
Kurt alone for now.
We've discussed starfish and monkeys, but it's time to bring in the beavers– specifically, busy ones. In computer science, the Busy Beaver problem is a challenge involving finding Turing machines that can keep themselves "occupied" for the longest time (the most steps) without running forever. Turing machines vying for the title of Busy Beaver will start out on a blank tape of all 0s, and can only write 0s or 1s. Additionally, Turing machines are broken up into different "weight classes" by their number of internal states. The Busy Beaver with n states is called BB(n). Let's take a look at BB(2), the Turing machine with two internal states which runs for the greatest number of steps before terminating.
BB(2)
if in state A:
    if reading 0: write a 1, move right, and switch to state B
    if reading 1: write a 1, move left, and switch to state B
if in state B:
    if reading 0: write a 1, move left, and switch to state A
    if reading 1: write a 1, move right, and halt
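If you'd like to watch this critter run, here's a minimal Turing machine simulator in Python for the table above (the dictionary encoding of the rules is just my own shorthand):

```python
# Each rule maps (state, symbol read) to (symbol written, head move, next state).
rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, None),  # None means halt
}

tape, head, state, steps = {}, 0, "A", 0
while state is not None:
    write, move, state = rules[(state, tape.get(head, 0))]  # blank cells read as 0
    tape[head] = write
    head += move
    steps += 1

print(steps, sum(tape.values()))  # 6 steps, four 1s left on the tape
```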
This critter runs for a total of 6 steps, leaving in its wake a tidy little group of four 1s. The busiest Turing machine with 3 internal states, BB(3), keeps itself occupied for 21 steps. BB(4) rages against the dying of the light but expires after 107 steps. Mathematician Allen Brady discovered this Turing machine in 1966, but couldn't prove that it was the fourth Busy Beaver until 1974. The fifth Busy Beaver, though, was a completely different story.
Allen Brady himself was skeptical that BB(5) could even be proved. He wrote:
Nature has probably embedded among the five-state holdout machines one or more problems as illusive as the Goldbach Conjecture. Or, in other terms, there will likely be nonstopping recursive patterns which are beyond our powers of recognition.
But he was wrong. In mid-2024, through the spectacular efforts of an organized online community of amateur mathematicians, the fifth busy beaver was proven to run for a mind-boggling 47,176,870 steps. If you'd like to learn more, Quanta Magazine has a
fantastic article about the whole story with all its twists and turns. It's actually one of my favorite pieces of scientific journalism.
We can visualize the outputs of Turing machines by representing the tape in each step of its computation as a sequence of rows, where colored pixels are 1s and black pixels are 0s. Take a look at a small portion of a five-state Turing machine that was shown to run forever (rotated 90 degrees for visibility):

The patterned chaos of a nonterminating Turing machine. From Quanta Magazine.
I don't know about you, but looking at stuff like this really makes me awe-struck by the beauty of the universe. It looks like an alien world.
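Just for fun, here's a low-rent text version of that kind of diagram, reusing the BB(2) rules from the simulator sketch above: each row is the tape after one step, with 1s drawn as blocks and 0s as dots.

```python
rules = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, None),
}
tape, head, state = {}, 0, "A"
while state is not None:
    write, move, state = rules[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    print("".join("█" if tape.get(i, 0) else "·" for i in range(-3, 4)))
```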
So, shall we get on with finding BB(6)? Well, I'm not sure we even have a word that describes how much harder BB(6) is compared to BB(5). To quote from the Quanta article:
Meanwhile, part of the team has moved on to the next beaver. But just four days ago, [two contributors] discovered a barrier for BB(6) that seems insurmountable: a six-rule machine whose halting problem resembles a famously intractable math problem called the Collatz conjecture. [...] "It's conceivable that this is the last busy beaver number that we will ever know."
For that matter, it's been proved that solving BB(27) is as hard as another famously unsolved problem called the
Goldbach conjecture (which was mentioned in passing earlier), and that finding
BB(744) would be as hard as proving the
Riemann Hypothesis, widely considered to be the "Holy Grail of Mathematics".
The important point is that finding each successive busy beaver is qualitatively different from (and harder than) any that came before. Each one will require new levels of abstraction to recognize the patterns in the chaos.
This idea of "patterns of chaos" is, I think, a very beautiful one. One could argue that the course of human scientific progress has been the history of finding patterns in chaos, and then noticing the chaos in those patterns, until we once again find a pattern which explains those chaotic patterns of chaos, until we notice the chaos in those patterns, and so on. Can we keep on finding the patterns... forever? If we do, will there be a point where we no longer see chaos, only patterns?
I don't think so. I have no proof, just faith. I think there are patterns in our universe which exist on levels of abstraction that our brain (which, as a reminder, evolved primarily to stay alive and tell other monkeys where the bananas are) will never be able to reach.
To be a little less hand-wavy, take the Riemann Hypothesis (RH). Without getting into the weeds of what it states, a proof that RH is correct would mean that the distribution of primes– which seems irregular up close– is actually very orderly. This is another great example of "chaotic patterns" becoming "patterned chaotic patterns". Sure, we can assume RH is true. But without a proof, no human on this planet understands why prime numbers have to be distributed in the way RH suggests.
I think higher levels of understanding unlock previously impossible, more abstract thoughts. Going back to the monkeys, their inability to precisely understand numbers above four locks them out of truly grasping mathematics. Likewise, not being able to answer some "simple" questions about prime numbers, I think, locks us out of certain high-level thoughts about numbers. And even if we do crack the Riemann Hypothesis, and the Collatz Conjecture, and the Goldbach Conjecture, and the rest– I believe there will always be another level of understanding beyond reach. At some point, we'll run into the limit of the "speed of thought" combined with humanity's finite existence combined with the maximum information density of the brain.
This thought cannot be thunk
Thus concludes my answer to the question of "Are there thoughts that can't be thunk?". Because of the way my brain is wired, this essay turned out more technical than I was initially going for, mostly because I had a hard time convincing myself that other non-technical thoughts had any real limitations that could be described with any level of precision.
To be optimistic, though, the fact that it was so damn difficult to write a compelling argument for any (interesting) limitation of the brain is uplifting in its own way. Don't overthink it. ■