What a Neuron Teaches Us About Computation's Limits
Introduction: When Success Becomes a Prison
Computation has been spectacularly successful. We've used it to simulate galaxies, design aircraft, predict weather patterns, and build technologies that transform civilization. The computational paradigm—breaking systems into discrete components operating on fixed timescales with predetermined coupling rules—has revolutionized physics and engineering.
But this very success has created a dangerous assumption: that computation is the universal language for understanding all complex systems. When we turn computational methods toward life and intelligence, we encounter a fundamental barrier that no amount of computational power or algorithmic sophistication can overcome.
The problem isn't technical—it's conceptual. And it reveals itself through a pattern that should be familiar from the history of philosophy: infinite regress.
The Hidden Requirements of Computation
Before we can compute anything, we must make two crucial decisions that are so automatic we often forget we're making them:
1. Choose a Timescale
Every computational model operates on a clock. Whether it's the nanosecond cycles of a processor or the discrete time steps of a simulation, computation requires us to decide: "What is the fundamental unit of time in this model?"
2. Specify the Coupling
We must pre-define how different parts of the system interact. Which variables influence which other variables? With what delays? Through what functional relationships? This is the frame problem in disguise—we must establish in advance what's relevant to what.
For physics and engineering, these requirements pose no fundamental obstacle. A bridge doesn't rewrite the laws of structural mechanics while you're crossing it. A planet doesn't change how it responds to gravity mid-orbit. The timescale and coupling rules remain fixed, allowing computational models to work beautifully.
But the concepts of life and intelligence are different.
The Multi-Timescale Reality of Living Systems
Consider what actually happens in a living neuron:
Fast Processes (0.1-1000 milliseconds):
- Electrical signals propagate
- Neurotransmitters cross synapses
- Action potentials fire
Medium Processes (1 second to 10 minutes):
- Biochemical cascades modify synaptic strength
- Plasticity mechanisms adjust connection weights
- Short-term learning occurs
Slow Processes (hours to days):
- Structural changes reshape neural architecture
- Gene expression alters cellular properties
- Long-term consolidation rewrites the network
Processes at these timescales don't run sequentially; they run concurrently and interact constantly. A single spike at time t can simultaneously:
- Trigger an immediate electrical effect (1ms later)
- Initiate a biochemical cascade (30 seconds later)
- Influence gene expression (6 hours later)
This creates what we might call causality smearing: an interaction produces effects across multiple timescales through different pathways, and these effects feed back to influence each other.
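The causality-smearing pattern can be sketched as a toy event queue in which a single spike schedules effects along three pathways with three different delays. This is purely illustrative: the function name and structure are mine, and the delays (1 ms, 30 s, 6 h) are the stylized figures from the text, not measured values.

```python
import heapq

def simulate_spike(t_spike=0.0):
    """Toy sketch: one spike at t_spike schedules effects on three timescales."""
    events = []  # min-heap of (time in seconds, effect description)
    heapq.heappush(events, (t_spike + 0.001, "electrical effect (EPSP)"))
    heapq.heappush(events, (t_spike + 30.0, "biochemical cascade"))
    heapq.heappush(events, (t_spike + 6 * 3600.0, "gene expression change"))

    timeline = []
    while events:  # pop events in temporal order
        timeline.append(heapq.heappop(events))
    return timeline

for t, effect in simulate_spike():
    print(f"t = {t:>10.3f} s: {effect}")
```

A real model would also have to represent the feedback between these pathways, which is exactly where the sketch, like any fixed-coupling model, stops short.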
The First Problem: Algorithm Drift
Here's where computation starts breaking down. In a traditional computational model, the algorithm remains fixed during execution. You write the code, run it, get results. The program doesn't rewrite itself as it runs.
But in neural systems, viewed through the lens of computation, it seems that the "algorithm" constantly evolves:
Time 0: Algorithm A (baseline state)
Time 10 minutes: Algorithm A' (plasticity has adjusted weights)
Time 24 hours: Algorithm A'' (structure has been modified)

The system doesn't just process information—it continuously redesigns its own processing architecture based on what it's processing. Again, viewed through a computational lens, it is as if the "program" were rewriting its own source code while running.
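Algorithm drift can be caricatured in a few lines. In this sketch (class name, weight, and learning rate are all hypothetical) processing an input also modifies the processor, so the same input produces a different output each time:

```python
class DriftingNeuron:
    """Toy sketch: a 'program' whose parameters drift while it runs."""

    def __init__(self, weight=1.0, learning_rate=0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def process(self, x):
        y = self.weight * x
        # Plasticity: the act of processing adjusts the processing rule.
        self.weight += self.learning_rate * x
        return y

n = DriftingNeuron()
outputs = [n.process(1.0) for _ in range(3)]
print(outputs)  # same input, three different outputs
```

Note that even here the drift rule itself (`learning_rate * x`) is fixed; making that rule adaptive is precisely the next problem the essay turns to.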
You might think: "Fine, we'll just model elements at multiple timescales!" And indeed, this is what advanced neural models attempt. We create hierarchical systems with fast layers, medium layers, and slow layers.
But this leads directly to the deeper problem.
The Second Problem: Evolving Coupling
Even if we model elements at multiple timescales, we still must specify how they couple to each other. We need rules like:
- How do fast spikes trigger medium-term plasticity?
- How do medium-term changes modulate fast responses?
- How do slow structural modifications affect both?
In computational models, we hard-code these coupling rules. We write functions that define exactly how Layer 1 influences Layer 2, how Layer 2 feeds back to Layer 1, and so on.
But, viewed from the computational perspective, here's what the brain actually does: the coupling rules themselves adapt.
Consider metaplasticity, a well-documented phenomenon:
Initially: LTP (Long Term Potentiation) threshold = 10 Hz
After chronic low activity: LTP threshold = 5 Hz
After chronic high activity: LTP threshold = 20 Hz

What changed? Not just the synaptic weights (medium timescale), but the rule for how spike frequency (fast timescale) triggers weight changes (medium timescale). The coupling function itself evolved.
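The metaplasticity example can be sketched as a sliding-threshold rule, loosely in the spirit of BCM-style models. The class name, adaptation constant, and weight steps below are illustrative assumptions, not a model of real synapses:

```python
class MetaplasticSynapse:
    """Toy sketch: the fast-to-medium coupling rule itself adapts."""

    def __init__(self, ltp_threshold_hz=10.0):
        self.ltp_threshold_hz = ltp_threshold_hz
        self.weight = 1.0

    def stimulate(self, rate_hz):
        # The coupling rule: spike rate above threshold -> LTP, below -> LTD.
        if rate_hz > self.ltp_threshold_hz:
            self.weight += 0.1
        else:
            self.weight -= 0.1

    def chronic_activity(self, mean_rate_hz, adaptation=0.5):
        # Metaplasticity: prolonged activity shifts the threshold itself,
        # changing how future spiking maps onto weight changes.
        self.ltp_threshold_hz += adaptation * (mean_rate_hz - self.ltp_threshold_hz)

s = MetaplasticSynapse()
s.stimulate(8.0)          # 8 Hz < 10 Hz threshold: depression
s.chronic_activity(0.0)   # chronic low activity lowers the threshold
s.stimulate(8.0)          # the same 8 Hz input now causes potentiation
```

The point of the sketch is that `chronic_activity` is a hard-coded meta-rule: to model the rule changing, we had to fix a rule about rule changes.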
This means we need meta-rules: rules that govern how the coupling rules change.
The Infinite Regress
Now we see the trap:
Level 1: Model fast processes
Level 2: Model medium processes that adapt based on Level 1
Level 3: Model slow processes that adapt based on Level 2
Level 4: Model meta-processes that adapt the coupling between Levels 1-3
But what governs the coupling at Level 4? We need Level 5 to specify that.
And what governs Level 5? We need Level 6.
And so on, infinitely.
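The trap can be made concrete in code. In this sketch (the wrapper and its names are hypothetical), `make_adaptive` lets a rule's behavior evolve as it runs, but the meta-rule doing the adapting is itself hard-coded, so each application of the wrapper only moves the fixed rule up one level:

```python
def make_adaptive(rule, meta_rule):
    """Return a rule whose behavior is adjusted by meta_rule after each call."""
    state = {"rule": rule}

    def adaptive(x):
        y = state["rule"](x)
        # The coupling evolves: meta_rule produces the next version of rule...
        state["rule"] = meta_rule(state["rule"], x)
        return y

    # ...but meta_rule itself never changes. Wrapping adaptive in another
    # make_adaptive just pushes the fixed rule one level higher.
    return adaptive

double = lambda x: x * 2
bump = lambda rule, x: (lambda z: rule(z) + 1)  # fixed meta-rule

f = make_adaptive(double, bump)
print(f(1), f(1), f(1))  # the "same" rule drifts on every call
```

However deep we nest, the outermost level is always pre-specified; biology, as the next section argues, stops the regress by other means.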
This isn't a technical problem waiting for more sophisticated algorithms. It's a fundamental conceptual limitation: computation requires pre-specified elements, timescales, and coupling rules, but living systems continuously seem to evolve all three: the elements, the timescales they operate on, and the coupling rules between them.
To model this computationally, we need meta-rules. But those meta-rules themselves seem to evolve, requiring meta-meta-rules. And those require meta-meta-meta-rules. We fall into infinite regress.
How Biology Escapes the Trap
The brain doesn't solve this problem through pure logic or computation. It escapes infinite regress through three complementary mechanisms:
1. Constraints Through Physical Embodiment
Biological constraints naturally limit how timescales emerge and couple. Molecular diffusion rates, protein synthesis times, and cellular growth rates provide physical boundaries that don't need to be pre-specified; they simply are the constraints we call the "physics of living matter".
2. Evolutionary Pre-Wiring
Some coupling rules are genetically encoded, providing stopping points in the regress. These aren't arbitrary choices—they're couplings sculpted by millions of years of evolution. And evolution itself, at much longer timescales, changes these couplings. But that is material for another essay!
3. Environmental Feedback
The environment closes the loop. Success and failure in interaction with the environment provide the ultimate evaluation criterion, allowing the system to discover effective coupling rules without needing infinite meta-levels.
The brain uses all three simultaneously. It's not running a computation—it's a constrained (physical) system embedded in an environment, with evolutionary constraints, that discovers its own organizational principles through interaction with the environment.
Why This Matters: Computation's Proper Domain
This analysis reveals why computation works brilliantly in some domains and fails fundamentally in others:
Computation Succeeds When:
- Systems can be modeled to operate on a single dominant timescale (or clearly separated timescales)
- Coupling rules remain fixed during the process being modeled
- The frame problem can be solved by pre-specifying relevance relationships
- We're analyzing systems, not systems that analyze themselves
Examples: Classical physics, structural engineering, electronic circuits, chemical reactions, orbital mechanics
Computation Fails When:
- Components are created at multiple timescales and interact continuously, heterarchically rather than hierarchically
- Coupling rules themselves evolve during operation
- The system generates its own frames of reference
- We're dealing with autonomous creativity
Examples: Living systems, genuine intelligence, embryological development, evolutionary innovation
The Concept of Autonomous Creativity
What unites life and intelligence—and what computation cannot capture—is autonomous creativity: the capacity to generate genuinely novel organizational principles, not just novel combinations of existing elements.
A living cell doesn't just process inputs according to fixed rules. It continuously reorganizes its own processing architecture based on its history, its current state, and its environment. It creates new molecular machines, new regulatory networks, new responses that couldn't be predicted from its prior state.
Intelligence, similarly, doesn't just manipulate existing concepts. It generates new concepts, new ways of organizing experience, new frameworks for understanding that transcend the conceptual space it started with. When you have a genuine insight, you're not recombining existing ideas—you're creating a new way of seeing that makes previously impossible thoughts thinkable.
This creative capacity operates across multiple timescales simultaneously, with the coupling between timescales itself being part of what's created. It's not executing a program—from a computational perspective, it is as if the system is writing programs that write programs that write programs ...
The Path Forward: Beyond Computation
Recognizing computation's limitation isn't a rejection of formal reasoning or mathematical precision. It's a recognition that we need different conceptual frameworks for different kinds of concepts.
For physics and engineering—where components are fixed within fixed timescales and coupling is stable—computation is perfect and should continue to be our primary tool.
For life and intelligence—where autonomous creativity appears to generate new components, new timescales and new coupling rules—we need a fundamentally different approach. Not better computation, but a framework that can accommodate:
- Processes that generate new components in their own timescales rather than operating on pre-specified clocks
- Systems that discover their own coupling rules rather than executing predetermined ones
- Creativity that produces genuinely novel organizational principles
- Autonomous generation of meaning and relevance rather than externally imposed frames
This is where approaches like Geneosophy become essential. Rather than trying (and failing) to understand the concepts of life and intelligence through the computational framework, we need frameworks that can comprehend the generative processes from which computational models themselves arise—the primordial creative capacity that makes all formalization possible but which cannot itself be formalized computationally.
Conclusion: Knowing When to Stop Computing
The most sophisticated move in understanding complex systems isn't always to build better models. Sometimes it's recognizing when your conceptual framework has reached its natural boundary.
Computation has taught us extraordinary things about the universe. But its very structure—requiring fixed components at fixed timescales and pre-specified coupling—makes it fundamentally inadequate for phenomena characterized by autonomous creativity.
The infinite regress we encounter when trying to understand living systems or intelligence computationally isn't a bug to be fixed with more levels of abstraction. It's a signpost pointing beyond computation itself, toward the recognition that some aspects of reality require fundamentally different conceptual tools.
Life and intelligence cannot be understood in terms of really complex computations. They're expressions of a creative capacity that generates the very possibility of computation—along with everything else. To understand them, we must investigate that generative source directly, not through ever more elaborate computational proxies that inevitably miss what matters most.
The question isn't "How can we compute life?" It's "What kind of conceptual framework can comprehend a process that appears to continuously create its own computational principles?" That's the question Geneosophy is designed to answer.