Coordination never existed
2026-02
What the industry calls "coordination" is, in most cases, human compensation disguised as architecture.
This is not a criticism of the engineers who built these systems. It is an observation about the axiom that underlies them.
What we call coordination
When service A must wait for B to be ready before acting, we say they "coordinate". When Kafka rebalances its consumer groups after a failure, we say the cluster "reorganizes". When a Kubernetes orchestrator restarts a failed pod, we say the system "self-heals".
Let's look at what is actually happening.
A waits for B because a human wrote a healthcheck, a retry loop, an exponential backoff. Kafka rebalances because a human defined partitions, consumer groups, and reassignment rules. Kubernetes restarts because a human wrote a runbook that was translated into a YAML manifest.
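That compensation is not abstract; it is ordinary code. A minimal sketch of the "A waits for B" pattern, assuming a hypothetical `/health` endpoint on B that returns 200 when ready (the URL and parameters here are illustrative, not taken from any particular system):

```python
import time
import urllib.request

def backoff_delays(attempts, base):
    """The exponential backoff schedule a human chose: base * 2**n seconds."""
    return [base * (2 ** n) for n in range(attempts)]

def wait_for_dependency(url, attempts=5, base=0.5):
    """Service A 'coordinating' with service B: a hand-written
    healthcheck loop with retries. Returns True once B answers 200,
    False after exhausting all attempts."""
    for delay in backoff_delays(attempts, base):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # B not reachable yet; back off and retry
        time.sleep(delay)
    return False
```

Every number in that sketch, the attempt count, the base delay, the timeout, is a human decision that the running system merely replays.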
"Coordination" is not an emergent property of the system. It is the result of human intent encoded across layers of configuration, operation, and discipline.
The stack as accumulated compensation
A classical distributed architecture looks like this: a broker to transport messages, a database to persist state, a cache to compensate for database latency, an orchestrator to compensate for service failures, an observability stack to compensate for the opacity of the whole, an ops team to compensate for what observability doesn't cover.
Each layer compensates for the limits of the previous one. This is not resilience. It is structural debt made operational.
Kafka is a particularly revealing example. It is a remarkably well-designed tool for what it does: transporting data streams durably and at scale. But Kafka does not guarantee semantic coherence. It does not know what the messages it transports mean. The semantics — intent, causality, idempotency — are left to the teams using it. They are the ones "coordinating". Kafka transports.
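The division of labor shows up directly in consumer code. Kafka may redeliver a message after a rebalance; making the effect happen exactly once is the team's code, not the broker's. A library-agnostic sketch, assuming each message carries a hypothetical producer-chosen `event_id`:

```python
def make_idempotent_handler(apply_effect):
    """Wrap a side effect so a redelivered message is applied only once.
    The broker transports; this semantic guarantee is written by the team."""
    seen = set()  # in production this would be durable storage, not memory

    def handle(message):
        event_id = message["event_id"]  # hypothetical deduplication key
        if event_id in seen:
            return False  # duplicate delivery: skip the effect
        seen.add(event_id)
        apply_effect(message)
        return True

    return handle
```

Nothing in this wrapper is Kafka. That is the point: idempotency, like causality and intent, lives entirely on the application side of the line.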
Why it holds anyway
These systems work. That is undeniable. Thousands of companies run architectures of this kind in production, at scale, for years.
They work because the teams operating them are skilled, disciplined, and available. Because runbooks are maintained. Because alerts are calibrated. Because someone answers at 3am.
This is not a human failing. It is an axiom failing. The implicit axiom of these architectures is: coherence will be maintained by the combined effort of structure and humans. Human effort is not an optional supplement. It is structurally required.
A question of axiom, not tooling
The problem is not Kafka. Not Kubernetes. Not service meshes. These tools correctly solve the problems they were designed to solve, within the axiom that produced them.
That axiom is this: a distributed system is a set of services exchanging messages, whose coherence is guaranteed by external coordination.
What I am exploring with Sphaera is an axiomatic rupture: a distributed system is a set of sovereign entities whose coherence emerges from local constraints, without central coordination, without global truth, without arbiter.
In this model, there is nothing to coordinate. Each entity observes locally what it depends on. It exists if conditions are met. It disappears if they are not. Coherence is not maintained: it is derived.
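What "derived, not maintained" can mean is easiest to see in miniature. This toy sketch is my own illustration, not Sphaera's actual model: entities declare local dependencies, and the set of live entities falls out as a fixpoint rather than being repaired by an orchestrator.

```python
def live_entities(deps, present):
    """deps: entity -> set of entities it depends on.
    present: entities whose own local conditions hold.
    An entity is live iff it is present and all its dependencies are
    live. Remove violators until nothing changes: coherence is the
    fixpoint of local constraints, not the output of a coordinator."""
    live = set(present)
    changed = True
    while changed:
        changed = False
        for e in list(live):
            if not deps.get(e, set()) <= live:
                live.discard(e)  # conditions no longer met: e disappears
                changed = True
    return live
```

If the database entity's conditions fail, its dependents do not get restarted, rerouted, or alerted on; they are simply not in the derived set. Nobody maintained that outcome.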
What this changes
When resilience is structural rather than operational, the cost changes in kind. You no longer pay in permanent human discipline. You pay in up-front axiomatic rigor.
This shift is not without demands. Designing without a central coordinator requires thinking differently — not in terms of flows to orchestrate, but in terms of invariants to respect.
Coordination has not disappeared. It never existed as a property of the system. It was always a human projection onto a structure that needed it.
Changing the axiom means designing a structure that does not.