Almost Working — From Markets to Molecules, the Same Problem Appears
For most of my career in markets, I have been interested in a very simple question:
Why do systems fail suddenly, even when everything appears to be working?
In finance, we see this repeatedly.
Models hold. Correlations behave. Volatility is contained.
And then, without warning, something gives way.
It is rarely a gradual deterioration.
It is a transition.
A boundary is crossed.
The Same Pattern, Different Domain
Over the past few decades, I have been working on ideas around wave behaviour, collapse, and irreversibility — initially through quantum systems, and later through macroeconomic modelling.
The intuition is straightforward:
Systems evolve smoothly… until they don’t.
History matters.
Context matters.
And once certain thresholds are crossed, outcomes become irreversible.
In physics, this appears as collapse.
In markets, as regime change.
In macroeconomics, as crises.
Recently, I began to see exactly the same structure in a completely different field:
AI-driven drug discovery.
AI Is Improving Everything — Except the Hardest Part
There is no question that artificial intelligence has improved the early stages of drug discovery:
Faster screening
Better molecular predictions
Improved protein structure modelling
In many ways, it is working exactly as intended.
But if you step back and look at outcomes — especially at the level that matters — a familiar pattern appears.
Late-stage failures remain stubbornly high.
Drugs that look promising… stop working.
Or worse, they cross into toxicity.
Not gradually.
But suddenly.
This Should Feel Familiar
If you come from markets, this is not surprising.
Most failures do not come from models being completely wrong.
They come from crossing boundaries the model was never designed to see.
Liquidity disappears
Feedback loops amplify
Correlations break
Regimes shift
The system moves from one state to another.
Irreversibly.
The Core Mismatch
Modern AI systems are exceptionally good at:
Pattern recognition
Interpolation
Optimisation within known regimes
But both markets and biology are governed by something else:
Thresholds
Hysteresis (memory of past states)
Context dependence
Irreversibility
This is where the gap lies.
AI improves the parts that are already smooth.
But the real problem sits at the edges — where systems commit to an outcome.
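The difference between smooth evolution and threshold-driven commitment can be made concrete with a toy model. This is a minimal sketch, not a model of any real market or biological system; the threshold value and the drive sequence are invented for illustration:

```python
# Toy model of an irreversible threshold: a scalar state driven by an
# external input. Once the state crosses the threshold, the system
# "commits" (a latch flips), and lowering the input does not undo it.
# All numbers here are illustrative, not calibrated to anything real.

THRESHOLD = 1.0

def step(state, committed, drive):
    """Advance the state by one unit of external drive."""
    state = state + drive
    if state >= THRESHOLD:
        committed = True          # the irreversible transition
    return state, committed

state, committed = 0.0, False

# Ramp the drive up past the threshold, then back down symmetrically.
for drive in [0.3, 0.3, 0.3, 0.3, -0.3, -0.3, -0.3, -0.3]:
    state, committed = step(state, committed, drive)

# The input path is symmetric, but the outcome is not:
print(round(state, 6))  # back near 0.0
print(committed)        # True: history, not the current input, decides
```

This is hysteresis in its simplest form: the current input has returned to where it started, but the system remembers that the boundary was crossed.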
From Decoherence to Decision
In quantum terms, one might describe this as a form of collapse.
In macroeconomics, it appears as regime transition.
In biology, it shows up as:
sudden toxicity
loss of efficacy
unexpected system-wide responses
Different language. Same structure.
The system is not just evolving — it is deciding.
And once that decision is made, it cannot be undone.
Why More Compute Is Not the Answer
There is a growing belief that scaling AI — more GPUs, larger models, more data — will eventually solve this.
It will certainly help.
But it addresses the wrong layer of the problem.
More compute improves resolution within the same landscape.
It does not reveal where the cliffs are.
You can map the terrain in greater detail —
but still miss the edge.
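The terrain metaphor can be sketched directly. In this hypothetical example, the "true" system fails discontinuously past an edge the smooth surrogate was never trained to represent; sampling the surrogate ten times more densely does not shrink the worst-case error at all. The functions and the cutoff at x = 2.0 are invented for illustration:

```python
# Sketch: denser sampling sharpens the picture of the terrain a smooth
# model already knows, but cannot reveal a cliff the model never
# contained. The "true" response and the cliff location are invented.

CLIFF = 2.0

def true_response(x):
    """Ground truth: smooth until a hard, discontinuous failure."""
    return x * x if x < CLIFF else -10.0   # sudden collapse past the edge

def surrogate(x):
    """A model fit only inside the known regime: smooth everywhere."""
    return x * x

def max_error(n_samples):
    """Worst-case surrogate error over [0, 3] at a given resolution."""
    xs = [3.0 * i / (n_samples - 1) for i in range(n_samples)]
    return max(abs(surrogate(x) - true_response(x)) for x in xs)

# Tenfold more resolution, identical worst-case error: the blind spot
# is structural, not a matter of sampling density.
print(max_error(31))
print(max_error(301))
```

Both resolutions report the same worst-case error, because the error comes from the cliff itself, not from how finely the smooth part is mapped.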
A Different Way to Think About It
If this framing is correct, then the objective changes.
Whether in markets or drug discovery, the goal is not just:
“Find what works.”
It is:
“Understand where things stop working — and why.”
This leads to a different kind of system design:
Focus on boundary conditions, not just central cases
Model extreme but plausible scenarios
Identify points of instability, not just points of optimisation
Accept that some outcomes are fundamentally irreversible
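The shift in objective can be sketched in a few lines: instead of asking a model only where the response peaks, also ask where it changes fastest. The dose-response curve below is entirely hypothetical, chosen so that the optimum and the instability sit in different places:

```python
# Sketch of the shift from "find the optimum" to "find the edges":
# scan a hypothetical response curve and report both where it peaks
# and where it becomes unstable (largest local change).

def response(dose):
    """Invented dose-response: beneficial, then abruptly toxic."""
    return dose * (4.0 - dose) if dose <= 3.0 else -5.0

doses = [0.25 * i for i in range(17)]            # grid over 0.0 .. 4.0
values = [response(d) for d in doses]

# The optimiser's question: where is the response best?
best = max(range(len(doses)), key=lambda i: values[i])

# The boundary hunter's question: where does the response jump?
jumps = [abs(values[i + 1] - values[i]) for i in range(len(values) - 1)]
edge = max(range(len(jumps)), key=lambda i: jumps[i])

print(doses[best])   # where the model says it "works best"
print(doses[edge])   # where the system actually commits
```

In this toy case the optimum sits at dose 2.0 while the instability sits at 3.0: a system tuned only for the first number never looks at the second.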
In investment terms, this is not about maximising return.
It is about managing collapse risk.
Why This Matters for Investors
The implications are not theoretical.
They are capital allocation decisions.
Industries are investing billions into AI infrastructure, particularly in areas like drug discovery.
These investments will produce results —
but not always in the way expected.
Without a shift in how these systems are used, we risk generating:
more candidates
more signals
more apparent success
…that still fail at the same point.
The bottleneck has not moved.
Closing Thought
Across physics, markets, and biology, one idea keeps resurfacing:
The world is not purely continuous.
It is structured around transitions —
points where systems stop evolving and start committing.
Understanding those points is harder than improving predictions.
But it is also where the real edge lies.
These ideas are explored more fully in my recent work on artificial intelligence and drug discovery:
Almost Working: Why Artificial Intelligence Fails at the Threshold of Drug Discovery
https://a.co/d/02CvZifw
The book examines this same structural problem in a different domain — showing how even the most advanced AI systems struggle not with prediction, but with irreversibility, thresholds, and decision boundaries inside living systems.
Different field.
Same underlying logic.