In this post, I’ll elaborate on this.
But first, let's start with a quote…
“Philosophy consists of speculations about matters where exact knowledge is not yet possible.”
Lord Bertrand Russell

Preamble on Computation
If you ask 10 people with a fair bit of technical acumen what the definition of computation is, you might get a few different answers.
To me, computation is the observation of a process attempting to solve an arbitrarily complex equation — in the physical (atoms) or digital (bits) worlds.
According to the Church-Turing thesis (in its strong, physical form), anything that can be computed in the physical world can also be computed on a Turing machine (a hypothetical machine with an unbounded memory tape).
It is important to understand that when Alan Turing formulated this model in the 1930s, he was conceptualizing an abstract machine reading and writing symbols on an infinitely long tape. This was decades before computers could do anything remotely advanced: render photorealistic graphics in real time, store petabytes of data, automate large-scale NLP, let alone cryptographically secure global communications for billions of people. Quite an incredible feat of imagination.
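To make the abstraction concrete, here is a minimal sketch of a Turing machine stepping along its tape (in Python, my illustration language of choice; the bit-flipping transition table is a toy of my own, not anything from Turing's paper):

```python
from collections import defaultdict

def run_turing_machine(transitions, tape, state="start", steps=100):
    """Step a Turing machine: at each step, read the symbol under the
    head, then look up (write, move, next_state) in the transition table."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # unbounded tape
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        write, move, state = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy transition table: flip every bit, halt at the first blank ("_").
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # -> "0100_"
```

Per the thesis, nothing our datacenters do today escapes this read-write-move loop. It just runs unimaginably faster.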
As technology has started to deliver amazing results with neural network computers (which behave non-deterministically in practice, as opposed to the deterministic computation that has dominated pretty much all of computing until the last few years), we are entering a new era of computation … one where we do not actually know precisely how the equations (image generation, text / signal sequence prediction, etc.) are being solved.
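To be precise about where the non-determinism enters: the trained network itself computes a deterministic distribution over next tokens, and it is the sampling step during generation that makes the output vary. A minimal sketch, with a made-up three-word vocabulary and made-up logits:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Softmax the logits into a probability distribution, then draw
    from it. As temperature approaches 0 this becomes deterministic
    argmax; at higher temperatures the same input yields varying outputs."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

vocab = ["cat", "dog", "bird"]   # made-up vocabulary
logits = [2.0, 1.5, 0.5]         # made-up model output
for _ in range(3):
    print(vocab[sample_next_token(logits, temperature=1.0)])
# The same logits can print a different token on each call.
```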
In this manner, the rising dominance of neural network computation is gradually changing the theoretical frameworks we use to reason about the evolution of digital computation.
Previously, with deterministic systems, we could run the "software is eating the world" thesis quite concretely … take a given industry with manual and offline processes, digitize those processes in some form of CRUD app backed by a database, and voilà: vertical SaaS lottery bingo.
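That playbook really is almost this mechanical. A toy sketch of the "digitize the offline process" step, using Python's built-in sqlite3 and an invented work-orders example:

```python
import sqlite3

# The entire "digitization" step: replace a paper ledger with a table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, "
           "customer TEXT, status TEXT)")

# Create / Read / Update / Delete -- the same four verbs, any vertical.
db.execute("INSERT INTO work_orders (customer, status) VALUES (?, ?)",
           ("Acme Plumbing", "open"))
print(db.execute("SELECT * FROM work_orders").fetchall())
db.execute("UPDATE work_orders SET status = ? WHERE id = ?", ("done", 1))
db.execute("DELETE FROM work_orders WHERE id = ?", (1,))
```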
But when I try to apply this same heuristic to "AI" (aka neural-network-based computers) as it continues its exponential ascent up the capability curve toward whatever AGI / ASI turns out to mean, I'm at quite a loss for what we can expect to happen beyond mere automation.
…Surely, we can expect transformation, at a minimum. But transformation into what?
Tokenizability of Computation
I'm a big fan of blockchain computers (decentralized digital ledgers with cryptographic guarantees that ensure the immutability, verifiability, and transparency of transactions).
Beyond the benefits of upgrading all deterministic digital computing systems by re-architecting them on a blockchain, I believe far greater impact lies in constructing blockchains that are purpose-built for handling and scaling neural network based non-deterministic computation.
Very counter-intuitively, blockchains are fundamentally deterministic: every node must re-execute the same transactions and arrive at byte-identical state, and the cryptographic guarantees depend on exactly that … yet the world is moving toward all computation being done by specialized and general-purpose non-deterministic neural nets of arbitrary complexity (large and small).
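To see why determinism is non-negotiable here, consider a toy sketch of block hashing (illustrative only, not any real chain's format): every node must serialize and hash the same transactions identically, or consensus forks.

```python
import hashlib, json

def block_hash(prev_hash, transactions):
    """Every node serializes the same transactions the same way and
    must arrive at the same hash, or consensus breaks. Any source of
    non-determinism in execution would fork the chain."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions},
                         sort_keys=True)  # canonical serialization
    return hashlib.sha256(payload.encode()).hexdigest()

genesis = block_hash("0" * 64, [{"from": "alice", "to": "bob", "amt": 5}])
block_1 = block_hash(genesis, [{"from": "bob", "to": "carol", "amt": 2}])
# Two honest nodes replaying these transactions independently compute
# byte-identical hashes; a model sampling a token would not.
```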
I believe that anything that can be computed can and will be tokenized on a blockchain. Perhaps the highest-value effort the worlds of digital engineering and math / physics can undertake is building systems that form the union of deterministic and non-deterministic computation in new and powerful ways.
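One simple, admittedly reductive way to picture that union is a commit-style pattern: the non-deterministic side produces an output off-chain, and the deterministic side commits to its hash on-chain. A sketch of my own, not a description of how any particular project works:

```python
import hashlib

def commit_inference(model_output: str) -> str:
    """The neural net's output is non-deterministic, but its hash is a
    deterministic commitment a blockchain can verify and settle on."""
    return hashlib.sha256(model_output.encode()).hexdigest()

output = "the model said: cat"   # produced off-chain, stochastically
commitment = commit_inference(output)
# On-chain: store `commitment`. Anyone holding `output` can later prove
# it matches, even though no node could have recomputed it themselves.
```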
Bittensor is the most exciting project in this area. I continue to be a big fan and believer.
