When the model becomes the chip
A Canadian startup is etching neural networks directly into silicon. The implications go further than the benchmarks suggest.
What we study.
Most analysis of AI focuses on capability. We focus on consequence: what happens to industries, institutions, and infrastructure when adoption moves faster than understanding. We evaluate what is already underway, model the trajectories that follow, and identify the gaps between what is reported and what is real.
Our work covers three areas. We map how AI is altering industry structure: where value is concentrating, who holds leverage, and which roles and business models survive contact with automation. We analyse decision architecture: where automated systems are taking on consequential decisions, which of those transitions hold up under scrutiny, and which introduce risks that surface late. And we quantify the material cost of compute at scale: the energy, water, and emissions embedded in training and inference, and the persistent gap between reported efficiency gains and actual consumption.
We work with organisations that need to act on these questions: those facing capital, policy, or structural decisions who need sharper evidence before they commit.
Datacenter operators report efficiency gains. The aggregate numbers tell a different story.
Most AI strategies optimise for replacing human expertise. The higher-value target is accelerating how people develop expertise. When judgment is what makes AI useful, the real bottleneck is learning, not automation.
Generative video is improving faster than the models built to detect it. The more resilient approach works upstream: cryptographic attestation at the hardware level, proving footage was optically captured rather than computationally produced.
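The idea behind capture-time attestation can be sketched in a few lines: the camera signs a record of the frame the moment the sensor reads out, and anyone downstream can check that the footage still matches that record. The sketch below is illustrative only; `attest_frame`, `verify_frame`, and the device key are hypothetical, and an HMAC secret stands in for the asymmetric key a real secure element would hold, purely to keep the example dependency-free.

```python
import hashlib
import hmac
import json

# Hypothetical device key. In real hardware attestation this would be an
# asymmetric key fused into the camera's secure element; a shared HMAC
# secret is used here only as a dependency-free stand-in.
DEVICE_KEY = b"example-secret-provisioned-at-manufacture"

def attest_frame(pixels: bytes, sensor_id: str) -> dict:
    """Sign a capture record at the moment the sensor reads out."""
    record = {
        "sensor_id": sensor_id,
        "captured_at": 1700000000,  # fixed timestamp for a reproducible sketch
        "frame_sha256": hashlib.sha256(pixels).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_frame(pixels: bytes, record: dict) -> bool:
    """Recompute the frame hash and check the capture-time signature."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(pixels).hexdigest() != claimed["frame_sha256"]:
        return False  # footage no longer matches what the sensor signed
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

frame = b"\x00\x01\x02"  # stands in for raw sensor bytes
rec = attest_frame(frame, "cam-0")
print(verify_frame(frame, rec))           # untouched footage verifies
print(verify_frame(frame + b"x", rec))    # any edit breaks the chain
```

The point of working upstream is visible in the last two lines: a detector has to argue about pixel statistics, whereas a verifier here only has to check a signature, and any post-capture modification fails immediately.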
Inference Research grew out of a background that spans particle physics, climate science, and carbon dioxide removal. The thread connecting these fields is the same one that runs through this work: complex systems behave in ways that reward careful observation over confident prediction.
The research group exists because the conversation about AI's real-world impact needs more rigour and fewer press releases. We bring a physical-sciences perspective to questions that are too often framed as purely technical or purely political.
We're always interested in hard problems.
hello@inference-research.com