The missing licence

You need a licence to operate a forklift. You do not need one to deploy a recommendation algorithm that shapes the information diet of billions, or to run engagement systems whose effect on adolescent mental health the platform measured and chose not to disclose. Every previous technology that closed this gap did so after the damage was visible. AI is the first where we can see it coming.

A forklift operator in a UK warehouse must be trained and assessed under the Provision and Use of Work Equipment Regulations 1998. The training takes several days, includes both theory and practical assessment, and is expected to be refreshed periodically. The recommendation algorithm that determines what two billion people see when they open their phones each morning was built and deployed by teams with no mandatory competency requirement of any kind.

The pattern of licensing physical-world tasks follows a consistent logic. You need a licence to drive a car, to wire a house, to fly a plane, to sign off on the structural integrity of a building, to audit a company's accounts. In every case, the requirement exists because the consequences of getting it wrong exceed what the individual can put right. A bad driver kills passengers. A bad electrician burns down a house. A bad pilot takes 200 people with them. Every licensing requirement on the books exists because someone demonstrated the cost of not having one.

The digital world never developed an equivalent. Social media companies deploy recommendation algorithms that shape the information diet of billions, moderate the speech environment of entire countries, and have been documented by their own internal research to harm the mental health of adolescents. The teams that build these systems hold no mandatory qualification in psychology, information science, or public health. The people who configure content moderation policy for populations larger than most countries are not required to demonstrate competency in any of the domains their decisions affect. Nobody seems to find this remarkable.

The forklift principle

The history of physical-world licensing is a history of disasters. The Titanic sank in 1912 and the first International Convention for the Safety of Life at Sea followed in 1914. Structural engineering codes and railway regulations have the same origin. Aviation is the clearest case: a string of fatal crashes in the 1920s produced the Air Commerce Act of 1926, the first federal pilot certification in the United States. Before that, anyone who could buy a biplane could fly one. The sequence is always the same: people die, and the regulatory response includes mandatory proof that practitioners know what they are doing.

The speed of response correlates with the visibility of the harm. A bridge collapse makes the evening news and the inquiry starts within weeks. A software failure that causes equivalent financial damage to thousands of people produces a settlement, maybe a fine, and no change to who is allowed to do the work. The Equifax breach in 2017 exposed the personal data of 147 million people. The company paid a $700 million settlement. Nobody was required to demonstrate competency as a condition of handling that data, before or after.

The pattern always resolves the same way: when something can cause harm beyond what the person responsible can put right, proof of competency follows. Sometimes quickly. Usually not. But it follows.

What the digital world skipped

Some digital activities would have triggered licensing requirements decades ago if they produced visible, physical harm. Building recommendation algorithms that determine the information environment of entire countries. Designing engagement systems whose effect on adolescent mental health was measured, documented internally, and not disclosed. Deploying content moderation policies that decide what counts as acceptable speech for billions of users, often written by teams with no training in the societies those policies govern. Each of these can cause harm at a scale that dwarfs what a forklift operator or electrician could inflict. None requires a licence.

The gap persists because of a structural asymmetry in how physical and digital harm generates regulatory pressure. Physical harm is proximate, visible, and attributable to an identifiable person. Digital harm diffuses across billions of screens, accumulates statistically, and rarely produces the single attributable event that licensing requirements have historically needed as a trigger. Facebook's own researchers documented that Instagram was worsening body image issues among teenage girls. Some findings were internally contested, but the company resisted external disclosure and moved far more slowly than its own data warranted. The algorithms that amplified the harmful content were designed, deployed, and optimised by people whose competency to do so was never formally assessed, and still is not.

The asymmetry is not absolute. The Molly Russell inquest, where a UK coroner explicitly named content from Instagram and Pinterest as contributing to a teenager's death, showed that visible, attributed digital harm can occur and can produce policy response. But the threshold is far higher than in the physical world, and when it is met the response tends to be narrow and platform-specific rather than systemic. The UK Online Safety Act and the EU Digital Services Act represent genuine regulatory movement, but both impose organisational duty-of-care obligations rather than individual practitioner competency requirements. A platform must have processes; the person who designs, configures, or deploys the system that causes the harm must demonstrate nothing. This is structurally weaker: corporate liability without individual qualification means the competency of the person making the consequential decision remains unexamined.

The result is a competency vacuum. Product teams at social media companies deploy engagement algorithms that reach populations larger than most countries. Content moderation decisions affecting the speech environment of entire democracies are made by teams with no training in political science, psychology, or the specific cultures they are governing. TikTok's recommendation engine, which determines the media diet of over a billion users, many of them minors, was built by engineers optimising for watch time, not wellbeing. What these people can affect and what they are required to know bear almost no relation to each other. The industry outgrew the regulatory instincts that govern every other domain of consequential activity, and nobody thought to apply them.

The expertise inversion

AI changes the nature of the gap. Previous social media harms still required someone to write the code, design the algorithm, or configure the content policy. The barrier to causing harm at scale was that you needed technical expertise to build the thing in the first place. AI removes that barrier. It lets platforms generate, curate, and personalise content at a speed and scale that outstrips any human capacity for oversight, while the requirement for demonstrated competency in the people overseeing these systems has not changed at all.

Previously, the reach of harmful content was bounded by the mechanics of distribution. A misleading post could go viral, but someone had to write it and the algorithm had to pick it up. AI breaks this relationship. Generative AI lets bad actors produce disinformation at industrial scale, indistinguishable in quality from legitimate content. Social media platforms that already struggled to moderate human-generated content are now expected to manage a volume of synthetic content they are structurally incapable of reviewing. The people configuring these moderation systems cannot tell whether the AI-generated content they are failing to catch is coordinated state propaganda or a teenager experimenting. The gap between what these systems produce and what the people overseeing them understand is where the risk now lives. It widens every time the models improve.

When a forklift operator causes an accident, there is a clear chain of responsibility: the operator, their training provider, their employer. When social media causes harm, accountability disperses. The algorithm recommended the content. The product team optimised for engagement. The moderation team was understaffed. The executives chose not to act on the internal research. Good luck working out which decision in that chain was the one that mattered. And the harm is not local. A forklift drops a pallet on one person. A recommendation algorithm that amplifies self-harm content to vulnerable teenagers operates across every country the platform serves, simultaneously, before anyone notices.

The counter-case

Certification can gatekeep. Software's extraordinary productivity happened partly because there was no guild system controlling entry. A 19-year-old built Facebook in a dormitory without a licence, a professional qualification, or anyone's permission. That same platform was later identified by a United Nations fact-finding mission as having played a "determining role" in inciting violence against the Rohingya in Myanmar. The trade-off is real, and the people arguing for licensing rarely grapple with it honestly.

There is also a definitional problem. What would AI competency even mean when the technology changes materially every few months? Voluntary certifications exist. CompTIA offers an AI+ credential. AWS, Google Cloud, and others offer AI and machine learning certifications. The International Association of Privacy Professionals has launched an AI Governance Professional certification. None are mandatory. All self-select for the already conscientious: the people who seek out training are not, by and large, the people who most need it. The gap in competency is concentrated in the population that does not pursue certification, and voluntary frameworks do not reach them.

What a licence would actually look like

Any useful framework would be context-specific. The physical world already works this way. A UK driving licence distinguishes between categories: B for cars, C1 for medium goods vehicles, D for buses, with a separate ADR certificate for carrying dangerous goods. The competency requirement scales with what can go wrong. An equivalent for AI would look similar: deploying a chatbot for restaurant reservations is a different activity from deploying a system that generates legal documents or underwrites insurance policies.

The EU AI Act's risk-tier framework is the closest existing analogue. It categorises AI systems into unacceptable, high, limited, and minimal risk, with different regulatory obligations at each level. But it regulates systems and organisations, not individual practitioners. A company must have AI governance procedures. An individual deploying AI in a high-risk context must demonstrate nothing. The gap between "the company must have governance" and "the person must demonstrate competency" is the gap between building regulations and requiring the electrician to be qualified. You need both. One without the other leaves a hole large enough to cause the kind of harm that eventually triggers the very regulation the industry is lobbying against.
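To make that distinction concrete, here is a minimal sketch of what an individual gate layered on top of the Act's tiers could look like. It is illustrative only: the tier names come from the Act, but the credential names, the tier-to-credential mapping, and the can_deploy check are hypothetical, not drawn from any existing framework.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright, no credential changes that
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Hypothetical mapping, by analogy with driving-licence categories:
    # the individual requirement scales with what can go wrong.
    REQUIRED_CREDENTIAL = {
        RiskTier.HIGH: "certified-high-risk-deployer",
        RiskTier.LIMITED: "basic-ai-competency",
        RiskTier.MINIMAL: None,  # no individual requirement at all
    }

    def can_deploy(tier: RiskTier, credentials: set[str]) -> bool:
        """Gate the deployment on the practitioner's credentials,
        not on the organisation having a governance process."""
        if tier is RiskTier.UNACCEPTABLE:
            return False
        required = REQUIRED_CREDENTIAL[tier]
        return required is None or required in credentials

    # A restaurant-reservation chatbot vs an insurance-underwriting system:
    # same organisation, very different individual requirements.
    print(can_deploy(RiskTier.MINIMAL, set()))                          # True
    print(can_deploy(RiskTier.HIGH, set()))                             # False
    print(can_deploy(RiskTier.HIGH, {"certified-high-risk-deployer"}))  # True

The design point is simply that the check runs against the person doing the deployment, not against the existence of a corporate governance procedure.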

The physical world worked this out over centuries, and it was never quick about it. AI is different in one way that matters here. It amplifies what untrained people can do while hiding what trained people would know not to do. A forklift makes a trained operator more productive. It does not make an untrained person look trained. A social media platform enhanced with generative AI does exactly that: it lets anyone produce and distribute content at a quality and scale that was previously the domain of professional media organisations, while requiring none of the editorial judgment, ethical training, or accountability that those organisations developed over decades. That capacity for harm beyond what the person responsible can put right has always been enough, in the end, to trigger licensing. The only question is whether it happens before the serious damage or after.

There is one further dynamic that makes the timeline more urgent. The same companies that built these systems without accountability are now racing to reduce their engineering headcount by replacing teams of developers with AI coding agents. The economic case is straightforward: fewer engineers, faster iteration, lower cost. But the accountability case runs in the opposite direction. The traditional defence against individual liability was always diffusion: no single developer owned the recommendation system; it emerged from hundreds of decisions across large teams, and tracing harm to a person was genuinely difficult. AI-assisted development dissolves that defence. A solo developer, or a small team, using AI agents to build and deploy a system that reaches hundreds of millions of users has more direct ownership over its behaviour than any individual in the distributed engineering organisations that preceded them. The causal chain shortens precisely as the reach expands. An industry that is voluntarily concentrating individual agency over consequential systems while lobbying against individual competency requirements is making the argument for licensing on behalf of its critics. The open question is whether regulators notice before the first case makes it impossible to ignore.
