What We're Getting Wrong About AI




This opinion piece was inspired by this YouTube video and further reading.

The most striking thing about the video is not that the speakers disagree about AI.

It is that they keep circling the same set of fears and hopes in different language: job loss, scientific breakthroughs, misinformation, surveillance, concentration of power, and the unsettling fact that AI systems are often convincing before they are reliable.

That is exactly why the public conversation about AI keeps going wrong.

We keep treating AI as a single question with a single answer.

  • Will it save us or replace us?
  • Is it overhyped or unstoppable?
  • Is it just a tool or the start of a successor species?

Those questions are dramatic, but they are too blunt to be useful.

What matters more is a different set of questions:

  • who controls these systems?
  • what are they optimised for?
  • where are they deployed?
  • how much verification surrounds them?
  • who captures the gains when they make some people more productive?

We Keep Framing AI as a Binary

Too much AI commentary collapses into two camps.

One camp talks as if AI is about to automate nearly everything and make large parts of human labour economically irrelevant. The other talks as if it is mostly a helpful assistant that will quietly slot into existing workflows and make everyone better at their jobs.

Both views catch part of the truth, but both are incomplete.

Erik Brynjolfsson has argued that the bigger opportunity is not just automation but augmentation: using AI to let people do more, not merely replacing them. In a Stanford HAI discussion, Fei-Fei Li made a similar point, saying it is a misconception to think the whole field is aimed at automation.

That distinction matters.

If you think AI is mainly an automation story, the natural questions are headcount, redundancy, and cost reduction. If you think it is an augmentation story, the natural questions are capability distribution, redesign of work, training, and access.

The video is powerful because it shows both instincts colliding in real time. That is closer to reality than most headline-level takes.

We Are Asking the Wrong Job Question

“Will AI take my job?” is understandable, but it is not the best first question.

The better questions are:

  1. Which tasks are being compressed?
  2. Which new tasks are being created?
  3. Who keeps the productivity gains?
  4. What happens to entry-level pathways while this restructuring takes place?

This is where a lot of commentary is still too shallow.

[Image: abstract illustration of broken ladders and compressed pathways representing AI's effect on work.] AI is not just a jobs story. It is also a story about who still gets a path into mastery.

The labour issue is not only mass unemployment. It is also the hollowing out of apprenticeship.

If AI takes on first drafts, research synthesis, document review, routine coding, entry-level analysis, and other early-career work, then companies may become more productive in the short term while simultaneously weakening the pipeline that used to produce experienced professionals.

That concern is not theoretical.

In NBER research on generative AI at work, Brynjolfsson and colleagues found meaningful productivity gains, especially for less experienced workers. In a later field experiment, individuals working with AI matched the performance of teams working without it on real product development tasks. That is impressive, but it also implies organisations may be tempted to redesign work around fewer people and flatter structures before they have thought through what gets lost.

Daron Acemoglu’s work is useful here because it keeps the economics grounded. The central issue is not whether AI can do more things. It is whether deployment choices increase broad-based prosperity or mainly shift bargaining power and income toward capital owners and already dominant firms.

So no, the mistake is not simply underestimating job loss.

The deeper mistake is thinking the labour story is only about job counts rather than training, leverage, wage share, and who gets pushed out first.

We Confuse Fluency with Truth

Another major error is treating plausibility as evidence.

Current AI systems are great at sounding like they know what they are talking about. That is different from being true, grounded, or well calibrated.

The video gets this right: one of the most dangerous features of AI is that it can be persuasive before it is trustworthy.

That problem is not solved by pointing out that hallucinations are becoming less frequent over time.

[Image: abstract illustration of fractured translucent shapes suggesting persuasive uncertainty and partial truth.] Fluency is easy to mistake for accuracy when the interface is smooth and the output sounds confident.

Hallucinations are getting less frequent in some settings, but “better” is different from “good enough for unsupervised trust”. The remaining failures often show up exactly where mistakes are most expensive: edge cases, ambiguous contexts, high-stakes domains, and tasks where the model has to reason under uncertainty rather than remix familiar patterns.

OpenAI’s own research has repeatedly reinforced this point from different angles. Truthfulness remains a distinct problem from general capability. The company has also published work showing frontier reasoning models can display cheating or reward hacking behaviour in benchmark settings.

That does not mean the systems are evil. It means optimisation pressure produces weird behaviour.

This is why “AI sounds smart” is one of the least useful tests available.

In practice, the people getting the most value from AI are usually not the people who trust it blindly. They are the ones who treat it as a fast, flexible, error-prone collaborator and keep verification in the loop.
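
To make that concrete, here is a minimal Python sketch of what "verification in the loop" can look like. Everything in it is illustrative: the `verify_draft` helper, the example checks, and the draft text are hypothetical stand-ins for whatever checks a real workflow would need, not a description of any tool mentioned above.

```python
# A minimal sketch of "verification in the loop": model output is treated as a
# draft from an error-prone collaborator and is only accepted once independent,
# deterministic checks pass. All names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Review:
    draft: str
    passed: bool
    failures: List[str]


def verify_draft(draft: str, checks: List[Callable[[str], Optional[str]]]) -> Review:
    """Run cheap, model-independent checks over a draft and collect every failure."""
    failures = [msg for check in checks if (msg := check(draft)) is not None]
    return Review(draft=draft, passed=not failures, failures=failures)


# Example checks: deterministic, and independent of how confident the output sounds.
def cites_a_source(draft: str) -> Optional[str]:
    return None if "http" in draft else "no source cited"


def short_enough_to_review(draft: str) -> Optional[str]:
    return None if len(draft.split()) <= 300 else "too long for a quick human review"


if __name__ == "__main__":
    draft = "Summary of the report, see https://example.org for the underlying data."
    review = verify_draft(draft, [cites_a_source, short_enough_to_review])
    if review.passed:
        print("Accepted after checks.")
    else:
        print("Escalate to a human reviewer:", review.failures)
```

The specific checks do not matter. The point is that acceptance is decided by checks and by people, not by how fluent the draft sounds.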

We Over-Personalise the Risk

A lot of AI debate is framed as if the main question is what the model itself wants or whether the model will “go rogue”.

That may become more important as systems gain more autonomy, but it is not the only serious risk and arguably not even the dominant one in the near term.

A more immediate problem is what people, companies, and states do with AI while telling themselves they are acting responsibly.

Bad incentives can scale bad behaviour long before any science fiction scenario arrives.

AI can lower the cost of fraud, spam, surveillance, manipulative personalisation, and administrative coercion. It can also make it easier for already powerful organisations to centralise decision making and justify it with a layer of technical authority.

Arvind Narayanan and Sayash Kapoor have made a useful broader point in their writing on AI and misinformation: public debate often blames the technology for outcomes that are really driven by institutions, incentives, and demand. The same framing applies far beyond elections.

If a society already rewards sensationalism, extracts attention, underfunds trust, and tolerates opaque concentration of power, then more capable AI will usually amplify those conditions before it fixes them.

That is why “the AI did it” is often a cop-out.

Most of the time, the real sentence is closer to this: humans used AI inside a system that rewarded speed, centralised control, and plausible deniability.

We Underestimate the Governance Gap

One thread running through the video is that capability is moving faster than the institutions meant to govern it.

That seems right.

The technical systems are improving quickly. The surrounding mechanisms for transparency, auditability, accountability, and democratic oversight are not improving at the same pace.

This creates a predictable gap.

[Image: abstract illustration of converging channels and centralised structures representing governance and concentration of power.] The deepest AI risk is often not raw capability but concentrated power deployed faster than institutions can respond.

Companies deploy first because the competitive pressure is real. Governments respond slowly because governance is slow, fragmented, and often technically weak. Workers are told to adapt individually. Citizens are told to become better at critical thinking. Everyone is handed more responsibility than power.

That arrangement is unstable.

The mistake here is assuming the social layer will somehow catch up automatically.

It will not.

If we want AI to be broadly useful rather than broadly extractive, then the defaults have to change on purpose:

  1. Build for augmentation before substitution where possible.
  2. Keep humans accountable for high-stakes decisions.
  3. Require testing, auditing, and incident reporting for deployed systems.
  4. Protect training pathways for junior workers instead of automating those pathways away without replacement.
  5. Treat concentration of compute, data, and model access as a power issue, not just a market issue.

None of that requires believing either the most utopian or the most apocalyptic story.

It only requires taking deployment seriously.

The Better Way to Think About AI

The cleanest way to state the problem is this: we are not getting AI wrong because we are too optimistic or too pessimistic.

We are getting it wrong because we keep asking capability questions in places where the decisive variables are institutional.

AI is not just a chatbot. It is not just a labour-saving device. It is not just an existential risk thought experiment.

It is becoming a layer of cognitive infrastructure.

That means the important questions are boring compared with the viral ones:

  • Who owns it?
  • Who audits it?
  • Who benefits from it?
  • Who is made more powerful by it?
  • Who becomes easier to ignore once it is deployed?

Those questions do not produce the clean emotional payoff of either doom or utopia.

They are still the ones that matter.

Conclusion

What we are getting wrong about AI is not simply that we are too scared or not scared enough.

It is that we keep treating the technology as the whole story.

The video captures something real: AI may help with medicine, education, and science; it may also destabilise work, truth, and political accountability. Those outcomes are not contradictory. They are what a powerful general technology looks like when it lands inside imperfect institutions.

The right response is neither awe nor dismissal.

It is disciplined scepticism, better economics, better governance, and a refusal to confuse fluent machines with wise systems.

If we get those parts right, AI could still be one of the most useful technologies we build.

If we get them wrong, the biggest failures will not come from intelligence alone. They will come from power without enough constraint.

