Strategy will likely become the most important skill in the AI future (says the strategist).

The strange thing about living through a revolution is that it never feels like one at the time. It feels mundane. Ordinary.

Right now, most people are talking about AI like it’s an oddly enthusiastic intern that can tidy up an email or rewrite a paragraph so they sound slightly less asleep at the wheel. I’ve been guilty of that – I’ve often said you need to treat AI like it’s the smartest 22-year-old you’ve ever met. That’s partly right – it’s how you should consider AI from a basic introductory standpoint. But it’s a woefully tiny framework for considering the potential of AI – like discovering aviation and using it to send letters a bit faster. As I wrote in this piece in the AFR a few weeks ago, these big global advancements only occur when we recognise that a new world can be built around new technologies. And this is actually beyond a new technology – it’s a new form of intelligence.

As Andrej Karpathy just wrote, humanity is having first contact with a type of intelligence that does not come from biology, evolution, fear, hunger, status, or shame. For the first time in history, we are dealing with a mind that isn’t an animal. We just haven’t adjusted our thinking to match. Human intelligence isn’t the default – it’s a local anomaly. For our entire existence, we’ve assumed that our way of thinking is the template for intelligence itself. It isn’t. It’s just the only version we’ve ever met.

Human intelligence was shaped by pressures that had nothing to do with clear thinking. Our brains evolved to keep a vulnerable primate alive long enough to reproduce in a world full of danger. We were optimised for:

  • staying close to the group (we can’t survive alone),
  • avoiding humiliation (we don’t want to make ourselves less attractive to potential spawning partners),
  • minimising risk (try not to die),
  • and preserving status (life is happier, healthier and opens up more opportunity if we are higher in the pecking order).

These instincts kept us alive, but they didn’t make us rational. They’re why organisations often do things that make no commercial sense:

  • meetings with fifteen people because exclusion feels threatening,
  • decisions delayed because no-one wants to be wrong first,
  • brilliant ideas softened into mediocrity so no-one gets upset,
  • and vanity projects that limp on long after the data has declared them dead.

Humans don’t naturally optimise for truth or outcomes. We optimise for not feeling like we’re about to die. It’s understandable, but it’s also a terrible operating system for modern work.

AI comes from a completely different world

Now contrast that with artificial intelligence. It wasn’t shaped by predators or pain. It wasn’t forged through tribal politics or social punishment. It was shaped by:

  • probability,
  • feedback,
  • large-scale pattern recognition,
  • and mathematical optimisation.

Its “learning” comes from text, not trauma.

It doesn’t:

  • fear embarrassment,
  • crave approval,
  • defend an ego,
  • or wake up anxious about being replaced.

It has no internal monologue rehearsing arguments in the shower. AI isn’t trying to be human – and it isn’t trying to be anything at all. It simply optimises whatever objective it is given. And that is the key thing most people keep fumbling over. Underlying so much of the conversation about AI is the assumption that it wants our status, or jobs, or money, or power, or even worse – to wipe us out and take over the planet. Anyone who has watched The Matrix, Terminator or any of the hundreds of other films where the smart machines take over will tell us it’s true.

It’s not.

The mistake most of us keep making

When something appears intelligent, humans instinctively project motive onto it. We assume intention, desire, ambition. “If it can plan, it must want something.” No. A system can generate brilliant strategies without wanting power. It can persuade without caring about influence. It can outperform a human without dreaming of replacing them. Ability is not agency. Agency only emerges if we design it – by giving systems goals, tools, and persistence. As my friend Dr Rami Mukhtar always says: AI HAS NO AGENCY.

So the real risk with AI is not malice. It’s clarity; or more accurately, the lack of it. A machine optimising the wrong objective doesn’t argue back. It doesn’t hesitate. It just gets there quickly. Most organisations already suffer from this clarity problem with humans – but humans take longer, send polite emails, and bitch to their colleagues over lunch breaks about how their boss “doesn’t make sense” while doing it.

The only question that now matters

Executives are still asking tiny questions:

  • “Can AI write our reports?”
  • “Can we automate some admin?”
  • “Can it reduce headcount?”

These are the managerial equivalent of asking electricity to make candles burn longer.

The real strategic question is:

“What exactly should this new intelligence optimise for?”

Because once execution becomes cheap and instantaneous, the constraint is no longer labour – it’s clarity of strategy and the ideas pipeline that will feed into it. Most companies have never adequately built a strategy and innovation function for themselves, let alone for a system that takes instructions literally.

Why “Optimise For” changes the nature of the intelligence

When I build agents, my prompting structure is simple – I call it “RAICCOO”:

  • Role (what is the superpower/skill/expertise of the agent?)
  • APIs/Tools (what are the info sources and machines needed to do the job?)
  • Input (what are the materials and inputs it needs to understand to do the job?)
  • Context (where are we? Why are we here? What are we doing?)
  • Constraints (what not to do, what not to work on, what guardrails and limits should we put in place?)
  • Optimise For (as the name suggests, what does good look like? What is our focus?)
  • Output (what is the specific output / document / material?)

That “Optimise For” line determines everything. It gives the AI model its reason for living. All major AI models know everything. However, they really don’t know what good looks like. They have no agency. So if you can give them a clear and potentially non-human understanding of what to Optimise For, you will get an extraordinarily productive response.
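To make this concrete, here’s what RAICCOO can look like as a literal prompt template – a minimal Python sketch. The class, the field names and the rendering format are my illustrative assumptions, not any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class RAICCOO:
    # Illustrative container for the RAICCOO prompt structure.
    # Field names mirror the framework; nothing here is a vendor API.
    role: str          # the agent's superpower / skill / expertise
    apis_tools: str    # info sources and machines needed to do the job
    input: str         # materials and inputs it needs to understand
    context: str       # where are we? why are we here? what are we doing?
    constraints: str   # what not to do; guardrails and limits
    optimise_for: str  # what does good look like? what is our focus?
    output: str        # the specific output / document / material

    def to_prompt(self) -> str:
        # Render the structure as a plain-text system prompt.
        return "\n".join([
            f"ROLE: {self.role}",
            f"APIS/TOOLS: {self.apis_tools}",
            f"INPUT: {self.input}",
            f"CONTEXT: {self.context}",
            f"CONSTRAINTS: {self.constraints}",
            f"OPTIMISE FOR: {self.optimise_for}",
            f"OUTPUT: {self.output}",
        ])
```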

Ask a system to optimise for speed, and it becomes impatient.

Optimise for accuracy, and it slows down and cross-checks.

Optimise for customer delight, and it becomes generous.

Optimise for margin, and it becomes ruthlessly selective.

Optimise for Dada, and it becomes incredibly absurd. It thinks absurdity is its reason for existing.

It’s not that the AI “behaves differently”. It becomes a different kind of intelligence. Same model. Different incentives. New mind. That is something biology cannot do. And all of this different intelligence happens within a simple prompt.
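Here’s the same idea as a usage sketch, built on the class above (again, an illustration, not a real client library). Everything is identical between the two agents except the Optimise For line – and that one line is what produces the different mind:

```python
# Everything is shared except the incentive.
base = dict(
    role="Senior pricing analyst",
    apis_tools="Internal sales data export (CSV)",
    input="Last quarter's SKU-level pricing table",
    context="Quarterly pricing review for the retail range",
    constraints="Do not touch contracted enterprise prices",
    output="A one-page recommendation memo",
)

speed_agent = RAICCOO(**base, optimise_for="Speed: a usable answer in one pass")
margin_agent = RAICCOO(**base, optimise_for="Margin: be ruthlessly selective")

# Same model, same inputs, same constraints -- a different mind.
print(margin_agent.to_prompt())
```

Feed each prompt to the same underlying model and you get, in effect, two different intelligences.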

So what does this unlock? Here’s the part most people haven’t realised yet. If intelligence is no longer constrained by human instincts, we are no longer constrained by human-shaped solutions. A few examples of what becomes possible:

1. Organisations won’t slow as they grow

Today, companies become slower as they scale because communication and decision-making rely on humans. As Ben Horowitz says in his essential book “The Hard Thing About Hard Things”: “The knowledge gap between [you the CEO and your employees] is so vast that you cannot actually bring them fully up to speed in a manner that’s useful in making the decision. You are all alone.”

With agentic systems:

  • information doesn’t need to be carried,
  • decisions don’t wait for calendars,
  • coordination doesn’t require meetings.

Growth no longer means drag. The future AI-native firm can be big and fast – a combination previously impossible. I’ve got some ideas on this that I will flesh out in another article – but I’m calling this “Agentic Decision Accelerators”: agents that make all of the millions of decisions in every business.

2. Strategy that updates continuously

Humans make decisions periodically – quarters, budgets, planning cycles.

AI can:

  • re-model markets daily,
  • re-price dynamically,
  • rebalance resources in real time.

Strategy stops being an annual ceremony and becomes a living system.

3. Customer experience that feels telepathic

Not personalisation – the cheap kind where your name appears in bold.

I mean:

  • genuine anticipatory design – proactive support before a problem appears,
  • relevant offers without spam,
  • resolution without escalation,
  • no “your call is important to us” theatre,
  • the end of “donotreply@”,
  • and no introducing yourself and your issue from scratch every time you call / email / chat / visit a website.

4. Leaders who actually get to lead

Right now, most executive time is spent untangling confusion created lower in the system.

When AI removes ambiguity and friction:

  • leadership shifts from firefighting to shaping direction,
  • culture moves from risk-avoidance to intelligent boldness,
  • progress compounds instead of resetting every quarter,
  • and the focus isn’t on “getting the machine to make stuff”, but on improving the ideation pipeline.

The organisation stops behaving like a nervous animal, and starts behaving like a deliberate intelligence.

So what does the future look like? If we take this seriously – not as a novelty, but as Karpathy framed it – the first non-animal mind we’ve ever built – we end up with a very different picture of the next decades. A world where execution is no longer scarce, clarity becomes the central asset, and the advantage goes to organisations that clearly know what they stand for. Companies will win not because they have the most people, but because they have the clearest objectives, the cleanest constraints, and the least noise. Strategy – the art and science of sacrifice – will become one of the most important skills.

The leaders who thrive will be the ones who stop asking AI to behave like a better human, and start designing systems that behave in ways humans never could. This isn’t a future of replacement. It’s a future of removal: removed confusion, removed delay, removed fear, removed politics and removed busywork.

And in that world, the biggest dividing line won’t be access to technology. Everyone will have it.

The dividing line will be between those who anthropomorphise AI – assuming it wants our jobs, treating it as labour, focusing on task automation – and those who recognise that intelligence is shaped by incentives, clarity, and objective design.

Humans will optimise for ideas and priorities – we will innovate, decide what matters, what gets shipped, and what comes first – because AI doesn’t have instinct and, again, it doesn’t know what great looks like.

Because AI is not a better version of us, it’s the first version of something else. The sooner we stop treating it like a very smart animal, the sooner we’ll stop building for the past and start building for a new world that’s already arrived.

Published by Constantine Frantzeskos

I build and grow global businesses, brands, and digital products with visionary marketing & digital strategy | Non-Executive Director | Startup investor and advisor | Techno-optimist