
By this time next year, someone in your organisation will have been quietly outperformed by a chatbot. HR won’t send a card.
2026 won’t complete the AI disruption – that’s a decade-long project at minimum. But the disruption will accelerate beyond what most organisations, and most careers, are prepared for. We’re standing at the foothills, uncertain of the journey to the summit, and most of us are still wearing flip-flops.
I’ve spent twenty years watching technology reshape financial services, and the pattern is always the same: the change arrives slower than the hype suggests, but faster than the incumbents prepare for.
So here are ten predictions for 2026 – not prophecies, but probabilities. And because anxiety without action is just suffering with extra steps, I’ve paired each with a Stoic strategy. Marcus Aurelius didn’t have to deal with ChatGPT or my dear friend Claude, but he’d have handled it better than most LinkedIn influencers.
1. AI Agents Will Start Doing Your “Busy” Work
That status report you spent 40 minutes crafting? An AI will draft it in 12 seconds. Those meeting notes nobody reads? Automated. The expense reconciliation that makes you question your life choices? Gone.
The irony is exquisite: we spent years complaining about administrative burden – now we need to prove we’re worth more than it.
In banking and financial services, this will hit hard and fast. Regulatory reporting, client summaries, transaction monitoring narratives – the unglamorous scaffolding of compliance-heavy industries – will increasingly be drafted by machines and approved by humans. The question becomes: what do you do with the time you get back?
The Stoic Play: Stop hiding in busywork. It’s comfortable because it’s measurable and low-risk, but it’s borrowed time. Audit your week honestly. If a task doesn’t require your judgment, your presence, or your relationships – someone (or something) else will be doing it soon. The Stoics urged us to guard our time fiercely. Seneca wrote that we’re quite miserly with our money but remarkably generous with our hours. Invest in the work only you can do.
2. Org Charts Will Start to Look Like Spotify Playlists
Fluid. Personalised. Constantly reshuffling.
Traditional hierarchies were designed for industrial-era predictability. They can’t iterate at AI speed. In 2026, more organisations will experiment with smaller, project-based teams that form around problems and dissolve when solved. Your job title will matter less than your last three contributions.
This isn’t entirely new: banks have talked about “agile transformation” for years, usually while preserving every committee and governance layer that existed before. But economic pressure has a way of accelerating what PowerPoint decks couldn’t. When AI makes certain functions dramatically cheaper, the baroque org structures built around them start looking like expensive nostalgia.
The Stoic Play: Focus on what you control. Epictetus divided the world into things within our control (our judgments, actions, attitudes) and things outside it (job titles, reorganisations, what your skip-level thinks of you). Titles are rented; skills are owned. Build a reputation and a network that doesn’t need your current org chart to validate it. When the music stops, you want to be someone people call, not someone people “restructure.”
3. Your Expertise Will Have an Expiry Date (and It’s Getting Shorter)
Five years ago, “knows blockchain” was a differentiator. Now it’s a line on a CV nobody reads. The same fate awaits most of what you know today.
I’ve watched entire specialisms rise and fall in financial services: COBOL programmers were gods, then legacy liabilities, then surprisingly valuable again when nobody else remembered how the core banking systems worked. But the cycle is compressing. What you learned last year is depreciating while you read this sentence.
In 2026, the half-life of technical expertise will shorten visibly. The knowledge that makes you valuable in January may be table stakes by September.
The Stoic Play: Amor fati – love your fate, change included. The Stoics didn’t resent the impermanence of things; they embraced it as the nature of reality. Make learning your product, not your preparation. The person who learns fastest wins, not the person who knew most in 2019. Build systems for continuous learning: protected time, curated sources, deliberate practice. Treat your expertise like software – it requires constant updates or it becomes a vulnerability.
4. The “AI-Augmented” Employee Will Become the Baseline
In 2025, using AI tools well was a competitive advantage. By the end of 2026, it’ll be a minimum expectation – like knowing how to use email or not replying-all to 3,000 people.
The employee who can prompt effectively, validate AI outputs critically, and integrate machine-generated work into human workflows won’t be special. They’ll be standard. The employee who can’t, or won’t, will be carrying an invisible handicap.
This has implications for hiring, for training, and for performance management. “AI literacy” will stop being a line item on innovation roadmaps and start appearing in job descriptions and competency frameworks.
The Stoic Play: Waste no time arguing about what a good person should do; be one. The Stoics were practical philosophers – they valued action over debate. Don’t wait for your organisation’s official AI training programme (it’s probably 18 months away and will be out of date on arrival). Teach yourself now. Build your own toolkit. The best time to become competent was six months ago; the second-best time is this afternoon.
5. Middle Management Will Face an Identity Crisis
Here’s an uncomfortable truth: a significant portion of middle management exists to aggregate information upwards and translate strategy downwards. AI is increasingly excellent at both.
When an executive can ask an AI to summarise project status across twelve workstreams, pull risk indicators, and draft the board update – what exactly is the layer in between doing? When a team can receive strategic context directly through well-designed AI interfaces, what’s the value of the human relay?
This isn’t a prediction that middle managers will vanish. But 2026 will be the year many organisations start asking pointed questions about the layer, and middle managers themselves will start asking what their actual value proposition is. The answer exists, but it’s not “information routing.”
The Stoic Play: Know thyself. If your value is synthesising information, you have a problem. If your value is making decisions under uncertainty, developing people, navigating politics, and creating clarity from ambiguity – you have a future. The Stoics believed in rigorous self-examination. Take inventory. Where do you actually add value versus where do you just add process? Be honest. Then double down on the former.
6. Junior Roles Will Be Hollowed Out First
This is the prediction nobody wants to make at dinner parties, but here it is: entry-level knowledge work is disproportionately exposed to AI automation.
The traditional model – hire graduates, give them routine tasks, let them learn by osmosis while doing the work nobody else wants – breaks down when the routine tasks evaporate. Research, summarisation, first-draft analysis, data gathering: this was the curriculum for junior bankers, junior lawyers, junior consultants. It was boring, but it was educational.
In 2026, organisations will grapple with a genuine dilemma: AI can do this work faster and cheaper, but if we automate it entirely, how do we develop the next generation? How does anyone learn the craft?
The Stoic Play: For those entering the workforce: don’t compete with AI on AI’s terms. You won’t win a productivity contest against a machine. Instead, invest aggressively in the skills that remain stubbornly human: stakeholder management, persuasion, navigating ambiguity, building relationships, understanding context that isn’t written down. Volunteer for the messy, political, human problems. That’s where the learning is now.
For those managing juniors: this is a genuine ethical and practical challenge. The Stoics valued mentorship. Think deliberately about how you’ll develop people when the traditional apprenticeship model no longer applies.
7. The Compliance Function Will Eat Itself (Then Rebuild)
In regulated industries like banking, compliance has grown into a substantial bureaucracy – partly because regulations genuinely increased, partly because it was easier to add headcount than to fix underlying processes.
AI will simultaneously make compliance easier (automated monitoring, intelligent document review, real-time reporting) and harder (new risks around AI governance, model explainability, and algorithmic bias). The compliance function of 2027 will look nothing like the compliance function of 2024.
In 2026, the uncomfortable transition begins. Compliance professionals who understand AI governance will be in high demand. Those whose value was purely manual review and box-ticking will find their roles consolidated.
The Stoic Play: The obstacle is the way. This was Marcus Aurelius’s core insight – the thing that seems to block your path is actually your path. If you’re in compliance, the disruption is your opportunity. Become the person who understands how to govern AI systems, how to audit algorithmic decisions, how to translate regulatory requirements into technical controls. The field isn’t shrinking; it’s transforming. Be on the right side of the transformation.
8. Remote Work Debates Will Become Quaint
We’ve spent five years arguing about whether people should be in the office three days or four days or “whenever there’s free pizza.” By late 2026, this will seem like arguing about the deck chair arrangement on a ship that’s being redesigned.
The more interesting question isn’t where humans work – it’s which work humans do. As AI handles more asynchronous, independent tasks, the remaining human work may become more inherently collaborative, more improvisational, more dependent on real-time interaction. Or it might not. The honest answer is we don’t know yet.
But the remote work debate in its current form – essentially an argument about trust and surveillance dressed up as a discussion about productivity – will be overtaken by more fundamental questions about the nature of work itself.
The Stoic Play: Don’t get attached to a position on the remote work debate. It’s a proxy war for other concerns, and the terrain is about to shift anyway. Focus on what’s constant: your ability to add value, wherever you happen to be sitting. The Stoics were remarkably portable – Epictetus was enslaved, Marcus Aurelius was emperor, and their philosophy worked in both contexts. Make your contribution similarly location-agnostic.
9. Internal AI Governance Will Become a Career
Most organisations are currently in the “move fast and hope nothing catches fire” phase of AI adoption. Employees are using ChatGPT, Claude, Copilot, and a dozen other tools with varying degrees of official sanction. Data is flowing in directions that would make the information security team weep if they knew about it.
This won’t last. Regulators are circling. Reputational risks are accumulating. In 2026, “AI governance” will emerge as a genuine function – not just a slide in someone’s risk framework, but actual teams with actual budgets and actual authority.
This will create career opportunities for people who can bridge the technical and the procedural: understanding what AI systems actually do, what risks they create, and how to implement proportionate controls without suffocating innovation entirely.
The Stoic Play: If you’re looking for a growth area, this is one. The Stoics were systematic thinkers – they built frameworks for ethical reasoning and practical action. That’s exactly what AI governance needs: people who can create sensible structures for managing genuinely novel risks. It’s not glamorous. Neither was cybersecurity twenty years ago.
10. The Meaning Question Will Get Louder
Here’s the prediction that doesn’t fit neatly into a skills framework or a career strategy: as AI takes over more of what we do, more people will ask why they’re doing what remains.
If the routine work disappears, and the remaining work is genuinely challenging, creative, and human – that might be wonderful. But it might also be exhausting. Not everyone wants to operate at the top of their cognitive range all day. Some of the “busy work” was also, quietly, a break.
And if AI can do more and more, at some point the question becomes: what is this job for? What am I for? These aren’t questions most performance management systems are designed to answer.
In 2026, I suspect we’ll see more open discussion about meaning, purpose, and identity in the context of work – not as HR-flavoured wellness initiatives, but as genuine philosophical grappling with a changing world.
The Stoic Play: This is where Stoicism really shines. The philosophy was built precisely for this question: how do you live a meaningful life in a world you don’t control?
The Stoic answer is that meaning comes from virtue – from being a good person, acting with integrity, contributing to your community, fulfilling your roles well. External outcomes (including career success) are “preferred indifferents” – nice to have, but not the source of a good life.
If AI changes what we do, it doesn’t have to change who we are. Start thinking now about what matters to you beyond your job title and your deliverables. Build a life that doesn’t depend on your economic productivity for its meaning. This is good Stoic practice regardless of what happens with AI.
Preparing, Not Predicting
I’ll be wrong about some of this. Predictions are humbling that way. Maybe AI progress will plateau. Maybe regulation will slam the brakes on. Maybe we’ll look back at 2025 and laugh at our breathless extrapolations.
But the Stoics didn’t try to predict the future. They prepared for it before it demanded action. They practised adversity when times were good. They built resilience before they needed it.
That’s still the advantage. Not knowing what’s coming, but being ready to respond well when it arrives.
The disruption is real. The uncertainty is real. But so is your capacity to adapt, to learn, to find meaning in whatever comes next. The Stoics faced plagues, wars, exile, and the collapse of empires with equanimity. We can probably handle a few chatbots.