From hype to hybrid reality
The honeymoon phase with AI is officially over.
Across New Zealand, the last 12–18 months have looked a lot like a gold rush. Every vendor had a demo. Every leadership team had a pilot. Every board deck had an “Artificial Intelligence roadmap” slide, usually with an arrow that pointed confidently up and to the right.
And then reality arrived. Cost climbed. Cyber security and risk teams asked harder questions. End users… politely ignored the shiny tool and went back to the spreadsheet that actually runs the business.
If that sounds familiar, you’re not behind. You’re right on time.
We’re entering what I’d call the Great AI Reset: a moment where organisations stop chasing novelty and start building hybrid, production-grade Artificial Intelligence programs that can survive CFO scrutiny, security reviews, and the daily indifference of busy employees, while still delivering value in the real world.
The reset isn’t a failure. It’s strategic recalibration.
Why is the reset happening now?
1. The goalposts have moved
Over the last year, rapid shifts in architecture, product capability, and pricing models have changed which AI use cases are viable. Some projects that looked too complex or risky are suddenly on the cards because implementation is simpler and more flexible.
At the same time, other “easy wins” no longer stack up once you factor in licensing restructures, tighter usage limits, and the true cost of operating at scale. In a reset year, the smartest move isn’t to double down blindly. It’s to reassess what’s possible now and make better decisions based on today’s costs, risks, and user readiness.
That’s why a reset starts with one question: If we assessed our use cases under today’s architecture and pricing reality, would we make the same choices?
2. The cost of “magic” is showing up on the ledger
In pilot land, AI spend looks harmless. A few licences. Some API usage. A small cloud bill.
In production, it becomes an economic system driven by variable consumption and hard-to-predict usage patterns. Token-based pricing (plus GPU-heavy workloads where relevant) can turn budgets into a moving target, especially when demand spikes or usage spreads across teams faster than governance can keep up, and when compute, power, and energy consumption become part of the conversation.
Deloitte, for instance, has been blunt about how token-based costs can behave unpredictably and why organisations need stronger discipline around visibility and control.
The issue isn’t that AI is “too expensive.” The issue is that many organisations can’t reliably answer basic questions like:
- What does an average interaction cost?
- What’s the cost per workflow, per team, per customer outcome?
- Which usage is productive and which is just curiosity at scale?
If you can’t predict the cost of a prompt, it’s hard to scale a solution with confidence or maintain it responsibly once it’s live.
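Answering those questions doesn’t require anything exotic. A back-of-envelope model like the sketch below is enough to start, assuming illustrative token prices and volumes (the figures are placeholders, not vendor rates):

```python
# Back-of-envelope unit economics for a token-priced AI workflow.
# All prices and volumes below are illustrative assumptions, not vendor rates.

INPUT_PRICE_PER_1K = 0.003   # assumed $ per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.015  # assumed $ per 1,000 output tokens

def cost_per_interaction(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of a single prompt/response pair."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

def cost_per_workflow(interactions_per_case: int, avg_in: int, avg_out: int) -> float:
    """Estimated cost of one completed business workflow (e.g. one case handled)."""
    return interactions_per_case * cost_per_interaction(avg_in, avg_out)

# Example: a case that takes 6 prompts averaging 2,000 input / 500 output tokens
per_case = cost_per_workflow(6, 2000, 500)
monthly = per_case * 10_000  # assumed 10,000 cases per month
print(f"~${per_case:.3f} per case, ~${monthly:,.0f} per month")
```

Even a crude model like this turns “what does AI cost?” into a conversation about cost per case and cases per month, which a CFO can actually work with.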
3. The unbundling trap is real (and it’s messing with procurement)
A year ago, many AI capabilities arrived bundled into broader platforms: collaboration suites, security stacks, cloud services.
Now we’re seeing fragmentation. Features that used to be “included” are being tiered, capped, or sold as add-ons. Usage ceilings that seemed generous in a sales pitch can get hit within weeks once adoption spreads beyond a small pilot group.
And when those ceilings get hit, the business response is rarely polite: “Why did this stop working?” quickly becomes “Why are we paying for this at all?”
In some companies, the surprise isn’t just caps. It’s that use cases once seen as simple and cost-effective can become expensive overnight when licensing is restructured or “standard” features shift into premium tiers, changing the whole ROI story.
This is the part where AI programs either mature… or get quietly defunded.
4. The adoption gap is swallowing ROI
Here’s the uncomfortable truth: most AI value doesn’t come from impressive demos. It comes from unglamorous integration into the systems, processes, and workflows people already use, day after day.
Recent MIT research has pointed to a huge drop-off between pilots and measurable impact, often because the tools aren’t embedded into how people actually do their jobs.
Gartner has also warned that a meaningful share of Generative AI projects will be abandoned after proof of concept because of issues such as unclear business value, escalating costs, and inadequate controls.
And RAND’s research notes that, by some estimates, over 80% of AI projects fail, highlighting how hard it is to translate potential into outcomes.
The common thread across these perspectives isn’t “AI doesn’t work”. It’s that organisations treat AI like a product rollout when it behaves more like an operating model change — one that touches people, governance, and the way the organisation chooses to operate.
What 2026 will reward (and what it will punish)
If 2024 was the year of experimentation and 2025 was the year of reckoning, 2026 will reward discipline.
Not “discipline” as in red tape. Discipline as in:
- Clear priorities
- Measurable outcomes
- Controlled spend
- Responsible risk management
- Adoption built into the work, not bolted on top
The winners of 2026 won’t be the organisations with the most models. They’ll be the ones with the most discipline.
So how do you run a reset that actually sets you up to relaunch?
It’s time to take stock (without losing momentum)
A reset doesn’t mean stopping. It means getting honest about what’s working, what’s fluff, and what needs rebuilding so it can scale.
Here’s a practical approach leaders can use to reassess, re-prioritise, and relaunch an AI plan for 2026.
Step 1: Audit your proofs-of-concept (PoCs) like a portfolio
Most organisations have a “pilot graveyard”: promising experiments that never found a path to production.
Start by inventorying every AI PoC and putting each one into one of four buckets:
1. Proven value (scale candidates)
- Solved a real friction point
- Has measurable impact (time saved, errors reduced, cycle time improved, revenue protected)
- Has a clear owner and operating rhythm
2. Promising but incomplete (refactor candidates)
- The use case is valid
- Data, integration, or controls weren’t ready
- Adoption didn’t stick because it wasn’t built into the workflow
3. Interesting but non-essential (park it)
- “Cool” but not urgent
- Not aligned to strategic priorities
- Would compete for scarce engineering/change capacity
- Park it if the business case has become uncertain due to pricing or licensing changes, then revisit once you can model the ongoing cost per workflow
4. Not worth it (retire)
- No real friction point or measurable impact
- Economics no longer stack up after licensing or pricing changes
- Nobody would notice if it was switched off
Critically, score them using today’s conditions, not the assumptions you had when the pilot started (model capability, architecture options, licensing, and unit cost). A useful test question here: If we turned this off tomorrow, would any frontline team complain? If nobody notices, that’s not a solution, it’s a science project.
One caveat: Don’t retire a use case just because it failed last year. If the blocker was cost, model limits, integration friction, or low user confidence, it may be worth a quick re-check. New architectures, new product releases, and higher AI literacy can change the outcome and open up new opportunities.
Equally, don’t keep a pilot alive on momentum alone. If licensing restructures or new usage ceilings have changed the economics, it may be smarter to retire it and redirect resources to higher-value solutions.
“We didn’t have an AI problem. We had a prioritisation problem. Once we treated pilots like a portfolio, the path forward got a lot clearer,” says Raji H, Canon Business Services.
Step 2: Demystify spend with AI FinOps
AI FinOps is where the Great Reset becomes real. It’s the shift from “we’ll keep an eye on it” to “we can forecast, control, and optimise this like any other critical workload.”
The FinOps Foundation’s work on FinOps for AI highlights practical levers like usage limits, throttling, anomaly detection, and token optimisation, because with token-priced services, small changes in usage patterns can materially change cost.
For leadership teams, the goal isn’t to become token accountants. The goal is to make AI spend legible:
- Unit economics: Cost per document processed, cost per case handled, cost per customer interaction
- Guardrails: Budgets, alerts, usage policies, and role-based access aligned to risk
- Chargeback/showback: Clarity on which teams are consuming what and whether it’s delivering measurable value
- Optimisation: Prompt re-engineering, retrieval design, and model selection tuned for cost-to-outcome
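Guardrails in particular don’t need a platform to get started. A minimal sketch, assuming illustrative budget figures and thresholds, might pair a monthly budget check with a simple spend-anomaly flag:

```python
# Minimal sketch of AI FinOps guardrails: a monthly budget check plus a simple
# spend-anomaly flag. All thresholds and dollar figures are illustrative assumptions.
from statistics import mean, stdev

def over_budget(month_to_date: float, monthly_budget: float, alert_at: float = 0.8) -> str:
    """Return a guardrail state: 'ok', 'alert' (nearing budget), or 'throttle'."""
    ratio = month_to_date / monthly_budget
    if ratio >= 1.0:
        return "throttle"   # hard ceiling: block or queue non-critical usage
    if ratio >= alert_at:
        return "alert"      # showback to the owning team before the ceiling hits
    return "ok"

def is_anomalous(daily_spend: list[float], today: float, k: float = 3.0) -> bool:
    """Flag today's spend if it sits more than k standard deviations above the norm."""
    if len(daily_spend) < 7:
        return False        # not enough history to judge
    return today > mean(daily_spend) + k * stdev(daily_spend)

history = [12.0, 14.5, 11.8, 13.2, 15.1, 12.9, 13.7]  # assumed daily spend in $
print(over_budget(month_to_date=410.0, monthly_budget=500.0))  # nearing the cap
print(is_anomalous(history, today=55.0))                       # a usage spike
```

The real versions live in your cloud cost tooling, of course; the point is that the logic leadership needs (budget states, anomaly flags, ownership) is simple enough to specify precisely.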
One practical example is search and knowledge retrieval. Classic RAG (retrieval augmented generation) can get costly when data sits across multiple systems, with inconsistent permissions and messy content. You’re paying for ingestion, embeddings, storage, orchestration, and repeated retrieval across a sprawling network of sources.
Newer patterns such as agentic RAG can simplify implementation by creating a more usable semantic layer across disparate repositories, reducing manual integration effort and helping teams stay connected to the right knowledge at the right time with fewer hand-offs between people, tools, and devices.
Meanwhile, some “simple” use cases are getting harder to justify: when licensing models change, per-user costs rise, or usage ceilings tighten, the unit economics can flip, and what once looked like a quick productivity win can turn into a budget leak.
The point isn’t that one approach is “better”. It’s that new architectures can change the cost-to-value equation, sometimes dramatically.
In practice, this is also where platform choices matter. A fragmented AI estate makes it harder to measure and govern spend. A coherent architecture makes it easier to manage both cost and risk.
“I don’t need perfect forecasting. I need to know we’re not signing up for an open-ended meter,” says Raji.
Step 3: Re-prioritise for utility over novelty
AI roadmaps often overweight “content generation” because it’s easy to demo. But the deeper economic value in many organisations sits in process-heavy work:
- Finance ops
- Procurement
- Service management
- Claims and case processing
- Compliance and reporting
- HR operations
- Customer support triage
A simple 2026 prioritisation filter
Prioritise use cases that:
- Reduce cycle time in a critical workflow
- Cut rework and errors
- Improve compliance or auditability
- Protect revenue (e.g., churn prevention, faster onboarding, fewer service failures)
- Improve staff capacity in constrained teams
De-prioritise use cases that:
- Produce outputs without a downstream decision or action
- Rely on end-users “remembering” to use a separate AI tool
- Don’t have a clear KPI owner
- Introduce compliance/security risks without a proportional benefit
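The filter above can be made concrete as a simple scorecard. The criteria mirror the lists above; the weights and the example use case are illustrative assumptions to tune against your own strategy:

```python
# A scorecard version of the 2026 prioritisation filter. Criteria names mirror
# the lists above; the weights and example use case are illustrative assumptions.

PRIORITISE = {  # positive signals
    "reduces_cycle_time": 2,
    "cuts_rework": 2,
    "improves_compliance": 1,
    "protects_revenue": 2,
    "frees_constrained_capacity": 1,
}
DEPRIORITISE = {  # negative signals
    "no_downstream_action": -2,
    "relies_on_users_remembering": -1,
    "no_kpi_owner": -2,
    "risk_without_benefit": -3,
}

def score(use_case: set[str]) -> int:
    """Sum the weights of the signals a use case exhibits."""
    weights = {**PRIORITISE, **DEPRIORITISE}
    return sum(weights.get(signal, 0) for signal in use_case)

# Hypothetical example: invoice triage saves cycle time and rework but has no KPI owner
invoice_triage = {"reduces_cycle_time", "cuts_rework", "no_kpi_owner"}
print(score(invoice_triage))  # 2 + 2 - 2 = 2
```

A scorecard won’t make the decision for you, but it forces the conversation the filter is really about: which signals a use case actually exhibits, and who owns the KPI.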
This is where “hybrid reality” matters. The best programs mix:
- Automation (for repeatable tasks)
- Assistance (for complex judgment work)
- Governance (so it stays safe and compliant)
- Humans in the loop (because accountability doesn’t automate itself)
Step 4: Design for adoption, not applause
AI value doesn’t appear when you deploy a tool. It appears when behaviour changes.
Adoption comes from three things:
1) Workflow integration
If the AI sits outside the systems people live in, adoption will stall. Integrate into the case platform, the service desk, the document system, the finance workflow: wherever work is already happening, including on mobile devices where relevant.
2) Role-based enablement
Different teams need different patterns:
- Frontline staff need “next best action” support, not a blank chat box
- Managers need visibility and controls, not novel features
- Risk and security teams need audit trails and policy enforcement
3) Change management that respects reality
People don’t resist AI because they “fear the future”. They resist because:
- It adds steps
- It feels unreliable
- It creates new compliance anxieties
- It doesn’t match how they’re measured
Make it easier to do the right thing than the old thing. That’s the adoption game.
Step 5: Get serious about agentic workflows (with guardrails)
Everyone’s talking about agentic AI—systems that can take actions, not just generate text. The promise is real, but so are the risks.
Agentic patterns are a good example of the reset in action. The architecture is maturing fast, but the pricing and risk profile can change the business case overnight.
Gartner has warned that costs, unclear measurable value, or inadequate risk controls may lead to the cancellation of a significant portion of agentic AI projects.
So, the move for 2026 is not “go agentic everywhere.” It’s:
- Choose a small number of high-value workflows
- Define the boundaries of autonomy
- Enforce approvals for sensitive actions
- Build monitoring and auditability from day one
A sensible pattern is agentic-by-exception:
- AI drafts, recommends, routes, and prepares actions
- Humans approve high-risk decisions
- Automation executes low-risk, high-volume actions under policy
That’s where hybrid reality becomes a strength: you get speed and control.
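The agentic-by-exception pattern can be sketched in a few lines. Assuming each proposed action carries a simple risk tag (the action names and tiers here are illustrative, not a product API), the router sends high-risk actions to a human and executes low-risk ones under policy, recording every decision for audit:

```python
# Sketch of an "agentic-by-exception" router. Assumes each proposed action
# carries a risk tag; action names and tiers are illustrative, not a product API.
from dataclasses import dataclass

audit_log: list[str] = []  # every routing decision is recorded for auditability

@dataclass
class ProposedAction:
    name: str
    risk: str  # "low" or "high"; in practice derived from policy, not hardcoded

def route(action: ProposedAction) -> str:
    """High-risk actions queue for human approval; low-risk actions auto-execute."""
    if action.risk == "high":
        outcome = f"queued for human approval: {action.name}"
    else:
        outcome = f"auto-executed under policy: {action.name}"
    audit_log.append(outcome)  # monitoring and auditability from day one
    return outcome

print(route(ProposedAction("re-route support ticket", "low")))
print(route(ProposedAction("issue customer refund", "high")))
```

The hard work is not the router; it’s agreeing on the policy that assigns risk tiers, and making sure the audit log is actually reviewed.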
Where Canon Business Services ANZ can help
A Great AI Reset is not just a technology reset. It’s an operating reset across cost, governance, adoption, and execution.
Canon Business Services ANZ (CBS) typically supports organisations through this shift by helping to:
- Rationalise and modernise the AI foundation (cloud, identity, data platforms, integration patterns)
- Build practical governance (risk controls, auditability, security-by-design)
- Implement AI FinOps discipline (visibility, forecasting, guardrails, optimisation)
- Redesign workflows for real adoption (business analysis, process mapping, human-in-the-loop design)
- Operationalise outcomes through managed services and continuous improvement, so AI doesn’t stall after go-live
“The goal isn’t to chase the newest model. It’s to build a repeatable way to deliver value, securely, predictably, and at scale.”
A practical relaunch checklist for 2026
If you want a simple way to pressure-test your 2026 plan, ask:
- Do we have a clear inventory of pilots and a decision on each one?
- Can we explain AI spend in unit costs tied to business outcomes?
- Have we prioritised workflows where impact is measurable?
- Are solutions integrated into how work happens day to day?
- Do we have governance that matches the risk of the use case?
If you can answer “yes” to most of these, you’re not just doing AI. You’re running an AI program.
The reset is the competitive advantage
Plenty of organisations will spend 2026 quietly backing away from AI because the first wave didn’t deliver. Others will keep spending, driven by hype or FOMO rather than results.
That’s understandable, and it’s also an opportunity.
A disciplined reset is how leaders turn early experimentation into sustainable advantage. The shift from tokens to tangible value is where real differentiation starts.
If your organisation is reassessing its AI roadmap for 2026, especially around cost control, governance, and workflow adoption, Canon Business Services ANZ can help you take stock, re-prioritise, and relaunch with confidence.
Question to leave you with: Are you building AI that looks good in a demo… or AI that survives contact with your business and drives real business outcomes?