What an AI Readiness Assessment Actually Reveals, and Why Leaders Are Surprised

The first thing an AI readiness assessment tends to surface is not a technology problem. It is a process problem that has been running for years with nobody looking directly at it.
That is the finding most GBS leaders do not expect. They go in focused on AI capability, tools, platforms, and use cases. What comes back is a diagnosis of the operational foundation those tools would have to run on. And in most organisations, that foundation has more cracks in it than anyone had mapped.
The case for running the assessment in the first place was made here. This piece focuses on what the assessment actually finds once you commit to it, and why those findings are harder to sit with than leaders expect.
Where the funnel breaks
The AGOS-Roland Berger GBS in the Digital & AI Era report (2025) tracked where GBS organisations sit across the AI adoption curve. 74% have a declared AI vision. 37% have moved to structured training programmes. 40% report AI is meaningfully freeing employees for more strategic work. 20% measure outcomes systematically.
That is not a technology adoption curve. It is an attrition curve. At each stage, organisations that said they were moving drop out. The assessment shows exactly where, and why.
What each dimension actually finds
A structured assessment covers four areas. Each one tends to surface problems the organisation did not know it had.
Process maturity. The intake process you think runs the same way across five markets does not. Manila approves invoices over USD 5,000 manually because of a fraud incident in 2019 that nobody documented. Bangkok runs a workaround in Excel because the system field for vendor master data was never standardised. Kuala Lumpur exits the workflow at step 6 about 12% of the time and routes back to the requester through email. None of this is in the process documentation. All of it has to be untangled before AI sits on top of it. The assessment finds these workarounds because it follows the actual transaction trail, not the SOP.
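Following the transaction trail is, in practice, closer to lightweight process mining than document review. As a minimal sketch, assuming a hypothetical event log exported from the workflow or ERP audit trail (the case IDs, step names, and SOP path below are all illustrative), deviation points like Kuala Lumpur's step-6 exit can be counted directly:

```python
from collections import Counter
from itertools import zip_longest

# Hypothetical event log: case ID -> ordered steps actually taken, pulled from
# the ERP or ticketing audit trail rather than the SOP document.
event_log = {
    "PO-1001": ["intake", "validate", "approve", "post"],        # matches the SOP
    "PO-1002": ["intake", "validate", "email_requester",
                "intake", "validate", "approve", "post"],        # loops back via email
    "PO-1003": ["intake", "validate", "approve", "post"],
    "PO-1004": ["intake", "manual_review", "approve", "post"],   # undocumented manual gate
}

SOP_PATH = ["intake", "validate", "approve", "post"]

deviation_points = Counter()
for case_id, trail in event_log.items():
    for position, (actual, expected) in enumerate(zip_longest(trail, SOP_PATH)):
        if actual != expected:
            # Record the first point where the real trail leaves the documented path
            deviation_points[(position, actual)] += 1
            break

total = len(event_log)
for (position, step), count in deviation_points.most_common():
    print(f"first deviation at step {position} into {step!r}: "
          f"{count}/{total} cases ({count / total:.0%})")
```

On a real log, the useful output is not the headline deviation rate but which undocumented step the cases deviate into, because that is where undocumented rules like Manila's 2019 fraud workaround live.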
Data quality. The vendor master file has 47,000 records. About 8,000 are duplicates created when the SAP migration in 2022 brought in two regional ERPs without a deduplication pass. Cost centre coding is consistent within Finance but inconsistent between Finance and Procurement, which means any spend analysis has carried roughly a 14% margin of error for three years. Nobody flagged it because nobody was running spend analytics until this year. AI will surface it the first time it runs. The Hackett Group's 2025 Key Issue Study puts GBS workloads up 11% against budget growth of only 7%, which is part of why this never got cleaned up. Capacity for data hygiene work has been the first thing cut for five years running.
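The duplicate problem itself is mechanical to demonstrate. Here is a minimal sketch, with hypothetical field names and records standing in for the vendor master extract, of the kind of deduplication pass the 2022 migration skipped:

```python
import re
from collections import defaultdict

# Hypothetical vendor master extract; in practice ~47,000 rows from two merged ERPs.
vendors = [
    {"id": "V-001", "name": "Acme Trading Co., Ltd.", "tax_id": "TH-123456"},
    {"id": "V-884", "name": "ACME TRADING CO LTD",    "tax_id": "TH-123456"},
    {"id": "V-002", "name": "Globex Sdn Bhd",         "tax_id": "MY-998877"},
]

def normalise(name: str) -> str:
    """Crude match key: lowercase, strip punctuation and common legal suffixes."""
    key = re.sub(r"[^a-z0-9 ]", "", name.lower())
    for suffix in ("co ltd", "sdn bhd", "ltd", "inc"):
        key = key.removesuffix(suffix).strip()  # Python 3.9+
    return key

# Group records that share a normalised name and tax ID; clusters of more than
# one record are duplicate candidates for review.
groups = defaultdict(list)
for v in vendors:
    groups[(normalise(v["name"]), v["tax_id"])].append(v["id"])

duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(f"{sum(len(ids) for ids in duplicates.values())} records in "
      f"{len(duplicates)} duplicate cluster(s): {duplicates}")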
Talent capability. Your AP team can tell you within 30 seconds whether an invoice looks wrong. They cannot tell you whether an AI system flagged the right invoice for the right reason, which is a different judgement built on a different skill. Roland Berger's data puts 37% of GBS organisations at the structured training stage. The other 63% are running AI with people who have been told the tool is reliable and have no framework for deciding when it is not.
Governance clarity. Ask who signs off when an AI system rejects a vendor payment. In most organisations, the answer is the team lead. Ask the team lead, and they will tell you they assumed it was the controller. Ask the controller, and they will tell you they assumed the system had been certified before it went live. The decision to deploy was made. The decision about who owns it was not. This is the finding that ends programmes when something goes wrong, because the post-mortem cannot find a name to put against the failure.
Why the assessment gets delayed
There are three reasons, and none of them are irrational.
Documented findings create documented accountability. A known concern about data quality is manageable. A written finding that three of five core processes have significant data integrity issues requires someone to own a remediation plan with a timeline.
The findings often do not match what has already been reported upward. If the last HQ update described an AI programme in progress, and the assessment shows early-stage readiness across most dimensions, that is a conversation someone has to have. Most leaders would rather have it with better news.
Running a real diagnostic takes focus. Hackett's numbers make the capacity pressure plain: workloads up 11%, budgets up 7%. At that ratio, finding the space to pause and assess feels like it costs something.
These pressures are real. The problem is they do not change what is underneath. Organisations that skip the assessment do not avoid the findings. They meet them later, mid-implementation, after budget is committed, when the cost of correcting course is higher and the HQ conversation is harder.
What to do before the next investment decision
Not the vendor conversation. Not the business case. Not the pilot scope.
Map your current operations across the four dimensions: process maturity, data quality, talent capability, governance clarity. Score each one without rounding up. Where the scores are low, those are not weaknesses to manage around. They are pre-conditions: the work that has to happen before AI delivers anything worth measuring.
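As one illustrative way to make "without rounding up" operational, here is a minimal sketch, with hypothetical dimension scores and an assumed 1-to-5 maturity scale, that treats any low-scoring dimension as a gate rather than a caveat:

```python
# Hypothetical rubric: 1 (ad hoc) to 5 (managed), scored per dimension from
# assessment evidence, not self-reporting. Scores and threshold are illustrative.
scores = {
    "process_maturity": 2,     # e.g. undocumented workarounds in 3 of 5 markets
    "data_quality": 2,         # e.g. duplicate vendor records, inconsistent coding
    "talent_capability": 3,
    "governance_clarity": 1,   # e.g. no named owner for AI-rejected payments
}

READY_THRESHOLD = 3  # below this, treat the dimension as a pre-condition

preconditions = [dim for dim, score in scores.items() if score < READY_THRESHOLD]
if preconditions:
    print("Remediate before the next AI investment decision:")
    for dim in preconditions:
        print(f"  - {dim} (score {scores[dim]}/5)")
else:
    print("Baseline clear across all four dimensions.")
```

The point of writing the threshold down in advance, even this crudely, is that it is much harder to round past in the investment meeting.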
The Roland Berger funnel shows where organisations lose momentum. Most of it happens in the gap between declared vision and honest baseline. The assessment is how you close it.
AGOS Asia runs structured AI readiness assessments for GBS organisations across Southeast Asia. Built to find what gets missed, not confirm what is already assumed. Start at agosasia.com.