
Ask Hassan: Where Should Your AI Roadmap Actually Start?


Key Takeaways

Start before the model by preparing data, infrastructure, and governance so AI can perform reliably in production.

Let the business problem lead so AI is deployed to solve a real operational or policy need rather than showcase technology.

Design for your operating model since government AI requires strict in-environment control while enterprise AI can accept more speed and risk.

Prove value continuously by tying every AI use case to clear ROI metrics such as cost reduction, productivity, or service quality.
About Hassan Abu Sheikh
Hassan Abu Sheikh is Director of Products & Co-Founder at CNTXT AI, where he guides AI implementations. Hassan specializes in product strategy, ROI modeling, and AI roadmap development for enterprises and government agencies.
We see most companies thinking that the AI journey starts at the end—with the application, and they don't factor in the pre-workings that need to be put in place so that the AI delivers what the business expects.
A lot of the time they want to go straight to the result. But if you don't prepare your data and infrastructure upfront, whatever AI you deploy won't function the way you expect it to. That's where most projects fail to meet expectations.
You have to start at the beginning and get the setup right before anything else.
There's also too much focus on the technology and its capabilities rather than the business problem itself. Everyone asks, "How can we use AI?" instead of "What business problem are we solving?"
That's why you see great demos that never make it into production.
Key Insight: Infrastructure refers to the foundational systems, servers, databases, and networks that support data and AI operations.
Well, that's where companies like CNTXT AI come in. We actually sit and work with you on your data assessment, aligning your data quality with your business expectations and the AI functionality you need.
The 4-Layer Readiness Assessment
Start by assessing four layers:
- Infrastructure
- Data
- Application layer
- Deliverable
So again: does your infrastructure allow you to collect enough data to drive insight for AI?
And if that infrastructure is there, is the data organized, annotated, and labeled so that we can pull the correct data, and enough of it, to ask questions, engage with it, and interact with it?
If it is, then we can start looking at the applications themselves that will be used in the business to analyze your data and deliver what the business wants from AI.
Example: Predictive Maintenance
Take maintenance logs as an example:
- Are all those logs positioned in a data center?
- Are those logs annotated, labeled, can they be read?
- So can AI actually pull enough data so that it could give you predictive maintenance analytics of how long products last, when they need to get repaired?
We could even train it to the point where a camera runs on specific items and shows you whether each one is worn or not.
The only way to define that is if you have enough data to define what a worn item versus an unworn item looks like, and then project the longevity of that item to schedule your maintenance.
Annotated and labeled data means information that's been tagged with context so AI can recognize patterns, like marking images as "damaged" or "intact." Predictive maintenance uses AI to forecast when machines will fail, helping prevent downtime.
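As a toy illustration of the predictive side, suppose the maintenance logs reduce to a list of past repair dates for one item. You can then estimate the next service date from the average interval between repairs. Real predictive-maintenance models use far richer features (sensor readings, usage hours, imagery), so treat this only as a sketch of the idea:

```python
from datetime import date, timedelta

def next_maintenance(repair_dates: list[date]) -> date:
    """Estimate the next service date from the mean interval between repairs."""
    intervals = [
        (later - earlier).days
        for earlier, later in zip(repair_dates, repair_dates[1:])
    ]
    mean_interval = sum(intervals) / len(intervals)
    return repair_dates[-1] + timedelta(days=round(mean_interval))

# Hypothetical repair log for a single machine
logs = [date(2024, 1, 10), date(2024, 4, 12), date(2024, 7, 9)]
print(next_maintenance(logs))  # 2024-10-07
```

Even this naive version shows why the data layer matters: with too few logged repairs, the estimate is meaningless, which is exactly the gap synthetic data is meant to address.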
So again, as an example, if you don't have enough data, you start looking at things like synthetic data, where we take a smaller dataset and use AI to replicate more data, so that you're populating your dataset with material your AI can continuously learn from.
It works the other way too: the more you engage with the AI and the more data you add, the more intelligent it becomes.
Hence a very common example, ChatGPT: it's a continuously learning tool, but that learning only comes from the data it's trained on.
So a data strategy is a necessity for any factual, reliable AI that's going to use real examples rather than hallucinate, and give you the answers you need within your business.
Key Insight: Synthetic data is AI-generated information that mimics real data, used when real-world samples are limited or sensitive. Hallucination in AI refers to when a model generates false or made-up information.
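A minimal sketch of the synthetic-data idea: take a small set of real rows and replicate them with slight random jitter on the numeric fields. Production synthetic-data pipelines use generative models rather than this naive perturbation, and the field names here are hypothetical:

```python
import random

def synthesize(rows: list[dict], n: int, noise: float = 0.05, seed: int = 0) -> list[dict]:
    """Generate n synthetic rows by jittering numeric fields of real samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        base = rng.choice(rows)  # pick a real row to replicate
        out.append({
            # perturb numeric values by up to +/- noise; copy labels as-is
            k: v * (1 + rng.uniform(-noise, noise)) if isinstance(v, (int, float)) else v
            for k, v in base.items()
        })
    return out

# Two real labeled samples become a hundred training rows
real = [{"temp": 71.2, "status": "worn"}, {"temp": 64.8, "status": "intact"}]
fake = synthesize(real, n=100)
```

The labels ("worn"/"intact") are preserved, so the expanded set stays annotated, which is the whole point: more examples per label, not just more rows.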
So a government roadmap will have a lot of governance, information governance around it.
Depending on how, let's say, tight the department is, some won't allow any AI to be run outside their own infrastructure. So you start relying heavily on what's inside that setup, the internal infra, the data it holds, and the systems already in place.
Enterprise vs. Government: Key Differences
Now, enterprise is very different. They can connect into open LLMs and run their internal data alongside global data to get what you'd call a baseline across industries, regions, or even worldwide.
Governments don't allow that open LLM connection. They stay inside their own environment for compliance and data security reasons.
So it's not that one is better than the other—it's just different operating models.
Enterprise moves faster, plays with flexibility, and takes on more risk. Government agencies move slower, focus on governance, but when they act, they do it at scale because they're sitting on massive, high-quality datasets.
It depends on the maturity of the organization. I know that sounds cliché, but some organizations are still running on old on-premise Oracle EBS solutions.
Those types of environments need to take a more granular approach on how to modernize their systems so that when they deploy their AI, they get the best out of what they can get.
Whereas some other organizations, and again, using Oracle as an example, they're already on Fusion. They already have cloud data storage, they've migrated their data, it all sits in the cloud, it's all formatted there. So we're either API connecting or we're building on top of that cloud layer.
The Maturity-Based Approach
So it really depends on the maturity. And I think the strategy they should look at is again: What does the business want as a result? What are you trying to achieve?
Are you trying to achieve a RAG agent, or are you trying to achieve an analytical platform?
From there, you work backwards.
If it's a RAG agent:
- It's plug and play
- It pulls data from your data sets
If it's an analytical platform:
- You need to understand how much data you have
- How valuable or structured that data is before you even start
CNTXT Solutions for Different Maturity Levels
For organizations with legacy systems:
- CNTXT Data Services: Data cataloging, quality monitoring, lineage tracking, and bilingual annotation
For cloud-ready organizations:
- CNTXT Munsit: Arabic-first RAG platform with data contracts, feedback loops, and domain test sets
- CNTXT Shipable AI: Sovereign AI deployment with BYOK, in-region data residency, and full governance
On-premise systems are software and data storage kept within an organization's own infrastructure instead of in the cloud. RAG agent (Retrieval-Augmented Generation) means the AI first pulls real data from trusted sources, then uses that information to generate accurate answers.
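To make the RAG pattern concrete, here is a deliberately tiny sketch: rank documents by word overlap with the query (a stand-in for the embedding-based vector search real platforms use), then ground the prompt in the retrieved passages. This is a generic illustration of the pattern, not a reflection of Munsit's actual implementation:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy stand-in for vector search)."""
    query_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in the retrieved passages only."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "pump maintenance is due every 90 days",
    "the cafeteria menu changes weekly",
    "maintenance logs live in the data center",
]
print(build_prompt("pump maintenance schedule", docs))
```

The retrieval step is why the answer stays "plug and play" against your own datasets: the model only generates from what was pulled, which is what keeps it from hallucinating.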
Me personally, and at CNTXT AI, I do the ROI for every agent I position for my customers.
Example: Call Center ROI
Let's say we're doing a call center. I need to look at the ROI based on your current headcount. If you want to reduce it, if you want more leads, those are the ROI points, and they tie straight to your current expenses.
From there, we start identifying:
- How many calls the agent can take
- How long it runs
- And remember, it works 24/7
We work on the ROI together with the customer because, again, a lot of them don't have the knowledge sets that I have, or that we have as a business.
They haven't seen the results we've already delivered in the market, so part of the job is showing them what that impact looks like in numbers, real ROI tied to business operations.
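As a back-of-the-envelope illustration of that kind of ROI conversation, a simple annual model ties agent savings straight to current expenses. All figures and parameter names here are hypothetical, not CNTXT's actual model:

```python
def call_center_roi(
    seats_offset: int,       # headcount the AI agent offsets (assumption)
    cost_per_seat: float,    # fully loaded annual cost per human agent
    platform_cost: float,    # annual cost of running the AI agent
) -> float:
    """Annual ROI as (savings - cost) / cost."""
    savings = seats_offset * cost_per_seat
    return (savings - platform_cost) / platform_cost

# e.g. an agent offsetting 10 seats at $40k each against a $100k platform
print(call_center_roi(10, 40_000, 100_000))  # 3.0, i.e. 300% ROI
```

A fuller model would also credit the 24/7 availability and extra call capacity the interview mentions; this sketch only captures the headcount side.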
So I think one of the most important things we're going to see now is AI becoming a lot more human interactive.
I think it's going to move from being question-based to conversational.
And again, an example is ChatGPT. Right now it's very much ask and answer. That's it.
But in the next three years or less, I think you'll see a lot more interactive and engaging AI systems that will talk to you, reply to you, and engage with you on a much better level than ChatGPT, DeepSeek, or any of these LLMs do right now.
Emerging Trends in UAE/KSA
Arabic Conversational AI:
- Dialect-aware voice assistants (Gulf, Levantine, Egyptian, Maghrebi)
- Bilingual Arabic-English agents for customer service
- Real-time translation and transcription
Sovereign AI:
- In-region hosting and BYOK (Bring Your Own Key)
- Compliance with ADGM, PDPL, NCA requirements
- Data residency and audit trails
Agentic AI:
- Autonomous agents that can take actions, not just answer questions
- Multi-step reasoning and planning
- Integration with enterprise systems (ERP, CRM, HRM)
I'd say two things.
1. Start with the Problem, Not the Demo
Don't fall into the trap of chasing big demos that look good but don't bring real business value. Start with the problem you're trying to solve, see where AI fits in, measure the value it can bring, and then build the right setup around it so it lasts. That's where you win long term.
2. Choose the Right Partner
Do your due diligence and speak to the right companies. A company that's going to sit with you, understand what the business wants, and guide you on how to actually get there.
Because without the right guidance, going and finding an analytical AI company isn't going to give you results. You'll end up with a shiny tool for ExCo, but a poor deliverable inside the business, and not even a proper AI one at that.
