
A few years ago, someone asked me what I thought about AI. I gave the kind of answer most professionals give when they haven’t really thought about something: vague, mildly optimistic, and carefully non-committal. I knew the word. I had read a few headlines. I had a general sense that something significant was happening somewhere, involving people who understood things I didn’t.
I did not think it had much to do with me.
That was a mistake. Not a catastrophic one — I course-corrected eventually. But looking back, I can see exactly what was happening: I was doing what many experienced professionals do when a new technology arrives. I was waiting for it to become relevant before engaging with it. What I didn’t understand was that by the time something feels relevant, you are already behind.
My path into AI was not direct. It came at the end of a longer journey through project management, IT service management, and business analytics. After years working in change management and ITSM — earning ITIL4 certifications, a PRINCE2 Practitioner, and eventually an MBA in Business Analytics — I found myself increasingly drawn to the question of what data actually means for how organizations make decisions.
That curiosity led me to courses in Data Analysis and Business Analysis. I learned to work with Excel at a depth I never had before, then Python, SQL, Tableau, and Power BI. Each tool was useful in itself, but the bigger shift was in how I started thinking. Data analysis changes your relationship with assumptions. You stop accepting impressions as evidence. You start asking: what do the numbers actually say?
From there, AI felt less like a leap and more like a natural continuation. The tools had changed. The underlying questions — how do we make sense of complex information, how do we make better decisions, how do we manage change in organizations — were the same ones I had been asking for years.
The first surprise was how accessible the conceptual side of AI actually is. I had assumed that understanding AI required a mathematics degree or years of programming experience. That assumption was wrong. The core ideas — what machine learning is trying to do, how large language models work at a high level, what AI is good at and where it fails — are learnable by any intelligent adult willing to invest serious time.
The second surprise was more uncomfortable: how quickly AI is changing what competence looks like in almost every professional field. This is not a gradual shift. It is happening fast enough that skills that were valuable two years ago are being re-evaluated, and roles that didn’t exist eighteen months ago are now appearing in job listings.
The question is no longer whether AI will affect your work. It is whether you will understand it well enough to shape how it does.
The third surprise — and the most important one — was how much my existing background turned out to matter. Not despite being non-technical, but because of it.
If you had told me that years of studying IT service management frameworks would help me navigate the AI transition, I would have been skeptical. ITIL4 is, on the surface, a framework for managing IT services — incident management, change enablement, service design. Not exactly the obvious preparation for thinking about large language models.
But working through ITIL4 — and particularly through certifications like Direct, Plan and Improve and Digital & IT Strategy — I internalized something that turns out to be essential for understanding AI adoption: technology transformations almost always fail for human reasons, not technical ones.
ITIL4’s guiding principles — focus on value, start where you are, progress iteratively with feedback, think and work holistically — are not principles about technology. They are principles about how change actually works in real organizations with real people. I had spent years applying them to ERP implementations and IT service transitions. Now I was watching organizations struggle with AI adoption, and the failure patterns were identical.
Resistance to change. Lack of clarity about what value the new capability was supposed to deliver. Attempts to automate broken processes rather than fix them first. A tendency to treat the technology as the solution rather than as a tool in service of a solution.
The people problem is always bigger than the technology problem. This was true for SAP S/4HANA implementations. It is true for AI. And understanding this — really understanding it, not just as a phrase but as something you have seen play out repeatedly — is one of the most valuable things a professional can bring to an organization trying to figure out what to do with AI.
I understand the hesitation. Learning something genuinely new takes time, and time is the one resource most working professionals feel they do not have enough of. There is also a particular kind of discomfort in being a beginner again after years of hard-won expertise. It can feel undignified. You know so much in your field, and suddenly you are back to not understanding basic terms.
I have felt all of this. I still feel it sometimes. But I have learned to read that discomfort differently. It is not a sign that I am in the wrong place. It is a sign that I am learning something real.
The alternative — waiting until AI feels more settled, more finished, more approachable — is not actually safer. It is just a different kind of risk, one that compounds quietly over time. The people and organizations that will navigate the AI transition best are not the ones who waited for certainty. They are the ones who started engaging while things were still uncertain, when there was still space to learn through experimentation rather than crisis.
If you are a professional in management, change, operations, analysis, or any other non-technical field, here is what I would suggest:
Start with concepts, not tools. Before you worry about which AI tool to use, understand what AI is actually doing. What is a language model? What are hallucinations and why do they happen? A solid conceptual foundation makes every practical step easier.
Use AI daily. There is no substitute for working with these tools directly. Ask an AI assistant to help you with something real — a document, a problem, a decision. Notice where it helps and where it fails. That practical familiarity teaches you things no course can.
Connect AI to what you already know. The most useful frame for a non-engineer is not ‘how does this technology work?’ but ‘what does this mean for the work I already do?’ If you have a background in change management, process design, stakeholder engagement — those skills are not obsolete. They are the bridge.
Accept being a beginner again. This is harder than it sounds after years of expertise. But the willingness to not-know for a while is what makes eventual understanding possible. I had to relearn this. Most adults do.
I started my self-development journey in a glass factory, motivated by refusal. I am continuing it now, motivated by curiosity. The circumstances are different. The underlying impulse is exactly the same: the world does not stand still, and neither should I.
If the most important shift in your professional world is happening right now — and it is — the question worth sitting with is this:
What is your plan for understanding it — and when does that plan start?
Not tomorrow. Not when things settle down.
The factory is still running. The question is whether you are the one standing at the machine, or the one who decided to do something about it.