There is a quiet risk spreading through computer screens and workplaces. It usually arrives disguised as usefulness and productivity.
A soulless but perfectly polished proposal or call summary lands in seconds. A strategy draft appears before the meeting starts. An email exchange synthesised into coherent-sounding notes. The output is efficient, fluent, fast, and often good enough to keep things moving.
That is exactly why we need to pay attention!
One of the biggest risks of AI in today’s work is not only that it can be wrong. It is that we risk handing over too much of our thinking in the name of speed. We simply stop wrestling with the question because the answer arrives already formed. We skip the important pause in which doubt and human judgment normally enter.
We all agree that AI's power is not in question. It can help us scan information, detect patterns, generate options, and get to a first draft far more quickly than most of us can do on our own. But the real question is not whether AI is useful. It is which parts of our thinking we are becoming willing to surrender.
This is where cognitive sovereignty matters…
Put simply, cognitive sovereignty is the ability to remain in charge of your own cognitive workload. It means keeping hold of your attention, your judgment, your memory, your sense of what is credible, and your capacity to decide what you think, rather than drifting into whatever the machine produces first. It is not about rejecting AI. It is about staying the author of your own reasoning while using it.
That matters because fluent language is not the same as understanding, sensing, mattering or having skin in the game.
AI does not “know”. It predicts likely answers. Sometimes those answers are useful. Sometimes they are shallow, biased, or simply false. And because they are delivered with confidence and polish, they can be accepted too quickly. The danger is not only misinformation. It is the gradual weakening of your discernment.
Humans are biased too, of course. But careless AI use does not remove bias. It layers machine-generated distortions on top of human assumptions, then wraps the result in language that sounds finished. What looks like clarity can often be compression. What looks like insight can simply be plausibility.
This is why, in this new AI era, critical thinking is no longer a soft skill sitting politely at the edge of work. It is becoming the core infrastructure of our human operating system.
The problem is that most organisations reward speed more visibly than judgment. Fast output is easy to measure. Discernment and ethics are harder to see and appreciate. Yet the most important forms of thinking often happen at a different tempo. They require hesitation, comparison, context, and sometimes the courage to say: “This sounds right, but I am not convinced yet.”
That is why we need intentional friction in how we work with AI.
- A pause before accepting the draft.
- A question about what sources sit underneath a polished output.
- A check for what perspective is missing.
- A challenge to the framing, not just the conclusion.
- A moment to ask whose interests, assumptions, or worldview are quietly built into the answer.
This matters especially for leaders. AI can help leaders make sense of complexity, model scenarios, and surface options. But it cannot decide what matters most. It cannot judge which trade-offs are ethical, which risks are acceptable, or what kind of culture these decisions will create over time. Those are not computational questions. They are human ones…
Used well, AI can strengthen thinking. It can test assumptions, widen perspective, expose blind spots, and offer counterarguments we might not have considered. Used poorly, it becomes a cognitive crutch: a way to avoid the slower work of reflection while maintaining the appearance of intelligence.
That is why critical thinking with AI is not just about fact-checking after the output appears. It is about preserving the conditions for independent thought in the first place.
Because in a world full of synthetic fluency, the real advantage is not speed alone. It is the ability to remain discerning. To notice what you are accepting. To question what you are repeating. To stay awake to what is being shaped in you, while you are busy shaping work around you.
That is cognitive sovereignty.
Not resisting AI.
Not worshipping it.
But using it without giving away the inner ground of judgment.
Sabryna Alsfasser
Business Leader & Facilitator, Hyper Island
Want to go deeper?
Watch the webinar recording on Future Foresight & Critical Thinking with AI with Tim Lucas and Sabryna Alsfasser. In the session, they explore how foresight and critical thinking can help leaders make better decisions in complexity, and how AI can be used to widen perspective without replacing human judgment.