Apr 6 / Ricky Tam

How to Have the AI Conversation at Work Without Feeling Exposed

A vintage compass resting on soft purple fabric, symbolising finding your own direction

Introduction

There is a specific discomfort that many experienced professionals recognise but rarely name: the feeling that everyone else at the meeting seems to know more about AI than they do. That the question might be directed at them. That the wrong answer will reveal something about their relevance.

The result is often silence — or, worse, a form of performance. Nodding along. Using the language without the understanding underneath it. Volunteering enthusiasm you do not quite feel, or defaulting to scepticism as a kind of professional cover.

What is missing in most workplaces is not information about AI. It is a way to have an honest, grounded conversation about it — one that does not require you to be an expert, does not expose you to unnecessary risk, and does not compromise your professional credibility in the process.

This article is about how to have that conversation.

Why the conversation feels dangerous

The discomfort around AI conversations at work is not simply about not knowing things. It is about the particular vulnerability of being seen not to know something that feels like it should already be part of your professional competence.

Parker and Bindl (2017), in their research on proactivity at work, identify what they call proactive motivation — the internal assessment of whether speaking up is worth the perceived risk. For senior professionals, the risk calculation is often asymmetric: there is relatively little to gain from admitting uncertainty about AI, but potentially a great deal to lose from being seen as behind the curve.

This dynamic creates a silence problem. Organisations where nobody feels safe to admit confusion about AI are organisations where confusion accumulates unaddressed — and where the people who do speak up are often those with the least experience, not the most.

The first thing to understand about the AI conversation at work is that the discomfort you feel is structural, not personal. It is not evidence that you are behind. It is evidence that your organisation has not yet created a safe enough environment for honest engagement with this topic. That is useful to know, because it shifts the question from what is wrong with me? to what is actually happening here?
1 Parker, S.K. & Bindl, U.K. (2017). Proactivity at Work: Making Things Happen in Organizations. Routledge. | Edmondson, A.C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.


There are broadly three kinds of AI conversations at work that professionals consistently avoid, each for different reasons.

Three conversations that keep getting avoided

The upward conversation

Talking honestly with a manager or senior leader about where you are with AI. The risk here feels highest. Many professionals worry that admitting confusion will be read as a signal of diminishing value. In practice, the opposite is more often true: leaders who are themselves navigating AI uncertainty tend to find honest engagement from direct reports more reassuring than confident performance.

The lateral conversation

The conversation with colleagues and peers. This one is underestimated. The unspoken assumption in many teams is that everyone else has figured out more than you have. In reality, most people in most teams are at approximately the same point — uncertain, curious, and reluctant to go first. Going first in a lateral conversation, carefully, often unlocks something for the whole team.

The inward conversation

The conversation with yourself about what you actually think. This is often the most neglected. Many professionals have not yet articulated, even privately, what they believe about AI's relevance to their specific work. Without that internal clarity, the external conversation becomes reactive rather than grounded.

A framework for each conversation

For the upward conversation

The most effective approach is to separate observation from conclusion. Instead of saying I'm not sure I understand AI well enough, which invites evaluation, try something closer to: I've been thinking about how AI applies to what my team does. There are areas where I can see clear applications and areas where I'm still working out the relevance. I'd like to understand better where leadership sees the priority.

This does several things. It signals that you are engaged and thinking. It positions your uncertainty as discernment rather than ignorance. And it invites a two-way conversation rather than placing yourself in a position of being assessed.

For the lateral conversation

With colleagues, the most generative opener is often the most direct: I'll be honest — I've been trying to get my head around which of these tools is actually useful for what we do. What have you found?

This works because it models honesty without self-deprecation, and it immediately makes the conversation useful rather than performative. People almost universally respond better to genuine questions than to rehearsed positions.

For the inward conversation

The question worth sitting with is not how much do I know about AI? but which specific parts of my work do I think AI is genuinely relevant to, and which parts do I think are being overstated?

This distinction — between genuine relevance and noise — is the core of what The Sorting Method is designed to do. Not all AI anxiety is about the same thing, and not all AI tools have equal relevance to your specific role. A senior professional who has done this sorting work is in a much stronger position in any AI conversation than one who has not.

Once you have a framework for each conversation, the next practical challenge is knowing what to say when you genuinely do not know — which is where most professionals stall.

What to say when you genuinely don't know

The most common specific fear is being asked a direct question about an AI tool or capability and not knowing the answer. This is worth preparing for explicitly.

The instinct for many professionals is either to deflect — I'll look into that — or to overclaim — Yes, I'm familiar with that — and then manage the consequences privately. Neither serves you well over time.

A more durable position is what might be called the honest frame: I haven't used that specifically, but I've been looking at [adjacent thing] and my thinking on it is [actual position]. This response does three things: it is truthful, it signals active engagement rather than passivity, and it moves the conversation toward substance rather than status.

The research is consistent on this point. Edmondson's (2019) work on psychological safety in teams found that admissions of uncertainty from senior members, when framed constructively, significantly increased team-level information sharing — the opposite of what most professionals fear.

One experiment worth trying this week

If the AI conversation at work has been something you have been navigating around rather than through, a useful small action is this: in the next team meeting where AI comes up, ask one genuine question instead of offering a position.

Not what do you think about AI? — which is too broad — but something specific to the conversation already happening. Which part of the workflow do you think this would change most? Or: Has anyone actually tried using this for that task?

A genuine question, asked without performance, almost always advances the conversation further than a confident statement. And it has the advantage of being something you can do this week, without needing to have resolved everything else first.

This is the spirit of The Small Action Lab: not a plan, but an experiment. Real enough to learn something from. Small enough to start.

A note on the bigger pattern

The AI conversation at work is not primarily a conversation about technology. It is a conversation about competence, relevance, and whether you belong in the room. Those questions were already present before AI arrived — AI has simply made them harder to avoid.

That is not necessarily a bad thing. The professionals who navigate this period most effectively are not the ones who resolved their anxiety first. They are the ones who learned to keep working alongside it — asking better questions, contributing from genuine curiosity, and being honest when they do not know something.

The conversation about AI at work is, at its best, a professional development opportunity in disguise. Not because AI is exciting, but because navigating ambiguity with honesty and skill is exactly what distinguishes experienced professionals from those who simply look confident.


For a more structured way to build this kind of grounded confidence with AI tools, the AI with Calm programme is built for professionals who want a clear, calm starting point — without wading through content designed for people at a different career stage.

About the creator

Ricky is the creator of Embracing Imperfection Academy, a digital education platform for professionals navigating perfectionism, anxiety, burnout, and life transitions.

A former Hong Kong professional now based in the UK, Ricky brings lived experience of high-pressure careers, cultural transition, and the quiet work of building a calmer life. His work is evidence-based, anti-hustle, and always grounded in the belief that calm is a competitive advantage — including in the age of AI.

Embracing Imperfection Academy offers courses, resources, and a membership community for professionals ready to navigate disruption with clarity rather than panic.



Frequently Asked Questions

Is it normal to feel anxious about AI conversations at work, even if you are senior?

Yes, and the anxiety is more common at senior level than most people admit. The discomfort is not about a lack of intelligence or capability — it is about the specific vulnerability of being seen not to know something that feels like it should already be part of your professional competence. Research on proactive motivation in the workplace consistently shows that senior professionals face an asymmetric risk calculation: admitting uncertainty feels costly, while the benefit of honesty is less immediately visible. Recognising that this discomfort is structural — built into how most organisations have handled AI — rather than personal, is usually the first and most useful step.

What if I genuinely don't know much about AI? Will that be obvious?

It is usually more obvious when someone is overclaiming than when someone is honest about where they are. Most senior professionals in most organisations are at approximately the same point — uncertain, curious, and trying to calibrate what actually matters for their specific role. The professionals who tend to lose credibility are those who confidently repeat AI talking points without being able to connect them to real work, not those who ask genuine questions. A thoughtful admission of uncertainty, framed as active engagement rather than passivity, consistently lands better than it feels like it will.

How do I avoid sounding either dismissive of AI or naively enthusiastic about it?

The most sustainable position is specificity. Instead of expressing a general view on AI — which tends to pull you toward one of the two poles you want to avoid — stay close to the actual work. Which specific tasks in your role do you think AI is relevant to? Which parts do you think are being overstated? A professional who has done this sorting work can speak with genuine authority, because they are talking about their own context rather than performing a position on a broad trend. Vague enthusiasm and vague scepticism are both forms of avoiding the specific question. Specificity is the alternative to both.

What should I say if I am asked directly about an AI tool I have never used?

Be truthful, and be active. I haven't used that specifically, but I've been looking at [something adjacent] and my thinking on it is [your actual view]. This response does three things at once: it is honest about your specific experience, it signals that you are engaged rather than passive, and it moves the conversation toward substance. What undermines credibility is not saying you haven't used something — it is either pretending you have when you haven't, or offering no position at all. The honest frame is almost always more durable than the alternative.

How do I raise AI honestly with my manager without it reflecting badly on me?

Separate observation from conclusion. The version of this conversation that tends to land badly is one that sounds like a confession — I'm not sure I understand this well enough. The version that tends to work is one that positions your uncertainty as discernment: I've been thinking about how AI applies to what my team does. There are areas where I can see clear applications and areas where I'm still working out the relevance. I'd value understanding where leadership sees the priority. This signals engagement, frames your question as professional calibration rather than deficit, and invites a two-way conversation rather than an evaluation.

What if my team is further ahead with AI than I am?

This is more common than it is acknowledged, and it is more manageable than it feels. The most effective thing a senior professional can do in this situation is ask genuinely rather than perform authority. Teams respond much better to a leader who says I'd like to understand what you've been doing with this — walk me through it than to one who either ignores the gap or overclaims familiarity they don't have. At senior level, the most valuable AI competency is often not personal tool use but the ability to understand, guide, and govern how your team is using AI. That capability starts with honest curiosity, not pretended expertise.

Is there a risk that being honest about AI uncertainty makes me seem less competent overall?

The research suggests the opposite risk is more significant. Edmondson's work on psychological safety in teams found that admissions of uncertainty from senior members, framed constructively, significantly increased team-level information sharing and trust. The professionals who tend to lose credibility over time are those who maintain a performance of confidence that others can see through — not those who engage honestly with what they know and don't know. The AI conversation specifically is one where most people around you are also uncertain. Being the person who models honest engagement rather than performance usually creates more trust, not less.

Want to think more clearly about AI and your career?

The Compass Letter is a fortnightly note for professionals navigating AI disruption without the panic. Each issue offers one evidence-based perspective and one practical starting point — nothing more.

Join the early access waitlist for the AI Anxiety Reset programme

Also exploring UK settlement?

Life in the UK: 20-Day Calm Sprint — for professionals preparing for UK settlement with calm confidence.

References

  • Parker, S.K. & Bindl, U.K. (2017). Proactivity at Work: Making Things Happen in Organizations. Routledge.
  • Edmondson, A.C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.