
Introduction
Why the conversation feels dangerous
Three conversations that keep getting avoided
The upward conversation
The lateral conversation
The inward conversation
A framework for each conversation
For the upward conversation
For the lateral conversation
For the inward conversation
What to say when you genuinely don't know
One experiment worth trying this week
A note on the bigger pattern
About the creator
Ricky is the creator of Embracing Imperfection Academy, a digital education platform for professionals navigating perfectionism, anxiety, burnout, and life transitions.
A former Hong Kong professional now based in the UK, Ricky brings lived experience of high-pressure careers, cultural transition, and the quiet work of building a calmer life. His work is evidence-based, anti-hustle, and always grounded in the belief that calm is a competitive advantage — including in the age of AI.
Embracing Imperfection Academy offers courses, resources, and a membership community for professionals ready to navigate disruption with clarity rather than panic.
Explore our Courses
Frequently Asked Questions
Is it normal to feel anxious about AI conversations at work, even if you are senior?
Yes, and the anxiety is more common at senior level than most people admit. The discomfort is not about a lack of intelligence or capability — it is about the specific vulnerability of being seen not to know something that feels like it should already be part of your professional competence. Research on proactive motivation in the workplace consistently shows that senior professionals face an asymmetric risk calculation: admitting uncertainty feels costly, while the benefit of honesty is less immediately visible. Recognising that this discomfort is structural — built into how most organisations have handled AI — rather than personal, is usually the first and most useful step.
What if I genuinely don't know much about AI? Will that be obvious?
It is usually more obvious when someone is overclaiming than when someone is honest about where they are. Most senior professionals in most organisations are at approximately the same point — uncertain, curious, and trying to calibrate what actually matters for their specific role. The professionals who tend to lose credibility are those who confidently repeat AI talking points without being able to connect them to real work, not those who ask genuine questions. A thoughtful admission of uncertainty, framed as active engagement rather than passivity, consistently lands better than it feels like it will.
How do I avoid sounding either dismissive of AI or naively enthusiastic about it?
The most sustainable position is specificity. Instead of expressing a general view on AI — which tends to pull you toward one of the two poles you want to avoid — stay close to the actual work. Which specific tasks in your role do you think AI is relevant to? Which parts do you think are being overstated? A professional who has done this sorting work can speak with genuine authority, because they are talking about their own context rather than performing a position on a broad trend. Vague enthusiasm and vague scepticism are both forms of avoiding the specific question. Specificity is the alternative to both.
What should I say if I am asked directly about an AI tool I have never used?
Be truthful, and be active: "I haven't used that specifically, but I've been looking at [something adjacent] and my thinking on it is [your actual view]." This response does three things at once: it is honest about your specific experience, it signals that you are engaged rather than passive, and it moves the conversation toward substance. What undermines credibility is not saying you haven't used something — it is either pretending you have when you haven't, or offering no position at all. The honest frame is almost always more durable than the alternative.
How do I raise AI honestly with my manager without it reflecting badly on me?
Separate observation from conclusion. The version of this conversation that tends to land badly is one that sounds like a confession: "I'm not sure I understand this well enough." The version that tends to work is one that positions your uncertainty as discernment: "I've been thinking about how AI applies to what my team does. There are areas where I can see clear applications and areas where I'm still working out the relevance. I'd value understanding where leadership sees the priority." This signals engagement, frames your question as professional calibration rather than deficit, and invites a two-way conversation rather than an evaluation.
What if my team is further ahead with AI than I am?
This is more common than it is acknowledged, and it is more manageable than it feels. The most effective thing a senior professional can do in this situation is ask genuinely rather than perform authority. Teams respond much better to a leader who says "I'd like to understand what you've been doing with this — walk me through it" than to one who either ignores the gap or overclaims familiarity they don't have. At senior level, the most valuable AI competency is often not personal tool use but the ability to understand, guide, and govern how your team is using AI. That capability starts with honest curiosity, not pretended expertise.
Is there a risk that being honest about AI uncertainty makes me seem less competent overall?
The research suggests the opposite risk is more significant. A study by Edmondson on psychological safety in teams found that admissions of uncertainty from senior members, framed constructively, significantly increased team-level information sharing and trust. The professionals who tend to lose credibility over time are those who maintain a performance of confidence that others can see through — not those who engage honestly with what they know and don't know. The AI conversation specifically is one where most people around you are also uncertain. Being the person who models honest engagement rather than performance usually creates more trust, not less.
Want to think more clearly about AI and your career?
The Compass Letter is a fortnightly note for professionals navigating AI disruption without the panic. Each issue offers one evidence-based perspective and one practical starting point — nothing more.
Join the early access waitlist for the AI Anxiety Reset programme
Also exploring UK settlement?
Life in the UK: 20-Day Calm Sprint — for professionals preparing for UK settlement with calm confidence.
References
- Parker, S.K. & Bindl, U.K. (Eds.) (2017). Proactivity at Work: Making Things Happen in Organizations. Routledge.
- Edmondson, A.C. (2019). The Fearless Organisation: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.
