Should Organizations Publish an AI Strategy for Employees and Stakeholders?
- FT Consulting Partners
- Feb 26
- 6 min read

Research-based perspective (February 26, 2026)
Why This Question Matters Now
AI is no longer experimental. It is embedded in everyday work, customer interactions, and decisions that affect people's lives. That creates a straightforward expectation from employees, regulators, customers, and boards: tell us how you are using AI, what you will not do, and how you will keep it safe and fair.
A useful reference point is Immigration, Refugees and Citizenship Canada (IRCC), which published a departmental AI strategy that functions as both a public pledge and a practical roadmap. In plain terms, IRCC explains what AI is for, where it will be used, how risks will be managed, and how trust will be protected.
This article examines what organizations can learn from that approach, where it helps, where it can backfire, and what HR leaders can do to build a strategy stakeholders actually trust.
What IRCC Chose to Share Publicly
IRCC's strategy is not a technical document. It is a governance and trust document written for employees and the public. The key elements are worth examining.
A clear purpose for AI. IRCC positions AI as a tool to improve service delivery, efficiency, and program integrity, while protecting privacy and security. It explicitly links AI use to public benefit and trust.
A simple adoption framework with risk tiers. IRCC distinguishes between different levels of AI usage, from everyday administrative support to program operations, and signals that some higher-risk applications have "no current ambition for adoption." Employees can tell the difference between productivity support and decision influence, and a strategy should reflect that.
Plain-language principles. IRCC commits to five operating principles: human-centered and accountable, transparent and explainable, fair and equitable, secure and privacy-protecting, and valid and reliable. These are not values statements. They are testable commitments.
Operational priorities, not just values. IRCC names concrete actions including creating a centre of expertise, strengthening governance, building an AI-ready workforce, and developing an engagement strategy. That is the difference between a poster and a strategy.
Are Other Organizations Doing This?
Yes, and in several distinct patterns. The most credible organizations choose the approach that matches their risk profile and stakeholder expectations.
Public sector governance-first strategies. The Government of Canada has published an AI Strategy for the federal public service alongside a public register of AI uses across federal institutions. Canada's Algorithmic Impact Assessment (AIA) tool standardizes how institutions screen and mitigate risk in automated decision systems.
Standards-aligned frameworks. Many organizations publish strategies that map to recognized frameworks, giving stakeholders a reference point they can verify. The NIST AI Risk Management Framework defines characteristics of trustworthy AI including validity, safety, security, accountability, transparency, and bias management. The OECD AI Principles and ISO/IEC 42001 (AI management systems) are also increasingly used as baseline standards.
Private sector responsible AI commitments. Microsoft, Google, OpenAI, and Anthropic all publish AI principles, governance frameworks, or safety reports. Some commit to recurring risk reporting and external review. One important reality check: publishing commitments does not guarantee strong practice. Several high-profile companies have faced employee concern precisely when published principles did not match internal decisions.
Does Publishing a Strategy Actually Build Confidence?
It can, but only if the strategy is credible and operational. Stakeholders trust what they can verify.
Research on employee-facing AI transparency shows that it can increase trust and reduce threat perceptions, particularly when employees understand the context and purpose of the AI being used. Broader research on explainability reinforces that vague promises do not move the needle. What matters is a structured approach to defining what explainability means in practice for each use case.
The Edelman 2025 Trust Barometer found that people are more willing to adopt AI when they are informed and when they trust it. Knowledge, clarity, and governance are not optional layers. They are the foundation for adoption.
The National Institute of Standards and Technology (NIST) frames it this way: transparency supports accountability, and accountability underpins trustworthiness. That sequence matters.
Stakeholders interpret the following as confidence-building:
Clear boundaries around what AI will and will not be used for
Defined human oversight and accountability
Risk controls including testing, monitoring, and incident response
Fairness and privacy commitments backed by process
Training and AI literacy so employees can operate with confidence
Transparency mechanisms such as registers, reporting cycles, and review cadences
The Pros and Cons of Publishing
Benefits
Publishing a strategy signals that leadership is not managing AI in the dark, especially where AI touches people decisions or sensitive data. It speeds up adoption because employees know what is approved and where to get support. It forces alignment across Legal, HR, Privacy, Security, IT, and the business, which reduces fragmented decisions and unmanaged vendor risk. It strengthens employer brand in a market where trust is fragile. And it positions the organization ahead of regulatory transparency obligations that are increasing in multiple jurisdictions.
Risks
Overpromising is the most common trap. If the strategy reads like marketing and the reality does not match, trust drops quickly. Public commitments can also become evidence in disputes if practices fall short, for example in bias claims, privacy incidents, or labour relations matters. Too much operational detail can expose system vulnerabilities or competitive methods. And if the strategy is not maintained as AI evolves, it becomes outdated and employees stop believing it.
A Practical Playbook for HR Leaders
HR is well-positioned to lead the people and governance layer of any AI strategy because AI directly changes work, skills, roles, and trust.
Step 1: Build a clear inventory first. Know where AI is already in use, including tools embedded in your HRIS, ATS, learning platforms, and productivity suites. Map what decisions AI influences and what data it touches. Only publish what you can stand behind.
Step 2: Classify use by risk and impact. A simple three-tier model works. Productivity support is low risk. Operational decision support is medium risk. People-impacting decisions are high risk and require the most governance.
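To make Steps 1 and 2 concrete, here is a minimal sketch of what an inventory entry and tier assignment could look like if expressed in code. Every name here (`AIUseCase`, `classify`, the decision keywords) is illustrative rather than a reference to any real system or standard; a real inventory would live in a governance tool, not a script.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The three-tier model from Step 2."""
    LOW = "productivity support"
    MEDIUM = "operational decision support"
    HIGH = "people-impacting decisions"


@dataclass
class AIUseCase:
    """One row in the Step 1 inventory (field names are illustrative)."""
    name: str
    system: str                     # e.g. the HRIS, ATS, or productivity suite hosting it
    decisions_influenced: list[str] # what decisions the AI informs
    data_touched: list[str]         # what data it reads or produces


def classify(use_case: AIUseCase) -> RiskTier:
    """Assign the highest tier that any influenced decision triggers."""
    people_impacting = {"hiring", "termination", "promotion", "compensation"}
    operational = {"scheduling", "workload routing", "forecasting"}
    decisions = {d.lower() for d in use_case.decisions_influenced}
    if decisions & people_impacting:
        return RiskTier.HIGH
    if decisions & operational:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Hypothetical entry: a resume-ranking feature embedded in an ATS.
resume_screener = AIUseCase(
    name="Resume ranking assistant",
    system="ATS",
    decisions_influenced=["hiring"],
    data_touched=["candidate CVs", "assessment scores"],
)
print(classify(resume_screener))  # RiskTier.HIGH
```

The point of writing it down this explicitly is that the tier falls out of the decisions the tool influences, not the vendor's marketing label for it.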
Step 3: Define non-negotiables. Examples include no emotion recognition for employees, no fully automated decisions for hiring or termination, no use of private employee data for model training without explicit controls, and clear rules for generative AI in HR communications. Red lines should anticipate where regulation is heading, particularly in jurisdictions with emerging AI workplace obligations.
Step 4: Build governance people can understand. Minimum elements include who is accountable, how solutions are reviewed and approved, required testing standards, human oversight rules, and vendor and third-party requirements. Aligning to a recognized framework such as NIST's AI RMF reduces internal debate about what "good" looks like.
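Building on the sketch under Step 2, the governance minimums in Step 4 can be expressed as a checklist keyed to risk tier, with higher tiers inheriting everything required of lower ones. The requirements listed here are assumptions chosen for illustration, not a prescribed standard.

```python
# Illustrative mapping from risk tier to minimum governance requirements.
REVIEW_REQUIREMENTS: dict[RiskTier, list[str]] = {
    RiskTier.LOW: [
        "approved-tool check",
        "data handling rules acknowledged",
    ],
    RiskTier.MEDIUM: [
        "named business owner",
        "pre-deployment testing against defined standards",
        "vendor security and privacy review",
    ],
    RiskTier.HIGH: [
        "named accountable executive",
        "bias and impact assessment",
        "documented human oversight rule (no fully automated decision)",
        "incident response and employee appeal channel",
    ],
}


def approval_checklist(use_case: AIUseCase) -> list[str]:
    """Higher tiers inherit every requirement of the tiers below them."""
    tier = classify(use_case)
    order = [RiskTier.LOW, RiskTier.MEDIUM, RiskTier.HIGH]
    required: list[str] = []
    for t in order[: order.index(tier) + 1]:
        required.extend(REVIEW_REQUIREMENTS[t])
    return required
```

However the checklist is stored, the design choice that matters is the inheritance: a high-risk tool never skips the basics that apply to low-risk tools.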
Step 5: Publish a strategy that reads like a trust contract. The most credible strategies include purpose and outcomes, plain-language principles, use-case boundaries and risk tiers, a data and privacy posture, workforce commitments around training and job impact, a transparency mechanism, and a clear channel for employees to raise concerns.
Step 6: Build AI literacy as a workforce expectation. A strategy fails if employees cannot apply it. Training should cover what tools are approved, what data cannot enter public AI tools, how to validate AI outputs, when to escalate, and how AI intersects with fairness, privacy, and security. This is also becoming a compliance expectation in several regions.
Step 7: Prove it with continuous measurement. Audit samples of AI-supported HR decisions. Monitor for bias and performance drift. Track employee feedback, grievance signals, vendor performance, and incident logs. Report internally on a regular cadence and, where appropriate, share external summaries.
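As one illustration of the auditing in Step 7, below is a minimal check in the spirit of the four-fifths rule commonly used in US adverse impact screening. The sample data, threshold handling, and function names are illustrative; real bias monitoring needs proper statistical testing, larger samples, and legal input.

```python
def selection_rate(outcomes: list[bool]) -> float:
    """Share of positive outcomes (e.g. advanced to interview) in a group."""
    return sum(outcomes) / len(outcomes)


def adverse_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group selection rate to the higher one.

    The four-fifths heuristic flags a ratio below 0.8 for closer
    review; it is a screening signal, not a legal conclusion.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted((rate_a, rate_b))
    return low / high if high else 1.0


# Hypothetical audit sample of AI-assisted screening outcomes.
group_a = [True, True, False, True, False, True, True, False]    # 5/8 advanced
group_b = [True, False, False, True, False, False, True, False]  # 3/8 advanced

ratio = adverse_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Adverse impact ratio {ratio:.2f} is below 0.8: escalate for review")
```

A check this simple will not settle whether a system is fair, but run on a regular cadence it turns "monitor for bias" from a promise into a log entry someone is accountable for.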
When a Public Strategy Makes the Most Sense
Consider publishing externally if AI affects hiring, performance, scheduling, compliance, safety, or customer eligibility outcomes. The case is also strong if you operate across multiple jurisdictions, rely heavily on third-party AI tools, have a workforce already using generative AI informally, or are preparing for enterprise transformation and need employee trust as a foundation.
If you are not ready for a full public release, publish internally first, then release an external version that focuses on governance, principles, and boundaries.
FT Consulting Partners Point of View
Publishing an AI strategy is not about optics. It is a governance move that strengthens trust, speeds up adoption, and reduces risk. But it must be operational, measurable, and maintained.
The organizations that earn lasting trust are the ones that can answer four questions clearly and consistently: What are we using AI for? What are we not doing? Who is accountable? And how are we protecting people?
By: Franklina Tawiah, People Transformation Consultant, Principal
Selected References
IRCC, Artificial Intelligence Strategy (February 2026)
Treasury Board of Canada Secretariat, Canada launches first register of AI uses in federal government (November 2025)
Government of Canada, Algorithmic Impact Assessment tool (updated January 2026)
NIST, AI Risk Management Framework (AI RMF 1.0)
OECD, Recommendation of the Council on Artificial Intelligence
ISO/IEC 42001:2023, AI management systems
Yu et al. (2023), Employees' Appraisals and Trust of Artificial Intelligence (PMC)
Balasubramaniam et al. (2023), Transparency and Explainability of AI Systems (ScienceDirect)
Edelman, 2025 Trust Barometer and AI Trust Flash Poll
Microsoft, Responsible AI Principles; Google, AI Principles and Responsible AI Progress Report; OpenAI, Preparedness Framework; Anthropic, Responsible Scaling Policy v3