Preparing Your Organisation for AI: Skills Development, Shadow AI Management, and Change Leadership for Tech Teams

Business | SaaS | Technology
Jan 1, 2026

AUTHOR

James A. Wondrasek

You’ve probably got AI initiatives running in your tech team right now. Maybe a few pilots, some experimentation with coding assistants, perhaps a chatbot proof-of-concept. But if you’re like 66% of companies, you’re stuck there—unable to move beyond the experimentation phase.

Here’s what’s actually happening: 70% of AI projects fail, and it’s not the technology that’s the problem. It’s people and processes. Skills gaps. Shadow AI creating security risks. Organisational silos blocking adoption.

And there’s another problem lurking underneath: 90% of executives don’t completely understand their team’s AI skills. Meanwhile, 39% of employees are using unauthorised free AI tools, and 52% are actively hiding their usage from leadership.

This article gives you a structured approach to fix those problems. You’ll get a readiness assessment framework, role-specific skills development plans with 3/6/12 month milestones, shadow AI governance strategies, and change management tactics that actually work for technical teams. This organisational readiness work forms a critical foundation for any strategic AI adoption decision.

What Is Organisational Readiness for AI Adoption?

Organisational readiness is your capability to adopt, implement, and scale AI across people, processes, data, infrastructure, and governance. It’s not just checking if you have the right compute resources. It’s about workforce skills, cultural alignment, leadership support, and your capacity to manage change.

Why does this matter? Because less than 10% of companies are truly AI-ready across all these dimensions. And 57% of organisations estimate their data isn’t AI-ready, which is a significant barrier before you even get to skills and culture.

Readiness assessment has six core elements:

Strategy alignment: Does your AI vision actually connect to business goals? Are executives genuinely committed beyond the initial enthusiasm phase?

Data quality: Can your data actually support AI workloads? Only 12% of organisations have sufficient data quality for AI.

Infrastructure evaluation: Do you have the compute, integration capabilities, and security controls to run AI at scale?

Talent capabilities: Does your team have the skills needed? Can they learn what’s missing?

Governance policies: Do you have approval pathways, risk frameworks, and compliance processes?

Cultural readiness: Does your organisation encourage experimentation, maintain psychological safety, and support cross-functional collaboration?

For SMB tech companies (50-500 employees), the common readiness gaps are resource constraints, competing priorities, and skill concentration. You might have one or two people who understand AI deeply, but that knowledge hasn’t spread across engineering, product, security, and executive teams.

Frameworks like Deloitte’s aiRMF, Microsoft’s Cloud Adoption Framework, and the CDAO maturity model provide structured templates for consistent assessment.

Why Do 70% of AI Projects Fail Due to People and Process Issues?

Projects fail because organisations focus on technology selection whilst neglecting workforce skills, change management, and cultural readiness. You pick a model, spin up infrastructure, and assume people will figure it out. They don’t.

The numbers tell the story: 75% of organisations pause AI projects due to skills gaps. 34% cite culture as a barrier. 30% struggle with organisational silos. And only 23% of leaders report well-developed AI skills across teams.

Here’s what’s causing the failures:

Insufficient training: Your team doesn’t have the skills to use AI effectively. Integration challenges languish in backlogs whilst everyone focuses on improving accuracy metrics.

Organisational silos: Product, infrastructure, data, and compliance teams operate independently without shared success metrics or coordinated timelines.

Resistance to change: You build sophisticated systems that fail silently when front-line workers distrust outputs.

Inadequate executive sponsorship: Leadership approves budgets but doesn’t actively model AI adoption or address resistance.

The people issues run deep. Fear of job displacement. Lack of psychological safety to experiment and fail. Inadequate communication about AI strategy and benefits.

The projects that succeed? They address readiness gaps before major investment. So let's start with assessment.

How to Assess Your Organisation’s AI Readiness

Start with a multi-dimensional evaluation across strategy, data, infrastructure, workforce, governance, and culture. Use proven assessment tools rather than making up your own scoring system.

Deloitte’s aiRMF covers 10 capability areas with detailed rubrics. Microsoft’s Cloud Adoption Framework provides cloud-specific guidance if you’re running on Azure. The CDAO maturity model offers benchmarking against other organisations.

For SMB tech companies, pick one framework and commit to it. Don’t spend months evaluating assessment tools. Choose based on your existing stack and get started.

Here’s the process:

Preparation: Identify who needs to be involved—engineering leads, product managers, security, executives. Block time on calendars.

Data collection: Run skills gap analysis comparing current competencies to required capabilities. Evaluate infrastructure capacity—data quality, compute resources, integration capabilities, security controls. Measure cultural readiness through surveys.

Analysis: Score each dimension using your chosen framework’s rubric. This gives you a concrete baseline.

Gap identification: Prioritise gaps based on impact and effort. Some gaps block everything else.

Roadmap creation: Set 3/6/12 month milestones for addressing readiness barriers.

The output of this assessment is an actionable roadmap, not a report that sits on a shelf. Once you understand your readiness gaps, you can factor organisational readiness into your strategic AI decision about which approach best fits your team’s capabilities.
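To make the analysis and gap-identification steps concrete, here is a minimal sketch of how scored dimensions and prioritised gaps might be represented. The six dimension names come from the readiness elements above; the 1-5 scales and the gap-per-effort ranking are illustrative assumptions, not any named framework's actual rubric.

```python
from dataclasses import dataclass

@dataclass
class DimensionScore:
    name: str
    score: int   # current maturity, 1 (ad hoc) to 5 (optimised) -- assumed scale
    target: int  # maturity required for your first production use case
    effort: int  # estimated effort to close the gap, 1 (low) to 5 (high)

    @property
    def gap(self) -> int:
        return max(self.target - self.score, 0)

def prioritise(dimensions: list[DimensionScore]) -> list[DimensionScore]:
    """Rank open gaps by impact (gap size) per unit of effort, largest first."""
    return sorted(
        (d for d in dimensions if d.gap > 0),
        key=lambda d: d.gap / d.effort,
        reverse=True,
    )

# Example scores only; replace with your framework's rubric output.
assessment = [
    DimensionScore("strategy", 3, 4, 2),
    DimensionScore("data quality", 2, 4, 5),
    DimensionScore("infrastructure", 3, 3, 1),
    DimensionScore("talent", 2, 4, 3),
    DimensionScore("governance", 1, 3, 2),
    DimensionScore("culture", 3, 4, 3),
]

for d in prioritise(assessment):
    print(f"{d.name}: gap {d.gap}, effort {d.effort}")
```

Ranking by gap divided by effort surfaces the cheap, high-impact fixes first, which is usually where resource-constrained SMB teams should start.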

How to Build Role-Specific AI Learning Paths for Engineers, PMs, Security, and Executives

Design differentiated curricula targeting distinct competencies for each role. Engineers need prompt engineering and integration skills. PMs need use case identification. Security teams need risk assessment frameworks. Executives need strategic vision and board-level communication.

Apply the 70/20/10 rule: 70% hands-on project work, 20% peer learning and mentoring, 10% formal training courses. This matters because less than one third of organisations spend AI budget on hands-on labs despite evidence that practical experience drives mastery.
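As a quick arithmetic check on what that ratio means for planning, here's a small sketch. The 48-hour budget is an example figure, not a recommendation.

```python
def split_70_20_10(total_hours: float) -> dict[str, float]:
    """Allocate a per-person training budget across the 70/20/10 mix."""
    return {
        "hands-on project work": total_hours * 0.70,
        "peer learning and mentoring": total_hours * 0.20,
        "formal training courses": total_hours * 0.10,
    }

# Example: 4 hours/week over a 12-week quarter = 48 hours per person.
print(split_70_20_10(48))
# {'hands-on project work': 33.6, 'peer learning and mentoring': 9.6,
#  'formal training courses': 4.8}
```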

3-month milestone: Establish AI literacy baseline across all roles. Engineers complete foundational prompt engineering. PMs practice use case evaluation. Security learns risk assessment. Executives develop strategic planning skills.

6-month milestone: Engineers integrate production AI tools into workflows. PMs build AI-enhanced product roadmaps with success metrics. Security implements governance policies. Executives lead organisational change and communicate AI strategy.

12-month milestone: Engineers optimise AI systems for performance and cost. PMs own AI product features end-to-end. Security integrates AI risk management into operations. Executives articulate AI strategy at board level.

Here’s how to structure this:

For engineers: Start with Microsoft’s free training resources. Run internal prompt engineering workshops. Assign AI-assisted development tasks using tools like GitHub Copilot. As your engineers build technical competence, they’ll need to understand the specific models your organisation has chosen and prepare for RAG and fine-tuning implementations.

For product managers: Focus on use case identification connecting AI capabilities to business problems. Create project-based training with real-world case studies.

For security teams: Build practical labs for evaluating AI tool security and privacy. Develop risk-based governance frameworks. Security expertise becomes critical when you implement enterprise AI governance frameworks.

For executives: Connect AI investments to financial outcomes. Build skills for leading teams through change.
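One way to keep these role-specific paths honest is to track the 3/6/12 month matrix as data rather than a slide. A sketch, with milestone wording paraphrased from the plan above; the structure itself is an assumption, not a prescribed format.

```python
# Milestone matrix from the 3/6/12 month plan, expressed as data so
# progress can be checked per role at each checkpoint month.
learning_paths: dict[str, dict[int, str]] = {
    "engineer": {
        3: "foundational prompt engineering",
        6: "production AI tools integrated into workflows",
        12: "AI systems optimised for performance and cost",
    },
    "product_manager": {
        3: "use case evaluation practice",
        6: "AI-enhanced product roadmap with success metrics",
        12: "AI product features owned end-to-end",
    },
    "security": {
        3: "risk assessment fundamentals",
        6: "governance policies implemented",
        12: "AI risk management integrated into operations",
    },
    "executive": {
        3: "strategic planning skills",
        6: "leading organisational change",
        12: "board-level AI strategy",
    },
}

def milestones_due(month: int) -> dict[str, str]:
    """Return each role's milestone for a given checkpoint month."""
    return {role: plan[month] for role, plan in learning_paths.items()}

print(milestones_due(6))
```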

The stats should convince you: 58% of organisations build AI skill development costs into initial budgets. Don’t be in the 42% who struggle with reactive planning. When planning your budget, make sure to calculate training costs and talent acquisition expenses alongside infrastructure and licensing.

Training programme options for SMBs: Udemy Business, Pluralsight, 360Learning. The specific vendor matters less than consistent application of the 70/20/10 rule.

Recognise employees who build AI agents as champions who can coach others. Create internal communities of practice. Make learning visible through weekly showcases.

What Is Shadow AI and How Does It Create Security Risks?

Shadow AI is the unauthorised use of artificial intelligence tools by employees outside formal IT oversight and approval processes. 39% of employees use free AI tools at work, 17% use personally paid tools, and 52% actively hide usage from leadership.

The security risks are real: 77% of employees paste data into GenAI prompts, 82% from unmanaged accounts. Sensitive data sharing increased from 10% to over 25% in one year.

Here’s what that looks like in practice:

GDPR/CCPA violations: Employees paste customer personal data into ChatGPT. That data now lives outside your control.

IP leakage: Developers paste proprietary code into Claude for debugging. Your competitive advantage now lives under the provider's retention and training terms, not yours.

Compliance failures: Finance teams use Gemini to analyse sensitive financial projections. You’ve just created an audit trail gap in a regulated environment.

Here’s the paradox: employees use shadow AI to boost productivity and innovation. The usage indicates genuine business need that formal programmes should address, not just ban.

How to Detect and Manage Shadow AI Usage

Detection methods include DNS monitoring for AI service queries, web proxy analysis, DLP tools that track data transfers, and application integration audits. Varonis and similar tools offer network monitoring capabilities.
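If you can export resolver or proxy logs, a first pass at DNS-based detection can be simple. A sketch, assuming a CSV export with `client` and `query` columns; adjust the column names and the domain watch list to whatever your environment actually emits.

```python
import csv

# Illustrative watch list; extend with the services your teams actually use.
AI_SERVICE_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "api.openai.com", "api.anthropic.com",
}

def flag_ai_queries(dns_log_path: str) -> dict[str, int]:
    """Count DNS queries to known AI services, grouped by source host.

    Assumes a CSV export with 'client' and 'query' columns (an assumption,
    not a standard format).
    """
    counts: dict[str, int] = {}
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            query = row["query"].rstrip(".").lower()
            if any(query == d or query.endswith("." + d) for d in AI_SERVICE_DOMAINS):
                counts[row["client"]] = counts.get(row["client"], 0) + 1
    return counts
```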

But detection is just the first step. You need an education-first approach over enforcement.

Explain security risks: Show teams the data exposure statistics. Make it concrete with examples relevant to your business.

Provide approved alternatives: Offer tools with comparable capabilities that enable safe use. If people are using ChatGPT for code assistance, provide GitHub Copilot or similar approved tools. Make approved tools accessible—fast approval, simple setup, usage guidelines.

Create decision frameworks: Build simple risk assessment tools employees can use to evaluate whether a new AI tool is appropriate (a toy example follows this list). Risk-based policies work better than blanket bans: strict controls for high-risk usage, lighter touch for low-risk experimentation.

Implement BYOA policies: The IEEE Computer Society proposes transparency-based “Bring Your Own AI” approaches emphasising risk assessment over restriction. Employees disclose AI tool usage, IT assesses risk, appropriate controls are applied.

Deploy sandbox environments: Create isolated infrastructure with governed AI tools where teams can experiment safely. Provide approved tool catalogues and experimentation guidelines.
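The decision framework mentioned above can start as something this small. A toy triage sketch: the three questions and the thresholds are assumptions, and a real policy would cover more dimensions (data residency, vendor certifications, output use).

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # internal productivity, no sensitive data
    MEDIUM = "medium"  # internal data, reversible decisions
    HIGH = "high"      # customer data, financial or compliance impact

def assess_tool(handles_customer_data: bool,
                handles_regulated_data: bool,
                vendor_trains_on_inputs: bool) -> Risk:
    """Toy risk triage for a proposed AI tool; thresholds are illustrative."""
    if handles_regulated_data or (handles_customer_data and vendor_trains_on_inputs):
        return Risk.HIGH
    if handles_customer_data or vendor_trains_on_inputs:
        return Risk.MEDIUM
    return Risk.LOW

# A code assistant that never sees customer data, but whose vendor may
# train on prompts:
print(assess_tool(False, False, True))  # Risk.MEDIUM -> approve with guardrails
```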

The cultural shift matters more than technical controls. 52% hide AI usage when policies are too restrictive. Your goal is to reduce that percentage by creating safer approved pathways.

How to Create Cross-Functional AI Teams to Break Down Organisational Silos

Assemble multidisciplinary groups combining technical expertise (engineers, data scientists) with business domain knowledge from PMs, compliance, security, and HR. This addresses the 30% silo barrier by ensuring AI solutions meet business needs whilst maintaining technical feasibility and regulatory compliance.

Core team composition: Engineers (2-3 people), PM (1 person), Data scientist (1 person). These are your full-time or majority-time contributors.

Extended members: Security specialist, compliance officer, business analyst. These folks contribute 10-20% time for reviews, approvals, and domain expertise.

Executive sponsor: One executive who actively supports the team, removes blockers, and advocates for AI initiatives in leadership meetings.

Operating model options:

For companies under 200 employees: AI Centre of Excellence works well. Centralise expertise, maintain consistent standards.

For 200-500 employees: Distributed AI champions prove more effective. Faster innovation, business-aligned solutions, local ownership.

Hybrid models: CoE sets standards and governance, champions drive local adoption. Many organisations adopt this as they grow beyond 200 employees.

Silo-breaking tactics:

Establish shared objectives and joint KPIs across functions. When engineers and PMs share success metrics, they collaborate differently.

Create physical or virtual co-location. Regular synchronisation cadences—daily standups, weekly planning, monthly reviews.

Run cross-training sessions. Engineers learn about business constraints. PMs understand technical limitations.

Foster communities and networks internally to combat silo effects. Spotify’s “AI Guild” provides a good model—employees across departments share lessons and discuss projects.

How to Scale from AI Experiments to Production Deployments

Begin with quick wins: select high-impact, low-complexity use cases for early success and momentum building. Not every AI project needs to be transformative. Some can just save time and demonstrate value.

Define success metrics before scaling: Establish baseline measurements. Identify direct value—cost reduction, time savings—and indirect value like employee satisfaction and organisational agility. Without baseline metrics, you can’t prove impact.

Leadership support matters: Executives must use AI tools themselves, share examples publicly, and normalise experimentation and learning from failures.

Implement incremental rollout: Gradual expansion beats big bang deployment. BBVA scaled adoption from 3,000 to 11,000 ChatGPT licences by building a champion network, achieving 83% weekly AI use in 5 months.

Address the 66% stuck in experimentation: Define scaling criteria. Allocate resources specifically for production. Ensure infrastructure readiness. Implement change management for organisation-wide adoption.

Scaling readiness checklist:

Infrastructure capacity: Can your systems handle production load?

Skills coverage: Do enough people understand how to use AI tools effectively?

Governance policies: Are approval pathways, risk frameworks, and compliance processes defined?

Change management preparation: Have you communicated plans and addressed resistance?

Success metrics: Can you measure and report value delivery?

Production deployment phases: Limited pilot (one team, 10-20 people), expanded pilot (multiple teams, 50-100 people), department rollout (entire functional area), organisation-wide deployment.
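Between phases, a simple gate over the readiness checklist keeps expansion honest. A sketch: the item wording paraphrases the checklist above, and the all-items-must-pass rule is an assumption you can relax.

```python
# Go/no-go gate over the scaling readiness checklist above.
SCALING_CHECKLIST = [
    "infrastructure handles production load",
    "skills coverage is sufficient",
    "governance policies are defined",
    "change management plan is communicated",
    "success metrics are measurable",
]

def ready_to_expand(status: dict[str, bool]) -> bool:
    """Only advance to the next deployment phase when every item passes."""
    blockers = [item for item in SCALING_CHECKLIST if not status.get(item, False)]
    for item in blockers:
        print(f"blocker: {item}")
    return not blockers
```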

Timeline expectations: Scaling requires sustained commitment. Quick wins are possible in 3 months but sustainable transformation takes 18-36 months.

Create rituals that sustain learning: weekly showcases, short hackathons, “use case of the week” posts. Recognise and reward teams that create value through AI. Connect results to professional growth and promotion criteria.

What Change Management Frameworks Work Best for AI Adoption?

The ADKAR model from Prosci provides a structured five-stage approach: Awareness of the need, Desire to participate, Knowledge of how to change, Ability to implement, and Reinforcement to sustain.

Here’s how ADKAR applies to AI adoption:

Awareness: Articulate why AI matters for your business. Connect to competitive pressures, customer expectations, and operational efficiency. Make the business case without being alarmist.

Desire: Demonstrate benefits to individual employees. Address job security concerns directly—show how AI skills lead to career advancement, not displacement. Share success stories from early adopters.

Knowledge: Provide role-specific training using the 70/20/10 model. This stage connects directly to your learning paths for engineers, PMs, security, and executives.

Ability: Move beyond theory to hands-on practice. Real projects using approved AI tools. Sandbox environments for safe experimentation. Peer support from AI champions.

Reinforcement: Celebrate wins publicly. Embed AI skills in promotion criteria. Continue learning programmes beyond initial training. Make AI adoption part of normal operations, not a special initiative.

For most SMB tech companies, stick with ADKAR. It’s well-documented, widely used, and integrates cleanly with skills development programmes.

Success factor: address both emotional and practical aspects simultaneously. Technology adoption fails when focusing only on technical training.

How to Measure ROI and Business Value from AI Initiatives

Establish a baseline before AI implementation: document current process times, error rates, costs, and revenue metrics for comparison. This baseline makes everything else measurable.

Track direct value: Cost reduction, revenue generation, time savings, error reduction. These are your lagging indicators—they measure outcomes.

Measure indirect value: Employee satisfaction scores, customer experience metrics, organisational agility, innovation acceleration.

Apply AI-specific ROI formulas:

Basic ROI = (Benefits – Costs) / Costs × 100

Productivity Enhancement = (Hours Saved × Hourly Value) / Costs × 100
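Both formulas translate directly into code. A sketch with illustrative numbers only:

```python
def basic_roi(benefits: float, costs: float) -> float:
    """Basic ROI = (Benefits - Costs) / Costs x 100."""
    return (benefits - costs) / costs * 100

def productivity_roi(hours_saved: float, hourly_value: float, costs: float) -> float:
    """Productivity Enhancement = (Hours Saved x Hourly Value) / Costs x 100."""
    return hours_saved * hourly_value / costs * 100

# Illustrative only: a $30k programme saving 1,200 hours valued at $80/hour.
print(basic_roi(benefits=1200 * 80, costs=30_000))  # 220.0
print(productivity_roi(1200, 80, 30_000))           # 320.0
```

The gap between the two figures is the point: the productivity formula measures gross return on the hours saved, while basic ROI nets out costs first.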

Leading indicators (predict success): AI literacy rate, skills coverage percentage by role, training completion at 3/6/12 month milestones, approved tool adoption velocity, shadow AI disclosure rate improvement.

Lagging indicators (measure outcomes): Production deployment success rate, time from pilot to production, business KPIs (cost reduction, revenue impact, efficiency gains), employee satisfaction, customer experience metrics.

Build dashboards showing both types. Report leading indicators monthly. Report lagging indicators quarterly.
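A metric registry can encode that cadence so monthly and quarterly reports pull from one source. A minimal sketch, with example metric names drawn from the indicator lists above:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    kind: str     # "leading" (predicts success) or "lagging" (measures outcomes)
    cadence: str  # "monthly" or "quarterly"

METRICS = [
    Metric("AI literacy rate", "leading", "monthly"),
    Metric("skills coverage by role", "leading", "monthly"),
    Metric("approved tool adoption velocity", "leading", "monthly"),
    Metric("time from pilot to production", "lagging", "quarterly"),
    Metric("cost reduction vs baseline", "lagging", "quarterly"),
]

def report_due(cadence: str) -> list[str]:
    """Names of metrics to include in a report for the given cadence."""
    return [m.name for m in METRICS if m.cadence == cadence]

print(report_due("monthly"))
```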

Track readiness metrics (assessment scores, gap closure), skills metrics (training completion, certification achievement), change metrics (employee sentiment, adoption velocity), scaling metrics (pilots launched, production deployments), and business value metrics (cost savings, revenue impact, efficiency gains).

Report to stakeholders using executive summary format. Connect AI investments to business outcomes leadership cares about.

FAQ Section

What should a new CTO do first when preparing their organisation for AI?

Conduct a readiness assessment using a structured framework (Deloitte aiRMF, Microsoft Cloud Adoption Framework) to identify capability gaps before major investment. Prioritise skills gap analysis and cultural readiness evaluation, because these human factors cause 70% of project failures. Establish baseline metrics and create a 3/6/12 month roadmap addressing the most pressing gaps first.

How do I convince leadership to invest in AI skills training when budgets are tight?

Build a business case using the stats: 75% of organisations pause AI projects due to skills gaps, which wastes existing AI technology investments. Show that 58% build training into initial AI budget because retrofitting is more expensive. Demonstrate ROI timeline and quick wins approach. Point out the risk: competitors investing in skills whilst you delay creates competitive disadvantage.

Can you explain the biggest mistakes companies make when scaling AI from pilots to production?

Three problems: (1) Skipping readiness assessment and cultural preparation—that’s why 66% are stuck in experimentation, (2) Inadequate change management treating AI as purely technical implementation, (3) Insufficient infrastructure and governance planning causing security and compliance failures. The fourth mistake: not defining success metrics before scaling, making it impossible to demonstrate value and justify continued investment.

What’s the best way to handle employees using unauthorised AI tools without stifling innovation?

Education-first approach over enforcement: explain security risks—77% paste data into GenAI, 82% from unmanaged accounts—then provide approved alternatives and create BYOA (Bring Your Own AI) policies with risk assessment. Implement sandbox environments for safe experimentation within governance guardrails. Address root cause: more than half hide usage because approved tools are inadequate or slow to access. Fix the underlying tool gap rather than just prohibiting usage.

How long does it actually take to get an organisation ready for AI adoption?

Realistic timeline: 3-6 months for foundational readiness—literacy, assessment, governance. Then 6-12 months for skills development and pilot projects. Finally 12-24 months for production deployment and measurable ROI. Don’t trust consultants promising transformation in weeks. The 70% failure rate comes from rushing human and cultural aspects. Quick wins are possible in 3 months but sustainable transformation requires 18-36 month commitment.

What are the warning signs that my AI adoption strategy is failing?

Red flags: (1) More than 6 months in experimentation with no production deployments, (2) High shadow AI usage suggesting approved programmes are inadequate, (3) Skills gaps not closing despite training—check 3/6 month milestones, (4) Cross-functional teams not forming or silos persisting, (5) Executive sponsorship weakening or becoming ceremonial rather than active, (6) Unable to demonstrate ROI or business value from pilots.

How can I build AI skills when my team is already stretched thin with existing workloads?

Apply the 70/20/10 rule: focus on learning through actual work (70%) rather than separate training time. Integrate AI tools into current projects so skill development happens during regular work. Start with high-impact use cases where AI demonstrably saves time—development assistants, code review tools. Time savings fund further learning. Avoid big bang approach: begin with AI champions (10-15% of team) who then mentor others.

What’s the most effective way to overcome resistance to AI change amongst technical teams?

Address both emotional and practical concerns using ADKAR: (1) Awareness—show competitive necessity and business drivers, (2) Desire—demonstrate career advancement tied to AI skills not job displacement, (3) Knowledge—provide role-specific learning paths for engineers, (4) Ability—hands-on labs and real projects, not just theory, (5) Reinforcement—celebrate wins publicly and embed in promotion criteria. Technical teams respect evidence. Show data on productivity gains and peer success stories.

Should we centralise AI expertise in a Centre of Excellence or distribute AI champions across teams?

Depends on company size and structure: CoE works for SMBs under 200 employees with limited AI talent—centralised expertise, consistent standards. Distributed champions work better for 200-500 employees with multiple product lines—faster innovation, business-aligned solutions, local ownership. Many adopt hybrid: CoE sets standards and governance, champions drive local adoption. Don’t get stuck in analysis paralysis. Start with one model, iterate based on what works.

How do I balance AI governance and security with the need to move fast and experiment?

Implement risk-based governance: strict controls for high-risk AI usage—customer data, financial decisions, compliance-sensitive operations. Lighter touch for low-risk experimentation like internal productivity tools and development assistance. Create sandbox environments with governed AI tools where teams can experiment safely without compromising security. Education over enforcement: transparency-based BYOA policies with risk assessment prove more effective than restrictive policies that drive people underground.

What metrics actually matter for measuring AI readiness and adoption success?

Leading indicators (predict success): AI literacy rate across workforce, skills coverage percentage by role, training completion at 3/6/12 month milestones, approved tool adoption velocity, shadow AI disclosure rate improvement. Lagging indicators (measure outcomes): production deployment success rate, time from pilot to production, business KPIs—cost reduction, revenue impact, efficiency gains—employee satisfaction, customer experience metrics. The baseline you established during readiness assessment lets you demonstrate change over time.

How do I transition from developer to change leader when managing AI adoption?

Recognise this is role transformation not just skill addition: shift from technical problem-solving to organisational change leadership. Apply ADKAR to yourself first—build awareness of leadership requirements, develop desire for strategic impact, acquire knowledge through change management frameworks like Prosci and Kotter, practice ability through small change initiatives, reinforce through peer networks—other CTOs, executive coaching. Leverage technical credibility: teams trust CTOs who understand technology, use this foundation to drive cultural transformation. Timeline: expect 6-12 months to develop change leadership confidence.
