You deployed the platform. The service catalog is populated. The APIs are connected. Leadership celebrated the milestone.
Three months later, you realise only 10% of your developers actually use the thing.
This pattern repeats everywhere. Backstage dominates 67% of the internal developer portal market, yet organisations installing it struggle to get developers to log in.
Your platform probably works fine technically. The problem is you treated it like an infrastructure project when it needed product discipline, change management, and a full cultural transformation.
This article is part of our comprehensive platform engineering analysis, examining why platforms fail despite technical excellence, how organisational structure determines outcomes, and whether mandating platform usage actually works.
Why Do 89 Percent Install Platforms But Only 10 Percent Use Them?
Installation is not the same thing as adoption. Installation means deploying the platform, importing service metadata, connecting authentication. Adoption means developers making it their primary workflow, abandoning old tools, and choosing the platform over alternatives.
Most organisations track the wrong metrics. They count logins, API calls, catalog entries. These vanity metrics don’t tell you if developers changed their behaviour. Only 60% of platforms meet their goals, yet platform producers report 75% success rates whilst consumers report 56%.
Builders think they succeeded because the platform works. Users think they failed because it doesn’t make their work easier. This disconnect has serious implications for the business case, as adoption failure destroys ROI regardless of technical excellence.
Nearly 25% of organisations rely on subjective assessments rather than formal metrics to evaluate platform success. Another 29.6% don’t measure success at all. You can’t improve adoption if you don’t measure it.
When platforms fail to achieve adoption, developers keep using old workflows. Shadow IT proliferates. Teams technically comply with mandates whilst building workarounds. The platform becomes a ticket-based request system rather than self-service automation.
Marcus Crane from Halter puts it bluntly: “The homepage literally says ‘an open-source framework for building developer portals’. It doesn’t say ‘a free developer portal.’ You still have to build the thing.”
Installing Backstage isn’t implementing a platform. It’s starting a project that most organisations abandon halfway through.
The adoption paradox exists because organisations confuse deployment milestones with adoption outcomes. They celebrate when the platform launches but ignore whether developers use it.
Why Do Platform Engineering Initiatives Fail Despite Technical Excellence?
The primary failure pattern is treating platforms as infrastructure projects rather than product initiatives.
Infrastructure projects have endpoints. You deploy them, hand them off, move on. Products require ongoing investment, user research, and iteration.
Platform failures stem from change management issues and organisational misalignment rather than technology gaps. If you built it without understanding developer needs, you solved problems that don’t exist whilst ignoring the ones that do.
The “Field of Dreams” fallacy undermines platforms. Developers won’t voluntarily switch from familiar workflows. Your platform needs to be significantly better to justify the switching cost. This pattern echoes concerns about whether platform engineering repeats DevOps’ cultural mistakes of promising transformation through tooling alone.
Only 27% of platform engineering adopters have fully integrated the three key components: close collaboration between platform engineers and other teams, platform as a product approach, and clear performance metrics.
The other 73% installed tools without the organisational transformation required to make them effective.
Another pattern: rebranding operations teams as “platform engineering” without mindset change. 45.5% of platforms operate dedicated, budgeted teams that remain primarily reactive. They respond to tickets instead of proactively reducing developer toil through automation.
The skill concentration trap creates organisational dependencies. You move all your senior engineers to the platform team, leaving application teams without expertise. Those teams now depend on the platform team for everything, creating bottlenecks.
Day 2 operations neglect undermines platforms over time. Self-hosting Backstage requires 6-12 months of build effort before launch, after which the team moves on. Nobody maintains it. Catalog data goes stale. Developers stop trusting it.
Julio Zynger from Zenjob experienced this: “Once people started relying on Backstage, it really needed to be treated like a production service. It had to be up and reliable all the time, otherwise we literally couldn’t ship or troubleshoot anything.”
Cultural and structural factors trump technical implementation quality. You can build a technically perfect platform that nobody uses because you ignored the organisational prerequisites. Success requires choosing adoption-friendly implementation strategies from the outset.
What Does “Platform as a Product” Actually Mean in Practice?
“Platform as a product” gets cited frequently but rarely defined clearly. Here’s what it actually requires.
You need a platform product manager with explicit responsibility for treating developers as customers. Not a part-time role. A dedicated role focused on understanding developer needs and prioritising improvements accordingly.
A platform as product approach means running the developer platform with a clear roadmap, communicated value, and tight feedback loops. You run quarterly user research sessions. You conduct developer interviews. You track satisfaction using Net Promoter Score. You instrument the platform to identify friction points.
Contrast this with platform as infrastructure. Infrastructure measures success by deployment completion. Platform as product measures success by user outcomes. Do developers prefer your platform over alternatives? Is satisfaction improving?
The measure-learn-improve cycle matters. You track time-to-first-deployment for new developers. When that metric regresses, you investigate and fix. You run A/B tests on golden paths to reduce friction. Establishing robust measurement frameworks enables you to diagnose adoption problems before they become systemic failures.
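As a minimal sketch of that cycle, the snippet below computes median time-to-first-deployment per monthly cohort from hypothetical onboarding events and flags a regression worth investigating. The event data and function names are illustrative assumptions, not a real platform API.

```python
from datetime import datetime
from statistics import median

# Hypothetical onboarding events: (developer_id, joined_at, first_deploy_at)
onboarding_events = [
    ("dev-101", datetime(2024, 3, 1), datetime(2024, 3, 4)),
    ("dev-102", datetime(2024, 3, 1), datetime(2024, 3, 2)),
    ("dev-103", datetime(2024, 4, 1), datetime(2024, 4, 9)),
    ("dev-104", datetime(2024, 4, 1), datetime(2024, 4, 8)),
]

def median_days_to_first_deploy(events, month):
    """Median days from joining to first deployment for one monthly cohort."""
    days = [(deploy - joined).days
            for _, joined, deploy in events
            if joined.month == month]
    return median(days) if days else None

march = median_days_to_first_deploy(onboarding_events, 3)   # 2 days
april = median_days_to_first_deploy(onboarding_events, 4)   # 7.5 days

# A cohort regression is a prompt to investigate friction, not just to report
if march and april and april > march * 1.5:
    print(f"Onboarding regressed: median {march} -> {april} days")
```

The point is the feedback loop, not the arithmetic: a regressing cohort metric triggers investigation and a fix, then the next cohort tells you whether the fix worked.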
Zeshan Ziya from Axelerant recommends: “Don’t try to pull in every Backstage tool at once. Start with one, roll it out, get feedback, then add the next.”
Ship something small, validate it solves a real problem, iterate based on feedback.
Executive sponsorship determines whether platforms get ongoing investment or starve after launch. 47.4% of platform budgets sit in the sub-$1M range, a level identified as systematically underfunded. Platforms need continuous investment. One-time project budgets don't work.
How Should You Structure Platform Engineering Teams for Success?
Platforms need dedicated teams. But you shouldn’t strip all your senior engineers from application teams to build the platform team. That’s the skill concentration trap.
Most teams that thrive on Backstage dedicate 3-5 engineers, including at least one comfortable in React/TypeScript. Platform teams need skills beyond infrastructure expertise: product management, technical writing, user experience design.
Platform teams need executive sponsorship and independence from individual application team pressures. The collaboration model should be service provider to customer, not mandate enforcer to subject.
Resourcing requires sustained investment. Alexandr Puzeyev from Tele2 Kazakhstan describes the burden: “Almost all my time goes into keeping the catalog data accurate. There’s no bandwidth left to build plugins.”
First-year costs for self-hosting Backstage exceed $800,000 with three mid-level engineers, before accounting for delayed value. Ongoing annual costs run to a minimum of $450,000. Most organisations underestimate these figures.
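For budgeting purposes, the arithmetic is worth making explicit. A trivial sketch using the figures cited above (assuming roughly $800,000 in year one and $450,000 per year thereafter):

```python
# Cumulative cost of self-hosting, using the figures cited in the text
FIRST_YEAR_COST = 800_000        # build team plus delayed value, year one
ONGOING_ANNUAL_COST = 450_000    # minimum maintenance and operations per year

def total_cost(years: int) -> int:
    """Cumulative spend after a given number of years (years >= 1)."""
    return FIRST_YEAR_COST + ONGOING_ANNUAL_COST * (years - 1)

print(f"Three-year total: ${total_cost(3):,}")  # $1,700,000
```

A multi-year view like this is what a budget request should show; a single-year project allocation hides the ongoing commitment.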
Don’t create organisational dependencies by moving all senior engineers to the platform team. Maintain distributed expertise. Platform teams enable rather than replace application team capabilities.
Should Platforms Be Mandated or Voluntary?
63% of platforms are mandated rather than optional. 36.6% of organisations rely on mandates to drive platform adoption. By comparison, 28.2% report intrinsic value naturally pulling users to platforms, whilst 18.3% achieve participatory adoption where users contribute back.
There’s no universal answer. Context determines whether mandates work.
Mandates can work when platforms provide genuinely superior capabilities and the organisation can absorb workflow disruption. They fail when platforms lack maturity or offer poor developer experience.
The failure patterns are predictable. Developers resist imposed standards, with mandates creating “shadow IT or malicious compliance, where teams technically use the platform but hack around it”, according to Dmitry Chuyko. They log into the platform for visibility whilst maintaining their actual workflows elsewhere.
Pasha Finkelshteyn explains why this matters: Mandates “sever the feedback loop between platform builders and users”. When developers are forced to use platforms, they don’t report problems because they’ve built workarounds. The platform stagnates whilst usage statistics look fine.
Voluntary adoption requires platforms to deliver demonstrable value. Pasha Finkelshteyn’s principle: “Speed wins first…quality of life wins second”. When deployment times drop from days to minutes, adoption becomes voluntary.
Kevin Reeuwijk from Spectro Cloud recommends selecting teams with genuine pain points, converting them into champions, and gathering success metrics before broader rollout. Start with pilot programs. Find teams struggling with problems your platform solves.
If you mandated initially and adoption is failing, shifting to voluntary requires making the platform genuinely valuable first. Fix the platform, demonstrate value through pilots, build champions, then transition.
Why Do Developers Resist Using Internal Developer Platforms?
The primary resistance pattern is cognitive load and learning curve. Developers have workflows that work. Your platform requires learning new concepts, interfaces, processes. The switching cost needs to be offset by visible productivity improvements.
Systems requiring specialised language knowledge or excessive complexity limit adoption, according to Donnie Page. If your platform requires understanding Kubernetes, Terraform, and YAML templating to deploy a simple service, developers stick with what they know.
Developers perceive platforms as constraints reducing flexibility. Rory Scott from Duo Security experienced this: “TechDocs works great for the engineers, but when we asked the designers and PMs to learn GitHub and write Markdown, it just wasn’t going to happen. They stuck with Confluence”.
Poor user experience kills adoption. Many platforms built by infrastructure engineers optimise for comprehensiveness over usability. Adam Tester from Deel contrasts approaches: “Backstage feels like it’s built for developers first. The UI, the YAML, the whole mindset. Tools like Cortex look great on a leadership dashboard, but they don’t speak to engineers”.
Trust deficits from reliability or performance issues drive developers to known-stable alternatives. If your platform goes down and blocks deployments, developers remember. They build workarounds to avoid depending on it.
The value perception gap matters most. Platforms must demonstrably save time and reduce toil or developers see no reason to switch.
Golden paths can build in the guardrails the business requires, such as compliance, security scanning, and monitoring, so developers have fewer hurdles to clear. Well-designed golden paths reduce cognitive load. Poorly designed ones increase friction.
What Is the Difference Between a Developer Portal and a Full Platform?
At its core, Backstage is a service catalog — a place where developers can find details about every microservice: docs, ownership, dependencies. That’s a developer portal. It provides discovery and documentation.
A full platform includes comprehensive capabilities: infrastructure provisioning, deployment automation, observability, security guardrails. IDPs provide self-service, on-demand access to infrastructure through custom CLIs and web interfaces.
The confusion is widespread. Organisations mistake deploying a service catalog for implementing a platform. They install Backstage, import service metadata, declare success. Developers log in, look at documentation, then go back to their old workflows because the “platform” doesn’t actually automate anything.
Yeshwanth Shenoy from Okta experienced this: “We already built so much DevEx infra that was highly specific to our company. We tried retrofitting Backstage, but at that point it was just a UI layer and it didn’t seem worth it”.
Teams spend resources on portal UI whilst neglecting underlying self-service automation. This is the “Front End First” anti-pattern. Leadership sees impressive dashboards and assumes the platform is working. Developers see documentation about manual processes and don’t adopt it.
Automation reduces toil. Documentation merely explains manual processes. If your “platform” requires developers to follow 15 manual steps documented in your beautiful portal, you built a documentation site, not a platform.
The diagnostic question: does your “platform” actually automate infrastructure provisioning and deployment, or just document how to do it manually?
What Organisational Factors Determine Platform Success or Failure?
Three factors determine outcomes: culture, structure, and strategy. Technical implementation quality is table stakes. Organisational execution determines success or failure.
Culture means embracing platform as product mindset throughout the organisation. Leadership understands platforms require ongoing investment, not one-time project budgets. Platform teams treat developers as customers. Application teams provide feedback rather than passive compliance.
Structure means dedicated platform teams with product management skills, not rebranded operations teams. 45.5% of platforms operate dedicated, budgeted teams that remain primarily reactive. Only 13.1% have achieved optimised, cross-functional ecosystems.
Strategy means choosing adoption approaches that align with platform maturity and organisational change management capacity. Your strategy should match your platform’s actual value proposition.
Executive sponsorship determines whether platforms get sustained commitment or die from underinvestment. Platforms succeed with multi-year budget commitment. They fail when treated as one-time IT projects.
The measurement framework determines whether you improve or stagnate. 29.6% of organisations don’t measure success at all. Organisations measuring adoption metrics to diagnose problems adjust course. Those tracking only technical metrics don’t notice when adoption fails.
Matt Law from The Warehouse Group demonstrates successful outcomes: “We proved we could deliver a microservice into an environment in about 60 seconds. It used to take four to six weeks”. That’s the impact that drives voluntary adoption.
The success pattern: organisations with product-minded platform teams, voluntary adoption strategies starting with pilot programs, and continuous improvement cycles based on developer feedback achieve high adoption. 71% of leading adopters have significantly accelerated their time to market, compared with 28% of less mature adopters.
Technical excellence is necessary but insufficient. Organisational factors determine whether your technically sound platform achieves adoption or languishes unused.
Assess your organisation’s culture (product versus infrastructure mindset), structure (dedicated team with product skills versus rebranded operations), and strategy (mandate versus voluntary adoption matching platform maturity). Identify gaps. Address them before expanding platform scope. For a complete overview of how these factors connect to investment decisions, positioning debates, and implementation strategies, see our comprehensive platform engineering analysis.
FAQ Section
How do I fix low platform adoption in my organisation?
Start with diagnosis. Survey developers to identify adoption barriers – cognitive load, poor UX, workflow disruption, lack of value.
Then address root causes. If developers don’t see value, instrument the platform to identify and fix friction points. If workflows disrupt existing processes, provide migration paths and training. If trust is broken, focus on reliability and support before pushing adoption.
Consider resetting with pilot programs targeting teams with genuine pain points to generate champions and proof points.
What metrics should platform teams track to measure success?
Track developer outcomes, not platform activity.
Primary metrics: developer adoption rate (daily active users), time to first deployment for new developers, developer satisfaction (NPS), self-service rate (automated vs ticket-based requests).
Secondary metrics: DORA metrics (deployment frequency, lead time, change failure rate, mean time to recover) to demonstrate business impact.
Avoid vanity metrics like logins, API calls, or service catalog entries that don’t indicate genuine adoption or value delivery.
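To make the distinction concrete, here is a minimal sketch with hypothetical telemetry: the two ratios below reflect behaviour change, whereas raw login or API-call counts do not.

```python
# Hypothetical monthly telemetry (all numbers illustrative)
total_developers = 400
daily_active_users = 130        # developers whose primary workflow is the platform
self_service_actions = 520      # requests fulfilled by automation
ticketed_actions = 180          # requests fulfilled manually by the platform team

adoption_rate = daily_active_users / total_developers
self_service_rate = self_service_actions / (self_service_actions + ticketed_actions)

print(f"Adoption rate: {adoption_rate:.1%}")
print(f"Self-service rate: {self_service_rate:.1%}")
```

Tracking these two ratios over time tells you whether developers are switching workflows and whether the platform is actually automating work rather than routing tickets.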
Should we mandate platform usage or make it voluntary?
Context-dependent.
Mandates work when platforms provide genuinely superior capabilities and the organisation can absorb workflow disruption. Mandates fail when platforms lack maturity, offer a poor developer experience, or the organisation has low change management capacity.
Voluntary adoption requires platforms to deliver demonstrable value through superior developer experience. Start with pilot programs targeting teams with pain points to generate champions.
If your organisation mandated usage initially, consider a phased transition to voluntary adoption by first making the platform genuinely valuable through user research and improvement cycles.
How long does it take to build a successful internal developer platform?
Building the minimum viable platform takes 6-12 months for open-source approaches like Backstage, potentially faster with commercial platforms.
However, reaching meaningful adoption takes longer: 12-18 months to establish golden paths, build developer trust, and demonstrate value. Participatory adoption (developers contributing improvements) typically emerges after 18-24 months.
Organisations treating platforms as ongoing products invest continuously rather than expecting completion. Day 2 operations, maintenance, and continuous improvement require sustained resources beyond initial build.
What is the difference between platform engineering and DevOps?
Platform engineering operationalises DevOps principles through dedicated teams building self-service capabilities. DevOps emphasised culture and collaboration. Platform engineering adds product discipline and automation.
Risk: organisations may rebrand operations teams as “platform engineering” without actual transformation—same people, same mindset, new name.
Genuine platform engineering treats the platform as product, requires product management skills, and measures success through developer adoption and experience rather than just infrastructure uptime.
How many people do I need on a platform team?
Industry patterns suggest platform team size scales with organisation: starting at 2-3 people for smaller organisations (100-200 developers), growing to 5-10 for mid-size (500+ developers), and 15-20+ for large enterprises (1000+ developers).
Required skills: platform engineers (infrastructure automation), platform product managers (user research, roadmap), technical writers (documentation), potentially UX designers.
Critical: avoid skill concentration trap by not stripping all senior engineers from application teams to build platform team.
Can we use Backstage as our entire platform?
Backstage is a developer portal framework, not a complete platform. It provides service catalog, scaffolding, and plugin architecture but requires underlying automation (infrastructure provisioning, deployment pipelines, observability) to deliver actual self-service capabilities.
Organisations installing only Backstage catalog without building automation infrastructure fall into “Front End First” anti-pattern: visible portal without invisible automation that actually reduces developer toil.
Backstage succeeds when integrated with golden paths that automate developer workflows.
What causes malicious compliance with platform mandates?
Malicious compliance emerges when organisations mandate platform usage but the platform lacks maturity or provides a poor developer experience. Developers technically comply (using the platform for visibility and compliance) whilst maintaining shadow IT or workarounds for their actual work.
Root causes: platforms that increase rather than decrease cognitive load, reliability issues driving developers to known-stable alternatives, workflow disruption without compensating productivity gains.
Solution: make platform genuinely valuable before mandating, or shift to voluntary adoption strategy.
How do I get executive sponsorship for platform engineering?
Frame platform engineering as business capability investment rather than IT project.
Quantify current costs: developer time wasted on toil, delayed time to market, security/compliance risks from inconsistent practices.
Project benefits using DORA metrics: improved deployment frequency, reduced lead time, lower change failure rate.
Emphasise platform as product requiring ongoing investment, not one-time project. Provide comparative data: organisations with mature platforms demonstrate measurable productivity improvements. Request committed multi-year budget rather than single project allocation.
What are golden paths and why do they matter for adoption?
Golden paths are standardised, opinionated workflows providing fastest, lowest-friction routes for common developer tasks (deploying code, provisioning infrastructure, accessing observability).
They matter because they reduce cognitive load: developers don't choose between 47 ways to deploy; they follow one well-supported path.
Successful golden paths balance prescriptiveness (reducing decisions) with flexibility (allowing escape hatches for edge cases). Poor golden paths increase friction and drive shadow IT.
Adoption depends on golden paths demonstrably saving time versus alternative workflows.
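One way to picture that balance is a single supported entry point with guardrails baked in and an explicit, auditable escape hatch for edge cases. The sketch below is purely illustrative; none of these function names correspond to a real platform API.

```python
def run_security_scan(image: str) -> None:
    # Guardrail: would fail the deploy on known vulnerabilities
    print(f"scanning {image} for known CVEs")

def attach_monitoring(service: str) -> None:
    # Guardrail: observability wired up by default, not as an afterthought
    print(f"wiring default dashboards for {service}")

def roll_out(service: str, image: str) -> str:
    print(f"rolling out {image} to {service}")
    return "deployed"

def deploy(service: str, image: str, *, escape_hatch: bool = False) -> str:
    """The one supported path. The escape hatch exists, but is explicit."""
    if not escape_hatch:
        run_security_scan(image)
        attach_monitoring(service)
    return roll_out(service, image)

deploy("checkout", "checkout:1.4.2")   # the golden path: no decisions to make
```

The design choice is that the default path carries the guardrails for free, while opting out is a visible, deliberate act rather than the easiest route.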
How do I measure platform ROI and business value?
Measure developer productivity improvements through DORA metrics (deployment frequency, lead time, change failure rate, mean time to recover).
Track toil reduction: hours saved through automation of previously manual processes. Calculate time to market impact: faster feature delivery from reduced deployment friction.
Measure developer satisfaction (NPS) and adoption rate (active users) as leading indicators. Quantify risk reduction: security vulnerabilities caught by automated guardrails, compliance violations prevented by policy as code.
Compare costs: platform team investment versus productivity gains multiplied across all application developers.
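As a back-of-the-envelope sketch, that final comparison might look like this. Every figure is a hypothetical input you would replace with your own measurements.

```python
# Hypothetical annual figures for an ROI comparison
platform_team_cost = 1_200_000       # fully loaded annual team cost
developers = 300
hours_saved_per_dev_per_week = 2     # measured toil reduction
loaded_hourly_rate = 90
working_weeks = 46

annual_savings = (developers * hours_saved_per_dev_per_week
                  * working_weeks * loaded_hourly_rate)
roi = (annual_savings - platform_team_cost) / platform_team_cost

print(f"Annual productivity savings: ${annual_savings:,}")
print(f"ROI: {roi:.0%}")
```

The leverage comes from multiplication across the developer population: even small per-developer time savings outweigh the platform team's cost once enough developers actually adopt.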
What is the skill concentration trap in platform engineering?
The skill concentration trap occurs when organisations move all their senior engineers to the platform team, leaving application teams without expertise for complex challenges.
It creates organisational dependency: application teams must wait on the platform team for any infrastructure change. It undermines adoption: junior application developers can't effectively use sophisticated platforms without senior guidance.
Solution: maintain distributed expertise by keeping senior engineers on application teams whilst building a dedicated platform team with product management and automation skills. Platform teams enable rather than replace application team capabilities.