Microservices promised independent deployment and scaling. What many teams got instead was operational complexity, performance overhead, and endless coordination costs. If you’re spending more time managing infrastructure than building features, you’re not alone – 42% of organisations that adopted microservices are moving back to larger deployable units.
Here’s the thing about microservices – they introduce overhead that only pays off at certain team sizes and organisational structures. If you’ve stabilised at under 15 developers, chances are the operational burden is costing you more than you’re getting back. This guide is part of our comprehensive resource on understanding modern software architecture, where we explore the industry-wide architectural consolidation trend.
This playbook is going to walk you through the migration process using the strangler fig pattern. We’re covering the technical phases, data consolidation, rollback strategies, and how to communicate this to leadership without it sounding like a failure.
How Do You Assess Whether Microservices Consolidation Makes Sense?
Consolidation makes sense when operational complexity costs exceed microservices benefits. That’s the simple version. The hard part is actually quantifying these costs.
Start by looking at your team size. Teams under 15 developers gain little from microservices distribution while the operational burden stays constant. If you’re running 10+ services with 5-10 developers, the maths probably isn’t working in your favour.
Now calculate your infrastructure costs. Add up what you’re spending on multiple databases, service discovery tools, orchestration platforms, and monitoring systems. Grape Up consolidated from 25 to 5 services and reduced cloud infrastructure costs by 82%. One consolidation case study showed AWS costs fell 87% from $18k to $2.4k per month. That’s real money.
Take a look at your deployment overhead. Are deployments actually slower despite having independent services? Are you spending time coordinating releases between teams? Are you debugging distributed transactions more than building features? A DZone study found debugging takes 35% longer in distributed systems.
Check your performance metrics. Network latency between services adds milliseconds at each hop. Serialisation and deserialisation costs add up fast. One consolidation resulted in a 10x performance improvement, with response times dropping from 1.2 seconds to 89 milliseconds.
Here’s a framework for pre-migration assessment:
Team: count developers and work out how much time they’re spending on infrastructure versus features.

Cost: quantify infrastructure spend – databases, Kubernetes clusters, monitoring tools, all of it.

Red flags: coordinated deployments across services, difficulty onboarding new developers, and more time spent on infrastructure than features.

Performance: measure current end-to-end latency, identify network call overhead, and work out what serialisation is costing you.
If you’re seeing most of these red flags and your numbers support it, consolidation is worth exploring.
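To make the framework concrete, here’s a minimal sketch of the red-flag check as code. The thresholds are assumptions lifted from the figures above, not a calibrated model, and the field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TeamSnapshot:
    developers: int
    services: int
    infra_time_share: float     # share of dev time spent on infrastructure (0-1)
    coordinated_deploys: bool   # releases require cross-service coordination

def consolidation_signals(s: TeamSnapshot) -> list:
    """Return the red flags this snapshot triggers; many flags => explore consolidation."""
    flags = []
    if s.developers < 15:
        flags.append("team under 15 developers")
    if s.services > s.developers:
        flags.append("more services than developers")
    if s.infra_time_share > 0.5:
        flags.append("more time on infrastructure than features")
    if s.coordinated_deploys:
        flags.append("coordinated deployments across services")
    return flags
```

A 7-developer team running 12 services with coordinated releases trips all four flags; a 20-developer team on 10 services trips none.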
What Is the Strangler Fig Pattern and How Do You Apply It to Migration?
The strangler fig pattern incrementally replaces legacy systems by routing traffic between old and new implementations via a proxy layer. It’s named after strangler fig vines that gradually replace host trees in Queensland rainforests. Nature is brutal.
Martin Fowler introduced the pattern for exactly this kind of situation. It enables zero-downtime migration with continuous rollback capability. You build the new system alongside the old one, gradually shift traffic, and eventually retire the old implementation.
The pattern has three phases: Transform, Coexist, and Eliminate.
Transform: Build monolith modules with the same external interfaces as your microservices. Don’t change the API contracts yet. Just reimplement the logic in a consolidated codebase.
Coexist: Deploy a proxy or API gateway that routes a subset of traffic to the monolith while the majority still hits microservices. Start small, maybe 5% of traffic. Monitor everything.
Eliminate: Gradually increase the monolith traffic percentage. When a service is fully replaced, decommission it. Move on to the next service.
The proxy provides the traffic routing capability that makes gradual migration possible. API gateways intercept requests going to your backend and route them either to legacy services or new monolith modules. Your customers don’t know migration is happening. You can test functionality in production before full commitment.
Advantages of strangler fig: Risk reduction through incremental changes, continuous production validation, instant rollback capability, business continuity maintained throughout.
Disadvantages: The proxy becomes a single point of failure temporarily, there’s performance overhead from the routing layer, and you’ve got the complexity of managing dual systems.
Compare this to alternative approaches. Parallel run means higher resource costs but more validation. Branch by abstraction works at the code level rather than infrastructure level. Big-bang rewrites carry high risk and should be avoided. The strangler fig pattern is still your best bet for phased replacement in production systems.
How Do You Identify Which Microservices to Consolidate First?
Prioritise low-risk, high-value services with clear boundaries and minimal external dependencies. You want early wins to build momentum.
Start with services that share domain contexts and talk to each other constantly. If two services are making network calls to each other all the time, they probably belong in the same module. Use distributed tracing to visualise call patterns and identify tightly-coupled service clusters.
Look for services causing the highest operational burden. Which ones fail most frequently? Which require complex deployment procedures? Which create the most on-call alerts?
Map your service dependency graph. Services with few connections to the rest of the system are the easiest to fold back into the monolith, because nothing else breaks when their network boundary disappears.
Avoid starting with services handling sensitive data or core business transactions. You want to warm up with something fairly decoupled. Don’t pick services that have broken transactional boundaries or are too complex for your organisation’s operational maturity.
Choose services owned by a single team to simplify coordination. Multi-team services introduce organisational complexity on top of technical complexity, and you don’t need both.
Here’s a consolidation order strategy:
Leaf nodes first: services with no downstream dependencies are the safest to migrate.

Operational pain next: target services with frequent failures or complex deployments to get immediate relief.

Single-team services: prioritise services owned by the same team to simplify coordination.

Simple data models first: save complex database schemas until you’ve got some migrations under your belt.
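The leaf-node heuristic can be automated from your call graph. A sketch, reading “no downstream dependencies” as “calls no other services” (the graph shape here is invented):

```python
def leaf_services(calls: dict) -> set:
    """Return services that call no other services.

    `calls` maps each service name to the set of services it calls.
    Leaves are the safest first migration candidates: reimplementing
    them requires no coordination with other services' APIs.
    """
    return {svc for svc, deps in calls.items() if not deps}
```

Feed it the output of your distributed-tracing tool’s dependency export and you get a ranked starting list for free.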
Use domain-driven design principles to identify bounded contexts that naturally belong together. Find services sharing domain models or requiring frequent schema coordination.
Define success criteria before you start. What does reduced operational complexity look like? Which performance metrics should improve? How will developer experience get simpler?
What Are the Step-by-Step Technical Phases of Migration?
Here’s the complete technical process for migrating a service.
Phase 1: Set up monolith skeleton
Create a monolith with hexagonal architecture and module structure matching service boundaries. Use the ports and adapters pattern to isolate each module. Set up dependency injection. Build shared infrastructure for logging, monitoring, and configuration.
Don’t try to merge everything into an unstructured codebase. You’re building a modular monolith, not a big ball of mud. Maintain clear boundaries between modules using dependency rules.
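To illustrate those module boundaries, here’s a minimal ports-and-adapters sketch. Names like `BillingPort` and `CheckoutModule` are invented for illustration; the point is that modules depend on a small interface, never on each other’s internals:

```python
from abc import ABC, abstractmethod

# Port: the only surface other modules are allowed to depend on.
class BillingPort(ABC):
    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> bool: ...

# Adapter: the billing module's internal implementation, invisible to callers.
class InMemoryBilling(BillingPort):
    def __init__(self):
        self.charges = []

    def charge(self, customer_id: str, amount_cents: int) -> bool:
        self.charges.append((customer_id, amount_cents))
        return True

# Another module receives the port via dependency injection, so swapping
# the billing implementation never touches the checkout code.
class CheckoutModule:
    def __init__(self, billing: BillingPort):
        self.billing = billing

    def complete_order(self, customer_id: str, total_cents: int) -> bool:
        return self.billing.charge(customer_id, total_cents)
```

The same dependency rule that kept your microservices honest over the network now lives in the type system instead.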
Phase 2: Configure API gateway for traffic routing
Set up an API gateway or reverse proxy to route traffic between microservices and monolith. Modern gateway products support declarative routing rules, authentication, rate limiting, and monitoring.
Define routes based on URL patterns or service names. Configure traffic percentage controls for canary deployment. Integrate health checks. Set up monitoring and logging that shows which implementation handled each request.
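Most gateway products express this declaratively, but the underlying routing decision can be sketched in a few lines. This is an illustration, not any particular product’s API; hashing the request key makes routing sticky, so the same caller always lands on the same implementation at a given percentage:

```python
import hashlib

def route(request_key: str, monolith_pct: int) -> str:
    """Deterministically route a request to 'monolith' or 'microservice'.

    request_key is typically a user or session ID; hashing it gives a
    stable bucket in 0-99, compared against the rollout percentage.
    """
    bucket = int(hashlib.sha256(request_key.encode()).hexdigest(), 16) % 100
    return "monolith" if bucket < monolith_pct else "microservice"
```

Because the bucket is deterministic, raising the percentage from 5 to 25 only moves new buckets over; users already on the monolith stay there.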
Phase 3: Migrate first service
Reimplement the business logic in a monolith module. Maintain identical external API contracts initially. Don’t optimise or refactor yet. Just get equivalent functionality working.
Deploy the monolith with the new module alongside your running microservices. Configure the proxy to route a small percentage, maybe 5%, to the monolith. Monitor error rates, latency, and business metrics.
Phase 4: Gradually shift traffic
Use canary deployments to increase traffic gradually: 5% → 10% → 25% → 50% → 75% → 100%. Monitor at each milestone. If metrics are acceptable, move to the next percentage. If not, investigate and fix or roll back.
Feature flags provide runtime control over routing. You can target specific users, cohorts, or geographic regions. This gives you fine-grained control and instant rollback without redeployment.
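A minimal sketch of such a flag, with invented names (real feature-flag services add persistence, audit trails, and a UI, but the core decision looks like this):

```python
import hashlib

class MigrationFlag:
    """Runtime toggle for monolith routing with cohort targeting and a kill switch."""

    def __init__(self, enabled=True, pct=0, allow_users=None):
        self.enabled = enabled                    # global kill switch
        self.pct = pct                            # percentage rollout (0-100)
        self.allow_users = allow_users or set()   # always-on cohort, e.g. internal users

    def use_monolith(self, user_id: str) -> bool:
        if not self.enabled:                      # instant rollback: flip one flag
            return False
        if user_id in self.allow_users:           # targeted cohort bypasses the percentage
            return True
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return bucket < self.pct
```

Flipping `enabled` to `False` reverts every user at once, with no deployment.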
Monitor end-to-end latency, error rates, business metrics like conversion rates and transaction success, and resource utilisation. Compare metrics between the microservice and monolith implementations.
Phase 5: Consolidate service database
Migrate the database using shadow writes for validation. Switch reads to the consolidated database only after consistency is validated.
Phase 6: Decommission microservice
After traffic is fully migrated and the database is consolidated, decommission the old service. Verify zero traffic is going to it. Archive the service code and configuration. Remove it from CI/CD pipelines. Delete cloud resources.
Then repeat phases 3-6 for each service until the migration is complete.
InfluxDB provides a real-world example. Their platform team migrated approximately 10 services in 3 months with a 5-person team. They moved from Go to Rust and from microservices to a single monolith. The goal was to reduce overall complexity from both infrastructure and development perspectives. The monolith fit their team model: one team, one backend service, one language. For more detailed case studies of how leading companies migrated, including InfluxDB and Amazon Prime Video, see our comprehensive analysis of successful consolidations.
How Do You Handle Data Store Consolidation During Migration?
Data consolidation is the trickiest part of any migration. You need to maintain consistency while running dual systems and provide the ability to roll back if things go wrong.
Use shadow writes to synchronise data between the microservice database and monolith database during transition. The new system performs shadow writes, updating both databases in parallel while continuing to read from the legacy database.
Here’s how shadow writes work: Your application writes to both the microservice database and the monolith database simultaneously. A comparison process checks for data consistency between databases. Discrepancies get logged for investigation. Reads stay on the microservice database until consistency is validated. Once you’re confident data is synchronised, switch reads to the monolith database.
Continue shadow writes during early traffic migration to maintain the option to revert.
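In application code, the shadow-write loop described above might look like this sketch, with plain dicts standing in for your actual data-access layers:

```python
import logging

logger = logging.getLogger("shadow-writes")

def shadow_write(legacy_db: dict, monolith_db: dict, key: str, value) -> None:
    """Write to both stores; the legacy store stays the source of truth for reads."""
    legacy_db[key] = value
    try:
        monolith_db[key] = value   # shadow write: a failure here must not break the request
    except Exception:
        logger.exception("shadow write failed for %s", key)

def find_discrepancies(legacy_db: dict, monolith_db: dict) -> list:
    """Comparison pass: return keys where the two stores disagree, for investigation."""
    return [k for k in legacy_db if monolith_db.get(k) != legacy_db[k]]
```

Run the comparison pass on a schedule; a sustained empty result is your signal that switching reads is safe.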
You have three database consolidation strategies:
Shared database approach: Single database with separate schemas per former service. This maintains logical separation while consolidating infrastructure.
Database-per-module: Maintain logical separation with physical database consolidation. Multiple databases on the same server, or separate schemas with strict access controls.
Gradual schema merging: Start with separate databases and merge schemas as the migration progresses. This reduces risk but extends the timeline.
For historical data, use an ETL process. Extract from microservice databases, transform schemas to match monolith design, load into the consolidated database. Validate data integrity through checksums and row counts. Maintain an audit trail of the migration process.
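The checksum and row-count validation step can be sketched like this (an illustration only; production validation would chunk large tables and normalise types before hashing):

```python
import hashlib

def table_fingerprint(rows: list) -> tuple:
    """Row count plus an order-independent checksum of a table's contents."""
    digest = hashlib.sha256()
    for row in sorted(repr(r) for r in rows):
        digest.update(row.encode())
    return len(rows), digest.hexdigest()

def validate_migration(source_rows: list, target_rows: list) -> bool:
    """ETL validation: counts and checksums must both match across databases."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)
```

Sorting before hashing means the two databases can return rows in different orders and still fingerprint identically.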
Change data capture provides an alternative to dual writes. CDC monitors database transactions in the source system and replicates changes to target databases. This provides eventual consistency without modifying existing transaction patterns. Event adapters consume change events and convert them to the new system’s data model.
The dual-write problem arises when a service must update its database and also notify another service of the change. There’s a small window in which the application can crash after committing to the database but before completing the second operation, leaving the two systems inconsistent.
Validate data consistency before switching reads. Run automated comparison queries and sample-based validation for large datasets. Reconcile business metrics and analyse transaction logs.
Plan your production cutover carefully. Schedule a maintenance window if needed. Shift read traffic gradually and monitor data access patterns after cutover.
What Testing and Risk Mitigation Strategies Should You Use?
Testing during migration is different from normal development testing. You’re running two implementations of the same system and need to prove they behave identically.
Implement comprehensive integration testing comparing microservice and monolith outputs for identical inputs. Build a test suite that validates behaviour, not just API contracts.
Shadow testing provides real-world validation without risk. New implementations process production requests in parallel with legacy components without returning results to users. You compare outputs, log discrepancies, and investigate differences.
This is different from shadow writes for databases. Shadow testing is for application logic. Both implementations process the request. Only the legacy implementation’s response goes to the user. The new implementation’s response gets compared for correctness.
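A sketch of that comparison flow (in production the new handler would run asynchronously so it can’t add user-facing latency, and the comparison would normalise timestamps and generated IDs first):

```python
def shadow_test(request, legacy_handler, new_handler, discrepancy_log: list):
    """Serve the legacy response; run the new implementation in parallel for comparison."""
    legacy_response = legacy_handler(request)
    try:
        new_response = new_handler(request)
        if new_response != legacy_response:
            discrepancy_log.append((request, legacy_response, new_response))
    except Exception as exc:
        # A crash in the new code is itself a discrepancy worth logging.
        discrepancy_log.append((request, legacy_response, repr(exc)))
    return legacy_response   # users only ever see the legacy result
```

An empty discrepancy log over a representative traffic sample is the evidence you bring to the go/no-go decision for the next canary step.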
Canary deployments are your primary risk mitigation tool. Start with internal users or low-risk cohorts. Increase to 5% of production traffic. Monitor metrics for 24-48 hours. If metrics are good, increment to 25%, then 50%, 75%, and finally 100%.
Automate rollback if metrics degrade. Define rollback triggers based on error rates, latency thresholds, and business metrics. For example: error rate exceeds 0.1% increase, p95 latency increases more than 20%, transaction success rate drops, database connection pool exhaustion, or memory leak detection.
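Those example triggers can be encoded directly. A sketch using the thresholds from the text, which you’d tune to your own SLOs:

```python
def rollback_triggers(baseline: dict, current: dict) -> list:
    """Return which rollback conditions the current metrics trip."""
    triggers = []
    if current["error_rate"] - baseline["error_rate"] > 0.001:          # >0.1% absolute increase
        triggers.append("error rate")
    if current["p95_latency_ms"] > baseline["p95_latency_ms"] * 1.20:   # >20% regression
        triggers.append("p95 latency")
    if current["txn_success_rate"] < baseline["txn_success_rate"]:      # any drop
        triggers.append("transaction success rate")
    return triggers
```

Wire a non-empty result to the feature-flag kill switch and rollback becomes automatic rather than a 3 a.m. judgement call.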
Feature flags enable instant rollback without code deployment. Implement runtime toggles for enabling or disabling monolith routing. Use per-user or per-cohort controls. Clean up flags after migration completes.
Load testing identifies performance issues early. Establish baseline performance before migration and test at each traffic increment.
Set up monitoring and observability properly. Use distributed tracing to track requests across microservices and monolith. Create error rate dashboards with alerting. Track latency percentiles and business metrics like conversion rates.
Draft rollback scripts for every migration phase. Maintain strict version control and test rollback procedures in production-like environments.
How Do You Communicate Architectural Reversal to Leadership?
Communicating a microservices consolidation to leadership requires the right framing and preparation. You’re reversing a decision that probably took significant effort to implement.
Frame consolidation as architectural maturity and course correction – a natural response to changing circumstances and team structure. The microservices decision was reasonable given previous context: expected team growth that didn’t materialise, anticipated scalability needs that turned out differently, or organisational changes that shifted priorities.
Position this as “optimising for current team size and business priorities” or “evolution based on learning and changed circumstances”. Avoid language suggesting the previous architecture was wrong. That just makes people defensive.
Reference industry trends. Nearly half of organisations that adopted microservices are now consolidating. Service mesh adoption declined from 18% in Q3 2023 to 8% in Q3 2025. Companies like InfluxDB and Amazon Prime Video have publicly shared consolidation stories. You’re in good company.
Build a business case with numbers. Quantify infrastructure cost savings from database licenses, orchestration platforms, and monitoring tools. Calculate developer productivity gains from reduced context switching and simplified debugging. Estimate deployment velocity improvements from reduced coordination overhead.
Present the strangler fig pattern as risk mitigation. This proven pattern enables zero-downtime migration with incremental rollout.
Set timeline expectations based on case studies like InfluxDB. Adjust your timeline for service count and team capacity, building in buffer for unexpected challenges.
Address common concerns proactively:
Data safety: Shadow writes, validation procedures, rollback capabilities mean low risk of data loss.
Business continuity: Zero-downtime approach, canary deployments, instant rollback maintain service availability.
Team morale: Frame it as architectural evolution with skill development opportunities. Successfully executing this migration demonstrates sophisticated engineering capabilities.
Future scalability: Modular monolith provides a migration path if you need to distribute again later.
Define measurable success criteria aligned with business objectives: infrastructure cost reduction, deployment frequency improvements, latency and error rate decreases, and on-call incident reduction.
Create communication templates for leadership and engineering teams covering the business case, timeline, risk mitigation, and regular progress updates.
Secure leadership support through clear communication of strategy and expected outcomes. Establish collaborative teams from engineering, operations, and product to address concerns and provide progress updates.
When Should You Abort a Migration and How Do You Roll Back?
Sometimes migrations don’t work out. You need to know when to abort and how to roll back safely.
Abort if business metrics degrade despite multiple remediation attempts. If conversion rates drop, revenue decreases, or user satisfaction declines consistently, and you can’t fix it, abort.
Roll back immediately if data integrity issues are detected or irrecoverable errors occur. Database corruption, data loss, security vulnerabilities, or compliance violations trigger immediate rollback.
Other abort criteria: Performance regressions that can’t be fixed within acceptable timeframes, team capacity insufficient to maintain migration momentum, or cost of migration exceeding projected benefits.
Feature flag rollback is the fastest procedure. Disable monolith routing via feature flag. Verify traffic is 100% back to microservices. Monitor for return to baseline metrics.
For database rollback, you have several strategies:
Point-in-time recovery: Restore database to a snapshot before problematic changes. Replay transaction logs to a specific timestamp. This results in some data loss during the rollback window, so use it carefully.
Blue-green databases: Instant switch from the new database back to the old database. This requires maintaining parallel databases during migration, which costs more but provides zero data loss rollback.
Transactional rollbacks: Leverage database transactions for atomic changes. The database automatically reverts on failures. This only works for single-database operations.
API gateway rollback means updating routing rules to revert traffic to microservices. Deploy configuration changes through your CI/CD pipeline and validate in staging first.
Accept that partial migration can be a valid end state. Some services may not justify consolidation costs. Hybrid architecture can be stable if boundaries are clear and intentional.
Each time you reroute functionality, have a rollback plan that can switch traffic back to the legacy implementation quickly.
Conduct a post-abort retrospective if you abandon the migration. Document technical challenges, analyse costs versus estimates, and preserve learnings for future attempts.
How Do You Maintain Team Morale During Architectural Reversal?
Architecture reversals can damage team morale if handled poorly. Engineers might feel like they failed or wasted time building the microservices architecture. Don’t let it get to that point.
Frame consolidation as architectural maturity and learning-driven evolution. Technology decisions are context-dependent. Changing context justifies different approaches. This is normal.
The microservices decision was appropriate for the previous context. Maybe you expected team growth that didn’t happen. Maybe you anticipated scalability needs that turned out differently. All architecture decisions involve trade-offs that shift over time.
The best teams are those where everyone has a voice and decisions are made collaboratively. When people feel involved in the process, they’re more invested in success.
Celebrate consolidation as an engineering achievement. Successfully executing the strangler fig pattern requires sophisticated engineering and demonstrates production maturity.
Emphasise skill development opportunities. Your team is learning migration patterns, data consolidation strategies, and risk mitigation expertise applicable to many contexts. These are valuable skills.
Include engineers in assessment and planning phases. Solicit input on migration order and create working groups for specific challenges like data migration or testing strategy.
Avoid the “we were wrong” narrative. Position microservices as appropriate for the previous context. Changed circumstances justify different architecture.
Manage imposter syndrome by sharing case studies of successful companies consolidating. Celebrate learning and adaptation as core engineering skills.
Communication practices matter. Provide regular progress updates celebrating milestones and recognise individual contributions. Run a post-migration retrospective highlighting learnings.
The structure of your team shapes the architecture you build – Conway’s law in action. Small teams make complex architecture challenging. Over-communicate to avoid wasted work.
Post-Migration Optimisation and Consolidation Benefits
After completing the migration, you have optimisation work to do. Remove scaffolding, refine boundaries, and measure benefits.
Remove migration scaffolding. Decommission the API gateway, feature flags, and shadow write logic.
Refine module boundaries based on migration experience. Evaluate coupling patterns and identify opportunities for further consolidation or separation. For guidance on implementing boundaries in consolidated code, see our comprehensive implementation guide.
Optimise performance by replacing network calls with in-process function calls. In-memory monolith calls take nanoseconds while microservice network calls take milliseconds, representing a 1,000,000x difference. When a request spanned five microservices, you were burning 50-100ms on network overhead alone before any actual work happened.
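The back-of-envelope arithmetic is simple enough to encode; the per-hop cost is an assumed input consistent with the 50-100ms figure above:

```python
def network_overhead_ms(hops: int, per_hop_ms: float = 15.0) -> float:
    """Estimate request overhead from inter-service hops alone.

    per_hop_ms bundles network latency plus serialisation/deserialisation;
    10-20ms per hop is an assumption, not a measurement of any system.
    """
    return hops * per_hop_ms
```

A request spanning five services at 10-20ms per hop lands at 50-100ms of pure overhead; the same chain as in-process calls makes this term effectively zero.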
Eliminate network latency between formerly separate services and optimise database queries across former service boundaries.
Consolidate infrastructure. Combine databases to reduce licensing costs. Eliminate service discovery infrastructure. Simplify or remove Kubernetes complexity.
Simplify CI/CD pipelines. You now have a single deployment pipeline instead of coordinating multiple services. Testing is simpler and rollback is easier.
Operational complexity reduction shows up in fewer on-call alerts, simpler debugging, and easier developer onboarding.
Measure benefits against original projections. Compare infrastructure costs, deployment frequency, on-call incidents, and performance metrics before and after.
One consolidation resulted in deployment time decreasing 86% from 45 minutes to 6 minutes.
Prevent re-fragmentation by establishing architectural decision records and guidelines for when new services are justified. Maintain modular structure to avoid becoming a big ball of mud.
This migration playbook provides the step-by-step process for executing consolidation, but it’s part of a larger architectural consolidation trend reshaping modern software development. The strangler fig pattern enables you to make this transition safely, incrementally, and with continuous validation at every step.
FAQ Section
How long does microservices to monolith migration typically take?
Timeline depends on number of services, complexity, and team size. Industry case studies show migrations taking 2-4 weeks per service with a small team. Plan for additional time for data consolidation and infrastructure changes.
Can you migrate some services and leave others as microservices?
Yes, partial migration is a valid end state. Some services may justify remaining separate: external-facing APIs, truly independent business domains, or different scaling requirements. Hybrid architectures are common and acceptable if boundaries are clear and intentional.
What if the team is too small to manage migration while maintaining features?
Consider dedicated migration time where feature development pauses, or allocate specific team members to migration while others maintain the current system. The strangler fig pattern allows spreading migration over an extended timeline if needed. Some teams migrate one service per quarter to balance ongoing work.
How do you handle third-party integrations during migration?
Maintain existing integrations by keeping API contracts identical initially. Once consolidated, you can optimise integration architecture. Use the adapter pattern to isolate third-party dependencies, making them easier to migrate or swap later.
What monitoring tools work best during strangler fig migration?
Use distributed tracing like Jaeger or Zipkin to track requests across microservices and monolith. Centralised logging with ELK stack or Splunk helps debugging. APM tools like DataDog or New Relic enable performance comparison. Custom dashboards comparing metrics between old and new implementations provide visibility.
How do you migrate authentication and authorisation during consolidation?
Centralise authentication early in migration to avoid reimplementing for each service consolidation. Use shared JWT validation or a centralised auth service that both microservices and monolith can leverage. Migrate authorisation logic when migrating individual services, maintaining identical policies initially.
Should you change programming languages during migration?
Only if language change provides significant value and the team has expertise. InfluxDB migrated from Go to Rust for type safety benefits, but this added complexity. Most teams should maintain the existing language to focus migration complexity on architecture, not implementation language.
How do you measure migration success beyond just “it works”?
Define success metrics before migration: infrastructure cost reduction percentage, deployment frequency increase, mean time to recovery reduction, developer satisfaction scores, onboarding time for new engineers, p95 latency improvements, error rate reductions, on-call incident decreases.
What’s the difference between monolith and modular monolith for migration?
Modular monolith maintains logical separation between former services through module boundaries, dependency rules, and hexagonal architecture, while sharing deployment and database. Traditional monolith lacks these boundaries and can become unmaintainable. Always target modular monolith to preserve migration investment.
How do you handle rollback if data consolidation has already happened?
Maintain database backups and transaction logs for point-in-time recovery. Use blue-green database strategy keeping old databases available during early migration phases. Implement shadow writes that can be reversed. Validate data extensively before decommissioning old databases. Plan for longer rollback windows for database changes.
Can microservices team structure continue after consolidation?
Team structure should evolve to match architecture. Consolidated codebase works better with feature teams or component teams rather than service teams. However, maintain module ownership to preserve accountability and expertise. Teams can own multiple modules instead of individual services.
What compliance considerations exist for migration?
Audit trail requirements may mandate maintaining separation for regulatory domains. Data residency rules might require keeping certain databases separate. Change management policies may require additional approval gates for migrations. Security reviews are needed for authentication and authorisation changes. Backup and recovery procedures must meet compliance requirements throughout migration.