You’ve just stepped into the CTO role. Suddenly every technical choice has business consequences you need to justify to a CFO who speaks a different language. And the decisions that seem obvious? They’re the ones that create the most costly mistakes.
The problem isn’t making decisions. It’s knowing which ones deserve three weeks of analysis versus three hours. It’s explaining to your board why the “expensive” option is actually cheaper over three years. It’s building consensus without drowning in endless meetings.
Here’s what works: proven frameworks from Amazon, NASA, and CTOs who’ve been through it. Classification systems that separate reversible experiments from irreversible commitments. Evaluation matrices that make trade-offs transparent. Stakeholder tools that prevent paralysis. Communication templates that translate technical costs to CFO language.
You’ll learn how to classify decisions by reversibility, allocate your analysis effort appropriately, identify when costs invert over time, build stakeholder consensus in weeks not months, and document rationale that builds credibility.
This article covers eight practical frameworks with templates and step-by-step processes. By the end, you’ll have clear processes for evaluating technical decisions, including the ones whose right answer looks counterintuitive at first.
How do you classify technical decisions as one-way doors vs two-way doors?
Amazon classifies decisions as one-way doors and two-way doors. One-way doors are the big calls with consequences you can’t easily undo. Two-way doors? You can walk back through them if things don’t work out.
The difference comes down to what it costs to reverse a decision. Add up the team retraining costs, the code migration effort, vendor switching penalties, and the opportunity cost of getting distracted from your strategic priorities.
If reversal cost exceeds 3-6 months of team capacity or creates business disruption, treat it as a one-way door.
Database choice, programming language, and cloud provider are one-way doors: disruptive and costly to change. UI framework version, monitoring tool, and deployment schedule are two-way doors you can revisit cheaply.
The classification affects your process. One-way doors demand broad stakeholder involvement, extensive analysis, and formal documentation. Two-way doors need minimal consultation and quick commitment. When you have enough evidence to believe the change could benefit customers, you simply walk through. Waiting until you have 90% of the data means you’re likely moving too slowly.
Here’s the counterintuitive bit: some expensive choices are actually two-way doors if containerisation reduces switching costs. Some cheap choices lock you in through accumulated dependencies. The scale of the decision and its potential impact matter more than the initial price tag.
Decision Reversal Cost Calculation Template:
Calculate costs across five categories:
- Team retraining – learning curve productivity loss
- Code migration – rewrite and refactor effort
- Vendor switching – contract penalties, data migration, integration rebuild
- Opportunity cost – features not shipped during reversal
- Risk buffer – contingency for complications
Sum these categories. If the total exceeds 3-6 months of team capacity, you’re looking at a one-way door.
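As a quick sanity check, the template reduces to a few lines of arithmetic. A minimal sketch in Python, with hypothetical estimates expressed in person-months of team capacity:

```python
# Hypothetical reversal-cost estimate, expressed in person-months of team capacity.
reversal_cost = {
    "team_retraining": 2.0,    # learning-curve productivity loss
    "code_migration": 4.5,     # rewrite and refactor effort
    "vendor_switching": 1.0,   # contract penalties, data migration, integration rebuild
    "opportunity_cost": 3.0,   # features not shipped during the reversal
    "risk_buffer": 1.5,        # contingency for complications
}

total_months = sum(reversal_cost.values())
ONE_WAY_THRESHOLD = 3  # lower bound of the 3-6 month guideline

door = "one-way door" if total_months >= ONE_WAY_THRESHOLD else "two-way door"
print(f"Estimated reversal cost: {total_months} person-months, treat as a {door}")
```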
Examples:
- Database migration: One-way door with 12-18 month reversal cost (schema changes, query rewrites, data migration, team retraining)
- Logging tool switch: Two-way door with 2-week reversal cost (configuration changes, minimal code impact)
Ask yourself: How difficult would it be to reverse this decision? What steps would undo it? How much time would we lose? Can we test on a smaller scale first? These practical decision frameworks help you classify correctly.
What is a RACI matrix and how does it prevent decision paralysis?
RACI is a framework that clarifies who does what. Responsible means you do the work. Accountable means you own the decision. Consulted means you provide input. Informed means you receive notification. Simple.
It prevents two common problems: trying to get consensus from your entire organisation and decisions that get abandoned because no-one owns them.
The framework prevents decision paralysis by limiting who gets Consulted. For one-way doors, that’s 5-10 people maximum. For two-way doors, 0-3 people. More than 10 Consulted creates consensus paralysis where everyone has veto power and nothing moves.
Here’s how it works for different door types. One-way doors need broad consultation with a clear Accountable owner. That means essential subject matter experts, affected teams, and governance functions like security and FinOps. Two-way doors often just need the doer plus someone informed.
The mistakes are predictable. Making everyone Consulted creates veto culture. Multiple Accountable parties diffuse responsibility. Forgetting Informed stakeholders creates surprise and resistance when you announce the decision.
RACI Template for Cloud Migration Decision:
- Responsible: Platform Engineering Team (executes migration)
- Accountable: CTO (owns final decision)
- Consulted: Engineering Leads (5 people), Product VP, Security Lead, FinOps Manager
- Informed: CEO, CFO, All Engineering, Customer Success
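If you keep RACI assignments in a script or spreadsheet export, the guardrails described above (exactly one Accountable owner, a bounded Consulted list) are easy to enforce automatically. A sketch, with the stakeholder names purely illustrative:

```python
# Hypothetical RACI record for the cloud migration decision above.
raci = {
    "responsible": ["Platform Engineering Team"],
    "accountable": ["CTO"],
    "consulted": ["Eng Lead 1", "Eng Lead 2", "Eng Lead 3", "Eng Lead 4",
                  "Eng Lead 5", "Product VP", "Security Lead", "FinOps Manager"],
    "informed": ["CEO", "CFO", "All Engineering", "Customer Success"],
}

def check_raci(raci: dict, one_way_door: bool) -> list[str]:
    """Flag the failure modes described above: diffuse ownership and consensus paralysis."""
    warnings = []
    if len(raci["accountable"]) != 1:
        warnings.append("Exactly one Accountable owner is required.")
    limit = 10 if one_way_door else 3
    if len(raci["consulted"]) > limit:
        warnings.append(f"{len(raci['consulted'])} Consulted exceeds the limit of {limit}.")
    if not raci["informed"]:
        warnings.append("No Informed stakeholders: expect surprise when the decision lands.")
    return warnings

print(check_raci(raci, one_way_door=True) or "RACI looks sound.")
```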
For architecture decisions, you’re typically Accountable. You consult engineering leads, product, security, and FinOps. The CEO and CFO get informed, not consulted. That distinction matters. Engaging the right stakeholders from the beginning minimises the delays that late objections create.
Many stakeholders should be Informed rather than Consulted. They receive decision communication but don’t provide input that might delay the decision. This is the key to moving fast while maintaining alignment.
Before/After Example:
Before RACI: Microservices adoption decision takes 3 months with 15 stakeholders in every meeting, endless debate, no clear owner, decision abandoned twice.
After RACI: Same decision takes 3 weeks with CTO Accountable, 7 people Consulted via async written proposals, 30 people Informed at announcement.
Integrate RACI with your one-way door process as step two: stakeholder mapping. Once you’ve classified the decision as one-way or two-way, build your RACI matrix to determine consultation breadth.
How do Architecture Decision Records (ADRs) create accountability and enable future reviews?
An ADR captures an architecture decision along with its context and consequences. It prevents you from relitigating past decisions and lets you make informed reviews when circumstances change.
The standard format has seven sections. Title uses the pattern “ADR-001: Use PostgreSQL for Primary Datastore”. Status shows Proposed, Accepted, Deprecated, or Superseded. Context explains what prompted the decision. Decision states what you chose. Consequences detail the trade-offs you accepted. Alternatives Considered shows the options you rejected and why. Review Triggers define when you’ll revisit.
Write ADRs for all one-way door decisions. High-impact two-way doors benefit from brief ADRs covering just Context, Decision, and Review Trigger. Low-impact two-way doors need only a decision log entry.
Store them in GitHub repositories as markdown files in /docs/adr/, in Confluence spaces, or Notion databases. Version control lets you track how decisions evolved over time.
The accountability mechanism works like this. The ADR author owns the decision outcome. Future people in the role can understand reversal costs by reading the ADR’s context and consequences. When the team accepts an ADR, it becomes immutable. If new insights require a different decision, you propose a new ADR that supersedes the previous one.
Complete ADR Template Example: Migrate from Monolith to Microservices
Title: ADR-003: Adopt Microservices Architecture for Core Platform
Status: Accepted
Date: 2024-11-15
Context: Monolithic application has grown to 400k lines of code. Deployment cycles take 6 weeks due to coordination across 12 teams. Single database limits scaling. Customer-facing features are blocked by backend changes. Three production incidents in last quarter traced to unintended side effects of changes.
Decision: Migrate core platform to microservices architecture over 18 months. Start with payment and notification services. Maintain modular monolith pattern for tightly coupled business logic.
Consequences Accepted: Increased operational complexity requiring investment in observability, service mesh, and DevOps capabilities. Initial development velocity will decrease 20-30% during migration. Team retraining costs estimated at $180k. Infrastructure costs increase $40k annually.
Benefits Expected: Independent deployment cadence per service. Team autonomy increases. Database scaling becomes possible. Fault isolation improves. New feature velocity expected to increase 40% post-migration.
Alternatives Considered:
- Modular Monolith Only: Lower operational complexity but doesn’t solve deployment or scaling issues. Rejected because deployment bottleneck is primary pain point.
- Serverless Architecture: Reduces operational burden but vendor lock-in concerns and team lacks expertise. Deferred for future consideration.
Accountable: CTO
Consulted: 3 Engineering Leads, Platform Lead, Security Lead, FinOps Manager
Review Triggers: Infrastructure costs exceed $80k annual increase, velocity doesn’t improve by 20% within 24 months, operational incidents increase 50% during migration
Process for Writing ADRs:
- Draft Context and Decision sections
- Circulate to Consulted stakeholders for feedback
- Incorporate feedback and document dissenting views
- Mark as Accepted once Accountable owner commits
- Communicate to Informed stakeholders
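If your ADRs live in the repository under /docs/adr/ as described earlier, a small scaffold script keeps numbering and section structure consistent. A sketch, where the filename convention is an assumption rather than a standard:

```python
from datetime import date
from pathlib import Path

ADR_TEMPLATE = """\
# ADR-{number:03d}: {title}

Status: Proposed
Date: {today}

## Context
## Decision
## Consequences
## Alternatives Considered
## Review Triggers
"""

def new_adr(title: str, adr_dir: str = "docs/adr") -> Path:
    """Create the next numbered ADR file with the standard section skeleton."""
    directory = Path(adr_dir)
    directory.mkdir(parents=True, exist_ok=True)
    number = len(list(directory.glob("adr-*.md"))) + 1
    slug = title.lower().replace(" ", "-")
    path = directory / f"adr-{number:03d}-{slug}.md"
    path.write_text(ADR_TEMPLATE.format(number=number, title=title, today=date.today()))
    return path

# Example: new_adr("Use PostgreSQL for Primary Datastore")
```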
Common review triggers include: technology officially deprecated, cost assumptions violated by more than 30%, team capabilities changed significantly, business requirements shifted.
ADRs serve as communication tools letting teams and stakeholders grasp the reasoning behind decisions. They help you trace why decisions were made when initial drivers evolve. Keep them concise at one or two pages, readable within 5 minutes.
How does time horizon analysis reveal counterintuitive trade-offs?
ADRs document your decisions, but choosing the right alternative requires evaluating costs across time. That’s where time horizon analysis becomes essential.
Time horizon analysis evaluates decisions across multiple periods to identify when cost and benefit rankings flip over time. The cheap option creates hidden operational costs that compound. The expensive upfront investment delivers long-term savings. The fast implementation slows velocity by year two.
Match your evaluation horizon to expected technology lifespan. Infrastructure decisions need 3-5 years. Framework choices need 2-3 years. Library decisions need 1-2 years. Process and tooling decisions need 6-12 months.
Model four cost categories in each time period: direct costs like licensing and infrastructure, operational costs like maintenance and support, team costs like training and productivity, and opportunity costs for features not built.
Managed vs Self-Hosted Database Example:
Managed database costs $70k per year, constant across all years. Self-hosted costs $150k in Year 1 for setup, infrastructure, and team training, then $30k per year in Years 2-5 for maintenance.
Annual costs cross over in Year 2, once the setup work is behind you. Cumulative break-even arrives in Year 3, and by Year 5 you’re $80k ahead with self-hosted despite the higher upfront cost.
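The cumulative arithmetic behind that crossover is worth sanity-checking explicitly. A minimal sketch using the figures above:

```python
# Cumulative cost comparison for the managed vs self-hosted example above.
managed_per_year = 70_000
self_hosted_by_year = [150_000, 30_000, 30_000, 30_000, 30_000]  # Year 1 setup, then maintenance

managed_cum, self_cum, break_even_year = 0, 0, None
for year, self_cost in enumerate(self_hosted_by_year, start=1):
    managed_cum += managed_per_year
    self_cum += self_cost
    if break_even_year is None and self_cum <= managed_cum:
        break_even_year = year
    print(f"Year {year}: managed ${managed_cum:,} vs self-hosted ${self_cum:,}")

print(f"Cumulative break-even: Year {break_even_year}; "
      f"five-year advantage for self-hosted: ${managed_cum - self_cum:,}")
```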
The counterintuitive part? Most teams evaluate infrastructure decisions on 12-month horizons and choose managed because it looks cheaper. They’re optimising for the wrong timeframe.
Cloud Migration Economics:
Year 1 costs are high due to migration effort and parallel running systems. Year 2 costs are medium during optimisation and rightsizing. Years 3-5 costs are low once the environment reaches an optimised steady state.
Break-even typically happens in Year 2-3. Full ROI becomes visible by Year 5. Shorter horizons make migration look incorrectly expensive, which is why CFOs often reject these initiatives when you present 12-month numbers.
Technical debt now represents a measurable liability on engineering balance sheets. Modern ROI models account for the interest that accumulates on unaddressed code quality issues. The financial impact extends beyond developer productivity to customer satisfaction and market responsiveness.
Time Horizon Template Structure:
Create a spreadsheet with rows for cost categories and columns for time periods. Cells contain projected costs.
- Rows: Infrastructure, Licenses, Team Training, Ongoing Maintenance, Opportunity Cost
- Columns: Year 1, Year 2, Year 3, Year 4, Year 5
- Calculate totals for each alternative across all years
Visualise with a stacked area chart showing total cost evolution for each alternative. The chart makes crossover points obvious to executives who need to approve the investment.
This is where time horizon analysis provides real value to CFO communication. The graphs show exactly when investments pay back, making counterintuitive expensive choices defensible through demonstrated long-term ROI.
How do you translate technical debt and architecture decisions to CFO language?
Technical concepts carry no obvious business value when expressed in engineering terminology. CFOs need costs mapped to recognised financial categories like risk reduction, revenue enablement, and cost avoidance.
Lead with business outcome, support with technical rationale. Show a comparison matrix of alternatives with costs and benefits. Include sensitivity analysis on key assumptions.
ROI Communication Template:
- Current State Cost: Quantify existing pain in business terms
- Proposed Investment: Itemise implementation costs with timeline
- Expected Benefits: Map technical improvements to business outcomes
- Time Horizon: Show payback periods across multiple years
- Risk Factors: What could delay benefits or increase costs
- Alternative Comparison: Cost of NOT investing
Velocity Quantification Example:
Technical debt reduced velocity from 40 to 28 story points per sprint. That’s a 30% tax on development capacity. With 8 developers at $120k loaded cost, total capacity is $960k a year, so the tax costs $288k annually. The authentication system alone accounts for roughly $86k of that yearly loss.
Refactoring the authentication system costs $180k upfront. It recovers $86k annually in developer capacity. Payback happens in 2.1 years. Over five years, you’re $250k ahead.
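A few lines make the payback arithmetic explicit. The sketch below uses the example’s figures and assumes the authentication system accounts for $86k of the annual velocity tax:

```python
# Velocity-tax arithmetic from the example above (figures are illustrative).
developers = 8
loaded_cost = 120_000
velocity_tax = (40 - 28) / 40                # 30% of sprint capacity lost to debt

annual_capacity = developers * loaded_cost   # $960k of engineering capacity per year
annual_tax = annual_capacity * velocity_tax  # $288k lost across all technical debt
auth_share = 86_000                          # portion attributed to the auth system (assumption)

refactor_cost = 180_000
payback_years = refactor_cost / auth_share
five_year_net = auth_share * 5 - refactor_cost

print(f"Annual velocity tax: ${annual_tax:,.0f}")
print(f"Payback: {payback_years:.1f} years; five-year net benefit: ${five_year_net:,.0f}")
```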
Rather than discussing abstract concepts like “refactoring the codebase”, communicate tangible benefits: “This update will reduce downtime and allow us to ship features 20% faster”.
Use this vocabulary mapping to translate technical concepts to business terms:
| Technical Term | CFO Translation |
|----------------|-----------------|
| Technical Debt | Maintenance liability reducing velocity by X% |
| Refactoring | Risk reduction investment preventing $Y failures |
| Migration | Platform modernisation enabling $Z revenue |
| Performance Optimisation | Customer experience investment reducing churn by N% |
| Automated Testing | Quality assurance reducing defect costs by X% |
| CI/CD Pipeline | Deployment efficiency reducing time-to-market by Y days |
| Observability | Operational risk mitigation reducing downtime cost by $Z |
| Security Hardening | Compliance investment avoiding regulatory penalties |
Common mistakes undermine these presentations. Starting with technical details before business context loses executives immediately. Using jargon without translation creates confusion. Omitting alternative comparison makes your proposal look like the only option. Failing to quantify velocity and reliability improvements leaves benefits vague.
Focus on showcasing return on investment through concrete examples. Highlight how fixing an issue reduced bug reports by 50% or how optimising database queries cut server costs.
The presentation structure matters. Executive Leadership needs an executive dashboard with business impact metrics showing financial outcomes, strategic alignment, and key milestones. Finance Department needs TCO analysis with sensitivity modelling showing cost structure breakdown, benefit timing, and risk-adjusted projections.
What is a weighted decision matrix and when should you use it?
A weighted decision matrix gives you a systematic way to compare technical alternatives using explicit criteria, weights, and scoring. It reduces bias and makes trade-offs transparent to stakeholders.
Use it for one-way door decisions with three or more viable alternatives. Use it when decisions involve conflicting priorities like cost versus risk versus strategic fit. Use it when you need stakeholder alignment on trade-off priorities.
Construction Process:
- Define evaluation criteria: cost, risk, reversibility, team capability, strategic fit, time-to-implement
- Assign criterion weights summing to 100% based on organisational priorities
- Score each alternative 1-5 on each criterion
- Calculate weighted scores by multiplying weight times score
- Sum weighted scores for overall ranking
- Sensitivity test by adjusting weights to see if ranking changes
Build vs Buy vs Open Source Example:
Criteria and weights:
- Cost (20%): Direct financial outlay
- Customisability (15%): Ability to tailor to specific needs
- Time-to-Market (25%): Speed of deployment
- Team Expertise (15%): Current capabilities match
- Vendor Risk (15%): Dependency and lock-in concerns
- Long-term Flexibility (10%): Ability to adapt over time
Score each alternative 1-5 on each criterion. Multiply scores by weights. Sum for total.
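The mechanics, including a quick sensitivity test, fit in a short script. The scores below are hypothetical placeholders; substitute the values your stakeholders agree on:

```python
# Weighted scoring for the build-vs-buy-vs-open-source example; scores are hypothetical.
weights = {"cost": 0.20, "customisability": 0.15, "time_to_market": 0.25,
           "team_expertise": 0.15, "vendor_risk": 0.15, "flexibility": 0.10}

scores = {  # 1 (poor) to 5 (excellent) on each criterion
    "build":       {"cost": 2, "customisability": 5, "time_to_market": 2,
                    "team_expertise": 3, "vendor_risk": 5, "flexibility": 5},
    "buy":         {"cost": 3, "customisability": 2, "time_to_market": 5,
                    "team_expertise": 4, "vendor_risk": 2, "flexibility": 2},
    "open_source": {"cost": 4, "customisability": 4, "time_to_market": 3,
                    "team_expertise": 3, "vendor_risk": 4, "flexibility": 4},
}

def rank(weights, scores):
    """Return alternatives sorted by weighted total score, highest first."""
    totals = {alt: round(sum(weights[c] * s for c, s in crit.items()), 2)
              for alt, crit in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print("Baseline ranking:", rank(weights, scores))

# Sensitivity test: shift weight from time-to-market to team expertise and re-rank.
adjusted = {**weights, "time_to_market": 0.15, "team_expertise": 0.25}
print("Adjusted ranking:", rank(adjusted, scores))
```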
The matrix shows explicitly why the option with higher cost scores higher overall when accounting for strategic fit and lower risk. It enables informed debate about weight assignments rather than endless option comparisons.
Trade-off analysis requires examining all the forces acting on your decision, because there are no silver bullets in software architecture. The out-of-context problem is common when analysing trade-offs: a list of 20 pros and only 2 cons looks compelling until you realise it was compiled without your context, and those 2 cons are the ones that bite in your specific situation.
Sensitivity Analysis Example:
If you increase the Team Capability weight from 15% to 25%, the top-ranked option flips from Microservices (score 82) to Modular Monolith (score 87). This reveals that your ranking is sensitive to the team capability assessment. It prompts the question: how confident are we in our team’s microservices expertise?
The counterintuitive revelations surface here. The obvious choice often scores poorly once all criteria are weighted appropriately. The cheap option frequently loses on total score when you include risk, reversibility, and opportunity cost.
Integrate with RACI by using stakeholder consultation to determine criterion weights and validate scoring. The Consulted stakeholders debate and agree on weights upfront, then score alternatives. This creates buy-in because they shaped the decision criteria.
What is the risk assessment matrix for technical decision-making?
The risk matrix plots Impact (the scope and consequences of the decision) against Reversibility (how cheaply the decision can be undone). It classifies decision types and determines appropriate process rigour.
Four Quadrants:
High Impact + Low Reversibility = Deliberate: This is a one-way door. Use slow decision-making with broad stakeholder involvement, extensive analysis, and formal documentation.
High Impact + High Reversibility = Experiment: Make fast decisions with clear success metrics. Iterate quickly based on results.
Low Impact + Low Reversibility = Defer or Set Constraints: These are low priority. Establish guardrails and delegate to the team.
Low Impact + High Reversibility = Just Decide: Delegate to the team and move on immediately.
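The quadrant logic is simple enough to encode directly, which is handy for decision-log tooling. A minimal sketch:

```python
# Map impact and reversibility onto the four quadrants described above.
def classify(high_impact: bool, high_reversibility: bool) -> str:
    if high_impact and not high_reversibility:
        return "Deliberate: one-way door, broad consultation, full analysis, ADR"
    if high_impact and high_reversibility:
        return "Experiment: decide fast with clear success metrics, iterate"
    if not high_impact and not high_reversibility:
        return "Defer or set constraints: establish guardrails, delegate"
    return "Just decide: delegate to the team and move on"

print(classify(high_impact=True, high_reversibility=False))   # e.g. cloud provider choice
print(classify(high_impact=False, high_reversibility=True))   # e.g. code formatting tool
```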
Application Process:
- Estimate decision impact on business, customers, and team
- Calculate decision reversal cost using the five-category template
- Plot on the matrix
- Apply corresponding decision process from the quadrant
- Document classification in ADR or decision log
Common Examples:
- Cloud provider choice: Deliberate quadrant (high impact on infrastructure, hard to reverse due to integrations and team knowledge)
- UI component library: Experiment quadrant (high visibility to users, easy to swap with proper abstraction)
- Code formatting tool: Just Decide quadrant (low impact on outcomes, trivial to change)
- Observability vendor: Experiment quadrant when integrated behind an abstraction layer (high impact on operations, moderate reversibility)
You eliminate over-analysis of trivial decisions in the Just Decide quadrant. You prevent under-analysis of commitments in the Deliberate quadrant. This approach typically saves 10-15 hours per week by triaging correctly.
The risk matrix determines if you need RACI, if ADR is required, if time horizon analysis is warranted, and if CFO presentation is necessary. It’s the meta-framework that tells you which other frameworks to apply.
The relationship to Amazon’s framework is straightforward. Deliberate quadrant equals one-way doors. Experiment and Just Decide quadrants equal two-way doors. The matrix adds nuance by separating high-impact experiments from low-impact quick decisions.
How do you build consensus on technical decisions without endless debate?
The frameworks above provide structure for evaluating decisions. But even the best analysis fails without stakeholder buy-in.
One-way doors require stakeholder input to avoid costly mistakes. But broad consultation creates consensus paralysis and analysis loops lasting months.
Bounded Consultation Framework:
- Use RACI to limit Consulted group to essential voices (5-10 maximum)
- Set explicit decision timeline (2-4 week consultation period for one-way doors)
- Define decision criteria upfront using weighted matrix with stakeholder-agreed weights
- Use written proposals like ADR drafts enabling async feedback
- Single Accountable owner makes final call after consultation
- Communicate decision rationale to Informed stakeholders
The facilitation techniques make trade-offs explicit. Weighted decision matrix lets stakeholders debate weights rather than endlessly comparing options. Time horizon analysis shows how economics evolve, building shared understanding. ADRs capture dissenting opinions and explain why they were overruled, so stakeholders feel heard even when not followed.
When stakeholders deadlock, the Accountable owner decides using agreed weights and criteria. Document minority views in the ADR. Establish review triggers to revisit if assumptions prove wrong.
This is where Amazon’s “disagree and commit” principle becomes powerful. If you have conviction on a particular direction even without consensus, ask stakeholders: “Will you gamble with me on it? Disagree and commit?” By the time you reach this point, no one can know the answer for sure, and you’ll probably get a quick yes.
The principle works both ways. If you’re the boss, you should disagree and commit too. The value is in getting commitment rather than conviction. The principle requires genuine disagreement of opinion with commitment to execute, not dismissive thinking where you believe others are wrong but avoid the confrontation.
Timeline Expectations:
- Two-way doors: 1-3 days to decide
- One-way doors with consensus building: 2-4 weeks
- Beyond 4 weeks signals a RACI problem or poor process design
Red Flags for Consensus Failure:
- Same meeting recurring for 4+ weeks
- RACI with 15+ Consulted stakeholders
- No single Accountable owner identified
- Debate about same alternatives without new information
Sometimes teams have different objectives and fundamentally different views. They are not aligned. No amount of discussion, no number of meetings will resolve that deep misalignment. Without escalation, the default dispute resolution mechanism for this scenario is exhaustion. Whoever has more stamina carries the decision. That’s a failure mode to avoid.
What are the most common decision-making mistakes and how do you avoid them?
Mistake 1 – Misclassifying Decision Reversibility:
Treating two-way doors as one-way creates over-analysis paralysis. Treating one-way doors as two-way creates costly reversals.
Avoidance: Calculate decision reversal cost explicitly using the five-category template. Plot on risk matrix to classify objectively.
Mistake 2 – Optimising for Wrong Time Horizon:
Choosing based on 6-month costs when the decision has a 3-year lifespan. Or evaluating a 6-month decision on 5-year economics.
Avoidance: Match evaluation period to technology expected lifespan. Model costs at multiple horizons to see crossover points.
Mistake 3 – Consensus Paralysis:
Consulting too many stakeholders, giving everyone veto power, lacking single decision owner.
Avoidance: Create RACI matrix limiting Consulted to essential voices. Designate one Accountable owner. Set decision deadline upfront.
Mistake 4 – Undocumented Rationale:
Making decisions without ADRs, unable to explain later why you chose this path, team relitigates decision repeatedly.
Avoidance: Require ADR for all one-way doors. Include alternatives considered and trade-offs accepted.
Mistake 5 – Hidden Cost Blindness:
Choosing cheap option without modelling operational costs, team productivity impacts, and scaling economics.
Avoidance: Use weighted decision matrix including operational cost, team capability, and long-term flexibility criteria.
Mistake 6 – CFO Communication Failure:
Presenting technical rationale without business translation. CFO denies budget for valuable investment.
Avoidance: Lead with business outcome. Quantify ROI. Show time horizon payback. Use CFO vocabulary from the translation table.
Mistake 7 – Ignoring Team Capability:
Choosing technically superior option the team lacks skills to operate.
Avoidance: Include team expertise and operational maturity as weighted criteria in decision matrix.
Mistake 8 – No Review Triggers:
Never revisiting decisions as context changes, riding bad decisions for years.
Avoidance: Define review triggers in ADR including cost assumption violations, technology deprecation, and requirement changes.
Mistake Diagnosis Checklist:
- Spending 3+ weeks on a decision with $5k reversal cost? Misclassified as one-way door.
- Chose option based on Year 1 costs for a 5-year decision? Wrong time horizon.
- 15 people in decision meetings with no progress? See Mistake 3.
- Team asking “Why did we choose this again?” six months later? Undocumented rationale.
- Surprised by operational costs in Year 2? Hidden cost blindness.
- CFO rejected your architecture proposal? Communication failure.
- Struggling to operate the chosen technology? Ignored team capability.
- Stuck with deprecated technology? No review triggers.
FAQ Section
What’s the difference between a decision matrix and a risk matrix?
A decision matrix (weighted matrix) compares multiple alternatives across multiple criteria to choose the best option. A risk matrix classifies a single decision’s impact and reversibility to determine what process to use.
Use the risk matrix first to determine if the decision warrants detailed weighted matrix analysis. If you land in the Deliberate quadrant (one-way door), then build a weighted decision matrix to compare alternatives.
How long should technical decision processes take for different decision types?
Two-way door decisions should take 1-3 days with minimal stakeholder consultation and quick commitment.
One-way door decisions should take 2-4 weeks including RACI stakeholder involvement, weighted matrix analysis, ADR documentation, and consensus building.
Beyond 4 weeks signals a process problem. You have too many consulted stakeholders, analysis paralysis, or no single accountable owner.
When should you use Amazon’s one-way door framework vs NASA’s decision analysis matrices?
Amazon’s framework is a decision classification system determining how much rigour you need. NASA’s weighted matrices are evaluation tools for executing deliberate one-way door analysis.
Use Amazon’s framework first to classify the decision as one-way or two-way. Then apply NASA-style weighted matrix if you classified it as a one-way door requiring systematic evaluation.
How do you handle stakeholders who disagree with a technical decision?
RACI matrix establishes that Consulted stakeholders provide input but don’t have veto power. The Accountable owner makes the final call using agreed criteria from the weighted matrix.
Document the dissenting view and your rationale in the ADR. Establish a review trigger to revisit if the dissenter’s concerns prove valid.
Apply Amazon’s “disagree and commit” principle. Stakeholders execute the decision even if they disagreed, because RACI made decision authority clear and ADR captured their input.
What should be included in an Architecture Decision Record?
Seven required sections:
- Title using “ADR-NNN: Descriptive Name” format
- Status showing Proposed, Accepted, Deprecated, or Superseded
- Context explaining circumstances prompting the decision
- Decision stating what exactly was chosen
- Consequences detailing trade-offs accepted
- Alternatives Considered showing options rejected and why
- Review Triggers defining conditions for revisiting
Include RACI stakeholders (Accountable and Consulted) and decision date.
How do you calculate decision reversal cost for architecture decisions?
Use the five-category template: team retraining, code migration, vendor switching, opportunity cost, and risk buffer. See the Decision Reversal Cost Calculation Template in the one-way/two-way doors section for details. If the total exceeds 3-6 months of team capacity, classify as a one-way door.
How often should you review past architecture decisions?
Define review triggers in the ADR rather than using a fixed schedule.
Common triggers: technology officially deprecated, cost assumptions violated by more than 30%, team capabilities changed significantly, business requirements shifted, security vulnerability discovered in chosen technology.
Typical one-way door decisions get reviewed every 12-18 months unless a trigger fires earlier.
What time horizon should you use for cloud migration ROI analysis?
Cloud migration is an infrastructure decision requiring a 3-5 year evaluation horizon (see time horizon analysis section for framework). Model costs at Year 1 (high due to migration effort and parallel running), Year 2 (medium during optimisation and rightsizing), and Years 3-5 (low at steady state optimised).
Break-even typically happens in Year 2-3. Full ROI becomes visible by Year 5. Shorter horizons make migration look incorrectly expensive.
How do you communicate counterintuitive technical trade-offs to non-technical executives?
Use time horizon analysis graphs showing cost evolution over multiple years. Contrast the obvious cheap option’s rising costs versus the expensive option’s declining costs.
Translate to CFO language using the vocabulary mapping table. Technical debt becomes maintenance liability. Refactoring becomes risk reduction investment.
Show the weighted decision matrix making explicit why the expensive option scores higher on total value when including risk, flexibility, and strategic fit.
Should you document two-way door decisions or only one-way doors?
All one-way doors require full ADR with seven sections.
High-impact two-way doors benefit from lightweight ADR covering just Context, Decision, and Review Trigger.
Low-impact two-way doors need only a decision log entry with one sentence stating what was decided, who decided, and when.
Over-documenting two-way doors creates bureaucracy. Under-documenting one-way doors creates repeated relitigating and inability to estimate reversal costs later.
How many stakeholders should be Consulted in RACI matrices for technical decisions?
One-way doors: 5-10 Consulted stakeholders including essential subject matter experts, affected teams, and governance functions like security and FinOps.
Two-way doors: 0-3 Consulted, often just the doer and immediate team.
More than 10 Consulted creates consensus paralysis. Remember that many stakeholders should be Informed (receive decision communication) rather than Consulted (provide input that might delay decision).
What’s the relationship between decision reversal cost and time horizon analysis?
Reversal costs change over time. A decision easy to reverse in Month 3 may be prohibitively expensive by Year 2 due to accumulated dependencies, team learning curves, and integration complexity.
Time horizon analysis models how reversal costs evolve across periods. This reveals when a flexible choice becomes locked-in and informs review trigger timing in ADRs.
The cheap option with low initial reversal cost often becomes expensive to reverse as dependencies accumulate. The expensive option with high initial reversal cost sometimes becomes easier to reverse if it’s built with abstraction layers.