Business | SaaS | Technology
Jan 1, 2026

AI Model Comparison 2025: DeepSeek vs GPT-4 vs Claude vs Llama for Enterprise Use Cases

AUTHOR

James A. Wondrasek

You’re staring at over ten frontier AI models. They all claim similar performance. But the costs? The deployment options? The geopolitical risks? Totally different.

Benchmark scores like HumanEval 85% or SWE-bench 80.9% sound impressive. But they don’t tell you which model actually fits what you’re trying to do.

This article is part of our strategic framework for choosing between open source and proprietary AI, where we explore the critical decisions facing SMB tech leaders in 2025. Here, we compare seven models across eight dimensions: performance, cost, deployment flexibility, enterprise support, compliance, ecosystem integration, use case fit, and geopolitical risk. We’ll translate those benchmarks into actual business outcomes and give you a practical framework for evaluating both proprietary models (GPT-4, Claude, Gemini) and open-source alternatives (DeepSeek V3, Llama 4, Mistral).

Which AI Model Performs Best for Enterprise Coding Tasks in 2025?

Claude Opus 4.5 leads enterprise coding with an 80.9% SWE-bench score and 54% market share among enterprise developers. That’s the headline.

DeepSeek V3 delivers competitive performance—85% on HumanEval—at $1.50 per million tokens versus $15 for Claude. GPT-4.1 provides balanced coding with strong Azure integration if you’re already in the Microsoft ecosystem.

Here’s what matters: Claude’s SWE-bench score translates to 40% faster debugging and three hours saved per developer per sprint. That’s real time, real money.

The benchmark landscape shows clear tiers. Claude Sonnet 4.5 hits 85% on HumanEval, matching DeepSeek V3. But SWE-bench Verified results—which test on real-world GitHub issues—show more differentiation. Claude leads at 80.9%, DeepSeek at 78%, GPT-4 at 72%.

Beyond the aggregate scores, use case differentiation matters. Claude excels at multi-file refactoring and complex code review. It can sustain autonomous tasks for over 30 hours. DeepSeek handles boilerplate generation efficiently—perfect for those repetitive tasks that eat up developer time. GPT-4.1 integrates seamlessly with Azure DevOps pipelines, so if you’re already using Microsoft, it’s a natural fit.

Anthropic holds 32% of the overall enterprise AI market, with 54% among developers specifically. That’s more than double OpenAI’s 21% overall share. Teams with extensive AI use finished 21% more tasks and created 98% more pull requests per developer.

But there’s a catch. The METR study reveals something interesting: developers using Cursor with Claude were 19% slower on familiar codebases. Experienced users, however, achieved 20% speedup. There’s a learning curve that affects initial productivity.

Cost-performance analysis reveals strategic trade-offs you need to consider. DeepSeek offers 90% of Claude’s capability at 10% of the cost for high-volume boilerplate work. For complex debugging where quality trumps cost, Claude’s premium pricing justifies the faster resolution. Google’s data shows coding tools increase development speed 21% while reducing code review time 40%.

How Do Open-Source and Closed-Source AI Models Compare for Enterprise Use?

Closed-source models—GPT-4, Claude, Gemini—offer superior out-of-box performance, enterprise SLAs, and zero-maintenance deployment. The downside? They create vendor lock-in and ongoing per-token costs.

Open-source alternatives like Llama 4, DeepSeek V3, and Mistral enable on-premise deployment, fine-tuning, and elimination of per-token costs. But you’re looking at $50,000-$200,000 in infrastructure investment and you’ll need ML expertise on staff. That’s not trivial.

The cost crossover point? Around 5 million tokens monthly. Below that, APIs make more sense. Above it, self-hosting starts to pay off.

Hybrid architecture is where smart money goes: use open-source for high-volume predictable tasks and closed-source APIs for complex edge cases. This optimises your cost-performance ratio.

Deployment models differ fundamentally. GPT-4, Claude, and Gemini operate API-only. Llama 4, DeepSeek V3, and Mistral support flexible deployment: cloud APIs, on-premise servers, or edge devices.

TCO breakdown extends well beyond per-token pricing. Closed-source scales linearly with usage but requires zero upfront investment. Open-source demands $50,000-$200,000 infrastructure, one to two ML operations staff, monitoring tools, vector databases, and fine-tuning expenses. These costs add up quickly.

Performance gaps have narrowed considerably. Llama 4 reportedly matches GPT-4o on coding and reasoning benchmarks. DeepSeek V3’s Mixture-of-Experts architecture—671 billion total parameters but only 37 billion activated per query—achieves competitive performance with lower inference costs. We’ll dig into how that works in a later section.

Control and customisation differentiate open-weight models. They enable fine-tuning for industry-specific legal, medical, and financial applications. Closed-source models offer limited customisation beyond prompt engineering.

For regulated industries, geopolitical and compliance considerations often drive the deployment decision. On-premise open-source keeps all your data within your infrastructure. Closed-source requires trusting the provider’s zero-data-retention claims. When you’re dealing with sensitive data, on-premise control often outweighs API convenience.

Here’s a practical approach: deploy DeepSeek or Llama 4 on-premise for high-volume tasks like ticket classification. Reserve Claude or GPT-4 APIs for complex debugging or sensitive content requiring Constitutional AI safety features.

What Are the Key Cost Differences Between GPT-4, Claude, and DeepSeek for Production Use?

Per-token pricing varies dramatically. DeepSeek V3 costs $1.50/$3 per million input/output tokens. Llama 4 via Nebius runs $2-4/M. Claude Sonnet 4.5 is $3/$15. Claude Opus 4.5 jumps to $15/$75. GPT-4 Turbo sits at $10/$30. GPT-5 is estimated at $20-30 per million for input alone.

But the per-token price is just the start. There are hidden costs you need to account for. For a complete breakdown of total cost of ownership including infrastructure, talent, and operational expenses, see our TCO calculator and ROI measurement framework.

Embeddings cost $0.10-0.50 per million tokens. Vector databases run $500-2,000 monthly. Monitoring platforms cost $200-1,000 monthly. Compliance audits? $15,000-50,000 annually.

Let’s look at real-world TCO for 100 million tokens monthly. DeepSeek costs you around $150 in token fees. Claude runs $450-1,500 depending on tier. GPT-4 or GPT-5 sits at $1,000-3,000. Then add $1,000-3,500 in operational overhead across the board.

Volume discounts and enterprise agreements add complexity. Anthropic offers enterprise pricing, though specifics remain confidential. OpenAI provides Azure AI Foundry bundles that reduce per-token costs 20-40% for committed spend. Google bundles Gemini with Workspace at $30 per user monthly—which can be a steal if you’re already paying for Workspace.

Those hidden costs multiply your per-token pricing. Vector databases range from Pinecone’s $200-2,000 monthly to self-hosted Weaviate at $500+ in infrastructure costs. Monitoring platforms like LangSmith charge $200-1,000 monthly. SOC 2 audits cost $15,000-50,000 annually.

Reasoning token surcharges create variable costs you need to account for. OpenAI o1 and Claude extended thinking modes charge 2-5 times output token pricing for internal reasoning tokens. Complex analysis or debugging consumes significantly more tokens than simple translation or summarisation tasks.

Usage scenarios demonstrate the implications. For high-volume customer support at 100M tokens monthly, DeepSeek at $150 provides 10x savings versus Claude Sonnet at $1,500. But for complex reasoning at 10M tokens monthly, Claude Opus’s $750 justifies the premium when faster resolution saves you $5,000-10,000 in engineering hours. That’s the calculation you need to make.

Cost optimisation strategies can reduce your TCO significantly. Model routing directs simple queries to lower-cost tiers based on complexity scoring. Caching frequently requested completions eliminates redundant processing. Using lower-tier models for drafts and premium models for final review reduces premium token consumption by 60-80%.
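
Here’s a rough sketch of what that routing-plus-caching pattern can look like. It’s a minimal illustration, not production code: the model names and prices echo the figures in this article, while the complexity heuristic and the call_model helper are hypothetical placeholders you’d replace with your own logic and SDK calls.

```python
import hashlib

# Illustrative per-million-token input prices taken from this article; check current rate cards.
MODEL_TIERS = [
    {"name": "deepseek-v3", "price": 1.50, "max_complexity": 3},    # boilerplate, classification
    {"name": "claude-sonnet", "price": 3.00, "max_complexity": 7},  # standard coding, analysis
    {"name": "claude-opus", "price": 15.00, "max_complexity": 10},  # complex debugging, reasoning
]

_cache: dict[str, str] = {}  # naive in-memory cache; swap for Redis or similar in production


def call_model(name: str, prompt: str) -> str:
    """Hypothetical stand-in for a real provider call (e.g. via an OpenAI-compatible client)."""
    return f"[{name}] response to: {prompt[:40]}"


def score_complexity(prompt: str) -> int:
    """Toy 1-10 heuristic; a real router might use a small classifier or per-task-type rules."""
    signals = ["debug", "refactor", "architecture", "multi-file", "compliance", "analyse"]
    hits = sum(1 for s in signals if s in prompt.lower())
    return min(10, 2 + 2 * hits + len(prompt) // 2000)


def route(prompt: str) -> str:
    """Serve repeats from cache, send simple work to cheap tiers, escalate only when needed."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:  # cached completions cost nothing to re-serve
        return _cache[key]
    complexity = score_complexity(prompt)
    model = next(t for t in MODEL_TIERS if complexity <= t["max_complexity"])
    _cache[key] = call_model(model["name"], prompt)
    return _cache[key]
```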

How Should You Evaluate AI Models Without Deep Machine Learning Expertise?

Here’s a practical approach: define three to five representative tasks from your real workflows—code review, customer queries, data analysis. Test each model via multi-model platforms like OpenRouter or AI Studio using identical prompts. Measure quality using domain expert review rather than benchmarks.

Timeline? Two to four weeks for thorough testing.

Your non-technical metrics should include expert-verified accuracy, brand-appropriate tone, edge case handling, and consistency across similar prompts. Decision criteria go beyond performance: vendor stability, enterprise support quality, compliance certifications, and ecosystem integration with your existing tools.

Here’s the step-by-step evaluation process:

  1. Identify tasks covering 80% of your planned AI usage
  2. Create 20-50 standardised test prompts
  3. Test all candidate models via a unified API platform
  4. Conduct blind review with domain experts rating outputs 1-10
  5. Calculate cost per acceptable output, not just cost per token
  6. Evaluate vendor stability and support quality

Multi-model testing platforms make direct comparison straightforward. OpenRouter provides unified API access to 50+ models. Azure AI Foundry offers OpenAI models with enterprise controls. Google AI Studio, Anthropic Console, and Nebius AI Studio provide access to their respective models.

Here’s the key insight: calculate cost per acceptable output rather than per token. If Model A costs $3/M tokens with a 95% acceptance rate while Model B costs $1.50/M with 80% acceptance, your effective cost per million tokens of acceptable output is $3.16 versus $1.88. That changes the equation completely.
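
The arithmetic is simple enough to sanity-check in a few lines. This sketch just reproduces the figures above—the acceptance rates come from your own blind-review step:

```python
def effective_cost_per_million(price_per_million: float, acceptance_rate: float) -> float:
    """Cost per million tokens of *acceptable* output: rejected outputs still cost tokens."""
    return price_per_million / acceptance_rate


# Figures from the example above: Model A at $3/M with 95% acceptance,
# Model B at $1.50/M with 80% acceptance.
model_a = effective_cost_per_million(3.00, 0.95)   # ≈ 3.16
model_b = effective_cost_per_million(1.50, 0.80)   # ≈ 1.88
print(f"Model A: ${model_a:.2f}/M accepted, Model B: ${model_b:.2f}/M accepted")
```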

Vendor due diligence includes checking financial stability—recent funding rounds, revenue growth trajectory. Look at market momentum, read customer testimonials, examine SLA guarantees. Anthropic offers 99.9% uptime. OpenAI via Azure matches 99.9%. Google Cloud achieves 99.95%.

Pilot deployment best practices reduce your risk. Start with a non-critical use case like internal documentation or draft email generation. Run parallel AI and human systems for 30-90 days. Measure time saved and error reduction with actual numbers. Gather qualitative feedback from the people who’ll use it daily.

What Geopolitical Risks Should You Consider When Using DeepSeek or Qwen?

Chinese models—DeepSeek V3 and Qwen 3—face specific concerns. Chinese data security laws require government data sharing if requested. There’s potential supply chain disruption if US-China tensions escalate. Export control risks exist, though nothing’s been enacted yet.

Risk varies significantly by industry. Regulated sectors like finance and healthcare face stricter data residency requirements. You need to consider this carefully.

Mitigation options exist. On-premise deployment eliminates data transmission to Chinese servers entirely. Hybrid architecture uses Chinese models for non-sensitive tasks while keeping regulated data on Western models. Regular risk reassessment as the geopolitical landscape shifts is essential.

The regulatory framework shows complexity. Chinese data security laws enacted in 2021 require local data storage and government access when requested. US export controls don’t currently restrict DeepSeek or Qwen, but they could trigger restrictions similar to what happened with Huawei. GDPR considerations centre on data transfer adequacy for EU enterprises.

Data flow analysis distinguishes different architectures. API usage sends your prompts to Chinese servers with direct exposure to Chinese law. Self-hosted deployment using open-weight models keeps all data on-premise, eliminating transmission entirely. That $50,000-200,000 infrastructure investment? It functions as geopolitical insurance.

Industry risk profiles vary widely. Financial services face high sensitivity from transaction data and strict regulatory requirements. Healthcare confronts HIPAA complications. Legal industries worry about attorney-client privilege. SaaS and e-commerce dealing with non-PII face lower risk overall.

Mitigation patterns let you address risk while capturing the cost benefits. Fully on-premise deployment eliminates exposure completely. Hybrid approaches use DeepSeek for content generation and Claude for customer data processing. Multi-vendor strategies avoid single dependency on any one provider or jurisdiction.

Here’s a practical framework: assess your risk tolerance level—high, medium, or low. Implement data classification systems—public, internal, confidential, restricted. Map your use cases to appropriate models based on data sensitivity. Establish quarterly monitoring protocols. Maintain abstraction layers that enable rapid switching if the geopolitical situation changes.
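
If you want to make that framework executable, the data-classification-to-model mapping can start as a simple lookup table. This is an illustrative policy only—the classification labels follow the framework above, and the model assignments are examples, not recommendations for your specific risk profile:

```python
from enum import Enum


class DataClass(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4


# Illustrative policy: cost-efficient open-weight models for low-sensitivity work,
# Western-hosted APIs for confidential data, on-premise only for restricted data.
MODEL_POLICY = {
    DataClass.PUBLIC: ["deepseek-v3 (self-hosted)", "llama-4 (self-hosted)"],
    DataClass.INTERNAL: ["deepseek-v3 (self-hosted)", "claude-sonnet (API)"],
    DataClass.CONFIDENTIAL: ["claude-opus (API, zero-data-retention)", "gpt-4 (Azure tenant)"],
    DataClass.RESTRICTED: ["llama-4 (on-premise only)"],
}


def allowed_models(classification: DataClass) -> list[str]:
    """Return the models a workload may use, given its data classification."""
    return MODEL_POLICY[classification]


# Restricted data never leaves your own infrastructure under this policy.
assert "deepseek-v3 (self-hosted)" not in allowed_models(DataClass.RESTRICTED)
```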

Which AI Model Offers the Best Enterprise Support and Service Level Agreements?

Anthropic with Claude leads on enterprise support. They offer 99.9% uptime SLA—that’s a maximum of 43 minutes of downtime monthly. You get dedicated account management, one-hour critical issue response time, and zero-data-retention guarantees that address compliance concerns directly. For more on how these security and compliance features fit into a comprehensive governance framework, see our guide on building enterprise AI governance.

OpenAI provides strong support via Azure AI Foundry integration with Microsoft’s enterprise SLAs. But their direct API support has historically been less robust for urgent issues.

Google offers their Cloud enterprise support infrastructure, but they have less AI-specific expertise compared to Anthropic. Open-source models lack vendor support entirely—you’ll rely on cloud deployment partners like Nebius or AWS, or your internal ML teams.

Support tier differences extend well beyond response times. Enterprise customers receive dedicated account managers, Slack or Teams channel integration, and architectural consulting services. Professional tiers offer email support with 24-hour response targets. Free tiers rely entirely on community support forums.

Compliance offerings address regulated industry requirements. Zero-data-retention: Anthropic provides this as standard, OpenAI requires enterprise tier. SOC 2 Type II: all major providers are certified. HIPAA Business Associate Agreements are available from Anthropic, OpenAI, and Google, but only at enterprise tier.

Vendor stability indicators help predict long-term viability. Anthropic’s recent funding rounds and expanding enterprise customer base demonstrate strong market viability. OpenAI’s Microsoft partnership provides both capital and sales strength. Google shows long-term commitment despite competitive pressure. DeepSeek and Qwen face more enterprise roadmap uncertainty.

How Do Mixture-of-Experts Models Like DeepSeek Achieve Cost Efficiency?

Mixture-of-Experts architecture—MoE for short—splits models into specialised expert networks. Only subsets of these experts activate for any given query.

DeepSeek V3 contains 671 billion total parameters but activates only 37 billion per token. This reduces computational cost by 90% versus dense models while maintaining comparable performance. That’s how they achieve those low prices.

MoE inference requires less GPU memory and compute power. This enables DeepSeek’s $1.50/M token pricing versus $15 for comparable Claude Opus performance.

There are trade-offs. MoE excels at diverse tasks because different experts specialise in different domains. But it may underperform dense models on highly specialised single-domain work. The routing overhead adds minimal latency—typically negligible for most use cases.

MoE architecture centres on routing networks that direct inputs to relevant experts. Traditional dense models activate all parameters for every single query. MoE routes each query to the specialised experts most likely to produce quality output, leaving the others dormant.

DeepSeek V3 demonstrates MoE at scale. Those 671 billion parameters divide into 256 separate experts. For each token, it activates only eight experts—37 billion parameters. This selective activation reduces memory bandwidth by 95% and compute by 90%. Training cost was $5.5 million versus $100 million or more for dense model equivalents.
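
A toy example makes the mechanics concrete. This is a deliberately tiny, simplified gate-and-top-k routing layer—real MoE implementations like DeepSeek V3’s add shared experts, load balancing, and far larger dimensions—but it shows why most parameters never run for a given token:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, D_MODEL = 16, 2, 64  # toy sizes; DeepSeek V3 uses 256 experts, 8 active
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02


def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector to its top-k experts and mix their outputs by gate weight."""
    logits = x @ gate_w                              # gating network scores every expert
    top = np.argsort(logits)[-TOP_K:]                # only k experts are activated...
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # ...softmax over the selected few
    # The other NUM_EXPERTS - TOP_K experts never run, which is where the compute saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))


token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)  # (64,)
```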

Performance validation shows MoE maintains quality despite the selective activation. DeepSeek V3 achieves 85% on HumanEval, matching Claude and GPT-4. Its 78% SWE-bench score versus Claude’s 80.9% demonstrates competitive performance on complex reasoning tasks.

The cost-performance calculation reveals significant economic advantage. Activating 37 billion parameters instead of 671 billion enables that 10x lower pricing. For a workload on the order of 100 billion tokens a year, that gap compounds to $1.35 million in annual savings—$150,000 versus $1.5 million in token fees—while delivering 90-95% of the quality.

When does MoE work best? Task diversity is the key factor. Diverse multi-domain use cases benefit enormously from expert specialisation. Think customer support spanning products, billing, technical issues, and account management. Each expert develops deep competence in its domain.

Single-domain applications may prefer dense models. For something like legal M&A analysis, you might benefit more from concentrated expertise rather than distributed specialisation across multiple experts.

What Integration and Ecosystem Advantages Does Each AI Model Offer?

GPT-4 and GPT-5 integrate deeply with Microsoft’s ecosystem via Azure AI Foundry. You get seamless Azure DevOps integration, Power Platform connectivity, Microsoft 365 Copilot compatibility. If you’re a Microsoft-committed organisation, these models become the default choice.

Gemini offers unique Google Workspace native integration. It works directly in Gmail, Docs, Sheets, and Drive. Bundled pricing reduces your TCO significantly if you’re already a Google customer.

Claude provides broad third-party integrations through platforms like OpenRouter and tools like Claude Code. But it lacks the proprietary ecosystem lock-in of Microsoft or Google.

Open-source models—Llama 4, DeepSeek, Mistral—support maximum deployment flexibility. You can run them on any cloud platform, on-premise servers, or edge devices. But you’ll need to handle custom integration work yourself.

Azure AI Foundry delivers pre-built connectors to Azure services. DevOps for CI/CD pipelines. Functions for serverless computing. Cosmos DB for storage. Power Platform for citizen development. Your enterprise security posture inherits directly from Azure subscriptions. Unified billing consolidates your AI spending within existing Microsoft enterprise agreements.

Google Workspace advantage centres on that native integration. Gemini in Gmail drafts email responses using full thread context. Gemini in Docs assists with document creation and analysis. Gemini in Sheets provides data insights without leaving your spreadsheet. The bundled pricing at $30 per user monthly means existing Workspace customers pay virtually no additional per-token fees, versus $15-75/M for standalone API usage elsewhere.

Claude’s ecosystem emphasises flexibility over lock-in. Claude Code provides autonomous development capabilities for tasks spanning 30+ hours. The Anthropic API enables custom integrations for your specific workflows. You get broad third-party support through OpenRouter, LangChain, and LlamaIndex. Constitutional AI delivers safety features for customer-facing and other critical applications.

Open-source deployment flexibility maximises your control. Llama 4, DeepSeek, and Mistral deploy on any infrastructure: AWS, Azure, GCP, on-premise data centres, or edge devices. Integration typically uses OpenAI-compatible endpoints, enabling drop-in replacement for existing systems. When you’re ready to move from model selection to actual implementation, our guide on RAG implementation, fine-tuning, and hybrid architecture blueprints provides step-by-step deployment strategies.
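
Here’s what that drop-in pattern looks like with the OpenAI Python client pointed at a self-hosted, OpenAI-compatible server such as vLLM. The base URL, API key, and model identifier are placeholders for your own deployment:

```python
from openai import OpenAI

# Same client library, different base_url: swapping a hosted API for a self-hosted
# OpenAI-compatible server (vLLM, TGI, etc.) is mostly a configuration change.
client = OpenAI(
    base_url="http://llm.internal.example.com:8000/v1",  # placeholder for your inference server
    api_key="not-needed-for-local",                      # many self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout",  # placeholder identifier; use whatever model you deployed
    messages=[{"role": "user", "content": "Classify this support ticket: ..."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```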

FAQ Section

What is the difference between reasoning tokens and output tokens in AI model pricing?

Reasoning tokens represent internal processing steps models use to solve complex problems—invisible to you as the user. Output tokens constitute visible generated text. Models with extended thinking modes (OpenAI o1, Claude Opus extended thinking) charge separately for reasoning tokens at 2-5 times output pricing. Reasoning-heavy tasks consume significantly more tokens than simple tasks. So a complex debugging session costs way more than translating a paragraph.

Can I fine-tune GPT-4, Claude, or DeepSeek with my company’s proprietary data?

GPT-4.1 supports fine-tuning at $25/M training tokens plus inference surcharges. Claude Opus 4 doesn’t offer fine-tuning but provides Constitutional AI customisation. DeepSeek V3 and Llama 4 support full fine-tuning for on-premise deployments, enabling maximum customisation. But fine-tuning requires ML expertise and $10,000-50,000 infrastructure investment. It’s not a trivial undertaking.

How do I calculate the total cost of ownership for different AI models beyond per-token pricing?

TCO includes: (1) token costs (input/output/reasoning × monthly volume), (2) embeddings ($0.10-0.50/M tokens), (3) vector database storage ($500-2,000/month managed or $500+ self-hosted), (4) monitoring tools ($200-1,000/month), (5) compliance certifications (SOC 2 audits $15,000-50,000 annually), (6) staffing (one to two ML operations FTEs for open-source). The crossover where open-source becomes cheaper typically occurs at 5 million+ tokens monthly.
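
If it helps, a back-of-the-envelope estimator following that checklist might look like this—the default figures are simply midpoints of the ranges quoted above, so plug in your own numbers:

```python
def monthly_tco(
    tokens_millions: float,
    price_per_million: float,            # blended input/output/reasoning price
    embeddings_per_million: float = 0.30,
    vector_db: float = 1000.0,           # managed vector database, monthly
    monitoring: float = 500.0,           # observability platform, monthly
    compliance_annual: float = 30000.0,  # e.g. SOC 2 audit, annualised
    ml_ops_ftes: float = 0.0,            # staffing applies mainly to self-hosted open-source
    fte_annual_cost: float = 160000.0,
) -> float:
    """Rough monthly total cost of ownership; defaults are midpoints of the ranges above."""
    tokens = tokens_millions * (price_per_million + embeddings_per_million)
    fixed = vector_db + monitoring + compliance_annual / 12
    staffing = ml_ops_ftes * fte_annual_cost / 12
    return tokens + fixed + staffing


# Example: 100M blended tokens a month on a low-cost API versus a premium one.
print(round(monthly_tco(100, 1.50)))   # cost-efficient tier
print(round(monthly_tco(100, 15.00)))  # premium tier
```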

What benchmarks are most relevant for evaluating AI models for enterprise coding tasks?

HumanEval measures ability to generate correct Python functions from docstrings. SWE-bench tests models on real-world GitHub issues requiring code understanding and modification. These prove most predictive of enterprise coding performance. For business value, SWE-bench 80% correlates to 40% faster debugging and three hours saved per developer per sprint.

Are Chinese AI models like DeepSeek safe for enterprise use in regulated industries?

DeepSeek and Qwen safety depends on deployment and data sensitivity. Cloud API usage sends prompts to Chinese servers subject to Chinese data security laws, creating compliance risks for finance, healthcare, legal. Mitigation: on-premise deployment of open-weight DeepSeek V3 keeps all data within your infrastructure. Hybrid architectures balance cost and risk. Regular reassessment needed as geopolitical landscape evolves. There’s no one-size-fits-all answer here.

Which AI model has the largest enterprise market share in 2025?

Anthropic (Claude) leads with 32% enterprise AI market share according to Menlo Ventures 2025 data, driven by coding performance. Among enterprise developers specifically, Claude captures 54% share. OpenAI (GPT-4/GPT-5) holds 21% overall with strength in general-purpose use cases. Market momentum favours Anthropic as enterprises prioritise coding and Constitutional AI safety.

How long does it take to evaluate and deploy an enterprise AI model in production?

Typical timeline spans three to six months: (1) Initial testing two to four weeks defining use cases, (2) Pilot deployment 30-90 days running parallel AI and human systems, (3) Production rollout four to eight weeks covering infrastructure, training, monitoring. API-based models deploy faster than on-premise open-source requiring infrastructure procurement.

Can I use multiple AI models simultaneously for different use cases?

Hybrid multi-model strategies are increasingly common. Use coding-optimised models (Claude Sonnet) for development, cost-efficient models (DeepSeek) for high-volume support, reasoning-focused models (GPT-4) for complex analysis. Multi-model platforms like OpenRouter simplify management with unified billing. Implementation requires model routing logic based on task type, cost constraints, quality requirements. There’s no reason to be tied to a single model.

What is Constitutional AI and why does it matter for enterprise deployments?

Constitutional AI represents Anthropic’s methodology for training Claude models with built-in ethical guidelines and safety constraints, reducing harmful outputs without relying on human-labelled examples of harm. For your organisation, Constitutional AI provides: (1) reduced brand risk from inappropriate content, (2) built-in compliance with ethical guidelines for regulated industries, (3) consistent behaviour aligned with company values. It’s particularly valuable for customer-facing applications and risk-averse industries.

How do I migrate from one AI model to another without disrupting production systems?

Migration strategy requires: (1) Implement abstraction layer providing unified interface wrapping different model APIs, (2) Shadow testing routing traffic to both models, comparing outputs and gradually shifting percentages, (3) Prompt migration adapting prompts to new model response patterns, (4) Rollback planning maintaining old model integration 30-90 days post-migration. Effort: two to four weeks for API-to-API migrations, eight to sixteen weeks for API-to-on-premise migrations.
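
A minimal sketch of the abstraction-layer idea, with provider stubs standing in for real SDK calls and a rollout percentage controlling the traffic shift (keep it at zero while shadow testing):

```python
import random
from typing import Protocol


class Provider(Protocol):
    def complete(self, prompt: str) -> str: ...


class OldProvider:
    def complete(self, prompt: str) -> str:
        return f"[old model] {prompt[:30]}"  # stub for the incumbent model's SDK call


class NewProvider:
    def complete(self, prompt: str) -> str:
        return f"[new model] {prompt[:30]}"  # stub for the replacement model's SDK call


class MigratingRouter:
    """Unified interface that gradually shifts traffic between two providers."""

    def __init__(self, old: Provider, new: Provider, rollout_pct: float = 0.0):
        self.old, self.new, self.rollout_pct = old, new, rollout_pct

    def complete(self, prompt: str) -> str:
        if random.random() * 100 < self.rollout_pct:
            return self.new.complete(prompt)
        return self.old.complete(prompt)  # rollback is just setting rollout_pct back to 0


router = MigratingRouter(OldProvider(), NewProvider(), rollout_pct=10)  # 10% on the new model
print(router.complete("Summarise this incident report ..."))
```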

What infrastructure is required to self-host Llama 4 or DeepSeek V3 on-premise?

Minimum infrastructure: (1) GPU servers with four to eight NVIDIA A100/H100 GPUs for Llama 4 70B, eight to sixteen GPUs for DeepSeek V3, (2) Inference serving platform (vLLM, TensorRT-LLM, Text Generation Inference), (3) Vector database (Weaviate, ChromaDB self-hosted), (4) Monitoring stack (Prometheus, Grafana), (5) Load balancing (Kubernetes). Initial investment: $50,000-200,000. Ongoing: electricity ($500-2,000/month), cooling, ML operations staffing (one to two FTEs at $120,000-200,000 annually). It’s not cheap.

How frequently should I re-evaluate my AI model selection as new models are released?

Quarterly re-evaluation recommended given rapid release pace. Re-evaluation triggers: (1) New frontier releases with +10% improved benchmarks, (2) Major pricing changes (+20%), (3) Vendor stability events (acquisition, funding concerns, disruptions), (4) New organisational use cases expanding AI requirements. Maintain abstraction layer enabling model switching without full redeployment. The market moves fast, so you need to keep up.

Integrating Model Selection Into Your AI Strategy

Choosing the right AI model is just one piece of your overall AI strategy. Once you’ve identified the models that fit your technical requirements and budget, you need to consider the broader organisational context.

Return to our strategic framework for choosing between open source and proprietary AI to understand how model selection fits into your overall decision-making process. Consider how your chosen models will impact your team’s skills requirements—our guide on preparing your organisation for AI provides roadmaps for building the capabilities you need to deploy and maintain these models effectively.

The model comparison landscape changes rapidly, but the fundamentals remain: match your model choice to your specific use cases, understand the total cost of ownership, and build organisational readiness to support your deployment. With the framework provided here, you can make informed decisions that balance performance, cost, and risk for your SMB tech organisation.

