When three of the world’s largest economies independently arrive at the same technical requirement, that requirement stops being a regional rule and starts being the global standard. The EU AI Act, South Korea’s AI Basic Act, and China’s GB 45438-2025 have all landed in the same place: AI-generated content must be labelled, its provenance documented, and records retained. Three economies. Three independent legislative processes. One converging technical obligation.
This article is the Asia-Pacific deep-dive in the global AI regulation landscape. Part one covers the developer-actionable obligations under South Korea’s AI Basic Act and China’s GB 45438-2025. Part two introduces the legal-engineering loop — the recurring engineering function that keeps your compliance stack current as new laws arrive. You’ll leave with the exact obligations and a minimum viable process for integrating regulatory tracking into your engineering workflow.
Why do Asia’s new AI laws matter even if most of your users are not in Asia?
The short answer: procurement.
Asia-Pacific AI laws don’t enter your compliance stack through direct legal obligation. They enter through the contracts your customers are bound by.
Here’s how it works. South Korea’s AI Basic Act comes into effect. Korean enterprise companies update their vendor policies to reference it. Those requirements start showing up in RFP questionnaires sent to software vendors worldwide. Your sales team gets the questionnaire and forwards it to engineering as an evidence request. None of this depends on whether you have a single Korean user. What matters is whether you are selling to enterprise clients who operate in, or sell into, South Korea.
This cascade is jurisdiction-agnostic. GDPR demonstrated it a decade ago: companies with no European users were fielding GDPR compliance questionnaires because their US clients had European customers. South Korea’s AI Basic Act and China’s GB 45438-2025 are working through the same procurement chains right now.
The second reason APAC laws matter is convergence. When three major economies independently arrive at the same technical requirement, that requirement is effectively the global standard — even in jurisdictions that haven’t yet enacted similar laws.
Australia’s 10 voluntary AI safety guardrails illustrate the procurement point neatly. They’re still voluntary, but they read like a procurement checklist, and enterprise RFPs already cite them as self-assessment criteria. Voluntary today does not mean voluntary in procurement tomorrow.
The practical response is a modular architecture: implement a shared global baseline — watermarking, provenance metadata, user disclosure — and add jurisdiction-specific modules as required. The APAC laws add two modules to your existing stack: a content labelling module and a log retention module. Additions to an existing structure, not replacements.
What does South Korea’s AI Basic Act require from companies building AI products?
South Korea’s AI Basic Act took effect on 22 January 2026 — the first comprehensive national AI law in the world. It applies to any organisation whose AI system affects Korean markets or users, regardless of where the organisation is headquartered.
Three developer-actionable obligations flow from it.
Obligation 1: Watermarking. AI-generated content must carry watermarks or other identifying markers. Content that stays within the service environment can use flexible labelling — symbols, logos, or pre-use guidance. Content that users can download or share outside the service requires strict labelling: human-readable watermarks or machine-readable metadata embedded in the content itself.
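The flexible-versus-strict split can be encoded as a simple decision rule in the generation pipeline. A minimal sketch, assuming the single question that drives it is whether the content can leave the service boundary; the `LabelPlan` structure and function names are illustrative, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class LabelPlan:
    visible_label: bool      # human-readable mark rendered on the content itself
    embedded_metadata: bool  # machine-readable provenance embedded in the file
    pre_use_notice: bool     # in-service guidance shown before or during use

def plan_labelling(leaves_service_boundary: bool) -> LabelPlan:
    """Strict labelling for downloadable/shareable content; flexible otherwise."""
    if leaves_service_boundary:
        # Strict: the label must travel with the content outside the service.
        return LabelPlan(visible_label=True, embedded_metadata=True, pre_use_notice=False)
    # Flexible: symbols, logos, or pre-use guidance within the service suffice.
    return LabelPlan(visible_label=False, embedded_metadata=False, pre_use_notice=True)
```

The useful property is that every generation endpoint answers one question at build time, rather than each team improvising its own labelling logic.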
Obligation 2: Local representative designation. Foreign operators meeting any one of three thresholds must designate a named local representative in South Korea:
- Global annual revenue of 1 trillion won (approximately US$681 million)
- Domestic Korean sales of 10 billion won
- 1 million daily Korean users
Meeting any single threshold triggers the obligation. For most companies at the 50–500 employee scale, this won’t apply — the thresholds assume material Korean market presence. Kim & Chang notes the requirement is narrower than early readings suggested: it applies to operators already subject to Korean administrative fines under existing law.
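Because any single threshold triggers the obligation, the check is a disjunction over the three figures above. A sketch, using the won amounts stated in the Act as described here (the function itself is illustrative):

```python
def requires_local_representative(global_revenue_krw: int,
                                  korean_sales_krw: int,
                                  daily_korean_users: int) -> bool:
    """Meeting any one threshold triggers the designation obligation."""
    return (global_revenue_krw >= 1_000_000_000_000   # 1 trillion won global revenue
            or korean_sales_krw >= 10_000_000_000     # 10 billion won domestic Korean sales
            or daily_korean_users >= 1_000_000)       # 1 million daily Korean users
```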
Obligation 3: User disclosure. Entities deploying AI must tell users they are interacting with an AI, at the session or interaction level. This is separate from content-level watermarking. Internal use of AI as a productivity tool, without deployment to external users, does not trigger this one.
A one-year grace period is in effect. Treat January 2027 as the hard compliance deadline. Watermarking and user disclosure engineering tasks should be in your roadmap now. The Korea AI Basic Act Support Desk provides compliance consultations for foreign companies if you need to get up to speed quickly.
What does China’s GB 45438-2025 require for AI-generated content?
GB 45438-2025 is a mandatory national standard issued by the Cyberspace Administration of China (CAC), enforced from September 2025.
Three requirements apply.
Requirement 1: Visible labels. AI-generated content must carry a visible label. For images, the label height must be no less than 5% of the shortest side. For video, the label must appear at the opening screen and persist for no less than two seconds. This applies to content involving dialogue simulation, voice synthesis, facial manipulation, and image, video, or audio generation.
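Both numeric rules are mechanical enough to enforce in the rendering pipeline. A minimal sketch of the two checks; the helper names are illustrative, and a real implementation would apply them where the overlay is composited:

```python
import math

def min_label_height_px(width: int, height: int) -> int:
    """Visible label height must be no less than 5% of the image's shortest side."""
    return math.ceil(min(width, height) * 0.05)

MIN_VIDEO_LABEL_SECONDS = 2.0

def video_label_ok(starts_at_s: float, duration_s: float) -> bool:
    """Label must appear at the opening screen and persist for >= 2 seconds."""
    return starts_at_s == 0.0 and duration_s >= MIN_VIDEO_LABEL_SECONDS
```

For a 1920×1080 image the shortest side is 1080 px, so the label must be at least 54 px tall.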
Requirement 2: Source and provider metadata. Machine-readable metadata must identify the synthesised content type, the service provider, and a content identification code — a traceability chain linking AI-generated content back to the system that produced it. Tampering with labels is prohibited.
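The three required fields can be serialised as a simple machine-readable record attached to each output. A hedged sketch: the field names here are illustrative, since the standard defines the exact schema, but the content matches the three elements listed above:

```python
import json

def build_provenance_metadata(content_type: str, provider: str, content_id: str) -> str:
    """Serialise the metadata GB 45438-2025 requires: type, provider, ID code."""
    record = {
        "synthesised_content_type": content_type,  # e.g. "image", "audio", "video"
        "service_provider": provider,              # the system that produced the content
        "content_id_code": content_id,             # traceability code for this output
    }
    return json.dumps(record, ensure_ascii=False)
```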
Requirement 3: Six-month log retention. Records of AI-generated content labelling must be retained for a minimum of 180 days. The system must log what content was generated, when, with what label, and by which provider. Design this deliberately — retrofitting log retention is significantly harder than building it into the content generation pipeline from the start.
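The retention rule translates into an append-plus-prune pattern in the labelling log. A minimal in-memory sketch under the assumption that a real system would use durable, append-only storage rather than a Python list; the class and field names are illustrative:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 180  # GB 45438-2025 minimum retention window

class LabelLog:
    def __init__(self) -> None:
        self.records: list[dict] = []

    def log(self, content_id: str, label: str, provider: str) -> None:
        """Record what was generated, when, with what label, by which provider."""
        self.records.append({
            "content_id": content_id,
            "label": label,
            "provider": provider,
            "generated_at": datetime.now(timezone.utc),
        })

    def prune(self) -> None:
        """Drop only records older than the retention window, never newer ones."""
        cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
        self.records = [r for r in self.records if r["generated_at"] >= cutoff]
```

The key design point is that pruning compares against a fixed cutoff, so a scheduled prune job can never delete a record inside the 180-day window.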
GB 45438-2025 applies to content distributed in China, regardless of where the company is registered. DeepSeek’s V3 model is the clearest example: a Chinese open-weight frontier model operating under these requirements. Open-source and open-weight models are not exempt.
For cross-border implementation, C2PA (Coalition for Content Provenance and Authenticity) is the answer. A C2PA-compliant implementation satisfies both Korea’s watermarking obligation and China’s source and provider metadata requirement in a single build.
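To make the single-build claim concrete, here is a hedged sketch of what a C2PA-style manifest carries. The structure follows the C2PA assertion model, but the exact binary serialisation and signing are defined by the C2PA specification and normally produced by a C2PA SDK rather than assembled by hand; the `claim_generator` value is a hypothetical service name:

```python
import json

manifest = {
    "claim_generator": "example-genai-service/1.0",  # hypothetical provider identifier
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                # IPTC digital source type marking AI-generated media
                "digitalSourceType": "http://cv.iptc.org/newscodes/"
                                     "digitalsourcetype/trainedAlgorithmicMedia",
            }]},
        },
    ],
}
print(json.dumps(manifest, indent=2))
```

One manifest carries both the provider identity (Korea's machine-readable watermarking layer) and the content origin trail (China's provider-and-source metadata).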
How do Japan’s AI Promotion Act and Australia’s guardrails signal the emerging global regulatory baseline?
South Korea and China set the binding obligations. Japan and Australia show where the region is heading.
Japan’s AI Promotion Act (May 2025, effective September 2025) uses best-effort obligations rather than hard requirements, but violations of existing laws remain fully enforceable. Its PDCA governance model — Plan, Do, Check, Act — establishes documentation and oversight as a baseline expectation regardless.
Australia has no AI technology-specific statutes. Yet the 10 voluntary AI safety guardrails — covering human oversight, transparency, testing, data governance, and accountability — are already appearing in enterprise vendor RFPs as self-assessment checklists.
Vietnam’s AI law, effective 1 March 2026, adds risk-based classification, transparency obligations, incident reporting, and local presence mandates for high-risk systems. Hong Kong’s Generative AI Technical and Application Guideline covers compliance across the full AI lifecycle.
The pattern is consistent: documentation, evaluation, oversight, and provenance are becoming baseline expectations across the region. The speed and form of enforcement vary. The underlying expectation does not.
What is the legal-engineering loop and how does it work inside a small team?
The legal-engineering loop is a recurring engineering function that treats regulatory tracking as a permanent operational process — not a one-time compliance event managed by legal, but an engineering function with an owner, a cadence, and defined outputs.
Most engineering teams discover new regulatory requirements when a client questionnaire arrives. At that point, the obligation already exists and the team is already behind. The legal-engineering loop inverts this.
The loop has four components.
Component 1: Ownership assignment. One named engineer or engineering lead holds the regulatory tracking role. For a team of 5 to 15 engineers, this is a 2 to 4 hour fortnightly responsibility — not a dedicated full-time role. Without a named owner, regulatory scanning defaults to nobody.
Component 2: Task translation process. When the owner identifies a new obligation, they translate it into specific engineering tasks — GitHub issues, Jira tickets, or Linear items — with defined scope, acceptance criteria, and an assigned owner. This is the step most teams skip: they read about a new law but never convert the legal text into actionable engineering work. A standard task template should capture: obligation name, jurisdiction, effective date, engineering impact, acceptance criteria, and a link to the source.
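The standard task template above can be pinned down as a data structure that a tracker integration could consume. A sketch with the template's fields; the example values and the placeholder link are illustrative, not a real ticket:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ObligationTask:
    obligation_name: str
    jurisdiction: str
    effective_date: date
    engineering_impact: str
    acceptance_criteria: list[str] = field(default_factory=list)
    source_url: str = ""

# Example ticket for the GB 45438-2025 visible-label requirement
# (exact effective day is illustrative; the standard is enforced from September 2025).
task = ObligationTask(
    obligation_name="Visible labels for AI-generated content",
    jurisdiction="CN (GB 45438-2025)",
    effective_date=date(2025, 9, 1),
    engineering_impact="Add visible label overlay to image/video generation pipeline",
    acceptance_criteria=[
        "Image label height >= 5% of the shortest side",
        "Video label appears at the opening screen and persists >= 2 seconds",
    ],
    source_url="https://example.invalid/gb-45438-2025",  # placeholder, not a real source
)
```

Filing every obligation in this shape is what makes the later review gate checkable: a quarterly review can iterate over tasks rather than re-reading legal text.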
Component 3: Artefact update cycle. Every compliance-relevant design decision is captured as an Architecture Decision Record (ADR) that includes the regulatory context and rationale. Stored in the repository alongside code, an ADR with a “regulatory context” field simultaneously serves as engineering documentation and compliance evidence. The model cards, impact assessments, and other artefacts the legal-engineering loop produces are covered in the compliance documentation stack article.
Component 4: Review cadence. A quarterly full-stack review of all compliance documentation confirms everything is current, no new obligations have been missed, and log retention infrastructure is functioning correctly. A major new law publication triggers an additional out-of-cycle review.
The loop maps onto Japan’s PDCA framework: Plan (scan and translate obligations) → Do (implement) → Check (quarterly review) → Act (update ADRs and documentation).
How do you turn the legal-engineering loop into a repeatable process for a small engineering team?
The minimum viable implementation for a 5 to 15 person engineering team has six elements.
Owner. One named engineer or lead holds the regulatory tracking role — disciplined about the fortnightly scan and able to translate obligation text into engineering scope. Rotate the role annually if it suits the team.
Scan cadence. The owner reviews three to five curated sources fortnightly: law firm regulatory newsletters (Kim & Chang, Eversheds Sutherland, Baker McKenzie all publish APAC AI updates), government gazette RSS feeds, and APAC regulatory trackers from the IAPP. The scan takes 2 to 4 hours. Output: a short list of new obligations assessed for engineering relevance.
Task format. Each new obligation is translated into a GitHub/Jira/Linear issue: obligation name, jurisdiction, effective date, engineering impact, acceptance criteria, link to source. A traceable record connecting a regulatory obligation to a specific piece of engineering work.
ADR trigger. Any design decision made in response to a compliance requirement automatically triggers an ADR with a “regulatory context” field, stored in the repository alongside the code it governs. Compliance-driven decisions become permanent, auditable engineering artefacts rather than Slack threads.
Review gate. A quarterly all-hands engineering review covers the compliance stack: check for stale ADRs, update model cards if model behaviour has changed, confirm log retention is correctly configured, and verify obligations identified in the fortnightly scans have been fully implemented.
Escalation path. When a new law creates an obligation that exceeds the team’s capacity to scope, the owner escalates to legal counsel using the translated task brief — not the raw legal text. Legal counsel can advise on the brief rather than starting from scratch.
Singapore’s Model AI Governance Framework provides the architecture this workflow supports: a shared governance baseline with jurisdiction-specific overlays. Each new jurisdiction adds a module. The baseline stays stable. The loop is what converts that architecture from documentation into a functioning process that keeps pace with the full multi-jurisdictional compliance environment as it evolves.
FAQ
Does South Korea’s AI Basic Act apply to a company with no Korean users?
The local representative designation won’t apply — the thresholds assume material Korean market presence. But the Act may still reach you through procurement if you have enterprise clients with Korean operations. And if the product ever acquires Korean users, watermarking and user disclosure obligations apply without a separate threshold test. No direct obligation today, but plan for it to arrive via procurement.
What is the local representative designation requirement and do I need to comply?
The obligation applies to foreign operators meeting any one of: 1 trillion won (approximately US$681 million) in global annual revenue, 10 billion won in domestic Korean sales, or 1 million daily Korean users. Kim & Chang’s analysis notes the requirement applies to operators already subject to Korean administrative fines. Most software vendors at the 50–500 employee scale won’t meet these thresholds.
What is an architecture decision record and how does it function as a compliance artefact?
An ADR is a short document capturing the context, options considered, and rationale for a significant design decision, stored in the repository alongside code. For compliance purposes, add a “regulatory context” field identifying the relevant law and obligation — the decision becomes traceable to a specific regulatory requirement, which is exactly what an auditor or procurement team wants to see.
What does watermarking AI-generated content involve?
Two layers are required. First, a visible label: for images, a visible overlay occupying at least 5% of the shortest side; for video, a visible label persisting for at least two seconds. Second, machine-readable metadata using the C2PA standard, which satisfies both Korea’s watermarking obligation and China’s source and provider metadata requirement in a single implementation.
What is the six-month log retention requirement under China’s GB 45438-2025?
GB 45438-2025 requires records of AI-generated content labelling to be retained for at least 180 days. The system must log what content was generated, when, with what label, and by which provider. Design this deliberately — retrofitting log retention into an existing pipeline is significantly harder than building it in from the start.
Does DeepSeek’s status as an open-weight model exempt it from China’s labelling requirements?
No. The standard applies based on whether AI-generated content is distributed in China — not based on whether the model’s weights are open or proprietary. Open-source does not mean unregulated.
How do I explain Asia-Pacific AI regulation to my CEO or board?
Use the global baseline convergence framing: three major economies — the EU, South Korea, and China — have independently converged on the same core requirements: label AI-generated content, document its provenance, retain records. Frame it as: “This is now a baseline expectation in global enterprise markets. It reaches us through procurement questionnaires before any domestic law requires it. We are building a recurring process to track new requirements and translate them into engineering tasks so we are never caught off guard.”
Is Australia’s voluntary AI guidance worth implementing if it is not legally required?
Yes, for procurement reasons. Australia’s 10 voluntary AI safety guardrails already appear in enterprise vendor RFPs as self-assessment checklists. A company that can demonstrate alignment has a procurement advantage. They also surface gaps in AI oversight and accountability before a client’s due diligence team finds them.
How does the legal-engineering loop cadence work in practice?
Two cadences run in parallel. The scan cadence runs fortnightly — reviewing curated regulatory sources for new publications (2 to 4 hours, the owner’s core responsibility). The review cadence runs quarterly — reviewing all compliance documentation for currency. A significant new law triggers an additional out-of-cycle scan review, not automatically a full quarterly review.
What is C2PA and why is it relevant to AI content labelling compliance?
C2PA (Coalition for Content Provenance and Authenticity) is a technical standard for embedding provenance metadata into digital content in a standardised, machine-readable format. It provides a cross-border implementation path: South Korea’s watermarking obligation and China’s source and provider metadata requirement are both addressable via C2PA-compliant metadata in a single build.
How does the Korea AI Basic Act’s grace period affect compliance planning?
A one-year grace period runs from 22 January 2026, giving you until approximately January 2027. Treat that as a hard deadline. Watermarking and user disclosure tasks should be in the roadmap now. For companies that trigger the local representative designation, earlier action is required — designation involves administrative processes with Korean authorities that cannot be completed at the last minute.