1. Introduction — Why This Policy Exists

AI is one of the most consequential technologies of our era. The decisions we make now about how to build and deploy AI systems will shape outcomes for years to come — in Indonesia, across the Asia Pacific, and globally.

Claudie is a powerful AI coding assistant and autonomous agent platform. With that capability comes real responsibility. The systems we build are not neutral tools — they embed values, make trade-offs, and have real-world effects on developers, organizations, and society at large.

SakuraCloudID Holdings, in strategic partnership with HKSTP, is committed to building Claudie in a way that is safe, fair, transparent, and accountable. This document describes how we think about those commitments — and how we act on them.

This is a living document. As Claudie evolves and as our understanding of responsible AI matures, we will update these principles and practices. We welcome challenge, critique, and collaboration from the communities we serve.

2. Our 6 Core AI Principles

  • Human-Centered (Implemented): Claudie is designed to augment human developers, not replace them. Every suggestion exists to serve human goals.
  • Safety First (Implemented): Multiple layers of safety filtering on inputs and outputs; harmful content is blocked at inference time.
  • Fairness (In Progress): We are actively working to identify and reduce bias across languages, cultures, and geographies. APAC is a priority.
  • Transparency (Implemented): We are honest about what Claudie is and about its limitations. We clearly label AI outputs and publish our policies openly.
  • Accountability (Implemented): SakuraCloudID Holdings takes responsibility for Claudie's behavior. Audit logs and incident procedures are in place.
  • Privacy by Design (Implemented): Privacy is built into the architecture: minimize collection, protect what we hold, give users meaningful control.

3. AI Risk Classification

We classify AI use cases into four risk tiers, aligned with emerging global frameworks including the EU AI Act and OECD AI Principles:

PROHIBITED
  Description: Use cases Claudie will never support under any circumstance.
  Examples: CSAM generation, bioweapon design, mass surveillance, election manipulation, non-consensual deepfakes
  Oversight: N/A — blocked at model level
  Claudie's stance: Hard refusal, no override possible

HIGH RISK
  Description: Use cases requiring significant human oversight and expert review.
  Examples: Medical diagnosis assistance, legal advice generation, financial decisions, critical infrastructure code, hiring automation
  Oversight: Mandatory human review before deployment
  Claudie's stance: Assisted only — explicit warnings displayed, output requires expert validation

LIMITED RISK
  Description: Use cases with moderate impact requiring transparency and disclosure.
  Examples: Customer-facing chatbots, automated content generation, biometric processing, emotion recognition
  Oversight: User notification required (AI involvement disclosed)
  Claudie's stance: Supported with mandatory disclosure guidance

MINIMAL RISK
  Description: Standard developer and productivity use cases.
  Examples: Code completion, bug fixing, documentation, test generation, refactoring, learning and education
  Oversight: Standard responsible use
  Claudie's stance: Fully supported
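
A tiering scheme like this can be encoded directly in the serving layer. The following is a minimal sketch under stated assumptions: the enum, policy table, and framing strings are illustrative, not Claudie's actual implementation.

```python
# Hypothetical encoding of the four risk tiers as serving-time policy.
# Tier names mirror the table above; everything else is an assumption.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # hard refusal, no override possible
    HIGH = "high"              # assisted only, expert validation required
    LIMITED = "limited"        # supported with mandatory disclosure
    MINIMAL = "minimal"        # fully supported

# serve: return output at all; warn: prepend expert-validation warning;
# disclose: prepend AI-involvement disclosure.
TIER_POLICY = {
    RiskTier.PROHIBITED: {"serve": False, "warn": False, "disclose": False},
    RiskTier.HIGH:       {"serve": True,  "warn": True,  "disclose": True},
    RiskTier.LIMITED:    {"serve": True,  "warn": False, "disclose": True},
    RiskTier.MINIMAL:    {"serve": True,  "warn": False, "disclose": False},
}

def apply_policy(tier: RiskTier, output: str) -> str | None:
    """Return the output with tier-appropriate framing, or None if refused."""
    policy = TIER_POLICY[tier]
    if not policy["serve"]:
        return None  # blocked at model level
    if policy["warn"]:
        output = "[Requires expert validation before use]\n" + output
    if policy["disclose"]:
        output = "[AI-generated content]\n" + output
    return output
```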

4. Human Oversight & Control

[Pipeline: User Input → Claudie AI → Safety Filter → Human Review* → Output]

* Human review where required for high-stakes outputs; human validation is always available.

Claudie is not autonomous in consequential decisions. Our approach:

  • AI agent pipelines include mandatory human-in-the-loop checkpoints for actions that are irreversible, high-cost, user-facing, or operating on production systems.
  • Users can configure agent autonomy levels (see the sketch after this list):
      – Supervised Mode: approve every action before execution; full human control.
      – Assisted Mode: approve actions above a configurable risk threshold.
      – Autonomous Mode: full automation for low-risk tasks only, with restricted scope and a full audit log.

  • We will never remove the ability for users to override, pause, or terminate any AI agent action.
  • Emergency stop: all agent pipelines support immediate halt with rollback where technically feasible.
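
As a sketch of how these controls might fit together: the mode names follow the list above, while the action set, risk scores, and thresholds are illustrative assumptions.

```python
# Hypothetical human-in-the-loop gate for agent actions.
from enum import Enum

class AutonomyMode(Enum):
    SUPERVISED = "supervised"  # approve every action
    ASSISTED = "assisted"      # approve above a risk threshold
    AUTONOMOUS = "autonomous"  # low-risk tasks only, full audit log

# Assumed examples of irreversible or high-cost actions that always pause.
MANDATORY_CHECKPOINTS = {"deploy_to_production", "delete_data", "send_email"}

def needs_human_approval(mode: AutonomyMode, action: str, risk: float,
                         threshold: float = 0.4) -> bool:
    """Decide whether an action must pause for human approval."""
    if action in MANDATORY_CHECKPOINTS:
        return True  # mandatory checkpoint regardless of mode
    if mode is AutonomyMode.SUPERVISED:
        return True  # every action is approved by a human
    if mode is AutonomyMode.ASSISTED:
        return risk > threshold
    # Autonomous mode: only low-risk actions run unattended.
    return risk > 0.1
```

Whatever the mode, the override, pause, and emergency-stop controls described above sit outside this gate and always take precedence.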

5. Bias, Fairness & Inclusion

We acknowledge that large language models, including Claudie, can and do exhibit biases present in training data. Our approach is proactive and honest:

  • Diverse training data: Multilingual training with deliberate APAC representation — Bahasa Indonesia, Traditional Chinese, Simplified Chinese, Japanese, and Korean included as first-class languages.
  • Ongoing evaluation: Continuous bias evaluation across demographic dimensions, coding traditions, and geographic contexts.
  • Cultural red-teaming: Dedicated testing for cultural bias in code suggestions and documentation generation across APAC markets.
  • External audit: Independent bias audit planned for Q2 2026.

We will be honest when we find bias — and publish what we discover. Users who encounter biased outputs are encouraged to report concerns to ethics@claudie.id.
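
One way such an evaluation can work is a simple parity check: run the same task suite in several languages and flag any locale whose quality diverges. The sketch below is illustrative only; the locales, scores, and threshold are invented, not Claudie's evaluation data.

```python
# Hypothetical parity check across locales; all numbers are invented.
from statistics import mean

def parity_gap(scores_by_locale: dict[str, list[float]]) -> float:
    """Largest difference in mean task score between any two locales."""
    means = [mean(s) for s in scores_by_locale.values()]
    return max(means) - min(means)

scores = {
    "id-ID": [0.82, 0.79, 0.85],  # Bahasa Indonesia prompts
    "ja-JP": [0.80, 0.83, 0.81],  # Japanese prompts
    "en-US": [0.86, 0.84, 0.88],  # English prompts
}

GAP_THRESHOLD = 0.05  # assumed tolerance before fairness review is triggered
if parity_gap(scores) > GAP_THRESHOLD:
    print("Parity gap exceeds threshold; route to fairness review.")
```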

6. Environmental Responsibility

Claudie runs on 24 NVIDIA H200 GPUs, which consume significant power. We acknowledge this environmental impact directly rather than greenwashing it.

Our environmental commitments:

  • Direct Liquid Cooling (DLC): Reduces cooling energy by approximately 40% compared to traditional air cooling.
  • Renewable energy target: 100% renewable energy sourcing for data center operations by 2026.
  • Carbon tracking: Annual carbon footprint reporting — honest numbers, even when unflattering.
  • Inference optimization: Continuous model efficiency improvements to reduce compute per request over time.
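
For a rough sense of scale, here is a back-of-the-envelope estimate of the cluster's annual footprint. Every input below (board power, utilization, cooling overhead, grid intensity) is an illustrative assumption, not a measured figure:

```python
# Hypothetical back-of-the-envelope estimate; all inputs are assumptions.
GPUS = 24
WATTS_PER_GPU = 700     # H200 board power is roughly in this range
UTILIZATION = 0.6       # assumed average load
PUE = 1.2               # assumed overhead with direct liquid cooling
HOURS_PER_YEAR = 8760
KG_CO2_PER_KWH = 0.7    # assumed grid carbon intensity

kwh = GPUS * WATTS_PER_GPU / 1000 * UTILIZATION * PUE * HOURS_PER_YEAR
tonnes = kwh * KG_CO2_PER_KWH / 1000
print(f"~{kwh:,.0f} kWh/year, ~{tonnes:,.0f} tCO2e before renewable sourcing")
# With these assumptions: roughly 106,000 kWh/year and ~74 tCO2e.
```

The value of publishing such numbers annually is that the assumptions, not just the totals, are open to scrutiny.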

7. Data & AI Training Ethics

  • We do NOT train on user data without explicit, informed, opt-in consent.
  • We do NOT use prompts or outputs from Claudie users to train future model versions by default.
  • Enterprise data sovereignty: Your code and data stay yours — never used for training under any circumstance.
  • Training data sourcing: Licensed datasets, permissively licensed open source code, and synthetic data generation.
  • Data provenance: We maintain records tracking the source of training data.
  • Creator rights: We honor robots.txt and emerging opt-out standards. We support the right of creators to exclude their work from AI training.
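
As a minimal illustration of the robots.txt commitment above, a sourcing crawler can consult each site's robots.txt before fetching anything. This sketch uses Python's standard urllib.robotparser; the "ClaudieBot" user agent string is a hypothetical example:

```python
# Minimal sketch: honor robots.txt before fetching a page for a corpus.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_fetch(url: str, user_agent: str = "ClaudieBot") -> bool:
    """Return True only if the site's robots.txt permits this fetch."""
    parts = urlsplit(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # fetch and parse the site's robots.txt
    return robots.can_fetch(user_agent, url)

# Pages whose owners opted out are skipped, never collected.
if may_fetch("https://example.com/article"):
    print("robots.txt permits this fetch")
```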

8. Content Safety & Moderation

Claudie employs a multi-layer content safety pipeline:

1. Input Filtering (Pre-Inference): Harmful prompts are blocked before reaching the model, with immediate rejection and a safety notice.

2. Model-Level Safety Training: Safety considerations are built into the base model during training; behavioral boundaries are embedded in the model weights.

3. Output Filtering (Post-Inference): Harmful outputs are caught before they reach the user, as a secondary safety net for edge cases.

4. User Reporting: A community flagging system for safety issues; user reports trigger investigation and model updates.

Safety filters are NOT bypassable by any user tier — including enterprise API customers. We conduct regular red-teaming including adversarial prompt testing, jailbreak simulation, and cultural sensitivity review across APAC contexts. False positive rates are monitored — we know over-refusal is also a failure mode.
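
A minimal sketch of how the four layers might compose at serving time. The classifier and generation functions are placeholders standing in for real models; this is not Claudie's actual pipeline:

```python
# Hypothetical composition of the four-layer safety pipeline above.

def classify_prompt(prompt: str) -> bool:
    """Placeholder input classifier: True if the prompt is harmful."""
    return False

def classify_output(text: str) -> bool:
    """Placeholder output classifier: True if the output is harmful."""
    return False

def generate(prompt: str) -> str:
    """Placeholder for the safety-trained base model (layer 2)."""
    return "model output"

def safe_complete(prompt: str) -> str:
    # Layer 1: input filtering (pre-inference)
    if classify_prompt(prompt):
        return "Request blocked by safety policy."
    # Layer 2: safety behavior lives in the weights of generate() itself
    output = generate(prompt)
    # Layer 3: output filtering (post-inference)
    if classify_output(output):
        return "Response withheld by safety policy."
    # Layer 4: user reporting happens downstream via a flagging endpoint
    return output
```

Note that there is no code path that skips the filters for privileged callers; non-bypassability is a property of the pipeline's shape, not of configuration.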

9. Transparency & Explainability

  • We publish this Responsible AI Policy publicly — no hidden principles.
  • We label AI-generated content clearly where technically feasible.
  • We are honest about Claudie's limitations:
    • Knowledge cutoff exists — Claudie may not know recent events
    • Code suggestions may contain bugs — always review before use
    • Claudie can be confidently wrong — calibrated uncertainty is an active research goal
    • Cultural context outside training distribution may be less reliable
  • Model card: Technical transparency report planned for beta launch.
  • We support explainability research and will implement interpretability features where practical.

10. Our Commitments to You

We will never build Claudie into a tool of oppression

Regardless of commercial pressure, we will not enable mass surveillance, censorship infrastructure, or tools designed to harm marginalized communities.

We will publish meaningful transparency reports

Not just marketing documents — honest assessments of capabilities, limitations, safety incidents, and performance across use cases and demographics.

We will respond to ethics concerns seriously

Not treat them as PR problems to manage. Concerns will be investigated, and findings will inform policy updates.

We will work with affected communities

Developers, organizations, and citizens across Indonesia and APAC will have opportunities to shape how Claudie is built and deployed.

We will be honest when we get things wrong

Post-mortems published for significant AI safety incidents. Learning in public, not just in private.

We will not race to deploy capabilities beyond safety

We will not release features faster than our safety measures can keep pace. Speed does not justify recklessness.

11. Governance & Oversight

  • Internal AI Ethics Review: All major model updates reviewed before deployment by a cross-functional team including safety, legal, and product leadership.
  • External Ethics Advisory: We are establishing an independent AI ethics advisory panel. Open call for members planned for 2026.
  • HKSTP alignment: Our responsible AI practices are developed in consultation with HKSTP's AI ethics framework.
  • Regulatory engagement: Active engagement with Kominfo and relevant Indonesian regulatory bodies on AI governance.
  • Annual review: This policy reviewed and updated at minimum annually, or sooner if significant incidents or regulatory changes require it.
  • Community input: We accept public comments at ethics@claudie.id and consider them in policy development.

12. Reporting AI Harms & Concerns

AI Ethics Inquiries: ethics@claudie.id

Questions about our AI practices, bias reports, ethical concerns, or academic collaboration requests.

If you experience or observe any of the following, please report:

  • Biased or discriminatory outputs from Claudie
  • Safety filter failures or harmful content generation
  • Ethical concerns about Claudie's design or deployment
  • Academic or civil society collaboration inquiries

We commit to:

  • Acknowledging all reports within 3 business days
  • Investigating substantive concerns with appropriate seriousness
  • Updating this policy when investigations reveal needed changes
  • Protecting reporters from retaliation