1. Scope
This Acceptable Use Policy applies to every Customer, every end user of a Customer (the people the Customer communicates with through ARIA), and every operator of the ARIA Platform. Capitalised terms have the meaning given in the Terms of Service.
2. Prohibited content
You must not use the ARIA Platform to send, store, or process content that:
- is unlawful in your jurisdiction or in the jurisdiction the content is being sent to (including hate speech, terrorist content, content that infringes intellectual-property rights, content that violates export controls, or content that violates the rules of the channel you are routing through);
- sexually exploits or endangers minors. Content that depicts minors in a sexual or sexualised manner is reported to the National Center for Missing and Exploited Children (NCMEC) and to local authorities, and the responsible account is terminated immediately;
- harasses, defames, or threatens an identifiable individual or group;
- promotes violence, self-harm, or unlawful activity;
- contains malware, exploit kits, command-and-control payloads, phishing links, or other technically harmful material;
- misrepresents the source or destination of a communication (spoofing the sender identity of someone you are not authorised to represent), or otherwise constitutes fraud;
- violates the privacy or data-protection rights of any individual, including transmitting sensitive personal data without lawful basis or consent.
3. Prohibited conduct
You must not:
- Spam. Send bulk messages to recipients who have not consented to receive them, or use the platform to evade opt-out signals. Honour every documented unsubscribe and stop-keyword across every channel you operate. ARIA enforces an opt-out registry across all flows; do not work around it.
- Channel-rule violations. Violate the rules of any channel you connect (for example, the WhatsApp Business Messaging Policy, the Telegram Bot API rules, or anti-spam rules under CAN-SPAM, CASL, or local equivalents). Channel-imposed rate limits, quality scores, and content categories apply through ARIA just as they would directly.
- Impersonation. Impersonate Simplification, ARIA, another customer, or any person or entity, or otherwise misrepresent your affiliation. The “Powered by ARIA” widget must not be altered to suggest a different vendor, and the schema.org JSON-LD relationship between ARIA and Simplification must not be falsified.
- Reverse-engineering or model extraction. Do not attempt to reverse-engineer, decompile, or extract source code, model weights, or system prompts. Do not use ARIA outputs to train, fine-tune, or evaluate a competing AI system.
- Resale or sub-licensing. Do not resell, sublicense, white-label, or rent the ARIA Platform without an Agency or Enterprise tier addendum that expressly permits it.
- Circumvention of rate limits. Do not split a workload across multiple accounts to evade per-tier quotas. ARIA tier limits exist to protect shared infrastructure; if you need more, contact sales@simplification.io.
- Disruption. Do not knowingly transmit data that is designed to overload, disrupt, or impair the ARIA Platform, including denial-of-service vectors, prompt-injection attacks against shared agents, or attempts to exfiltrate other tenants’ data.
- Security boundary tampering. Do not probe, scan, or test the vulnerability of the platform without prior written consent. Coordinated disclosure under our vulnerability disclosure programme is welcomed.
- Sanctioned-party use. Do not use ARIA from a jurisdiction subject to comprehensive sanctions, and do not use it if you, your organisation, or your end user appears on a sanctions list maintained by Canada (Global Affairs Canada), the United States (OFAC), the United Kingdom (OFSI), the European Union, or the United Nations.
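For illustration only, the schema.org JSON-LD relationship referred to in the Impersonation rule might be expressed along these lines. The exact properties ARIA emits are not specified in this Policy; this sketch merely shows the kind of vendor attribution that must not be falsified:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ARIA",
  "provider": {
    "@type": "Organization",
    "name": "Simplification"
  }
}
```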
4. AI-specific rules
ARIA produces probabilistic outputs from large language models. You must:
- Disclose AI use to your end users where the law requires. For high-risk uses under the EU AI Act, that includes a clear label that the response was AI-generated. ARIA exposes a per-decision explainability trace via /v1/decisions/{id}/explain; route customer-facing disclosure through your own UX.
- Keep human review where the stakes warrant it. Regulated verticals (healthcare, legal, financial services, government) must use the human-in-the-loop tier of any flow that produces customer-binding output. Default flows include human review; the auto-pilot tier requires explicit per-flow opt-in.
- Honour grounding. Do not strip the citation chain or the “unverified” flag from ARIA outputs before showing them to an end user. The verifiable-AI surface exists to give your end user the receipts; removing it defeats the safeguard.
- Do not evade ethical boundaries. Do not use jailbreak-style prompts, system-prompt extraction techniques, or adversarial input designed to bypass the platform’s ethical gates and content-safety classifiers.
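To make the grounding rule concrete, here is a minimal, hypothetical sketch of rendering a decision for an end user without stripping the metadata this section requires you to keep. Only the /v1/decisions/{id}/explain path appears in this Policy; the field names `answer`, `unverified`, and `citations` are illustrative assumptions, not ARIA's documented response shape.

```python
# Hypothetical sketch only: the field names below ("answer", "unverified",
# "citations") are assumptions, not ARIA's documented response shape.

def render_decision(decision: dict) -> str:
    """Render an AI answer with its disclosure label, 'unverified' flag,
    and citation chain intact, rather than stripping them."""
    lines = [decision["answer"], "", "[AI-generated response]"]
    if decision.get("unverified"):
        # Keep the flag visible to the end user instead of removing it.
        lines.append("[unverified: not grounded in a cited source]")
    for cite in decision.get("citations", []):
        lines.append(f"Source: {cite}")
    return "\n".join(lines)
```

However your UX surfaces it, the point is the same: the label, the flag, and the citation chain travel with the answer all the way to the end user.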
5. Data-handling responsibilities
You are responsible for the lawful basis under which you process personal data through ARIA. In particular:
- obtain the consent or other lawful basis required by GDPR, PIPEDA, PHIPA, CCPA, or any other privacy law that applies to you;
- do not submit Special-Category Data under GDPR Article 9 (or equivalent) outside the channels and tiers configured to handle it; PHI specifically requires a tier with a BAA in place;
- keep your knowledge-base sources clear of content you do not have the right to use; ARIA’s grounding citations will accurately attribute the source of generated content, but the right-to-use decision is yours.
6. Enforcement
We aim to enforce this Policy proportionately. The first response to a borderline violation is usually a notice from trust@simplification.io requesting that you remediate. For serious or repeated violations we may, in our reasonable discretion:
- Throttle the offending flow or channel temporarily, with notice;
- Suspend the offending account, with notice;
- Terminate the offending account, with notice unless prior notice would prejudice an ongoing investigation;
- Refer the matter to authorities where the conduct is unlawful (mandatory for child sexual exploitation material; otherwise on a case-by-case basis).
Where conduct creates an immediate risk of harm to other customers, end users, or our infrastructure, we may suspend without prior notice and provide notice as soon as reasonably practicable.
7. Reporting an AUP violation
If you believe a Customer or end user is using the ARIA Platform in violation of this Policy, report it to trust@simplification.io. For child-safety concerns, email the same address with the subject marked CHILD-SAFETY; we will route the report to a named senior on-call within one hour.
8. Changes to this Policy
We will update this Policy as needed. Material changes will be notified at least thirty (30) days before they take effect. Prior versions are accessible through the “View previous versions” link at the top of this page.
9. Contact
AUP enforcement: trust@simplification.io. Legal notices: legal@simplification.io.