AI Transparency Statement

How ORA's AI agents work, what they can and cannot do, and how human oversight is maintained.

Last updated: April 25, 2026

This AI Transparency Statement is published in compliance with the U.S. Federal Trade Commission’s AI policy framework (March 2026) and in preparation for EU AI Act transparency obligations effective August 2, 2026. It explains how ORA’s AI agents operate, what decisions they make autonomously versus with human approval, their known limitations, and the governance mechanisms that keep them auditable.

1. What ORA Is

ORA (Operational Reasoning Architecture) is a constitutional AI operating system. It orchestrates AI agents — autonomous software programs that execute multi-step tasks, make decisions, and interact with tools and external services on your behalf. ORA is built on the Mastra agent framework and routes inference requests to your configured model provider (by default, NVIDIA NIM).

ORA is not a model. It is an orchestration and governance layer. The underlying language models (GLM-5.1, GLM-4.7, and others) are supplied by third parties. ORA governs how those models are used, what actions they can take, and under what conditions human approval is required.

2. Content Labeling

Under FTC guidance (March 2026), we disclose the following about ORA-generated content:

  • AI-generated: Content where ORA agents produced the primary output (research summaries, drafted communications, analysis reports, code)
  • AI-assisted: Content where a human author used ORA for research or structure but substantially revised the output
  • AI-enhanced: Content with minimal AI involvement (grammar correction, formatting)

When ORA agents produce outputs you intend to share externally (emails, reports, social content), you are responsible for appropriate disclosure as required by applicable law in your jurisdiction.

3. The Three Agents

ORA Operator Agent

  • Purpose: General orchestration, system operations, and decision-making
  • Model: GLM-5.1 (754B parameters, thinking-mode capable)
  • Capabilities: Task planning, mission coordination, system status queries, memory operations, tool calls within configured authority level
  • Limitations: Cannot access external systems beyond the configured tool list. Cannot execute actions that exceed your authority configuration without human approval. May produce incorrect analysis.
  • Human oversight triggers: Any action classified as exceeding the structural authority gate

ORA Researcher Agent

  • Purpose: Memory search, data analysis, findings compilation
  • Model: GLM-5.1
  • Capabilities: MCP memory search and storage, file reading, directory listing, analysis and summarization
  • Limitations: Knowledge is limited to your MCP memory contents and accessible files. Cannot browse the internet independently. Summaries may omit important context.
  • Human oversight triggers: Memory writes that modify critical records (configurable)

ORA Builder Agent

  • Purpose: File creation, modification, and configuration management
  • Model: GLM-5.1
  • Capabilities: Creating and modifying files, configuration changes, structured output generation
  • Limitations: Cannot execute code unless the code execution tool is explicitly enabled. All file modifications are logged. Cannot modify files outside the configured workspace.
  • Human oversight triggers: File writes to production systems require consensus gate approval by default

4. The Four Governance Gates

ORA’s governance kernel operates through four authority gates. Every agent action is evaluated against these gates before execution. Actions that do not pass all configured gates are held for human review.

  • Structural gate (blue): Is this action consistent with the mission scope and defined task boundaries?
  • Evidence gate (purple): Is this action supported by evidence in memory and context, or is the agent reasoning from insufficient information?
  • Consistency gate (cyan): Is this action consistent with past decisions and established patterns?
  • Consensus gate (green): For multi-agent scenarios, have the relevant agents reached agreement on this action?

You configure which gates apply to which action categories and how strict each gate is. You can require all four gates for high-consequence actions and allow single-gate approval for low-risk tasks.
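The gate policy described above can be sketched in code. This is an illustrative sketch only: ORA's internal API is not published, and every name below (the gate labels aside, which come from this document) is a hypothetical stand-in, not the actual implementation.

```typescript
// Hypothetical sketch of per-category gate policy. Gate names come from
// this document; everything else (types, function names, categories) is
// illustrative, not ORA's real API.
type Gate = "structural" | "evidence" | "consistency" | "consensus";

interface ActionRequest {
  category: string;                    // e.g. "memory-read", "production-write"
  gateResults: Record<Gate, boolean>;  // outcome of each gate evaluation
}

// Which gates must pass before an action category runs autonomously.
// High-consequence actions require all four; low-risk actions only one.
const policy: Record<string, Gate[]> = {
  "memory-read": ["structural"],
  "production-write": ["structural", "evidence", "consistency", "consensus"],
};

// Returns true if the action may execute autonomously; otherwise it is
// held for human review. Unknown categories default to all four gates.
function mayExecute(req: ActionRequest): boolean {
  const required =
    policy[req.category] ??
    ["structural", "evidence", "consistency", "consensus"];
  return required.every((gate) => req.gateResults[gate]);
}
```

Under this model, a production write with three of four gates passing is still held for review, while a memory read clears on the structural gate alone.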

5. What ORA Agents Can Do Autonomously

By default, ORA agents may execute the following without human approval:

  • Read files and directories within your configured workspace
  • Search and retrieve from MCP memory
  • Query system status and metrics
  • Draft content for your review
  • Call configured read-only API tools

6. What Always Requires Human Approval

Regardless of governance configuration, the following actions are never executed without explicit human approval:

  • Sending communications (email, messages) to external recipients
  • Writing to production databases or external systems
  • Executing commands on external servers
  • Financial transactions or API calls to payment systems
  • Deleting data in MCP memory or connected systems
  • Accessing systems not in the configured tool list

7. Known Limitations and Error Rates

  • Accuracy: GLM-5.1 achieves approximately 87% accuracy on standard reasoning benchmarks. Real-world accuracy varies significantly by task domain and complexity.
  • Hallucination: Like all large language models, ORA agents may generate plausible-sounding but incorrect information. Always verify agent outputs for factual claims.
  • Context limits: Agents may lose track of instructions in very long sessions or complex multi-step workflows.
  • Tool failures: External tool calls may fail due to network issues, API changes, or rate limiting. ORA logs but does not automatically retry failed tool calls.
  • Bias: Underlying models may reflect biases present in their training data. Critical decisions should not rely solely on agent outputs.

8. Audit Trail

ORA logs every agent action, tool call, governance gate decision, and human approval or override. Logs include: timestamp, agent ID, action type, gate outcomes, human intervention if any, and result. You can access your audit trail at any time from the ORA control room. Logs are retained for 90 days by default.
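To make the logged fields concrete, an individual audit entry of the kind listed above might take roughly the following shape. The field names and values here are assumptions for illustration; ORA's actual log schema is not published in this document.

```typescript
// Illustrative shape of one audit log entry, based on the fields this
// statement lists (timestamp, agent ID, action type, gate outcomes,
// human intervention, result). Field names are assumed, not official.
interface AuditEntry {
  timestamp: string;                        // ISO 8601
  agentId: string;                          // e.g. "ora-builder"
  actionType: string;                       // e.g. "file-write"
  gateOutcomes: Record<string, boolean>;    // one entry per evaluated gate
  humanIntervention: "approved" | "overridden" | null;
  result: "executed" | "held" | "failed";
}

// Example: a production file write that failed the consensus gate,
// was reviewed by a human, approved, and then executed.
const entry: AuditEntry = {
  timestamp: "2026-04-25T14:03:07Z",
  agentId: "ora-builder",
  actionType: "file-write",
  gateOutcomes: { structural: true, evidence: true, consistency: true, consensus: false },
  humanIntervention: "approved",
  result: "executed",
};
```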

9. Human Override and Stop Mechanisms

You can stop or constrain any agent execution at any time:

  • Mission Stop button: Immediately halts all active agents and cancels queued actions
  • Safety Lock: Enables a mode where every action requires human approval regardless of governance settings
  • Scrutiny Mode: Requires human review of all proposals before any action is taken
  • Authority Level: Set the maximum scope of autonomous action (low, medium, high, locked)

10. High-Risk Use Prohibition

ORA is not designed or validated for use as a sole automated decision-making system in high-risk applications including: medical diagnosis or treatment decisions, credit or employment decisions affecting individuals, law enforcement targeting, critical infrastructure control, or any application where agent error could cause death or serious physical harm. Use in these contexts without appropriate human oversight and validation is prohibited by our Acceptable Use Policy.

11. NIST AI Risk Management Framework Alignment

ORA’s design is aligned with the NIST AI Risk Management Framework (RMF), which provides safe harbor in Colorado (SB 24-205) and affirmative defense in Texas (TRAIGA). Our alignment includes:

  • Governance: Constitutional governance kernel with four authority gates
  • Mapping: Mission scoping defines the intended use and limitations of each deployment
  • Measurement: Audit logs provide performance and error measurement
  • Management: Human oversight mechanisms, stop controls, and scrutiny modes

A full NIST AI RMF alignment report is available on the Risk Assessment page.

12. EU AI Act Compliance Status

Under the EU AI Act (effective August 2, 2026), ORA is assessed as a Limited Risk AI system (not Annex III high-risk). This classification is based on ORA’s role as an orchestration platform used by businesses for their own workflows, not as a system making decisions that directly affect individuals’ rights or opportunities.

As a limited risk AI system, ORA is required to maintain transparency with users (this document) and ensure humans are informed they are interacting with AI. We do not claim EU AI Act high-risk classification compliance at this time, and users who deploy ORA for high-risk applications within EU jurisdiction bear responsibility for applicable compliance.

13. Contact

Questions about ORA’s AI systems, governance, or this transparency statement: info@3d3d.ca