AI Risk Assessment

NIST AI RMF alignment and regulatory classification.

Last updated: April 25, 2026

This AI Risk Assessment documents ORA’s risk classification and alignment with the NIST AI Risk Management Framework (AI RMF 1.0, updated April 2026). This documentation supports safe harbor claims under Colorado SB 24-205 (effective June 30, 2026) and affirmative defense under Texas TRAIGA (effective January 1, 2026).

1. EU AI Act Classification

Classification: Limited Risk (not Annex III high-risk)

Rationale: ORA is a business-to-business AI orchestration platform. It does not make autonomous decisions that directly and significantly affect individuals’ legal rights, access to essential services, or physical safety. ORA’s governance kernel maintains human oversight for all consequential decisions.

ORA is not classified as high-risk because it:

  • Is not used for biometric identification or categorization of natural persons
  • Is not deployed in critical infrastructure management
  • Is not used for employment or worker management decisions without human review
  • Is not used for credit, insurance, or educational access decisions
  • Is not deployed in law enforcement contexts

As a limited-risk AI system under the EU AI Act, ORA must meet transparency obligations: maintaining public documentation (this document and the AI Transparency Statement) and ensuring users are informed when they are interacting with an AI system.

2. NIST AI RMF Alignment

Govern

  • Constitutional governance kernel with four authority gates (structural, evidence, consistency, consensus)
  • Acceptable Use Policy prohibiting harmful applications
  • Human oversight required for all consequential actions
  • Audit trail for all agent decisions

Map

  • Mission scoping defines intended use and boundaries for each deployment
  • Tool list configuration limits agent reach
  • Authority level configuration limits autonomous action scope
  • This Risk Assessment identifies known risks and affected populations

Measure

  • Audit logs provide performance measurement per deployment
  • Error rates and governance gate outcomes tracked in telemetry
  • User feedback mechanism for reporting unexpected behavior
  • Underlying model performance documented in Agent System Cards

Manage

  • Scrutiny Mode: requires human review of all proposals
  • Safety Lock: forces human approval for every action
  • Mission Stop: immediately halts all agent activity
  • Authority Level: configures maximum autonomous action scope
  • Incident response procedures documented in Incident Disclosure Policy

3. Risk Categories and Mitigations

Risk                                           | Likelihood | Impact | Mitigation
Agent hallucination producing incorrect outputs | Medium     | Medium | Human review for consequential decisions; audit logs
Agent exceeding intended scope                  | Low        | Medium | Governance gates; authority level configuration
Misuse for prohibited purposes                  | Low        | High   | Acceptable Use Policy; account monitoring; suspension
Data breach                                     | Low        | High   | Encryption; access controls; incident response
Model provider outage                           | Medium     | Medium | Configurable model endpoints; error handling

4. Annual Review

This Risk Assessment is reviewed and updated annually, or when significant changes to ORA’s capabilities or applicable regulations occur. Last review: April 25, 2026.

5. Contact

Risk Assessment questions: info@3d3d.ca