Legal
NIST AI RMF alignment and regulatory classification.
Last updated: April 25, 2026
This AI Risk Assessment documents ORA’s risk classification and its alignment with the NIST AI Risk Management Framework (AI RMF 1.0, updated April 2026). This documentation supports safe harbor claims under Colorado SB 24-205 (effective June 30, 2026) and an affirmative defense under Texas TRAIGA (effective January 1, 2026).
Classification: Limited Risk (not high-risk under EU AI Act Annex III)
Rationale: ORA is a business-to-business AI orchestration platform. It does not make autonomous decisions that directly and significantly affect individuals’ legal rights, access to essential services, or physical safety. ORA’s governance kernel maintains human oversight for all consequential decisions.
ORA is not classified as high-risk because it:

- does not make autonomous decisions that directly and significantly affect individuals’ legal rights, access to essential services, or physical safety;
- operates as a business-to-business orchestration platform rather than a consumer-facing decision system; and
- maintains human oversight, through its governance kernel, for all consequential decisions.
As a limited-risk AI system, ORA must maintain transparency documentation (this document and the AI Transparency Statement) and ensure users are informed when they are interacting with an AI system.
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Agent hallucination producing incorrect outputs | Medium | Medium | Human review for consequential decisions; audit logs |
| Agent exceeding intended scope | Low | Medium | Governance gates; authority level configuration |
| Misuse for prohibited purposes | Low | High | Acceptable Use Policy; account monitoring; suspension |
| Data breach | Low | High | Encryption; access controls; incident response |
| Model provider outage | Medium | Medium | Configurable model endpoints; error handling |
This Risk Assessment is reviewed and updated annually, or when significant changes to ORA’s capabilities or applicable regulations occur. Last review: April 25, 2026.
Risk Assessment questions: info@3d3d.ca