Add minimal Pre-Action Authorization example pattern
uchibeke commented
If this repo is the canonical place for guardrail patterns, I can contribute a minimal pre-action authorization example. It is framework-agnostic (no OpenAI or runtime-specific code) and uses APort as the reference implementation; developers can adapt it to their own authorization service.
**APort Pre-Action Authorization Architecture**

```mermaid
flowchart TB
    subgraph "Agent Application"
        A[User Request] --> B[Agent Runtime]
        B --> C{Action Decision}
        C -->|Wants to execute tool| D[Action Guardrail Wrapper]
    end

    subgraph "APort Service (Pre-Action Authorization)"
        D -->|Verify Request| E[APort API]
        E --> F[Load Passport<br/>Agent Identity & Limits]
        E --> G[Load Policy<br/>Rules & Requirements]
        F --> H[Policy Evaluator]
        G --> H
        H --> I{Evaluation Result}
        I -->|Pass| J[ALLOW<br/>decision_id emitted]
        I -->|Fail| K[DENY<br/>reasons provided]
    end

    subgraph "Tool Execution"
        J --> L[Tool Executes<br/>Side Effects Happen]
        K --> M[Tool Blocked<br/>No Side Effects]
        L --> N[Response]
        M --> N
    end

    style H fill:#2e7d32,stroke:#1b5e20,stroke-width:2px,color:#fff
    style J fill:#388e3c,stroke:#1b5e20,stroke-width:2px,color:#fff
    style K fill:#c62828,stroke:#b71c1c,stroke-width:2px,color:#fff
    style L fill:#1565c0,stroke:#0d47a1,stroke-width:2px,color:#fff
    style M fill:#e65100,stroke:#bf360c,stroke-width:2px,color:#fff
```
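As a rough sketch of what the Action Guardrail Wrapper above could look like in code, the snippet below asks a verify endpoint for a decision before letting the tool run. The endpoint URL, request payload, and response fields (`allow`, `decision_id`, `reasons`) are placeholders for illustration, not APort's actual API; any pre-action authorization service with an allow/deny verdict fits the same shape.

```python
from dataclasses import dataclass
from typing import Any, Callable

import requests  # any HTTP client works; requests is used for brevity

# Placeholder endpoint; not APort's actual API surface.
VERIFY_URL = "https://aport.example/verify"


@dataclass
class Decision:
    allowed: bool
    decision_id: str | None = None    # emitted on ALLOW
    reasons: list[str] | None = None  # provided on DENY


def verify_action(agent_id: str, tool: str, args: dict[str, Any]) -> Decision:
    """Ask the authorization service whether this agent may run this tool now."""
    resp = requests.post(
        VERIFY_URL,
        json={"agent_id": agent_id, "tool": tool, "args": args},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    return Decision(
        allowed=bool(body.get("allow")),
        decision_id=body.get("decision_id"),
        reasons=body.get("reasons"),
    )


def guarded(agent_id: str, tool_name: str, tool_fn: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool so it only executes after an ALLOW decision."""
    def wrapper(**kwargs: Any) -> Any:
        decision = verify_action(agent_id, tool_name, kwargs)
        if not decision.allowed:
            # DENY: the tool is blocked and no side effects happen.
            raise PermissionError(f"Action denied: {decision.reasons}")
        # ALLOW: execute the tool; keep decision_id for the audit trail.
        return tool_fn(**kwargs)
    return wrapper


# Hypothetical usage with a made-up tool:
# refund = guarded("agent-123", "payments.refund", do_refund)
# refund(order_id="o_42", amount=25.00)
```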
**Key point:** Pre-action authorization runs after the LLM/agent decides what action to take but before any side effects occur. This addresses a distinct concern from input/output guardrails:

- **Input/output guardrails:** protect against malicious/unsafe data
- **Pre-action authorization:** enforces business policies, identity, and limits on actions
The proposed example demonstrates this as a generic pattern, using APort as the implementation example.
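To keep the pattern framework-agnostic, the wrapper can depend on a small interface rather than on APort directly. The sketch below is illustrative only (the names are not from any published SDK) and shows how a team could plug in an in-house authorization service instead.

```python
from typing import Any, Protocol


class PreActionAuthorizer(Protocol):
    """Anything that can approve or deny a tool call before it runs."""

    def verify(self, agent_id: str, tool: str, args: dict[str, Any]) -> bool:
        ...


class AllowListAuthorizer:
    """Toy in-house backend: permit only tools on a fixed allow-list."""

    def __init__(self, allowed_tools: set[str]) -> None:
        self.allowed_tools = allowed_tools

    def verify(self, agent_id: str, tool: str, args: dict[str, Any]) -> bool:
        return tool in self.allowed_tools
```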
More details are in the related issue at openai/openai-agents-python#2022.
**Links and references**

- Open Agent Passport (OAP) v1.0 Specification: https://github.com/aporthq/aport-spec (the open standard for runtime trust and authorization in AI agents)
- OAP Policy Packs: https://github.com/aporthq/aport-policies (standardized policy implementations following OAP v1.0)
- APort Implementation: https://aport.io (reference implementation of OAP v1.0)
- Microsoft Agent Framework discussion: microsoft/agent-framework#1701