agentic ai infrastructure
Plyra builds open-source infrastructure for AI agents in production — starting with action control and persistent memory.
products
Two focused tools. One mission: make agentic AI safe to deploy.
Action middleware for AI agents. Intercepts every tool call, evaluates it against your policy, and blocks, logs, or escalates — before anything irreversible happens.
Persistent, structured memory for AI agents. Store episodic context, semantic facts, and working state across sessions — with retrieval that actually works.
from plyra_guard import ActionGuard

guard = ActionGuard()  # sensible defaults out of the box

@guard.wrap
def delete_file(path: str) -> str:
    import os
    os.remove(path)
    return f"Deleted {path}"

delete_file("/tmp/report.txt")  # ✓ ALLOW 0.3ms
delete_file("/etc/passwd")      # ✗ BLOCK "System config is off-limits"
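The three memory types — episodic context, semantic facts, working state — can be sketched as a minimal in-process store. This is an illustrative stand-in, not the plyra-memory API; the class and method names below are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Illustrative sketch only — not the plyra-memory API."""
    episodic: list = field(default_factory=list)   # time-ordered events
    semantic: dict = field(default_factory=dict)   # durable facts
    working: dict = field(default_factory=dict)    # per-task scratch state

    def remember_event(self, event: str) -> None:
        self.episodic.append(event)

    def learn_fact(self, key: str, value: str) -> None:
        self.semantic[key] = value

    def recall(self, query: str) -> list:
        # Naive retrieval: substring match across events and facts.
        hits = [e for e in self.episodic if query in e]
        hits += [v for k, v in self.semantic.items() if query in k or query in v]
        return hits

mem = AgentMemory()
mem.remember_event("user asked to summarize Q3 report")
mem.learn_fact("user_timezone", "UTC+2")
print(mem.recall("Q3"))  # → ['user asked to summarize Q3 report']
```

The point of the structure is that each memory type has a different lifetime: episodic grows per session, semantic persists across sessions, working resets per task.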
philosophy
Guardrails bolted onto models break when models change. Infrastructure baked into your agent loop doesn't.
Your agent framework will change. Your safety layer shouldn't. Plyra sits below the framework, not inside it.
Rules live in your repo, reviewed in PRs, tested in CI. Not in a vendor dashboard you can't version control.
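Version-controlled rules mean CI can pin a policy's behavior with ordinary unit tests. The rule and helper below are a hypothetical sketch, not Plyra's actual policy format:

```python
# Hypothetical policy rule, kept in the repo next to agent code
# so any change to it shows up in a PR diff.
BLOCKED_PREFIXES = ("/etc", "/var", "/usr")

def is_allowed(path: str) -> bool:
    """Return False for paths under protected system directories."""
    return not path.startswith(BLOCKED_PREFIXES)

# CI tests freeze the policy's intent: loosening a rule fails the build
# until the test is updated — and that update is reviewed too.
def test_policy():
    assert is_allowed("/tmp/report.txt")
    assert not is_allowed("/etc/passwd")

test_policy()
```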
Every action logged — allowed and blocked. Ship to OTEL, Datadog, or your own system. No black boxes.
Evaluation happens in-process. No network hop, no external API, no latency budget consumed by safety.
The safety and memory layers for your agents will always be open source. Apache 2.0, no strings.
Use plyra-guard standalone. Add plyra-memory when you need it. Mix with your existing stack.