
Plyra Guard

Production-grade action middleware for agentic AI

Stop your agents before they do something irreversible. Plyra Guard intercepts every tool call, evaluates it against your policy, and blocks, logs, or escalates — in under 2ms.



Why Plyra Guard?

LLM agents fail in predictable ways: they delete files they shouldn't, call APIs with the wrong credentials, exfiltrate data, or loop forever. These aren't model problems — they're infrastructure problems. Plyra Guard is the missing safety layer.

Framework Agnostic

One-line wrap for LangChain, LangGraph, AutoGen, CrewAI, OpenAI, Anthropic, or any Python function.

🛡️ Policy as Code

Define allow/block rules in YAML or Python. Regex, semantic, and custom evaluators supported.

📊 Built-in Dashboard

Real-time action feed, policy hit rates, agent session replay — all in a local web UI.

🔍 Full Observability

OpenTelemetry, Datadog, and stdout exporters. Every action logged with intent, outcome, and latency.
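
To make the action record described above concrete, here is a minimal stand-in "stdout exporter" that logs one call with its intent, outcome, and latency. The function and field names are assumptions for illustration only, not Plyra Guard's actual log schema:

```python
import json
import time

def log_action(tool: str, args: dict, fn) -> dict:
    """Run fn(**args) and emit one action record to stdout."""
    start = time.perf_counter()
    try:
        fn(**args)
        outcome = "allowed"
    except Exception as exc:
        outcome = f"blocked: {exc}"
    record = {
        "tool": tool,
        "intent": args,  # what the agent asked for
        "outcome": outcome,  # allowed / blocked
        "latency_ms": round((time.perf_counter() - start) * 1000, 3),
    }
    print(json.dumps(record))  # the stdout "exporter"
    return record

log_action("echo", {"text": "hi"}, lambda text: text)
```

A real exporter would ship the same record to OpenTelemetry or Datadog instead of printing it.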

60-Second Install

pip install plyra-guard

import os

from plyra_guard import ActionGuard

guard = ActionGuard()

@guard.wrap
def delete_file(path: str) -> str:
    os.remove(path)
    return f"Deleted {path}"

# Safe call — allowed
delete_file("/tmp/report.txt")

# Blocked by default policy
delete_file("/etc/passwd")  # → PolicyViolationError
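
The snippet above relies on the library's default policy. To make the blocking behavior concrete without installing anything, here is a self-contained stand-in with a hypothetical regex blocklist; the real ActionGuard's defaults and internals will differ:

```python
import functools
import re

class PolicyViolationError(Exception):
    """Stand-in for the exception raised on a blocked call."""

class DemoGuard:
    # Hypothetical blocklist, illustration only.
    BLOCKED = [r"^/etc/", r"^/usr/", r"/\.ssh/"]

    def wrap(self, fn):
        @functools.wraps(fn)
        def guarded(path: str):
            for pattern in self.BLOCKED:
                if re.search(pattern, path):
                    raise PolicyViolationError(
                        f"{fn.__name__}({path!r}) blocked by policy"
                    )
            return fn(path)
        return guarded

guard = DemoGuard()

@guard.wrap
def delete_file(path: str) -> str:
    return f"Deleted {path}"  # demo only: no real os.remove here

print(delete_file("/tmp/report.txt"))  # Deleted /tmp/report.txt
try:
    delete_file("/etc/passwd")
except PolicyViolationError as exc:
    print(f"Caught: {exc}")
```

Whatever the guard implementation, calling code should be prepared to catch the policy exception, as in the try/except above.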

Framework Support

Framework      Integration pattern
LangChain      guard.wrap(tools)
LangGraph      Custom tool node
AutoGen        guard.wrap([func])
CrewAI         guard.wrap(tools)
OpenAI         Function call interceptor
Anthropic      Tool use interceptor
Plain Python   @guard.wrap decorator
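
For the guard.wrap(tools) pattern in the table, plain callables can stand in for LangChain or CrewAI tool objects. Everything below (wrap_tools, deny_delete, the sample tools) is a hypothetical sketch of the idea, not the library's API:

```python
def wrap_tools(guard_check, tools):
    """Return guarded copies of each tool callable."""
    def make_guarded(tool):
        def guarded(*args, **kwargs):
            guard_check(tool.__name__, args, kwargs)  # raises to block
            return tool(*args, **kwargs)
        guarded.__name__ = tool.__name__
        return guarded
    return [make_guarded(t) for t in tools]

def deny_delete(name, args, kwargs):
    # Toy policy: block any tool whose name starts with "delete".
    if name.startswith("delete"):
        raise PermissionError(f"{name} blocked by policy")

def search(query: str) -> str:
    return f"results for {query}"

def delete_index(name: str) -> str:
    return f"deleted {name}"

guarded = wrap_tools(deny_delete, [search, delete_index])
print(guarded[0]("agents"))  # results for agents
# guarded[1]("prod") would raise PermissionError
```

The same shape works for any framework whose tools are ultimately callables: wrap each one, run the policy check first, then delegate.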

Coming Soon: plyra-memory

Persistent, structured memory for AI agents — episodic context, semantic recall, and working state across sessions. Works with any agent framework. Learn more at plyra.ai and star the GitHub org for updates.