
Show HN: Runtime security for AI agents (injection, tool abuse, data exfiltration)


Hi HN,

I’ve been working on an open-source project to explore a problem I keep running into with LLM systems in production: we give models the ability to call tools, access data, and make decisions… but we don’t have a real runtime security layer around them. So I built a system that acts as a control plane for AI behavior, not just infrastructure.

GitHub: https://github.com/dshapi/AI-SPM

What it does

The system sits around an LLM pipeline and enforces decisions in real time:

- Detects and blocks prompt injection (including obfuscation attempts)
- Forces structured tool calls (no direct execution from the model)
- Validates tool usage against policies (a rough sketch of this check follows below)
- Prevents data leakage (PII / sensitive outputs; see the redaction sketch below)
- Streams all activity for detection + audit

Architecture (high-level)

- Gateway layer for request control
- Context inspection (prompt analysis + normalization)
- Policy engine (using Open Policy Agent)
- Runtime enforcement (tool validation + sandboxing)
- Streaming pipeline (Apache Kafka + Apache Flink)
- Output fil...
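To make the policy-validation step concrete, here is a minimal sketch (not the project's actual code) of how a gateway could ask an Open Policy Agent server whether a structured tool call is allowed before dispatching it. The OPA package path, the "allow" rule name, the user label, and the tool-call fields are assumptions for illustration only.

```python
# Minimal sketch: gateway-side policy check for a structured tool call.
# Assumes a local OPA server and a hypothetical policy package "agent.tools"
# that exposes an "allow" rule; these names are illustrative, not the project's.
import requests

OPA_URL = "http://localhost:8181/v1/data/agent/tools/allow"  # assumed package path

def is_tool_call_allowed(tool_call: dict, user: str) -> bool:
    """Ask OPA for a decision on a proposed tool call instead of executing it directly."""
    payload = {
        "input": {
            "user": user,
            "tool": tool_call["name"],
            "args": tool_call.get("args", {}),
        }
    }
    resp = requests.post(OPA_URL, json=payload, timeout=2)
    resp.raise_for_status()
    # OPA's data API returns {"result": true/false}; treat a missing result as a deny.
    return resp.json().get("result", False) is True

# Example: the model proposed a structured tool call; the gateway validates it first.
proposed = {"name": "send_email", "args": {"to": "ops@example.com", "body": "weekly report"}}
if is_tool_call_allowed(proposed, user="agent-42"):
    print("policy allows the call; dispatch to the sandboxed tool runner")
else:
    print("policy denied the call; log the attempt and return a refusal to the model")
```

The design point from the post is that the model never executes anything directly: every tool call is expressed as structured data and only dispatched after the policy engine says yes.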
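Along the same lines, here is a hedged sketch of the output-filtering idea behind "prevents data leakage": a simple regex pass that redacts obvious PII patterns (emails, US-style SSNs) from model output before it leaves the gateway. The patterns and function name are illustrative assumptions; the actual project may detect sensitive data very differently.

```python
# Minimal sketch: redact obvious PII patterns from model output before returning it.
# Patterns and names are illustrative, not the project's implementation.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matches of known PII patterns with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789, for details."))
# -> Contact [REDACTED EMAIL], SSN [REDACTED SSN], for details.
```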
