
Show HN: A minimal context engine with streaming API


I needed a better way to create and compare prompts when using local LLMs (e.g. via Ollama) in a workflow. The existing options are huge harnesses that do everything, tools designed mostly for cloud LLMs, conversational chat UIs, or text inlined with code (i.e. what you see in most LLM docs).

My approach is instead to define an Objective (what you want to achieve in your task), which contains a Draft and committed Versions of context (a system prompt, plus optional example user/assistant turns), alongside test Runs (reusable test input). Everything is managed via a clean REST API, with a granular subscription model over WebSockets for streaming run output.

I'm building a local-first natural-language UI system for various technical, social, and creative projects. This tool fits nicely between a UI agent and various service APIs.

Comments URL: https://news.ycombinator.com/item?id=47910504
Points: 1
# Comments: 0
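The Objective/Draft/Version/Run model and the granular run-output subscriptions described above might be sketched as follows. This is purely illustrative: all class names, field names, and the topic string format are assumptions, since the post does not publish the actual schema or API, and an in-process callback stands in for the WebSocket transport.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical data model inferred from the post; real names are unknown.

@dataclass
class Version:
    """A snapshot of context: system prompt plus example turns."""
    system_prompt: str
    example_turns: list[tuple[str, str]] = field(default_factory=list)  # (user, assistant)

@dataclass
class Run:
    """A reusable test input, with its streamed output accumulated in chunks."""
    input_text: str
    output_chunks: list[str] = field(default_factory=list)

@dataclass
class Objective:
    """What you want to achieve; owns a mutable Draft and committed Versions."""
    name: str
    draft: Version
    versions: list[Version] = field(default_factory=list)
    runs: list[Run] = field(default_factory=list)
    _subscribers: dict[str, list[Callable[[str], None]]] = field(default_factory=dict)

    def commit(self) -> Version:
        """Freeze the current Draft as a new committed Version."""
        snapshot = Version(self.draft.system_prompt, list(self.draft.example_turns))
        self.versions.append(snapshot)
        return snapshot

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        """Granular subscription, e.g. to topic 'run/0/output'."""
        self._subscribers.setdefault(topic, []).append(callback)

    def emit(self, topic: str, chunk: str) -> None:
        """Stream a chunk to that topic's subscribers (WebSocket stand-in)."""
        for cb in self._subscribers.get(topic, []):
            cb(chunk)

# Usage: edit the Draft, commit it, then stream a Run's output to a subscriber.
obj = Objective("summarize-notes", Version("You summarize notes tersely."))
v1 = obj.commit()
run = Run("Meeting notes: ship v2 on Friday.")
obj.runs.append(run)
obj.subscribe("run/0/output", run.output_chunks.append)
for chunk in ["Ship ", "v2 ", "Friday."]:
    obj.emit("run/0/output", chunk)
print("".join(run.output_chunks))  # prints "Ship v2 Friday."
```

The topic-keyed subscriber map is one plausible reading of "granular subscription model": a client subscribes only to the output stream of the run it cares about, rather than to a firehose of all events.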

HackerNews AI Launches · about 3 hours ago