Show HN: Verdic Guard – deterministic guardrails for production AI

I'm building Verdic Guard to explore a problem I kept seeing with LLMs in production. Models often behave well in demos and short interactions, but once they're embedded into long, agentic, or real-world workflows, outputs can drift in subtle ways. Prompt tuning, retries, and monitoring help, but they don't clearly define or enforce what the system is actually allowed to do.

Verdic Guard treats AI reliability as a validation and enforcement problem, not just a prompting problem. The idea is to define intent, boundaries, and constraints upfront, then validate outputs against those constraints before they reach users or downstream systems.

This is early and opinionated. I'm sharing to get feedback from people who've dealt with:

- LLMs in long-running or agentic workflows
- Production reliability vs demo behavior
- Guardrails beyond prompt engineering

Project: https://www.verdic.dev

Happy to answer questions or hear critiques.

— Kundan
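To make the "validate before release" idea concrete, here is a minimal sketch of the general pattern: declare constraints upfront, then check a model's output against them before it reaches users or downstream systems. All names here (`Constraints`, `validate`, the example fields) are my own illustration, not Verdic Guard's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Constraints:
    # Illustrative constraint set; a real system would have a richer schema.
    allowed_actions: set            # actions the agent is permitted to take
    max_chars: int = 2000           # hard cap on output length
    forbidden_terms: set = field(default_factory=set)

def validate(output: dict, c: Constraints) -> list:
    """Return a list of violations; empty means the output may pass through."""
    violations = []
    if output.get("action") not in c.allowed_actions:
        violations.append(f"action {output.get('action')!r} not allowed")
    text = output.get("text", "")
    if len(text) > c.max_chars:
        violations.append(f"output exceeds {c.max_chars} chars")
    for term in c.forbidden_terms:
        if term in text.lower():
            violations.append(f"forbidden term {term!r} present")
    return violations

c = Constraints(allowed_actions={"reply", "search"}, forbidden_terms={"refund"})
print(validate({"action": "reply", "text": "Here is the status."}, c))
print(validate({"action": "delete_account", "text": "Issuing a refund now."}, c))
```

The point of the pattern is that the checks are deterministic and defined independently of the prompt, so the same output is always accepted or rejected for the same reasons.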