Design Patterns for Securing LLM Agents Against Prompt Injections