No Need for Speed: Why Batch LLM Inference Is Often the Smarter Choice