Ask HN: Are LLM providers making LLMs worse on purpose? The question is less about the classic MoE/quantization debate and more about a model's trained behaviors. It feels like the ideal model, from an LLM provider's perspective, is one where users have to follow up with another prompt to clarify or improve the result, and which behaves this way maybe 50% of the time so as not to cause excessive churn.