Fine-tuning LLMs for longer context and better RAG systems