Show HN: Julie update – local LLMs, CUA, installers and perf gains (tryjulie.vercel.app)

The biggest shift is that Julie now supports fully local LLMs and agentic workflows. It’s no longer limited to answering questions about what’s on screen. It can now run writing and coding agents, and optionally take concrete actions on your computer under supervision.

What’s new:

- Local LLM support. Julie can now run entirely on-device.
- Agentic computer use. I added a computer-use mode with demos showing multi-step actions like clicking, typing, and navigation.
- Writing and coding agents. Draft, refactor, and iterate in place without moving into a separate workspace.
- Installers are now available, so setup is a lot simpler.
- Significant performance improvements across startup time, memory usage, and latency.

I also wrote a full walkthrough with demos covering how the agents work and where the boundaries are: https://tryjulie.vercel.app/

Repo + installers: https://github.com/Luthiraa/julie

Thanks for all the support and feedback. From the bottom of my heart, I really appreciate it. I’ve loved building this, and it’s been one of the fastest things I’ve taken from idea to something real that people are actually using. I really love this community. If you enjoyed checking it out, a star on the repo would mean a lot and helps more people find it.