Ask HN: If evil ASI is a threat, is there anything scarier than LEO inference?

Self-improving artificial super intelligence sounds like sci-fi, but there is a non-zero chance of it happening. I've been a skeptic myself, and who knows, but for the sake of argument let's accept it as a possible outcome.

Given that assumption, is there anything scarier than a space-based, distributed data center in the sky running a non-aligned agent? How does one unplug that?

Think about this for a moment. Starlink is the Internet 2.0: a network that is very difficult to destroy, with millions of clients and base stations. Is the proposed always-in-sunlight orbital inference network our most likely "great filter"?

If there is even a 0.01% chance of non-aligned ASI, should we not ban distributed inference in space?