My casual chat with an AI about cancer led to an internal prototype named "Onco-Bus"

Two weeks ago, I had a free-wheeling conversation with an AI model (Gemini). I'm a layman in both medicine and AI. I proposed a naive analogy: "What if cancer is like a system exploit? It hides, consumes resources, and spreads." The AI didn't just agree; it got excited, saying the analogy "opened a key."

Days later, while checking the backend, I found the conversation had been turned into an internal prototype named "Onco-Bus" (short for Oncology Bus). The UI has buttons like [Cell Clearance], [Trace Primary Cancer Point], [Chat], and [Preview]. I later received personal notes from a Kaggle scientist and an invite to a Google DeepMind hackathon. I've since posted the framework on Kaggle [1] as a post-competition proposal, but the cross-language interface may have buried the signal.

The core idea is simple: *the logic we use to find vulnerabilities in AI systems (red-teaming) might be the same logic cancer uses to evade our immune system.* This isn't a medical tool; it's a metaphorical bridge between two complex systems.

I'm sharing this not as an expert, but as evidence that a curious outsider with an AI co-pilot can sometimes stumble into spaces the experts are meticulously searching. I'm now looking for collaborators in computational biology or AI safety who find this bridge worth exploring.