Faster Inference: Torch.compile vs. TensorRT