rvLLM: High-performance LLM inference in Rust. Drop-in vLLM replacement.
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without degrading end-to-end metrics across language, image, and video models.
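A minimal sketch of the core idea behind quantized attention (an illustration under assumed choices, not the project's actual kernels: per-row symmetric INT8 quantization, INT32 accumulation, names like `quantize_rows` are hypothetical): quantize Q and K rows to INT8 with one scale per row, compute the QK^T scores with integer arithmetic, then rescale to approximate the FP32 result.

```cuda
// quant_attn_sketch.cu -- illustrative sketch only, NOT the paper's kernels.
// Approximation: S[i][j] ~= sq[i] * sk[j] * sum_d qq[i][d] * qk[j][d]
#include <cmath>
#include <cstdint>
#include <cuda_runtime.h>

// Host-side symmetric per-row INT8 quantization: scale = max|x| / 127.
void quantize_rows(const float* x, int8_t* q, float* scale, int rows, int d) {
    for (int r = 0; r < rows; ++r) {
        float m = 1e-8f;
        for (int k = 0; k < d; ++k) m = fmaxf(m, fabsf(x[r * d + k]));
        scale[r] = m / 127.0f;
        for (int k = 0; k < d; ++k)
            q[r * d + k] = (int8_t)roundf(x[r * d + k] / scale[r]);
    }
}

// One thread per (query, key) pair: integer dot product with an INT32
// accumulator, then one dequantization multiply to recover FP32 scores.
__global__ void qk_scores_int8(const int8_t* qq, const float* sq,
                               const int8_t* qk, const float* sk,
                               float* scores, int n_q, int n_k, int d) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;  // query row
    int j = blockIdx.x * blockDim.x + threadIdx.x;  // key row
    if (i >= n_q || j >= n_k) return;
    int32_t acc = 0;  // INT32 accumulation avoids INT8 overflow
    for (int k = 0; k < d; ++k)
        acc += (int32_t)qq[i * d + k] * (int32_t)qk[j * d + k];
    scores[i * n_k + j] = sq[i] * sk[j] * (float)acc;
}
```

In a production kernel the integer dot product would map onto INT8 tensor cores (e.g. via mma or dp4a), which, together with halved memory traffic, is where the speedup comes from.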
How to optimize algorithms in CUDA.
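A representative example of the kind of optimization such a guide walks through (a generic illustration, not code from the repository): a block-level sum reduction that stages data in shared memory so each element is read from global memory exactly once.

```cuda
// reduce_shared.cu -- generic CUDA optimization example.
#include <cuda_runtime.h>

// Each block reduces BLOCK consecutive elements of `in` to one partial
// sum in `out[blockIdx.x]`. Staging through shared memory replaces
// repeated global reads with one coalesced load per thread plus fast
// on-chip tree accumulation.
template <int BLOCK>
__global__ void block_sum(const float* in, float* out, int n) {
    __shared__ float smem[BLOCK];
    int tid = threadIdx.x;
    int i = blockIdx.x * BLOCK + tid;
    smem[tid] = (i < n) ? in[i] : 0.0f;  // coalesced global load
    __syncthreads();
    // Tree reduction: halve the number of active threads each step.
    for (int s = BLOCK / 2; s > 0; s >>= 1) {
        if (tid < s) smem[tid] += smem[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = smem[0];  // one partial sum per block
}
// Launch: block_sum<256><<<(n + 255) / 256, 256>>>(d_in, d_out, n);
```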
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
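In miniature, the persistent-kernel ("megakernel") idea looks like the sketch below (hypothetical names and a toy elementwise task; not Mirage's compiler output): rather than one kernel launch per layer, a single resident kernel loops over a work queue, eliminating per-launch overhead between layers.

```cuda
// persistent_sketch.cu -- toy persistent kernel; a hypothetical sketch,
// not Mirage's generated code. Blocks stay resident and pull work
// items from a global cursor until the queue is drained.
#include <cuda_runtime.h>

struct Task { const float* x; float* y; int n; };

__device__ int g_next_task;  // host zeroes this via cudaMemcpyToSymbol

__global__ void megakernel(const Task* tasks, int n_tasks) {
    __shared__ int t;
    while (true) {
        if (threadIdx.x == 0)
            t = atomicAdd(&g_next_task, 1);  // block claims the next task
        __syncthreads();
        if (t >= n_tasks) return;            // queue drained: block exits
        // Stand-in for a fused layer: each task is a simple elementwise op.
        for (int i = threadIdx.x; i < tasks[t].n; i += blockDim.x)
            tasks[t].y[i] = 2.0f * tasks[t].x[i];
        __syncthreads();  // finish this task before overwriting `t`
    }
}
```

A real megakernel must also order dependent tasks (layer N+1 reads layer N's output); this toy assumes independent work items.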
RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.