Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
m0at/rvllm

rvllm

rvLLM: High-performance LLM inference in Rust. Drop-in vLLM replacement.

Score: 59.4/100
Cuda · Stars: 568 · Forks: 55

Similar Projects

SageAttention
Score: 63
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
Cuda · 3.3K stars

how-to-optim-algorithm-in-cuda
Score: 59
How to optimize some algorithms in CUDA.
Cuda · 2.9K stars

mirage
Score: 73
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel.
Cuda · 2.2K stars

rtp-llm
Score: 80
RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.
Cuda · 1.1K stars