Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
vllm-project/vllm (Python)

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Score: 92.8/100
Stars: 72.4K · Forks: 14.1K

Similar Projects

sglang
Score: 90
SGLang is a high-performance serving framework for large language models and multimodal models.
Python · 24.2K stars

LMCache
Score: 87
Supercharge Your LLM with the Fastest KV Cache Layer
Python · 7.6K stars

vllm-ascend
Score: 77
Community-maintained hardware plugin for vLLM on Ascend
Python · 1.7K stars

nano-vllm
Score: 49
Nano vLLM
Python · 12.1K stars