microsoft/sarathi-serve (Python)

sarathi-serve

A low-latency & high-throughput serving engine for LLMs

Score: 49.0/100
Stars: 500 · Forks: 63

Similar Projects

vllm · Score: 93

A high-throughput and memory-efficient inference and serving engine for LLMs

Python · 80.2K stars

lorax · Score: 85

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs

Python · 3.8K stars

transformers · Score: 98

🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.

Python · 160.7K stars

sglang · Score: 91

SGLang is a high-performance serving framework for large language models and multimodal models.

Python · 27.9K stars