Notice: This resource is provided by a third-party author. Please review the code, manually or with AI tools, before use to ensure security and compatibility.
Python · toverainc/willow-inference-server

willow-inference-server

Open source, local, and self-hosted highly optimized language inference server supporting ASR/STT, TTS, and LLM across WebRTC, REST, and WS

Score: 52.5/100
Stars: 500 · Forks: 58
View on GitHub

Similar Projects

vllm

Score: 93

A high-throughput and memory-efficient inference and serving engine for LLMs

Python · Stars: 78.8K

sglang

Score: 91

SGLang is a high-performance serving framework for large language models and multimodal models.

Python · Stars: 26.9K

faster-whisper

Score: 64

Faster Whisper transcription with CTranslate2

Python · Stars: 22.6K

OpenLLM

Score: 89

Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.

Python · Stars: 12.3K