Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
C++ · zhihu/ZhiLight

ZhiLight

A highly optimized LLM inference acceleration engine for Llama and its variants.

Score: 64.5/100
Stars: 904 · Forks: 102
View on GitHub

Similar Projects

vllm

93

A high-throughput and memory-efficient inference and serving engine for LLMs

Python · 77.8K

PowerInfer

57

High-speed Large Language Model Serving for Local Deployment

C++ · 9.4K

lemonade

86

Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https://discord.gg/5xXzkMu8Zk

C++ · 3.6K

vllm-ascend

78

Community maintained hardware plugin for vLLM on Ascend

C++ · 2.0K