C++ · jd-opensource/xllm

xllm

A high-performance inference engine for LLMs, optimized for diverse AI accelerators.

Score: 76.7/100
Stars: 1.1K · Forks: 144

Similar Projects

KuiperLLama

48

A great project for campus recruiting (autumn/spring hiring) and internships: build an LLM inference framework from scratch that supports LLama2/3 and Qwen2.5.

C++ · 506

PowerInfer

62

High-speed Large Language Model Serving for Local Deployment

C++ · 8.8K

lemonade

85

Lemonade helps users discover and run local AI apps by serving optimized LLMs directly from their own GPUs and NPUs. Join the Discord: https://discord.gg/5xXzkMu8Zk

C++ · 2.3K

ZhiLight

74

A highly optimized LLM inference acceleration engine for Llama and its variants.

C++ · 905