Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
C++ · b4rtaz/distributed-llama

distributed-llama

Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.

Score: 79.6/100
Stars: 2.9K · Forks: 226
View on GitHub: https://github.com/b4rtaz/distributed-llama

Similar Projects

PowerInfer (Score: 57)
High-speed Large Language Model Serving for Local Deployment
C++ · 9.4K stars

lemonade (Score: 86)
Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk
C++ · 3.7K stars

LeanCopilot (Score: 71)
LLMs as Copilots for Theorem Proving in Lean
C++ · 1.3K stars

ZhiLight (Score: 64)
A highly optimized LLM inference acceleration engine for Llama and its variants.
C++ · 904 stars