Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
C++ · Luce-Org/lucebox-hub

lucebox-hub

Lucebox optimization hub: hand-tuned LLM inference built for specific consumer hardware.

Score: 71.7/100
Stars: 561 · Forks: 44

Similar Projects

RCLI

Score: 68

Talk to your Mac, query your docs, no cloud required. On-device voice AI + RAG

C++ · 1.5K stars

lemonade

Score: 86

Lemonade helps users discover and run local AI apps by serving optimized LLMs directly from their own GPUs and NPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk

C++ · 3.6K stars

xllm

Score: 78

A high-performance inference engine for LLMs, optimized for diverse AI accelerators.

C++ · 1.2K stars

llama.rn

Score: 79

React Native bindings for llama.cpp

C++ · 917 stars