Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
foldl/chatllm.cpp (C++)

chatllm.cpp

Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)

Score: 75.8/100
Stars: 828 · Forks: 60
View on GitHub

Similar Projects

PowerInfer (Score: 62)

High-speed Large Language Model Serving for Local Deployment

C++ · 8.8K

distributed-llama (Score: 76)

Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference.

C++ · 2.9K

lemonade (Score: 85)

Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https://discord.gg/5xXzkMu8Zk

C++ · 2.3K

LeanCopilot

79

LLMs as Copilots for Theorem Proving in Lean

C++1.2K