Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
lean-dojo/LeanCopilot (C++)

LeanCopilot

LLMs as Copilots for Theorem Proving in Lean

Score: 79.3/100
Stars: 1.2K · Forks: 122
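Since LeanCopilot is used as tactics inside Lean 4 proofs, here is a minimal sketch of a typical invocation. It assumes LeanCopilot has been added as a Lake dependency of your project; the tactic names `suggest_tactics` and `search_proof` come from the project's README, and the theorem is just an illustrative example.

```lean
import LeanCopilot

-- Ask the model for next-step suggestions at the current goal;
-- suggestions appear in the infoview and can be clicked to
-- replace the `suggest_tactics` call.
example (a b : Nat) : a + b = b + a := by
  suggest_tactics

-- Let LeanCopilot search for a complete proof of the goal,
-- combining model-generated tactics with backtracking search.
example (a b : Nat) : a + b = b + a := by
  search_proof
```

In both cases the model runs locally through the library's native inference backend, so no external API calls are required.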

Similar Projects

- yalm · C++ · 555 stars · score 39
  Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O
- MNN · C++ · 14.5K stars · score 94
  MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.
- PowerInfer · C++ · 8.8K stars · score 62
  High-speed Large Language Model Serving for Local Deployment
- distributed-llama · C++ · 2.9K stars · score 76
  Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.