Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
C++ · FastFlowLM/FastFlowLM

FastFlowLM

Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs.

Score: 81.9/100
Stars: 1.2K · Forks: 76

Similar Projects

lemonade

Score: 86

Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https://discord.gg/5xXzkMu8Zk

C++ · Stars: 3.6K

tt-metal

Score: 77

:metal: TT-NN operator library and TT-Metalium low-level kernel programming model.

C++ · Stars: 1.4K

PowerInfer

Score: 57

High-speed Large Language Model Serving for Local Deployment

C++ · Stars: 9.4K

mllm

Score: 74

Fast Multimodal LLM on Mobile Devices

C++ · Stars: 1.5K