Embeddable physically based renderer
MNN: A blazing-fast, lightweight inference engine battle-tested by Alibaba, powering high-performance on-device LLMs and Edge AI.
A flexible, high-performance serving framework for machine learning models (the serving deployment framework for PaddlePaddle, 『飞桨』)
LLM inference in C/C++
Port of OpenAI's Whisper model in C/C++