Python · flashinfer-ai/flashinfer

flashinfer

FlashInfer: Kernel Library for LLM Serving

Score: 83.6/100

Stars: 5.1K · Forks: 768

Similar Projects

TensorRT-LLM

Score: 89

TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.

Python · 13.0K stars

InferenceX

Score: 69

Open-source continuous inference benchmarking of Qwen3.5, DeepSeek, and GPTOSS — GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100, and soon™ TPUv6e/v7/Trainium2/3.

Python · 638 stars

pymde

Score: 76

Minimum-distortion embedding with PyTorch

Python · 580 stars

vllm

Score: 93

A high-throughput and memory-efficient inference and serving engine for LLMs

Python · 72.4K stars