Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
bentoml/BentoML (Python)

BentoML

The easiest way to serve AI apps and models: build model inference APIs, job queues, LLM apps, multi-model pipelines, and more!

Score: 89.8/100
Stars: 8.5K · Forks: 917

Similar Projects

cognita

Score: 75

RAG (Retrieval Augmented Generation) Framework for building modular, open source applications for production by TrueFoundry

Python · 4.3K stars

FedML

Score: 67

FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI job on any GPU cloud or on-premises cluster. Built on this library, TensorOpera AI (https://TensorOpera.ai) is your generative AI platform at scale.

Python · 4.0K stars

vllm-ascend

Score: 77

Community maintained hardware plugin for vLLM on Ascend

Python · 1.7K stars

mosec

Score: 84

A high-performance ML model serving framework that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute resources

Python · 892 stars
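The dynamic batching that mosec (and serving frameworks like BentoML) advertise can be illustrated with a small stdlib-only sketch. This is a hypothetical illustration of the technique, not the API of any project listed above: requests arriving within a short window are grouped and handed to the model as one batch, amortizing per-call overhead.

```python
import threading
import queue
import time

class DynamicBatcher:
    """Group individual requests into batches (illustrative sketch)."""

    def __init__(self, handler, max_batch_size=8, max_wait_s=0.01):
        self._handler = handler          # callable: list of inputs -> list of outputs
        self._max_batch_size = max_batch_size
        self._max_wait_s = max_wait_s
        self._queue = queue.Queue()
        worker = threading.Thread(target=self._loop, daemon=True)
        worker.start()

    def submit(self, item):
        """Enqueue one request and block until its result is ready."""
        slot = {"input": item, "event": threading.Event(), "output": None}
        self._queue.put(slot)
        slot["event"].wait()
        return slot["output"]

    def _loop(self):
        while True:
            first = self._queue.get()    # block until at least one request arrives
            batch = [first]
            deadline = time.monotonic() + self._max_wait_s
            # Collect more requests until the batch fills or the window closes.
            while len(batch) < self._max_batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self._queue.get(timeout=remaining))
                except queue.Empty:
                    break
            # One "vectorized" model call for the whole batch.
            outputs = self._handler([s["input"] for s in batch])
            for slot, out in zip(batch, outputs):
                slot["output"] = out
                slot["event"].set()

# Toy "model": doubles every input in a single batched call.
def double_batch(xs):
    return [x * 2 for x in xs]

batcher = DynamicBatcher(double_batch, max_batch_size=4, max_wait_s=0.05)
print(batcher.submit(3))  # → 6
```

Real frameworks add backpressure, timeouts, and multi-stage CPU/GPU pipelines on top of this core idea; the sketch only shows the batching window itself.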