Notice: This resource is provided by a third-party author. Please review the code with AI tools or manually before use to ensure security and compatibility.
Python · jundot/omlx

omlx

LLM inference server with continuous batching & SSD caching for Apple Silicon, managed from the macOS menu bar (see the client sketch below).

Score: 87.7/100
Stars: 11.7K · Forks: 1.0K
View on GitHub · Homepage →
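
Continuous batching pays off when many requests are in flight at once: the server interleaves them token by token instead of queueing whole requests. A minimal client sketch, assuming omlx exposes an OpenAI-compatible /v1/chat/completions endpoint on localhost:8000; the endpoint path, port, and model name are assumptions, not confirmed by this listing.

```python
# Hypothetical client: the endpoint path, port, and model name are
# assumptions; the listing does not document omlx's HTTP API.
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/v1/chat/completions"  # assumed endpoint

def ask(prompt: str) -> str:
    resp = requests.post(
        URL,
        json={
            "model": "default",  # placeholder model id
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 64,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Fire eight requests at once: a continuous-batching server interleaves
# them token by token rather than running them strictly one at a time.
prompts = [f"Give one fact about Apple Silicon, topic {i}." for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    for answer in pool.map(ask, prompts):
        print(answer)
```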

Similar Projects

vllm-mlx

Score: 77

OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code. (See the client example below.)

Python · 1.0K stars
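
Because the server is OpenAI-compatible, the official openai Python client should work with only a base_url override. A minimal sketch; the port and model id are assumptions, so check the vllm-mlx docs for the actual defaults.

```python
# Point the standard openai client at a local OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="mlx-community/Llama-3.2-3B-Instruct-4bit",  # assumed model id
    messages=[{"role": "user", "content": "What is continuous batching?"}],
)
print(resp.choices[0].message.content)
```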

mlx-vlm

Score: 83

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX (usage sketch below).

Python · 4.5K stars
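
A minimal usage sketch following the load/generate pattern from the mlx-vlm README. The model id is only an example, and argument details can shift between versions, so treat this as an outline rather than the pinned API.

```python
# Load a quantized VLM and run one image-grounded generation.
from mlx_vlm import load, generate

# Example model id from the Hugging Face mlx-community org (an assumption).
model, processor = load("mlx-community/Qwen2-VL-2B-Instruct-4bit")

prompt = "Describe this image."
image = ["photo.jpg"]  # local path or URL

# generate(model, processor, prompt, image) per the README pattern.
output = generate(model, processor, prompt, image, verbose=False)
print(output)
```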

mlx-tune

Score: 83

Fine-tune LLMs on your Mac with Apple Silicon. SFT, DPO, GRPO, Vision, TTS, STT, Embedding, and OCR fine-tuning, natively on MLX. Unsloth-compatible API (hypothetical sketch below).

Python · 1.2K stars
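
If "Unsloth-compatible API" means the package mirrors Unsloth's FastLanguageModel interface, an SFT setup might look like the sketch below. Every import path and argument name here is hypothetical, not mlx-tune's documented API.

```python
# Hypothetical sketch: assumes mlx-tune mirrors Unsloth's
# FastLanguageModel interface, as "Unsloth-compatible API" suggests.
# The import path and argument names below are guesses.
from mlx_tune import FastLanguageModel  # hypothetical import

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mlx-community/Llama-3.2-3B-Instruct-4bit",  # example id
    max_seq_length=2048,
)

# In Unsloth's API, get_peft_model attaches LoRA adapters before
# handing the model to an SFT trainer.
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=32)
```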

gorilla

Score: 83

Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls). (A schema example follows below.)

Python · 12.9K stars
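
Function calling turns a user request into a structured call against a declared schema. The sketch below uses the OpenAI-style tools format, the common interchange for this task; Gorilla's own harness, the Berkeley Function Calling Leaderboard, uses similar function declarations. This is an illustration of the pattern, not Gorilla's own API.

```python
# Declare one callable function in the OpenAI-style tools schema.
import json

get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Given a user turn plus this schema, a function-calling model emits a
# structured call instead of free text, e.g.:
call = {"name": "get_weather", "arguments": {"city": "Cupertino", "unit": "celsius"}}
print(json.dumps(get_weather, indent=2))
print(json.dumps(call))
```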