Python · EvolvingLMMs-Lab/LLaVA-OneVision-1.5

LLaVA-OneVision-1.5

Fully Open Framework for Democratized Multimodal Training

Score: 50.8/100
Stars: 794 · Forks: 62
View on GitHub · Homepage

Similar Projects

mlx-vlm

Score: 84

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX (a minimal usage sketch follows this card).

Python · 4.5K stars
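
As a quick illustration of the use case described above, here is a minimal inference sketch following the load/generate pattern shown in the mlx-vlm README. The checkpoint path and image file are placeholder assumptions, and the exact signatures of apply_chat_template and generate may vary between package versions.

```python
# Minimal mlx-vlm inference sketch (runs on Apple Silicon Macs).
# The checkpoint and image path below are illustrative assumptions.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Qwen2-VL-2B-Instruct-4bit"  # example 4-bit checkpoint
model, processor = load(model_path)  # downloads weights on first use
config = load_config(model_path)

images = ["cat.jpg"]  # hypothetical local image file
prompt = apply_chat_template(
    processor, config, "Describe this image.", num_images=len(images)
)

# Generate a caption for the image with the loaded VLM.
output = generate(model, processor, prompt, images, verbose=False)
print(output)
```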

VLMEvalKit

Score: 82

Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks.

Python · 4.1K stars

WhisperJAV

Score: 83

An ASR/STT subtitle generator that uses Qwen3-ASR, a local LLM, Whisper, and TEN-VAD; noise-robust for JAV.

Python · 1.5K stars

vllm-mlx

Score: 78

OpenAI- and Anthropic-compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code. (A client sketch follows this card.)

Python · 923 stars
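
Because the server advertises OpenAI compatibility, the stock openai Python client should work against it unchanged. The sketch below assumes a vllm-mlx server already listening locally; the port, model id, and image URL are illustrative assumptions rather than values documented by the project.

```python
# Query a locally running OpenAI-compatible vllm-mlx server.
# Port, model id, and image URL are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mlx-community/Qwen2-VL-2B-Instruct-4bit",  # hypothetical model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Per the project description, the same server could also be reached with an Anthropic-style client, which is what enables the Claude Code integration mentioned above.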