EvolvingLMMs-Lab/LLaVA-OneVision-1.5 · Python

LLaVA-OneVision-1.5

Fully Open Framework for Democratized Multimodal Training

Score: 52.1/100
Stars: 757 · Forks: 60
View on GitHub

Similar Projects

mlx-vlm

Score: 82

MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.

Python · 2.2K stars

InternVL

Score: 67

[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance.

Python · 9.9K stars

VLMEvalKit

Score: 81

Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks.

Python · 3.9K stars

vllm-mlx

Score: 60

OpenAI- and Anthropic-compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX backend, 400+ tok/s. Works with Claude Code.

Python · 531 stars