Beijing University of Posts and Telecommunications; University of Chinese Academ
| [Paper](https://xxxx) | [Code](https://github.com/baaivision/EVE) |
</div>
Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting the promise of unified multimodal systems with structural simplicity and efficient deployment.
We systematically examine the performance gap among VLMs built on pre-trained vision encoders, discrete tokenizers, or minimalist visual layers trained from scratch, probing the under-examined characteristics of encoder-free VLMs. Building on this analysis, we develop efficient training strategies that let encoder-free VLMs rival mainstream encoder-based ones.
After an in-depth investigation, we launch EVEv2.0, a new and improved family of encoder-free VLMs.
We show that: (i) Properly decomposing and hierarchically associating vision and language within a unified model reduces interference between modalities.
(ii) A well-designed training strategy enables effective optimization for encoder-free VLMs.
Through extensive evaluation, EVEv2.0 offers a thorough study of decoder-only architectures across modalities, demonstrating superior data efficiency and strong vision-reasoning capability.
## Model Weights