Add paper link, task category, and GitHub link
#1 by nielsr (HF Staff) - opened

README.md CHANGED
---
license: cc-by-nc-sa-4.0
task_categories:
- video-text-to-text
language:
- en
---

# Dataset used in D2VLM

[**Paper**](https://huggingface.co/papers/2512.24097) | [**Code**](https://github.com/nusnlp/d2vlm)

Here we provide the instructions for dataset preparation. We provide transformed annotations (or the steps to transform them) tailored to D2VLM, as required by its new training objective design.

| Dataset Name | Role | Source |
|----------------|:--------------------:|:------:|
| E.T. Bench | Evaluation Benchmark | [Data](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/BENCHMARK.md) |
| Charades-STA | Evaluation Benchmark | [Data](https://prior.allenai.org/projects/charades), [Annotation](https://huggingface.co/datasets/ShuhuaiRen/TimeIT) |
| Youcook2 | Evaluation Benchmark | [Data](http://youcook2.eecs.umich.edu/download), [Annotation](https://huggingface.co/datasets/ShuhuaiRen/TimeIT) |
| E.T. Instruct | Finetuning data | [Data](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/DATASET.md) |

## E.T. Bench
1. Download and process the data following the [official documentation](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/BENCHMARK.md).

2. Use the annotations provided in this Hugging Face repo (i.e., `D2VLM-Dataset/ETBench/evi`). An optional download sketch is shown below.

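If you prefer to fetch the provided annotations programmatically, here is a minimal sketch. The repo id is a placeholder, and the `ETBench/evi/*` pattern assumes the folder layout named above.

```python
# Hypothetical sketch: download only the E.T. Bench annotations from this dataset repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<this-dataset-repo-id>",   # placeholder: use this repo's actual id
    repo_type="dataset",
    allow_patterns=["ETBench/evi/*"],   # assumed path, based on `D2VLM-Dataset/ETBench/evi`
)
print("Annotations downloaded to:", local_dir)
```
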
## Charades-STA
1. Follow the provided link in the table above to download the raw videos and the annotations.
The folder is organized as follows.

```
xx/charades
├── Charades_v1 (downloaded and extracted raw videos from Charades_v1.zip)
│   ├── 001TG.mp4
│   ├── 003WS.mp4
│   └── ...
└── test.caption_coco_format.json
```
2. Run the post-processing scripts (in the project repo). Remember to update the paths in the .py files referenced by the script below.

```shell
bash other_benchmark_organize/charades/run.sh
```
## Youcook2
1. Follow the provided link in the table above to download the raw videos and the annotations.
The folder is organized as follows.

```
xx/youcook2
├── raw_videos (downloaded and extracted raw videos from raw_videos.tar.gz)
│   ├── testing
│   ├── training
│   └── validation
└── val.caption_coco_format.json
```

2. Run the post-processing scripts (in the project repo). Remember to update the paths in the .py files referenced by the script below.

```shell
bash other_benchmark_organize/youcook2/run.sh
```

## E.T. Instruct
1. Download and process the data following the [official documentation](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/DATASET.md).

2. Use the annotations provided in this Hugging Face repo.

- `D2VLM-Dataset/ET-Instruct/evi.json` --> For supervised finetuning (SFT).

- `D2VLM-Dataset/ET-Instruct/FPO/tokenized_fpo_annotation.pt` --> For FPO.

> For FPO, we apply an optimization: the dispreferred sample is appended after the preferred (positive) sample within the same sequence, and the positional encodings and attention mask are adjusted accordingly. This makes training more efficient than naively treating paired samples as separate batch items. See `fpo_anno_gen/run.sh` in the project repo for details; a conceptual sketch is given below.

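Below is a minimal, hypothetical sketch of that packed-pair idea (none of these names come from the repo; the tensor layout is an assumption): both responses share the prompt, the dispreferred segment restarts its position ids right after the prompt, and the attention mask keeps the two responses from attending to each other.

```python
# Hypothetical sketch of packing one FPO preference pair into a single sequence.
# Names and layout are illustrative only; see fpo_anno_gen/run.sh for the real pipeline.
import torch

def pack_fpo_pair(prompt_ids, preferred_ids, dispreferred_ids):
    """Concatenate prompt + preferred + dispreferred into one sequence, with
    position ids and an attention mask that keep the two responses independent
    of each other while both still condition on the prompt."""
    p, a, b = len(prompt_ids), len(preferred_ids), len(dispreferred_ids)
    input_ids = torch.cat([prompt_ids, preferred_ids, dispreferred_ids])

    # The dispreferred tokens reuse the positions they would have had if they
    # directly followed the prompt (as in a separate forward pass).
    position_ids = torch.cat([
        torch.arange(p),           # prompt
        torch.arange(p, p + a),    # preferred continuation
        torch.arange(p, p + b),    # dispreferred continuation (positions restarted)
    ])

    # Start from a causal mask, then block the dispreferred segment from
    # attending to the preferred one (causality already hides it the other way).
    n = p + a + b
    attn_mask = torch.ones(n, n).tril().bool()
    attn_mask[p + a:, p:p + a] = False
    return input_ids, position_ids, attn_mask
```

With this packing, a single forward pass scores both responses, instead of two passes (or two batch rows) per pair.
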
## Citation

If you find our work useful in your research, please consider citing our paper:

```bibtex
@inproceedings{d2vlm,
  title={Factorized Learning for Temporally Grounded Video-Language Models},
  author={Zeng, Wenzheng and Gao, Difei and Shou, Mike Zheng and Ng, Hwee Tou},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025},
  pages={20683--20693}
}
```