Add paper link, task category, and GitHub link

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +91 -79
README.md CHANGED
---
license: cc-by-nc-sa-4.0
task_categories:
- video-text-to-text
language:
- en
---

# Dataset used in D2VLM

[**Paper**](https://huggingface.co/papers/2512.24097) | [**Code**](https://github.com/nusnlp/d2vlm)

Here we provide the instructions for dataset preparation, along with the transformed annotations (or the steps to produce them) tailored to D2VLM's new training objective design.

| Dataset Name | Role | Source |
|----------------|:--------------------:|:------:|
| E.T. Bench | Evaluation benchmark | [Data](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/BENCHMARK.md) |
| Charades-STA | Evaluation benchmark | [Data](https://prior.allenai.org/projects/charades), [Annotation](https://huggingface.co/datasets/ShuhuaiRen/TimeIT) |
| YouCook2 | Evaluation benchmark | [Data](http://youcook2.eecs.umich.edu/download), [Annotation](https://huggingface.co/datasets/ShuhuaiRen/TimeIT) |
| E.T. Instruct | Fine-tuning data | [Data](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/DATASET.md) |

## E.T. Bench
1. Download and process the data following the [official documentation](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/BENCHMARK.md).

2. Use the annotations provided in this Hugging Face repo (i.e., `D2VLM-Dataset/ETBench/evi`); a sketch for fetching them is shown below.
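
If you prefer to fetch only these annotation files, here is a minimal sketch using `huggingface_hub`; the `repo_id` below is a placeholder, so substitute this dataset's actual id:

```python
# Hypothetical sketch: download only the E.T. Bench annotations from this
# dataset repo. The repo_id is a placeholder, not the verified id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="user/D2VLM-Dataset",        # placeholder id; replace it
    repo_type="dataset",
    allow_patterns=["ETBench/evi/*"],    # fetch only the E.T. Bench annotations
)
print(local_dir)
```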

## Charades-STA
1. Follow the link provided in the table above to download the raw videos and the annotations.
The folder is organized as follows.

```
xx/charades
├─ Charades_v1 (downloaded and extracted raw videos from Charades_v1.zip)
│  ├─ 001TG.mp4
│  ├─ 003WS.mp4
│  └─ ...
└─ test.caption_coco_format.json
```
2. Run the post-processing scripts (in the project repo). Remember to update the paths in the `.py` files referenced by the script below.

```shell
bash other_benchmark_organize/charades/run.sh
```
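
Before running the script, a quick layout check can catch path mistakes early; this is a hypothetical helper, not part of the project repo:

```python
# Hypothetical sanity check (not part of the project repo): verify the expected
# Charades-STA layout before running the post-processing script.
from pathlib import Path

root = Path("xx/charades")  # replace with your actual download location
assert (root / "Charades_v1").is_dir(), "missing extracted Charades_v1 videos"
assert (root / "test.caption_coco_format.json").is_file(), "missing annotation file"
print(f"{len(list((root / 'Charades_v1').glob('*.mp4')))} videos found")
```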

## YouCook2
1. Follow the link provided in the table above to download the raw videos and the annotations.
The folder is organized as follows.

```
xx/youcook2
├─ raw_videos (downloaded and extracted raw videos from raw_videos.tar.gz)
│  ├─ testing
│  ├─ training
│  └─ validation
└─ val.caption_coco_format.json
```

2. Run the post-processing scripts (in the project repo). Remember to update the paths in the `.py` files referenced by the script below.

```shell
bash other_benchmark_organize/youcook2/run.sh
```
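
The `*.caption_coco_format.json` files are plain JSON, so a quick structural peek (a hypothetical snippet, not from the repo) can confirm the annotations loaded correctly:

```python
# Hypothetical peek (not part of the project repo): inspect the top-level
# structure of a COCO-caption-format annotation file after downloading.
import json

with open("xx/youcook2/val.caption_coco_format.json") as f:  # adjust the path
    anno = json.load(f)

# Print top-level keys (for a dict) or the number of records (for a list).
print(list(anno.keys()) if isinstance(anno, dict) else len(anno))
```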

## E.T. Instruct
1. Download and process the data following the [official documentation](https://github.com/PolyU-ChenLab/ETBench/blob/main/docs/DATASET.md).

2. Use the annotations provided in this Hugging Face repo.

- `D2VLM-Dataset/ET-Instruct/evi.json` --> for supervised fine-tuning (SFT).

- `D2VLM-Dataset/ET-Instruct/FPO/tokenized_fpo_annotation.pt` --> for FPO.

> For FPO, we apply an optimization: the dispreferred sample is appended after the preferred (positive) sample within the same sequence, and the positional encodings and attention mask are adjusted accordingly. This makes training more efficient than naively treating paired samples as separate batch items. See `fpo_anno_gen/run.sh` in the project repo to learn more.
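
Below is a minimal sketch of this packing idea, assuming a shared prompt followed by the two responses; the function and dimensions are illustrative, not the repo's actual implementation:

```python
# Illustrative sketch (NOT the repo's implementation): pack a preferred and a
# dispreferred response into one sequence so the shared prompt is encoded once.
import torch

def pack_pair(prompt_len: int, pos_len: int, neg_len: int):
    total = prompt_len + pos_len + neg_len

    # Position ids: the dispreferred response restarts right after the prompt,
    # as if it followed the prompt directly rather than the preferred response.
    pos_ids = torch.cat([
        torch.arange(prompt_len + pos_len),
        torch.arange(prompt_len, prompt_len + neg_len),
    ])

    # Attention mask: causal overall, but dispreferred tokens must not attend
    # to preferred tokens (the reverse is already blocked by causality).
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    mask[prompt_len + pos_len:, prompt_len:prompt_len + pos_len] = False
    return pos_ids, mask

pos_ids, mask = pack_pair(prompt_len=4, pos_len=3, neg_len=2)
print(pos_ids)  # tensor([0, 1, 2, 3, 4, 5, 6, 4, 5])
print(mask.int())
```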

## Citation

If you find our work useful in your research, please consider citing our paper:

```bibtex
@inproceedings{d2vlm,
  title={Factorized Learning for Temporally Grounded Video-Language Models},
  author={Zeng, Wenzheng and Gao, Difei and Shou, Mike Zheng and Ng, Hwee Tou},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025},
  pages={20683-20693}
}
```