Formats: csv
Libraries: Datasets, Dask

Files changed (1)
  1. README.md +79 -2

README.md CHANGED
@@ -21,6 +21,9 @@ This repository contains the instruction-based video editing evaluation benchmar
 
 
 
+## ⏬ Download Benchmark
+
+
 **(1) Clone the EditVerseBench Repository**
 
 ```
@@ -34,7 +37,7 @@ git clone https://huggingface.co/datasets/EditVerse/EditVerseBench
 The original source videos cannot be directly distributed due to licensing restrictions.
 Instead, you can download them using the provided script with the Pixabay API. (The network connection may occasionally fail, so you might need to run the script multiple times.)
 
-> ⚠️ Note: Please remember to revise the API key to your own key in `download_source_video.py`. You can find the API key [here](https://pixabay.com/api/docs/#api_search_images) (marked in green on the website). The API is free but you need to sign up an account to have the API key.
+> ⚠️ Note: Please remember to replace the API key in `download_source_video.py` with your own key. You can find the API key [here](https://pixabay.com/api/docs/#api_search_images) (listed under the "key (required)" parameter on that page). The API is free, but you need to sign up for an account to obtain a key.
 
 
 ```
@@ -52,7 +55,81 @@ rm EditVerse_Comparison_Results.tar.gz
 ```
 
 
-If you find our work useful for your research, please consider citing our paper:
+## Benchmark Results
+
+
+
+
+<table>
+<thead>
+<tr>
+<th rowspan="2">Method</th>
+<th colspan="1">VLM Evaluation</th>
+<th colspan="1">Video Quality</th>
+<th colspan="2">Text Alignment</th>
+<th colspan="2">Temporal Consistency</th>
+</tr>
+<tr>
+<th>Editing Quality ↑</th>
+<th>Pick Score ↑</th>
+<th>Frame ↑</th>
+<th>Video ↑</th>
+<th>CLIP ↑</th>
+<th>DINO ↑</th>
+</tr>
+</thead>
+<tbody>
+<!-- Attention Manipulation -->
+<tr>
+<td colspan="7" style="text-align:center; font-weight:bold;">Attention Manipulation (Training-free)</td>
+</tr>
+<tr>
+<td><b>TokenFlow</b></td>
+<td>5.26</td><td>19.73</td><td>25.57</td><td>22.70</td><td>98.36</td><td>98.09</td>
+</tr>
+<tr>
+<td><b>STDF</b></td>
+<td>4.41</td><td>19.45</td><td>25.24</td><td>22.26</td><td>96.04</td><td>95.22</td>
+</tr>
+<!-- First-Frame Propagation -->
+<tr>
+<td colspan="7" style="text-align:center; font-weight:bold;">First-Frame Propagation (w/ End-to-End Training)</td>
+</tr>
+<tr>
+<td><b>Señorita-2M</b></td>
+<td>6.97</td><td>19.71</td><td>26.34</td><td>23.24</td><td>98.05</td><td>97.99</td>
+</tr>
+<!-- Instruction-Guided -->
+<tr>
+<td colspan="7" style="text-align:center; font-weight:bold;">Instruction-Guided (w/ End-to-End Training)</td>
+</tr>
+<tr>
+<td><b>InsV2V</b></td>
+<td>5.21</td><td>19.39</td><td>24.99</td><td>22.54</td><td>97.15</td><td>96.57</td>
+</tr>
+<tr>
+<td><b>Lucy Edit</b></td>
+<td>5.89</td><td>19.67</td><td>26.00</td><td>23.11</td><td>98.49</td><td>98.38</td>
+</tr>
+<tr>
+<td><b>EditVerse (Ours)</b></td>
+<td><b>7.65</b></td><td><b>20.07</b></td><td><b>26.73</b></td><td><b>23.93</b></td><td><b>98.56</b></td><td><b>98.42</b></td>
+</tr>
+<!-- Closed-Source -->
+<tr>
+<td colspan="7" style="text-align:center; font-weight:bold; color:gray;">Closed-Source Commercial Models</td>
+</tr>
+<tr style="color:gray;">
+<td>Runway Aleph</td>
+<td>7.44</td><td>20.42</td><td>27.70</td><td>24.27</td><td>98.94</td><td>98.60</td>
+</tr>
+</tbody>
+</table>
+
+
+
+
+💌 If you find our work useful for your research, please consider citing our paper:
 ```
 @article{ju2025editverse,
   title = {EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning},
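
The changed README warns that the Pixabay download may fail intermittently and suggests re-running the script. A minimal sketch of that retry pattern, assuming nothing about the internals of `download_source_video.py` (the function name and parameters here are illustrative, not taken from the repository):

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, attempts: int = 3, backoff: float = 2.0) -> bytes:
    """Fetch a URL, retrying on transient network errors."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except urllib.error.URLError:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(backoff * attempt)  # brief linear backoff before retrying
```

Wrapping each per-video request this way removes the need to re-run the whole script when a single connection drops.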
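
The CLIP and DINO columns in the results table measure temporal consistency, which is typically computed as the mean cosine similarity between embeddings of consecutive frames. A sketch of that aggregation step, assuming frame embeddings have already been extracted by some encoder (the benchmark's exact evaluation code may differ):

```python
import math

def temporal_consistency(frame_embeddings: list[list[float]]) -> float:
    """Mean cosine similarity between consecutive (non-zero) frame embeddings."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    sims = [cos(a, b) for a, b in zip(frame_embeddings, frame_embeddings[1:])]
    return sum(sims) / len(sims)
```

Scores near 100 in the table (the values are scaled by 100) indicate that adjacent edited frames stay visually coherent rather than flickering.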