sungyub committed on
Commit 0065f1c · verified · 1 Parent(s): 73a15b6

Fix README: correct ground_truth format (inputs/outputs, not test_cases/templates)

Files changed (1)
  1. README.md +21 -51
README.md CHANGED
@@ -36,7 +36,7 @@ configs:
 
 # Code Contests Plus (VERL Format)
 
- This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to VERL format for reinforcement learning applications. Each problem includes multi-language test cases validated through sandbox execution.
+ This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to VERL format for reinforcement learning applications. Each problem includes test cases validated through sandbox execution.
 
 **Source**: [ByteDance-Seed/Code-Contests-Plus](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus) (1x config)
 
@@ -51,39 +51,26 @@ The dataset follows the VERL format with the following fields:
 - `ability` (string): Task category ("code")
 - `reward_model` (dict): Evaluation information
   - `style`: Evaluation method ("rule")
-   - `ground_truth`: JSON-encoded test cases with multi-language support
+   - `ground_truth`: JSON-encoded test cases with input/output pairs
 - `extra_info` (dict): Additional metadata
   - `index`: Example index from original dataset
 
 ## Test Case Format
 
- Each problem includes comprehensive test cases in the `reward_model.ground_truth` field, stored as JSON with the following structure:
+ Each problem includes test cases in the `reward_model.ground_truth` field, stored as JSON with the following structure:
 
 ```json
 {
-   "test_cases": [
-     {
-       "input": "3\n1 2 3\n",
-       "output": "6\n"
-     }
-   ],
-   "templates": {
-     "python": "def solve():\n {code}\n\nif __name__ == '__main__':\n solve()",
-     "cpp": "#include <bits/stdc++.h>\nusing namespace std;\n\n{code}\n\nint main() {\n solve();\n return 0;\n}",
-     "java": "import java.util.*;\nimport java.io.*;\n\npublic class Main {\n {code}\n \n public static void main(String[] args) {\n solve();\n }\n}",
-     "go": "package main\n\nimport (\n\t\"fmt\"\n\t\"bufio\"\n\t\"os\"\n)\n\n{code}\n\nfunc main() {\n\tsolve()\n}",
-     "rust": "use std::io::{self, BufRead};\n\n{code}\n\nfn main() {\n solve();\n}"
-   }
+   "inputs": ["3\n1 2 3\n"],
+   "outputs": ["6\n"]
 }
 ```
 
- ### Supported Languages
-
- - Python 3
- - C++ (with standard library)
- - Java
- - Go
- - Rust
+ The format consists of two parallel arrays:
+ - `inputs`: Array of input strings for each test case
+ - `outputs`: Array of expected output strings corresponding to each input
+
+ Each problem typically contains between 1 and 32 test cases, validated through sandbox execution during dataset creation.
 
 ## Data Processing
 
@@ -96,7 +83,7 @@ The dataset was created through a multi-step processing pipeline:
 
 ### 2. Sandbox Validation
 - Each problem's test cases were validated using a sandbox environment
- - Template execution tested for all supported languages
+ - Test input/output pairs verified for correctness
 - Only problems with passing validation were included
 
 ### 3. Size Filtering
@@ -131,51 +118,34 @@ print("Problem:", problem)
 
 # Parse test cases
 ground_truth = json.loads(example['reward_model']['ground_truth'])
- test_cases = ground_truth['test_cases']
- templates = ground_truth['templates']
+ inputs = ground_truth['inputs']
+ outputs = ground_truth['outputs']
 
- print(f"\nTest cases: {len(test_cases)}")
- print(f"First input: {test_cases[0]['input']}")
- print(f"Expected output: {test_cases[0]['output']}")
-
- # Available language templates
- print(f"\nSupported languages: {list(templates.keys())}")
+ print(f"\nNumber of test cases: {len(inputs)}")
+ print(f"First input: {repr(inputs[0])}")
+ print(f"Expected output: {repr(outputs[0])}")
 ```
 
 ## Example Problem
 
 **Problem Description:**
 ```
- Given an array of n integers, find the sum of all elements.
-
- Input Format:
- - First line: n (number of elements)
- - Second line: n space-separated integers
+ Twins
 
- Output Format:
- - Single integer: sum of all elements
+ square1001 and E869120 are twins, but they are not identical twins...
 ```
 
 **Test Case:**
 ```python
- Input: "3\n1 2 3\n"
- Output: "6\n"
- ```
-
- **Python Template:**
- ```python
- def solve():
-     {code}
-
- if __name__ == '__main__':
-     solve()
+ Input: ""
+ Output: "square1001"
 ```
 
 ## Statistics
 
 - **Total examples**: 8,432
 - **Average test cases per problem**: ~10-15
- - **Languages supported**: 5 (Python, C++, Java, Go, Rust)
+ - **Test case range**: 1-32 per problem
 - **Dataset size**: ~10 GB uncompressed, ~10 GB compressed (includes test cases)
 - **Format**: Parquet (11 shards, ~1GB each)
 - **Schema**: VERL-compatible
@@ -185,7 +155,7 @@ if __name__ == '__main__':
 All problems in this dataset have been validated to ensure:
 
 1. **Valid test cases**: Each problem has at least one valid test case
- 2. **Executable templates**: Templates for all languages pass basic validation
+ 2. **Correct input/output pairs**: Test cases verified through sandbox execution
 3. **Size constraints**: Test cases are within reasonable size limits (≤10MB)
 4. **Format consistency**: All examples follow the same schema structure
 
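For context on the corrected `ground_truth` layout, the sketch below shows one way a rule-based reward could consume the `inputs`/`outputs` parallel arrays: run a candidate program on each input string and compare its stdout to the expected output. This is a minimal illustration, not the dataset's actual reward implementation; `run_candidate`, the plain `subprocess` call (the README describes a sandbox environment), and the trailing-whitespace-insensitive exact match are all assumptions.

```python
import json
import subprocess


def run_candidate(source_path: str, stdin_text: str, timeout: float = 5.0) -> str:
    """Run a candidate Python solution on one test input and return its stdout.

    Hypothetical helper: the real pipeline validates inside a sandbox,
    not via a bare subprocess call.
    """
    result = subprocess.run(
        ["python3", source_path],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return result.stdout


def score_solution(example: dict, source_path: str) -> float:
    """Fraction of test cases passed, using the inputs/outputs parallel arrays."""
    ground_truth = json.loads(example["reward_model"]["ground_truth"])
    inputs, outputs = ground_truth["inputs"], ground_truth["outputs"]
    passed = 0
    for stdin_text, expected in zip(inputs, outputs):
        try:
            actual = run_candidate(source_path, stdin_text)
        except subprocess.TimeoutExpired:
            continue  # treat a timeout as a failed test case
        if actual.rstrip() == expected.rstrip():  # assumed comparison rule
            passed += 1
    return passed / len(inputs) if inputs else 0.0
```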
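The validation criteria in the diff (at least one test case, parallel `inputs`/`outputs` arrays, the ≤10MB size limit) can also be spot-checked when loading the data. The check below is a sketch under those stated constraints; interpreting the 10MB limit as applying to the serialized `ground_truth` string is an assumption, and `check_example` is not part of the dataset's processing pipeline.

```python
import json

# Assumed reading of the README's "≤10MB" constraint: the serialized
# ground_truth JSON for a problem should not exceed 10 MB.
MAX_GROUND_TRUTH_BYTES = 10 * 1024 * 1024


def check_example(example: dict) -> bool:
    """Spot-check the documented ground_truth invariants for one example."""
    raw = example["reward_model"]["ground_truth"]
    if len(raw.encode("utf-8")) > MAX_GROUND_TRUTH_BYTES:
        return False
    ground_truth = json.loads(raw)
    inputs = ground_truth.get("inputs", [])
    outputs = ground_truth.get("outputs", [])
    return (
        len(inputs) == len(outputs)                   # parallel arrays align
        and len(inputs) >= 1                          # at least one test case
        and all(isinstance(s, str) for s in inputs)   # inputs are strings
        and all(isinstance(s, str) for s in outputs)  # outputs are strings
    )
```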