Fix README: correct ground_truth format (inputs/outputs, not test_cases/templates)
README.md
CHANGED
@@ -36,7 +36,7 @@ configs:
 
 # Code Contests Plus (VERL Format)
 
-This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to VERL format for reinforcement learning applications. Each problem includes
+This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to VERL format for reinforcement learning applications. Each problem includes test cases validated through sandbox execution.
 
 **Source**: [ByteDance-Seed/Code-Contests-Plus](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus) (1x config)
 
@@ -51,39 +51,26 @@ The dataset follows the VERL format with the following fields:
 - `ability` (string): Task category ("code")
 - `reward_model` (dict): Evaluation information
   - `style`: Evaluation method ("rule")
-  - `ground_truth`: JSON-encoded test cases with
+  - `ground_truth`: JSON-encoded test cases with input/output pairs
 - `extra_info` (dict): Additional metadata
   - `index`: Example index from original dataset
 
 ## Test Case Format
 
-Each problem includes
+Each problem includes test cases in the `reward_model.ground_truth` field, stored as JSON with the following structure:
 
 ```json
 {
-  "test_cases": [
-    {
-      "input": "3\n1 2 3\n",
-      "output": "6\n"
-    }
-  ],
-  "templates": {
-    "python": "def solve():\n {code}\n\nif __name__ == '__main__':\n solve()",
-    "cpp": "#include <bits/stdc++.h>\nusing namespace std;\n\n{code}\n\nint main() {\n solve();\n return 0;\n}",
-    "java": "import java.util.*;\nimport java.io.*;\n\npublic class Main {\n {code}\n \n public static void main(String[] args) {\n solve();\n }\n}",
-    "go": "package main\n\nimport (\n\t\"fmt\"\n\t\"bufio\"\n\t\"os\"\n)\n\n{code}\n\nfunc main() {\n\tsolve()\n}",
-    "rust": "use std::io::{self, BufRead};\n\n{code}\n\nfn main() {\n solve();\n}"
-  }
+  "inputs": ["3\n1 2 3\n"],
+  "outputs": ["6\n"]
 }
 ```
 
-- C++ (with standard library)
-- Java
-- Go
-- Rust
+The format consists of two parallel arrays:
+- `inputs`: Array of input strings for each test case
+- `outputs`: Array of expected output strings corresponding to each input
+
+Each problem typically contains between 1 and 32 test cases, validated through sandbox execution during dataset creation.
 
 ## Data Processing
 
@@ -96,7 +83,7 @@ The dataset was created through a multi-step processing pipeline:
 
 ### 2. Sandbox Validation
 - Each problem's test cases were validated using a sandbox environment
+- Test input/output pairs verified for correctness
 - Only problems with passing validation were included
 
 ### 3. Size Filtering
@@ -131,51 +118,34 @@ print("Problem:", problem)
 
 # Parse test cases
 ground_truth = json.loads(example['reward_model']['ground_truth'])
+inputs = ground_truth['inputs']
+outputs = ground_truth['outputs']
 
-print(f"\
-print(f"First input: {
-print(f"Expected output: {
-
-# Available language templates
-print(f"\nSupported languages: {list(templates.keys())}")
+print(f"\nNumber of test cases: {len(inputs)}")
+print(f"First input: {repr(inputs[0])}")
+print(f"Expected output: {repr(outputs[0])}")
 ```
 
 ## Example Problem
 
 **Problem Description:**
 ```
-Input Format:
-- First line: n (number of elements)
-- Second line: n space-separated integers
+Twins
 
-- Single integer: sum of all elements
+square1001 and E869120 are twins, but they are not identical twins...
 ```
 
 **Test Case:**
 ```python
-Input: "
-Output: "
-```
-
-**Python Template:**
-```python
-def solve():
-    {code}
-
-if __name__ == '__main__':
-    solve()
+Input: ""
+Output: "square1001"
 ```
 
 ## Statistics
 
 - **Total examples**: 8,432
 - **Average test cases per problem**: ~10-15
-- **
+- **Test case range**: 1-32 per problem
 - **Dataset size**: ~10 GB uncompressed, ~10 GB compressed (includes test cases)
 - **Format**: Parquet (11 shards, ~1GB each)
 - **Schema**: VERL-compatible
@@ -185,7 +155,7 @@ if __name__ == '__main__':
 All problems in this dataset have been validated to ensure:
 
 1. **Valid test cases**: Each problem has at least one valid test case
-2. **
+2. **Correct input/output pairs**: Test cases verified through sandbox execution
 3. **Size constraints**: Test cases are within reasonable size limits (≤10MB)
 4. **Format consistency**: All examples follow the same schema structure
 
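For reference, a minimal sketch of consuming the corrected `ground_truth` layout documented above: parse the two parallel arrays, then check a candidate program against each input/output pair. The hard-coded row, the stand-in candidate solution, and the exact-match comparison are illustrative assumptions rather than the dataset's actual reward code.

```python
import json
import subprocess
import sys

# One row shaped like the documented VERL schema; the values mirror the
# README example above. In practice the row would be loaded from the
# dataset rather than written out by hand.
example = {
    "reward_model": {
        "style": "rule",
        "ground_truth": json.dumps({
            "inputs": ["3\n1 2 3\n"],
            "outputs": ["6\n"],
        }),
    }
}

# ground_truth is a JSON string holding two parallel arrays:
# inputs[i] is the stdin for test case i, outputs[i] the expected stdout.
ground_truth = json.loads(example["reward_model"]["ground_truth"])
inputs, outputs = ground_truth["inputs"], ground_truth["outputs"]
assert len(inputs) == len(outputs)

# A stand-in candidate solution for the sample problem (sum of n integers).
candidate = "n = int(input())\nprint(sum(int(x) for x in input().split()))\n"

for stdin_text, expected_stdout in zip(inputs, outputs):
    run = subprocess.run(
        [sys.executable, "-c", candidate],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=10,
    )
    # Exact string comparison is a simplification of whatever the real
    # rule-based reward check does with these pairs.
    print("pass" if run.stdout == expected_stdout else "fail")
```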