---
dataset_info:
  features:
  - name: data_source
    dtype: string
  - name: prompt
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  - name: ability
    dtype: string
  - name: reward_model
    struct:
    - name: style
      dtype: string
    - name: ground_truth
      dtype: string
  - name: extra_info
    struct:
    - name: index
      dtype: int64
  splits:
  - name: train
    num_bytes: 10737418240
    num_examples: 7861
  download_size: 10737418240
  dataset_size: 10737418240
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- reinforcement-learning
- text-generation
tags:
- code
- reasoning
- rlhf
- verl
---
# Code Contests Plus (VERL Format)
This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to VERL format for reinforcement learning applications. Each problem includes test cases validated through sandbox execution.
**Source**: [ByteDance-Seed/Code-Contests-Plus](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus) (1x config)
**License**: MIT
## Dataset Structure
The dataset follows the VERL format with the following fields:
- `data_source` (string): Dataset source identifier ("code-contests-plus")
- `prompt` (list): Chat template format with role/content structure containing the coding problem
- `ability` (string): Task category ("code")
- `reward_model` (dict): Evaluation information
  - `style`: Evaluation method ("rule")
  - `ground_truth`: JSON-encoded test cases with input/output pairs
- `extra_info` (dict): Additional metadata
  - `index`: Example index from original dataset
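
For orientation, a single record has roughly the following shape (values are illustrative placeholders, not copied from the dataset):

```python
# Illustrative shape of one record (placeholder values, not actual data)
example = {
    "data_source": "code-contests-plus",
    "prompt": [
        {"role": "user", "content": "Write a program that ..."}
    ],
    "ability": "code",
    "reward_model": {
        "style": "rule",
        "ground_truth": '{"inputs": ["3\\n1 2 3\\n"], "outputs": ["6\\n"]}',
    },
    "extra_info": {"index": 0},
}
```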
## Test Case Format
Each problem includes test cases in the `reward_model.ground_truth` field, stored as JSON with the following structure:
```json
{
  "inputs": ["3\n1 2 3\n"],
  "outputs": ["6\n"]
}
```
The format consists of two parallel arrays:
- `inputs`: Array of input strings for each test case
- `outputs`: Array of expected output strings corresponding to each input
Each problem typically contains between 1 and 32 test cases, validated through sandbox execution during dataset creation.
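
The parallel-array layout makes rule-based reward computation straightforward: run a candidate program on each input and compare its stdout to the expected output. The sketch below is a simplified stand-in for the sandboxed execution used by VERL and the dataset pipeline, not the actual reward implementation:

```python
import json
import subprocess

def check_solution(solution_path: str, ground_truth_json: str) -> float:
    """Return the fraction of test cases a candidate Python program passes."""
    tests = json.loads(ground_truth_json)
    passed = 0
    for stdin, expected in zip(tests["inputs"], tests["outputs"]):
        # Run the candidate once per test case and capture its stdout
        result = subprocess.run(
            ["python", solution_path],
            input=stdin,
            capture_output=True,
            text=True,
            timeout=10,
        )
        # Compare trimmed program output against the expected output string
        if result.stdout.strip() == expected.strip():
            passed += 1
    return passed / len(tests["inputs"])
```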
## Data Processing
The dataset was created through a multi-step processing pipeline:
### 1. Test Case Extraction
- Extracted public test cases from the original dataset
- Validated format and executability
- Filtered problems without valid test cases
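
A sketch of this step, assuming the source records expose public tests as parallel input/output lists (the `public_tests` field layout is hypothetical, not the documented Code-Contests-Plus schema):

```python
import json

def extract_test_cases(record: dict) -> str | None:
    """JSON-encode the public test cases of a source record, or None if invalid."""
    tests = record.get("public_tests") or {}  # assumed field name
    inputs = tests.get("input", [])
    outputs = tests.get("output", [])
    # Drop problems with no tests or mismatched input/output counts
    if not inputs or len(inputs) != len(outputs):
        return None
    return json.dumps({"inputs": inputs, "outputs": outputs})
```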
### 2. Sandbox Validation
- Each problem's test cases were validated using a sandbox environment
- Test input/output pairs verified for correctness
- Only problems with passing validation were included
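
Conceptually, validation sends a reference solution plus a test input to the sandbox service and compares the returned stdout to the expected output. The request/response payload below is an assumption about the sandbox API, not its documented interface:

```python
import requests

SANDBOX_URL = "http://localhost:8080/run_code"  # same endpoint passed via --sandbox-url

def validate_with_sandbox(code: str, stdin: str, expected: str) -> bool:
    """Execute `code` on one test case in the sandbox (payload format assumed)."""
    resp = requests.post(
        SANDBOX_URL,
        json={"code": code, "language": "python", "stdin": stdin},
        timeout=30,
    )
    resp.raise_for_status()
    stdout = resp.json().get("stdout", "")
    return stdout.strip() == expected.strip()
```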
### 3. Size Filtering
- Applied 10MB size limit to test case JSON (encoded)
- Removed overly large problems to ensure efficient processing
- Balanced dataset quality and usability
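
The size check measures the JSON-encoded ground truth in bytes; a minimal version of the filter:

```python
import json

MAX_GROUND_TRUTH_BYTES = 10 * 1024 * 1024  # 10MB cap on encoded test cases

def within_size_limit(ground_truth: dict) -> bool:
    """True if the JSON-encoded test cases fit under the 10MB limit."""
    encoded = json.dumps(ground_truth).encode("utf-8")
    return len(encoded) <= MAX_GROUND_TRUTH_BYTES
```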
### Processing Statistics
- **Total input examples**: 11,690
- **Successfully processed**: 8,432 (72.1% success rate)
- **Total filtered**: 3,258 (27.9%)
- No test cases: 54 (0.5%)
- Size filtered (>10MB): 3,204 (27.4%)
- **Processing time**: 69 minutes
- **Configuration used**: 1x (standard difficulty)
## Usage
```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("sungyub/code-contests-plus-verl")

# Access an example
example = dataset['train'][0]

# Get the problem description
problem = example['prompt'][0]['content']
print("Problem:", problem)

# Parse test cases
ground_truth = json.loads(example['reward_model']['ground_truth'])
inputs = ground_truth['inputs']
outputs = ground_truth['outputs']

print(f"\nNumber of test cases: {len(inputs)}")
print(f"First input: {repr(inputs[0])}")
print(f"Expected output: {repr(outputs[0])}")
```
## Example Problem
**Problem Description:**
```
Twins
square1001 and E869120 are twins, but they are not identical twins...
```
**Test Case:**
```
Input: ""
Output: "square1001"
```
## Statistics
- **Total examples**: 8,432
- **Average test cases per problem**: ~10-15
- **Test case range**: 1-32 per problem
- **Dataset size**: ~10 GB uncompressed, ~10 GB compressed (includes test cases)
- **Format**: Parquet (11 shards, ~1GB each)
- **Schema**: VERL-compatible
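
The per-problem test case counts behind these figures can be recomputed directly from the released data:

```python
import json
from datasets import load_dataset

dataset = load_dataset("sungyub/code-contests-plus-verl", split="train")

# Number of test cases per problem, recovered from the ground-truth JSON
counts = [
    len(json.loads(ex["reward_model"]["ground_truth"])["inputs"])
    for ex in dataset
]
print(f"Problems: {len(counts)}")
print(f"Average test cases: {sum(counts) / len(counts):.1f}")
print(f"Range: {min(counts)}-{max(counts)}")
```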
## Data Quality
All problems in this dataset have been validated to ensure:
1. **Valid test cases**: Each problem has at least one valid test case
2. **Correct input/output pairs**: Test cases verified through sandbox execution
3. **Size constraints**: Test cases are within reasonable size limits (≤10MB)
4. **Format consistency**: All examples follow the same schema structure
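
These guarantees can be spot-checked on the released data; a quick sanity pass over a sample, reusing the 10MB limit from the processing step:

```python
import json
from datasets import load_dataset

dataset = load_dataset("sungyub/code-contests-plus-verl", split="train")

for ex in dataset.select(range(100)):  # spot-check the first 100 examples
    raw = ex["reward_model"]["ground_truth"]
    tests = json.loads(raw)
    assert len(tests["inputs"]) >= 1                      # at least one test case
    assert len(tests["inputs"]) == len(tests["outputs"])  # parallel arrays
    assert len(raw.encode("utf-8")) <= 10 * 1024 * 1024   # within the 10MB cap
```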
## Conversion Script
The dataset was created using `preprocess_codecontests_verl.py`:
```bash
# Standard conversion (used for this dataset)
python preprocess_codecontests_verl.py \
    --dataset-id ByteDance-Seed/Code-Contests-Plus \
    --config 1x \
    --output-dir ./codecontests_verl_full \
    --sandbox-url http://localhost:8080/run_code \
    --batch-size 100

# Process with a different configuration
python preprocess_codecontests_verl.py \
    --dataset-id ByteDance-Seed/Code-Contests-Plus \
    --config 2x \
    --output-dir ./codecontests_verl_2x \
    --sandbox-url http://localhost:8080/run_code \
    --batch-size 100

# Process a limited number of samples for testing
python preprocess_codecontests_verl.py \
    --dataset-id ByteDance-Seed/Code-Contests-Plus \
    --config 1x \
    --output-dir ./codecontests_test \
    --sandbox-url http://localhost:8080/run_code \
    --max-examples 100
```
## Related Datasets
- [Code Contests Plus (Original)](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus): Original dataset with competitive programming problems
- [Skywork-OR1-Code-VERL](https://huggingface.co/datasets/sungyub/skywork-or1-code-verl): Similar VERL-format dataset with 14,057 coding problems
## Additional Information
For more information about VERL format and usage in reinforcement learning, see:
- [VERL Documentation](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
- [VERL GitHub Repository](https://github.com/volcengine/verl)
## Citation
If you use this dataset, please cite the original Code-Contests-Plus dataset:
```bibtex
@misc{code-contests-plus,
  title={Code-Contests-Plus},
  author={ByteDance-Seed},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus}
}
```
## License
This dataset is released under the MIT License, following the license of the original Code-Contests-Plus dataset.