---
dataset_info:
  features:
  - name: data_source
    dtype: string
  - name: prompt
    list:
    - name: role
      dtype: string
    - name: content
      dtype: string
  - name: ability
    dtype: string
  - name: reward_model
    struct:
    - name: style
      dtype: string
    - name: ground_truth
      dtype: string
  - name: extra_info
    struct:
    - name: index
      dtype: int64
  splits:
  - name: train
    num_bytes: 10737418240
    num_examples: 7861
  download_size: 10737418240
  dataset_size: 10737418240
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- reinforcement-learning
- text-generation
tags:
- code
- reasoning
- rlhf
- verl
---
# Code Contests Plus (VERL Format)

This dataset contains 8,432 competitive programming problems from the Code-Contests-Plus dataset, converted to the VERL format for reinforcement learning applications. Each problem includes test cases validated through sandbox execution.

- **Source**: [ByteDance-Seed/Code-Contests-Plus](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus) (`1x` config)
- **License**: MIT
## Dataset Structure

The dataset follows the VERL format with the following fields:
- `data_source` (string): Dataset source identifier (`"code-contests-plus"`)
- `prompt` (list): Chat-template format with role/content structure containing the coding problem
- `ability` (string): Task category (`"code"`)
- `reward_model` (dict): Evaluation information
  - `style`: Evaluation method (`"rule"`)
  - `ground_truth`: JSON-encoded test cases with input/output pairs
- `extra_info` (dict): Additional metadata
  - `index`: Example index from the original dataset
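Put together, a single record in this schema looks roughly like the following (field names match the schema above; the values are illustrative, not an actual row):

```python
# A representative record in the VERL schema (illustrative values, not a real row).
example = {
    "data_source": "code-contests-plus",
    "prompt": [
        {"role": "user", "content": "Given n integers, print their sum. ..."}
    ],
    "ability": "code",
    "reward_model": {
        "style": "rule",
        "ground_truth": '{"inputs": ["3\\n1 2 3\\n"], "outputs": ["6\\n"]}',
    },
    "extra_info": {"index": 0},
}
```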
## Test Case Format

Each problem includes test cases in the `reward_model.ground_truth` field, stored as JSON with the following structure:
```json
{
  "inputs": ["3\n1 2 3\n"],
  "outputs": ["6\n"]
}
```
The format consists of two parallel arrays:

- `inputs`: Array of input strings, one per test case
- `outputs`: Array of expected output strings corresponding to each input
Each problem typically contains between 1 and 32 test cases, validated through sandbox execution during dataset creation.
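These parallel arrays support a simple rule-based reward: run a candidate program on each input and compare its stdout against the expected output. The sketch below is illustrative, not the harness used to build this dataset; it assumes the candidate solution is a standalone Python script and uses exact string matching (real harnesses often normalize trailing whitespace):

```python
import json
import subprocess

def rule_reward(solution_path: str, ground_truth: str, timeout: float = 5.0) -> float:
    """Fraction of test cases a candidate Python script passes (illustrative harness)."""
    tests = json.loads(ground_truth)
    passed = 0
    for stdin, expected in zip(tests["inputs"], tests["outputs"]):
        try:
            result = subprocess.run(
                ["python", solution_path],
                input=stdin, capture_output=True, text=True, timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            continue  # a timeout counts as a failed test case
        if result.stdout == expected:  # exact match, newline included
            passed += 1
    return passed / len(tests["inputs"])
```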
## Data Processing

The dataset was created through a multi-step processing pipeline:
1. **Test Case Extraction**
   - Extracted public test cases from the original dataset
   - Validated format and executability
   - Filtered out problems without valid test cases
2. **Sandbox Validation**
   - Validated each problem's test cases in a sandbox environment
   - Verified test input/output pairs for correctness
   - Included only problems that passed validation
3. **Size Filtering**
   - Applied a 10MB limit to the JSON-encoded test cases (see the sketch below)
   - Removed oversized problems to keep processing efficient
   - Balanced dataset quality and usability
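For step 3, the size check amounts to measuring the JSON-encoded test cases against the 10MB budget. A minimal sketch (assuming binary megabytes; the actual filter lives in `preprocess_codecontests_verl.py`):

```python
import json

MAX_TESTCASE_BYTES = 10 * 1024 * 1024  # assumed 10MB limit from the pipeline

def within_size_limit(inputs: list[str], outputs: list[str]) -> bool:
    """Check whether the JSON-encoded test cases fit the 10MB budget (illustrative)."""
    encoded = json.dumps({"inputs": inputs, "outputs": outputs})
    return len(encoded.encode("utf-8")) <= MAX_TESTCASE_BYTES
```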
### Processing Statistics

- Total input examples: 11,690
- Successfully processed: 8,432 (72.1% success rate)
- Total filtered: 3,258 (27.9%)
  - No test cases: 54 (0.5%)
  - Size filtered (>10MB): 3,204 (27.4%)
- Processing time: 69 minutes
- Configuration used: `1x` (standard difficulty)
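The percentages follow directly from the counts and can be sanity-checked in a couple of lines:

```python
total = 11_690
processed = 8_432
no_tests, too_large = 54, 3_204

assert total - processed == no_tests + too_large == 3_258
print(f"success rate: {processed / total:.1%}")   # 72.1%
print(f"size filtered: {too_large / total:.1%}")  # 27.4%
```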
## Usage
```python
from datasets import load_dataset
import json

# Load the dataset
dataset = load_dataset("sungyub/code-contests-plus-verl")

# Access an example
example = dataset["train"][0]

# Get the problem description
problem = example["prompt"][0]["content"]
print("Problem:", problem)

# Parse test cases
ground_truth = json.loads(example["reward_model"]["ground_truth"])
inputs = ground_truth["inputs"]
outputs = ground_truth["outputs"]
print(f"\nNumber of test cases: {len(inputs)}")
print(f"First input: {repr(inputs[0])}")
print(f"Expected output: {repr(outputs[0])}")
```
## Example Problem

**Problem Description:**

> Twins
>
> square1001 and E869120 are twins, but they are not identical twins...

**Test Case:**

- Input: `""` (empty string)
- Output: `"square1001"`
## Statistics
- Total examples: 8,432
- Average test cases per problem: ~10-15
- Test case range: 1-32 per problem
- Dataset size: ~10 GB uncompressed, ~10 GB compressed (includes test cases)
- Format: Parquet (11 shards, ~1GB each)
- Schema: VERL-compatible
## Data Quality

All problems in this dataset have been validated to ensure:

- **Valid test cases**: Each problem has at least one valid test case
- **Correct input/output pairs**: Test cases verified through sandbox execution
- **Size constraints**: Test cases are within reasonable size limits (≤10MB)
- **Format consistency**: All examples follow the same schema structure
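The first, third, and fourth guarantees can be re-checked locally without a sandbox (pair correctness requires re-running sandbox execution). A minimal validation sketch, assuming the 10MB limit is in binary megabytes:

```python
import json
from datasets import load_dataset

MAX_BYTES = 10 * 1024 * 1024  # assumed 10MB limit from the processing pipeline

dataset = load_dataset("sungyub/code-contests-plus-verl", split="train")
for ex in dataset:
    gt = ex["reward_model"]["ground_truth"]
    tests = json.loads(gt)
    assert len(tests["inputs"]) >= 1                      # at least one test case
    assert len(tests["inputs"]) == len(tests["outputs"])  # parallel arrays
    assert len(gt.encode("utf-8")) <= MAX_BYTES           # size constraint
```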
## Conversion Script

The dataset was created using `preprocess_codecontests_verl.py`:
```bash
# Standard conversion (used for this dataset)
python preprocess_codecontests_verl.py \
  --dataset-id ByteDance-Seed/Code-Contests-Plus \
  --config 1x \
  --output-dir ./codecontests_verl_full \
  --sandbox-url http://localhost:8080/run_code \
  --batch-size 100

# Process with a different configuration
python preprocess_codecontests_verl.py \
  --dataset-id ByteDance-Seed/Code-Contests-Plus \
  --config 2x \
  --output-dir ./codecontests_verl_2x \
  --sandbox-url http://localhost:8080/run_code \
  --batch-size 100

# Process a limited number of samples for testing
python preprocess_codecontests_verl.py \
  --dataset-id ByteDance-Seed/Code-Contests-Plus \
  --config 1x \
  --output-dir ./codecontests_test \
  --sandbox-url http://localhost:8080/run_code \
  --max-examples 100
```
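After a run completes, the emitted shards can be loaded back for a quick sanity check (assuming the script writes Parquet files directly into the output directory given above):

```python
from datasets import load_dataset

# Load the locally generated shards (path matches the first command above).
local = load_dataset("parquet", data_files="./codecontests_verl_full/*.parquet")
print(local)
print(local["train"][0]["data_source"])
```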
## Related Datasets

- [Code Contests Plus (Original)](https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus): Original dataset of competitive programming problems
- Skywork-OR1-Code-VERL: Similar VERL-format dataset with 14,057 coding problems
## Additional Information

For more information about the VERL format and its use in reinforcement learning, see the [verl project](https://github.com/volcengine/verl).
## Citation

If you use this dataset, please cite the original Code-Contests-Plus dataset:
```bibtex
@misc{code-contests-plus,
  title={Code-Contests-Plus},
  author={ByteDance-Seed},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/ByteDance-Seed/Code-Contests-Plus}
}
```
## License
This dataset is released under the MIT License, following the license of the original Code-Contests-Plus dataset.