---
license: apache-2.0
---
A cleaned version of [OpenWebText2](https://huggingface.co/datasets/defunct-datasets/the_pile_openwebtext2), created by removing non-English, duplicated, copyrighted, and low-quality samples (too short, too many special characters, etc.).
This dataset has also been decontaminated with respect to the following benchmarks based on n-gram overlap:
- GLUE (dev sets of SST-2, CoLA, QQP, WNLI, RTE, QNLI, MNLI; test set of MRPC)
- SIQA, PIQA, QASC, CSQA, HellaSwag (all dev sets)
- CoNLL 2003
- BLiMP
- [MAIN](https://main.leibniz-zas.de/en/main-materials/main-materials/)
- BoolQ (dev set)
- WinoGrande (dev set)
- ANLI (test set)
- ARC easy and challenge (test set)
- RACE middle and high (test set)
- MMLU (dev, val, and test sets)
- MATH, GSM8K (test set)
- HumanEval (test set)
- GPQA (diamond)
4,096 documents are removed in this step.
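The decontamination step above can be sketched in a few lines. This is a minimal illustration under assumed details, not the exact procedure used for this dataset: it assumes a whitespace tokenizer, 8-grams, and an "any shared n-gram" matching rule, none of which are specified in the card.

```python
# Minimal sketch of n-gram-overlap decontamination. The tokenizer,
# n-gram size (8), and matching rule here are assumptions; the card
# does not document the exact configuration used.
def ngrams(text, n=8):
    """Return the set of lowercase whitespace-token n-grams in text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(document, benchmark_index, n=8):
    """Flag a document sharing at least one n-gram with any benchmark sample."""
    return bool(ngrams(document, n) & benchmark_index)

# Build the benchmark index once over all protected splits (toy example).
benchmark_samples = ["the quick brown fox jumps over the lazy dog today"]
benchmark_index = set()
for sample in benchmark_samples:
    benchmark_index |= ngrams(sample)

dirty = "prefix words the quick brown fox jumps over the lazy dog today suffix"
clean = "completely unrelated training text about something else entirely here"
```

In practice the benchmark index would be built from every split listed above, and flagged documents dropped from the corpus.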
### Dataset Statistics
Total number of samples: 13,071,217.
Size of downloaded parquet files: 34G.
### Filtered Version
A model-filtered version is available in the [filtered branch](https://huggingface.co/datasets/Geralt-Targaryen/openwebtext2/tree/filtered), containing 12,804,779 samples.
Qwen2.5-32B-Instruct was used to generate language-quality annotations (on a scale of 1-5) for 250K C4 samples, and a RoBERTa-large classifier was trained with a regression objective on these annotations. Any document receiving a score of 1 or 2 from the classifier was removed. The remaining documents are accompanied by their scores.
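The thresholding step can be sketched as follows. The regressor itself is not reproduced here; the scores are hard-coded toy values, and the cutoff (keep score ≥ 3) is an assumed reading of the stated rule that documents scored 1 or 2 are removed.

```python
# Hypothetical sketch of the score-threshold filtering step. Real scores
# would come from the RoBERTa-large regressor; these are toy values.
def filter_by_quality(scored_docs, min_score=3.0):
    """Keep (text, score) pairs whose predicted quality meets the cutoff."""
    return [(text, score) for text, score in scored_docs if score >= min_score]

scored_docs = [
    ("well-written article ...", 4.2),
    ("spammy boilerplate ...", 1.7),
    ("serviceable prose ...", 3.0),
]
kept = filter_by_quality(scored_docs)
```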
You can download this version by specifying the `--revision` argument:
```
huggingface-cli download --repo-type dataset Geralt-Targaryen/openwebtext2 --revision filtered --local-dir .
```