---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T
---

This dataset is derived from `togethercomputer/RedPajama-Data-1T`. We removed the CommonCrawl and C4 subsets from the original RedPajama dataset.

### Getting Started

The dataset consists of 2084 jsonl files.
You can download the dataset using Hugging Face:

```python
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-1T")
```
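
If you only want to peek at the data without materializing the full download, `datasets` streaming can be used. A minimal sketch (the `train` split name is an assumption about the default configuration; the record fields follow the schema shown under Dataset Structure below):

```python
from datasets import load_dataset

# Stream records instead of downloading everything up front.
ds = load_dataset("togethercomputer/RedPajama-Data-1T", split="train", streaming=True)
for record in ds:
    print(record["red_pajama_subset"], record["text"][:100])
    break
```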

Or you can directly download the files using the following command:

```
wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
while read line; do
    dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
    mkdir -p "$(dirname "$dload_loc")"
    wget "$line" -O "$dload_loc"
done < urls.txt
```

A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).

A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).

### Dataset Summary

RedPajama is a clean-room, fully open-source implementation of the LLaMA dataset.

| Dataset       | Token Count  |
|---------------|--------------|
| Commoncrawl   | 878 Billion  |
| C4            | 175 Billion  |
| GitHub        | 59 Billion   |
| Books         | 26 Billion   |
| ArXiv         | 28 Billion   |
| Wikipedia     | 24 Billion   |
| StackExchange | 20 Billion   |
| Total         | 1.2 Trillion |

### Languages

Primarily English, though the Wikipedia slice contains multiple languages.

## Dataset Structure

The dataset structure is as follows:

```json
{
    "text": ...,
    "meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...},
    "red_pajama_subset": "common_crawl" | "c4" | "github" | "books" | "arxiv" | "wikipedia" | "stackexchange"
}
```
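
A minimal sketch of reading records in this format from one of the downloaded jsonl files (the file path below is a placeholder, not a file shipped with this card):

```python
import json

# Placeholder path: substitute any .jsonl file fetched via urls.txt.
with open("arxiv/example.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["red_pajama_subset"], record["meta"].get("source"))
        print(record["text"][:200])
        break  # peek at the first record only
```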

## Dataset Creation

This dataset was created to follow the LLaMA paper as closely as possible in order to reproduce its recipe.

### Source Data

#### Commoncrawl

We download five dumps from Commoncrawl and run them through the official `cc_net` pipeline.
We then deduplicate at the paragraph level and filter out low-quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
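
The actual filtering code lives in the RedPajama-Data repository linked above. Purely as an illustration of the idea, a bag-of-words linear classifier of that kind could be sketched as follows (the toy paragraphs are invented for the example):

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the two classes described above:
# paragraphs referenced by Wikipedia (label 1) vs. random crawl text (label 0).
wiki_ref_paragraphs = [
    "The theorem was first proved in 1931 and later generalized to other settings.",
    "The species is native to the coastal regions of western Australia.",
]
random_crawl_paragraphs = [
    "click here to subscribe and win amazing prizes today",
    "buy now free shipping limited time offer best price guaranteed",
]

clf = make_pipeline(
    HashingVectorizer(n_features=2**18, alternate_sign=False),
    LogisticRegression(max_iter=1000),
)
texts = wiki_ref_paragraphs + random_crawl_paragraphs
labels = [1] * len(wiki_ref_paragraphs) + [0] * len(random_crawl_paragraphs)
clf.fit(texts, labels)

# Keep only paragraphs the linear model scores as "reference-like".
candidates = [
    "The river flows through three countries before reaching the sea.",
    "limited time offer click now best deals",
]
kept = [p for p in candidates if clf.predict([p])[0] == 1]
print(kept)
```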

#### C4

C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.

#### GitHub

The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level, filter out low-quality
files, and only keep projects that are distributed under the MIT, BSD, or Apache license.

#### Wikipedia

We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed form, so hyperlinks, comments and other
formatting boilerplate have been removed.

#### Gutenberg and Books3

The PG19 subset of the Gutenberg Project and the Books3 dataset are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
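
For intuition only (not the exact implementation used here), a basic 64-bit SimHash over whitespace tokens can be computed as follows; near-duplicate texts end up with hashes that differ in only a few bits:

```python
import hashlib

def simhash(text: str, bits: int = 64) -> int:
    """A very simple SimHash over whitespace tokens."""
    weights = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

a = simhash("It was the best of times, it was the worst of times.")
b = simhash("It was the best of times; it was the worst of times!")
print(hamming(a, b))  # a small distance suggests a near duplicate
```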

#### ArXiv

ArXiv data is downloaded from Amazon S3, from the `arxiv` requester-pays bucket. We only keep LaTeX source files and
remove preambles, comments, macros and bibliographies.
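
A toy sketch of that kind of cleanup (the real scripts live in the RedPajama-Data repository linked above):

```python
import re

def strip_latex(source: str) -> str:
    """Rough cleanup: drop comments and keep only the document body."""
    # Remove everything after an unescaped % on each line (comments).
    source = re.sub(r"(?<!\\)%.*", "", source)
    # Keep only the text between \begin{document} and \end{document}.
    body = re.search(r"\\begin\{document\}(.*)\\end\{document\}", source, re.S)
    return body.group(1).strip() if body else source.strip()

tex = r"""\documentclass{article}
% a preamble comment
\begin{document}
Hello, world. % an inline comment
\end{document}"""
print(strip_latex(tex))
```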

#### Stackexchange

The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove HTML tags, group the posts into question-answer pairs, and order answers by their score.
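
Purely as an illustration of that grouping step (the field names follow the Stack Exchange dump schema; the toy posts are invented):

```python
from collections import defaultdict

# Invented posts shaped like the Stack Exchange dump (PostTypeId 1 = question, 2 = answer).
posts = [
    {"Id": 1, "PostTypeId": 1, "Body": "How do I reverse a list in Python?"},
    {"Id": 2, "PostTypeId": 2, "ParentId": 1, "Score": 5, "Body": "Call list.reverse()."},
    {"Id": 3, "PostTypeId": 2, "ParentId": 1, "Score": 12, "Body": "Use reversed() or slicing [::-1]."},
]

questions = {p["Id"]: p for p in posts if p["PostTypeId"] == 1}
answers = defaultdict(list)
for p in posts:
    if p["PostTypeId"] == 2:
        answers[p["ParentId"]].append(p)

# Pair each question with its answers, highest-scoring answer first.
for qid, question in questions.items():
    ranked = sorted(answers[qid], key=lambda a: a["Score"], reverse=True)
    print(question["Body"], [a["Body"] for a in ranked])
```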

### SHA256 Checksums

SHA256 checksums for the dataset files of each data source are available here:

```
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/arxiv_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/book_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/c4_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/common_crawl_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/github_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/stackexchange_SHA256SUMS.txt
https://data.together.xyz/redpajama-data-1T/v1.0.0/sha256/wikipedia_SHA256SUMS.txt
```
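
A minimal way to check a downloaded file against one of these lists, assuming the usual `<digest>  <filename>` layout of SHA256SUMS files (the local file path below is a placeholder):

```python
import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Expected digests, keyed by file name as listed in the checksum file.
expected = {}
with open("arxiv_SHA256SUMS.txt") as f:
    for line in f:
        value, name = line.split(maxsplit=1)
        expected[name.strip()] = value

local_file = "path/to/downloaded/file.jsonl"  # placeholder for a file from urls.txt
print(sha256_of(local_file) == expected.get(local_file))
```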

To cite RedPajama, please use:

```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

### License

Please refer to the licenses of the data subsets you use.

* [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
* [C4 license](https://huggingface.co/datasets/allenai/c4#license)
* GitHub was limited to MIT, BSD, or Apache licenses only
* Books: [the_pile_books3 license](https://huggingface.co/datasets/the_pile_books3#licensing-information) and [pg19 license](https://huggingface.co/datasets/pg19#licensing-information)
* [ArXiv Terms of Use](https://info.arxiv.org/help/api/tou.html)
* [Wikipedia License](https://huggingface.co/datasets/wikipedia#licensing-information)
* [StackExchange license on the Internet Archive](https://archive.org/details/stackexchange)

<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
-->