
Dataset Card: Processed 17 Scientific Books Sentence Pairs

Dataset Description

Language: English (scientific/technical domain)

License: Research use only (derived from copyrighted books, full texts not redistributed)

Dataset Summary: This dataset contains sentence pairs extracted from 17 scientific books. Each book was processed with GROBID to obtain structured text from PDF files. Sentences were segmented and paired for training sentence-transformer models on semantic similarity tasks. To ensure balance, a maximum of 6,000 sentences per book was included.

Dataset Structure

Data Instances

Each entry is a pair of sentences:

{
  "sentence_0": "Some of the generated samples that had been achieved with this architecture already in 2014 can be seen in Figure 3.14.",
  "sentence_1": "Conditioning on Text: So far, only image generation has been covered, completely ignoring textual input."
}
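Records in this schema can be read with standard JSON tooling. Below is a minimal sketch that assumes the pairs are stored as JSON Lines (one object per line); the exact on-disk layout is an assumption, not documented above:

```python
import json
import tempfile

# Two sample records in the documented {"sentence_0", "sentence_1"} schema.
records = [
    {"sentence_0": "Some of the generated samples can be seen in Figure 3.14.",
     "sentence_1": "Conditioning on Text: So far, only image generation has been covered."},
    {"sentence_0": "Sentences were segmented and paired.",
     "sentence_1": "A maximum of 6,000 sentences per book was included."},
]

# Write the records as JSON Lines, then read them back.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
    path = f.name

pairs = []
with open(path) as f:
    for line in f:
        obj = json.loads(line)
        pairs.append((obj["sentence_0"], obj["sentence_1"]))

print(len(pairs))  # number of sentence pairs read back
```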

Data Splits

All data is in the "train" split.

Total size: 167,112 sentence pairs.

Balanced across 17 books (≤6,000 sentences per book).

Dataset Creation

Curation Rationale

The dataset was created to provide high-quality sentence pairs for training and evaluating sentence-transformer models in the scientific domain. Limiting each book to 6,000 sentences ensures balanced representation and reduces copyright risk.

Source Data

-> Books: 17 scientific/technical books (copyrighted, not redistributed).

-> Extraction: PDFs processed with GROBID → structured text → sentence segmentation (NLTK).

-> Pairs: Constructed from consecutive sentences and curated positive/negative examples.

Data Extraction Logic

-> Raw PDFs processed with GROBID.

-> Sentences segmented with NLTK.

-> Maximum 6,000 sentences per book included.

-> Sentence pairs generated for semantic similarity training.
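The steps above can be sketched as follows. This is an illustrative reconstruction, not the actual processing code: the real pipeline used GROBID output and NLTK sentence segmentation, whereas here a simple regex stands in for the tokenizer, and consecutive sentences are paired under the per-book cap:

```python
import re

MAX_SENTENCES_PER_BOOK = 6000  # per-book cap described in this card

def segment(text):
    # Stand-in for NLTK's sent_tokenize: split on sentence-ending punctuation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def consecutive_pairs(book_text, cap=MAX_SENTENCES_PER_BOOK):
    # Keep at most `cap` sentences, then pair each sentence with its successor.
    sentences = segment(book_text)[:cap]
    return [
        {"sentence_0": a, "sentence_1": b}
        for a, b in zip(sentences, sentences[1:])
    ]

pairs = consecutive_pairs("First sentence. Second sentence. Third sentence.")
print(len(pairs))  # two consecutive pairs from three sentences
```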

Additional Information

Citation

If you use this dataset, please cite:

@misc{aghakhani2025synergsticrag,
  author       = {Danial Aghakhani Zadeh},
  title        = {Processed 17 Scientific Books Sentence Pairs},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/DigitalAsocial/ds-tb-17-g}}
}

Personal and Sensitive Information

-> The dataset consists of scientific/technical text.

-> No personal or sensitive information is included.

Bias, Risks, and Limitations

-> Texts reflect the style and biases of their original authors.

-> Dataset is domain-specific (scientific books) and may not generalize to everyday language.

-> Full copyrighted texts are not included; only derived sentence pairs are shared.

Notice and Takedown Policy

If you believe this dataset contains material that infringes copyright, please contact us with:

-> Your contact information

-> Reference to the original work

-> Identification of the material claimed to be infringing

We will comply with legitimate requests by removing affected sources from future releases.

Dataset Curators

Created by Danial Aghakhani Zadeh for research on sentence-transformers and semantic similarity.

License

Derived from copyrighted books.

Shared under a Research Use Only license.

Full texts are not redistributed.
