SentenceTransformer

This is a sentence-transformers model fine-tuned from all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Fine-tuned all-mpnet-base-v2 for Data Science

Overview

This model is a fine-tuned version of all-mpnet-base-v2, a Sentence Transformers model built on Microsoft's MPNet and released under the Apache 2.0 License.
It has been fine-tuned on a curated set of 17 reference books in Data Science and Machine Learning to improve semantic embeddings in this domain.

Model Description

  • Model Type: Sentence Transformer
  • Base Model: all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False, 'architecture': 'MPNetModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
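
Because the final Normalize() module rescales every embedding to unit length, cosine similarity and dot product coincide for this model. The snippet below is a minimal sketch (the example sentences are made up; it assumes the library is installed as described under Usage) that checks the properties listed above after loading the model.

import numpy as np
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model and confirm the advertised properties.
model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17s")
print(model.max_seq_length)                      # 384
print(model.get_sentence_embedding_dimension())  # 768

emb = model.encode(["gradient descent", "stochastic optimization"])
print(np.linalg.norm(emb, axis=1))               # ~[1.0, 1.0] because of Normalize()
print(emb @ emb.T)                               # dot products equal cosine similarities here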

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17s")
# Run inference
sentences = [
    "We start by multiplying and dividing the log-likelihood by an arbitrary probability distribution q(z) over the latent variables:  (17.14)  We then use Jensen's inequality for the logarithm (equation 17.12) to find a lower bound:  (17.15)  where the right-hand side is termed the evidence lower bound or ELBO. It gets this name because P r(x|ϕ) is called the evidence in the context of Bayes' rule (equation 17.19).",
    'The neural architecture that computes this quantity is the VAE.',
    'To train Flamingo, the authors solve these challenges by foremost exploring ways to generate their own web-scraped multimodal data set similar to existing ones in the language-only domain.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5135, 0.1154],
#         [0.5135, 1.0000, 0.1692],
#         [0.1154, 0.1692, 1.0000]])
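
Because the model targets semantic search over data-science text (e.g. for RAG pipelines), the sketch below shows one way to retrieve the closest passages for a query with sentence_transformers.util.semantic_search. The corpus sentences and the query are made-up examples, not part of the training data.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("DigitalAsocial/all-mpnet-base-v2-ds-rag-17s")

# Hypothetical mini-corpus; in a real pipeline these would be passages from indexed documents.
corpus = [
    "The ELBO is a lower bound on the log-likelihood of the observed data.",
    "Eligibility traces assign credit to recently visited states in reinforcement learning.",
    "Bagging reduces variance by averaging models trained on bootstrap samples.",
]
query = "What is the evidence lower bound?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Retrieve the two most similar corpus entries for the query.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])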

Evaluation

Metrics

Semantic Similarity

Metric            Value
pearson_cosine    nan
spearman_cosine   nan

(See Evaluation Notes below for why these correlations are undefined.)

Training Details

Training Dataset

Dataset Summary

Sentence pairs were extracted from 17 scientific books using GROBID for text structuring and NLTK for sentence segmentation.
Pairs were generated with a Semantic Node Splitter (SNS) approach, which produces both consecutive and slightly distant sentence pairs.
A maximum of 6,000 sentences per book was included to keep each source's contribution to the dataset balanced.
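
The exact Semantic Node Splitter is not published with this card; the function below is only a simplified, hypothetical illustration of how consecutive and slightly distant sentence pairs can be generated with NLTK once a book's text has been extracted.

import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases

MAX_SENTENCES_PER_BOOK = 6000  # cap described above

def make_pairs(text, max_gap=2):
    # Toy pair generator: pairs each sentence with its neighbours up to max_gap positions
    # away, approximating (but not reproducing) the SNS approach described above.
    sents = sent_tokenize(text)[:MAX_SENTENCES_PER_BOOK]
    pairs = []
    for i in range(len(sents)):
        for gap in range(1, max_gap + 1):  # gap 1 = consecutive, gap 2 = slightly distant
            if i + gap < len(sents):
                pairs.append((sents[i], sents[i + gap]))
    return pairs

pairs = make_pairs("Bagging reduces variance. Boosting reduces bias. Stacking combines models.")
print(pairs[:2])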

Training Data

The model was fine-tuned on 17 reference books in Data Science and Machine Learning, listed below. All source books were preprocessed using GROBID, an open-source tool for extracting and structuring text from PDF documents.
The raw PDF files were converted into structured text, segmented into sentences, and cleaned before being used for training.
This ensured consistent formatting and reliable sentence boundaries across the dataset. (A minimal preprocessing sketch is shown after the book list.)

  1. Aßenmacher, Matthias. Multimodal Deep Learning. Self-published, 2023.
  2. Bertsekas, Dimitri P. A Course in Reinforcement Learning. Arizona State University.
  3. Boykis, Vicki. What are Embeddings. Self-published, 2023.
  4. Bruce, Peter, and Andrew Bruce. Practical Statistics for Data Scientists: 50 Essential Concepts. O’Reilly Media, 2017.
  5. Daumé III, Hal. A Course in Machine Learning. Self-published.
  6. Deisenroth, Marc Peter, A. Aldo Faisal, and Cheng Soon Ong. Mathematics for Machine Learning. Cambridge University Press, 2020.
  7. Kunin, Daniel, Jingru Guo, Tyler Devlin, and Daniel Xiang. Seeing Theory: A Visual Introduction to Probability and Statistics. Self-published.
  8. Gutmann, Michael U. Pen & Paper: Exercises in Machine Learning. Self-published.
  9. Jung, Alexander. Machine Learning: The Basics. Springer, 2022.
  10. Langr, Jakub, and Vladimir Bok. GANs in Action: Deep Learning with Generative Adversarial Networks. Manning Publications, 2019.
  11. MacKay, David J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  12. Montgomery, Douglas C., Cheryl L. Jennings, and Murat Kulahci. Introduction to Time Series Analysis and Forecasting. 2nd Edition, Wiley, 2015.
  13. Nilsson, Nils J. Introduction to Machine Learning: An Early Draft of a Proposed Textbook. Stanford University, 1996.
  14. Prince, Simon J.D. Understanding Deep Learning. Draft Edition, 2024.
  15. Shashua, Amnon. Introduction to Machine Learning. The Hebrew University of Jerusalem, 2008.
  16. Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. 2nd Edition, MIT Press, 2018.
  17. Alpaydin, Ethem. Introduction to Machine Learning. 3rd Edition, MIT Press, 2014.

⚠️ Note: Due to copyright restrictions, the full text of these books is not included in this repository. Only the fine-tuned model weights are shared.
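
For reference, below is a minimal sketch of the kind of preprocessing described above, assuming a GROBID server running locally on its default port (8070) and a hypothetical input file book.pdf; the exact pipeline used for this model may differ in its details.

import requests
import xml.etree.ElementTree as ET
from nltk.tokenize import sent_tokenize

TEI = "{http://www.tei-c.org/ns/1.0}"

# Send a PDF to the local GROBID server and receive structured TEI XML back.
with open("book.pdf", "rb") as f:  # hypothetical input file
    resp = requests.post(
        "http://localhost:8070/api/processFulltextDocument",
        files={"input": f},
        timeout=300,
    )
resp.raise_for_status()

# Pull paragraph text out of the TEI body and segment it into sentences with NLTK.
root = ET.fromstring(resp.text)
paragraphs = ["".join(p.itertext()) for p in root.iter(f"{TEI}p")]
sentences = [s for p in paragraphs for s in sent_tokenize(p)]
print(len(sentences), "sentences extracted")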

Dataset

  • Size: 127,058 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:

                  sentence_0          sentence_1
    type          string              string
    details       min: 9 tokens       min: 7 tokens
                  mean: 42.1 tokens   mean: 41.5 tokens
                  max: 384 tokens     max: 384 tokens
  • Samples (sentence_0 / sentence_1):

    sentence_0: For example, if we take training index t as an attribute, the impurity measure will choose that because then the impurity of each branch is 0, although it is not a reasonable feature.
    sentence_1: When there is noise, growing the tree until it is purest, we may grow a very large tree and it overfits; for example, consider the case of a mislabeled instance amid a group of correctly labeled instances.

    sentence_0: Chapter 17  Generative adversarial networks learn a mechanism for creating samples that are statistically indistinguishable from the training data {x i }.
    sentence_1: However, the properties of the VAE mean that it is unfortunately not possible to evaluate the probability of new examples x * exactly.

    sentence_0: In the case of an estimator, eligibility is associated with the parameters of the estimator.
    sentence_1: The difference is that in the case of momentum previous weight changes are remembered, whereas here previous gradient vectors are remembered.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
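
As a reading aid, the toy function below sketches what MultipleNegativesRankingLoss with scale=20.0 and cos_sim computes for a batch of (sentence_0, sentence_1) pairs: every other sentence_1 in the batch acts as an in-batch negative, and the loss is a cross-entropy over the scaled cosine-similarity matrix. This is an illustration, not the library's implementation.

import torch
import torch.nn.functional as F

def mnrl_toy(emb_a, emb_b, scale=20.0):
    # Cosine similarity between every sentence_0 embedding and every sentence_1 embedding.
    sim = F.normalize(emb_a, dim=-1) @ F.normalize(emb_b, dim=-1).T  # shape (B, B)
    # The matching pair of row i sits in column i; all other columns are negatives.
    labels = torch.arange(sim.size(0))
    return F.cross_entropy(sim * scale, labels)

a = torch.randn(16, 768)  # stand-in embeddings for a batch of 16 pairs
b = torch.randn(16, 768)
print(mnrl_toy(a, b))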
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 6
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
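
A minimal sketch of a comparable training run, combining the loss above with the non-default hyperparameters listed here; the example sentences, dataset construction, and output path are hypothetical, and the actual run may have differed in details not recorded in this card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Hypothetical stand-in for the real 127,058-pair dataset with the two columns described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["Bagging reduces variance.", "The ELBO lower-bounds the evidence."],
    "sentence_1": ["It averages bootstrap-trained models.", "VAEs are trained by maximizing it."],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output/all-mpnet-base-v2-ds",  # hypothetical path
    num_train_epochs=6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()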

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Loss Progression

Epoch Step Training Loss
0.06 500 1.4767
0.13 1000 1.3015
0.19 1500 1.1985
0.25 2000 1.1202
0.31 2500 1.0386
0.38 3000 0.9994
0.44 3500 0.9570
0.50 4000 0.9129
0.63 5000 0.8529
0.76 6000 0.7813
0.82 6500 0.7212
0.88 7000 0.6963
0.94 7500 0.6627
1.00 7942 0.6280
2.00 ~15884 0.2499
3.00 ~23826 0.1436
4.00 ~31768 0.1068
5.00 ~39710 0.0850
6.00 47652 0.0716

Runtime Statistics

  • Total steps: 47,652
  • Total runtime: ~6h 40m
  • Train samples per second: ~31.8
  • Train steps per second: ~1.99
  • Final average training loss: 0.3198

Evaluation Notes

  • Validation set used constant similarity labels (all = 1.0).
  • As a result, the Pearson/Spearman correlation metrics are undefined (NaN); the short sketch below illustrates why.
  • Despite this, training loss shows consistent convergence across epochs.
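
The NaN values follow directly from the definition of correlation: with a constant gold-label vector there is no variance to correlate against. A minimal illustration with scipy (the predicted scores are hypothetical):

import numpy as np
from scipy.stats import pearsonr, spearmanr

gold = np.array([1.0, 1.0, 1.0, 1.0])      # constant similarity labels, as in the validation set
pred = np.array([0.91, 0.72, 0.85, 0.64])  # hypothetical cosine-similarity predictions

print(pearsonr(gold, pred))   # correlation is nan: the labels have zero variance
print(spearmanr(gold, pred))  # likewise nan

A validation set with varied similarity labels (or a retrieval-style evaluator) would be needed for these metrics to be informative.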

Framework Versions

  • Python: 3.11.7
  • Sentence Transformers: 5.1.1
  • Transformers: 4.57.0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.12.0
  • Datasets: 4.4.1
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Provided By

If you use this model, please cite:

@misc{aghakhani2025synergsticrag,
  author       = {Danial Aghakhani Zadeh},
  title        = {Fine-tuned all-mpnet-base-v2 for Data Science RAG},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/DigitalAsocial/all-mpnet-base-v2-ds-rag-17s}}
}

Contact

For questions, feedback, or collaboration requests regarding this dataset/model, please contact the author.
