---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: modernbert_agree_classifier
  results: []
---

# modernbert_agree_classifier

This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7424
- Accuracy: 0.5687

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 6
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant
- training_steps: 550

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.327         | 0.1497 | 50   | 0.6950          | 0.5640   |
| 1.5088        | 0.2994 | 100  | 0.6833          | 0.5687   |
| 1.2018        | 0.4491 | 150  | 0.6867          | 0.5687   |
| 1.3053        | 0.5988 | 200  | 0.6923          | 0.5687   |
| 1.2024        | 0.7485 | 250  | 0.6879          | 0.5687   |
| 1.774         | 0.8982 | 300  | 0.7197          | 0.4313   |
| 1.4274        | 1.0479 | 350  | 0.6862          | 0.5687   |
| 1.199         | 1.1976 | 400  | 0.6995          | 0.5687   |
| 1.4548        | 1.3473 | 450  | 0.7085          | 0.4313   |
| 1.581         | 1.4970 | 500  | 0.6845          | 0.5687   |
| 1.8775        | 1.6467 | 550  | 0.7424          | 0.5687   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
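### Reproducing the training configuration

A minimal `TrainingArguments` sketch that mirrors the hyperparameters listed above. This is a reconstruction, not the original training script: `output_dir` and the logging/eval cadence (every 50 steps, matching the results table) are assumptions.

```python
from transformers import TrainingArguments

# Reconstruction of the settings listed in "Training hyperparameters".
# output_dir, eval_strategy, and the 50-step cadence are assumptions.
training_args = TrainingArguments(
    output_dir="modernbert_agree_classifier",  # placeholder
    learning_rate=6e-05,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 6 * 2 = 12
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="constant",
    max_steps=550,
    eval_strategy="steps",
    eval_steps=50,
    logging_steps=50,
)
```

These arguments would be passed to a `transformers.Trainer` together with the model, tokenizer, and datasets, none of which are documented in this card.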
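As a sanity check on the bookkeeping above, the effective batch size and an estimate of the training-set size can be derived from the logged values. The dataset-size figure is an inference from the step/epoch columns of the results table, not a number stated in the card.

```python
# Sanity-check the batch/step bookkeeping implied by the hyperparameters above.
train_batch_size = 6
gradient_accumulation_steps = 2

# Effective (total) train batch size per optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 12, matching the card

# The log shows optimizer step 50 at epoch 0.1497, so one epoch is about
# 50 / 0.1497 ~= 334 optimizer steps, implying roughly this many examples:
steps_per_epoch = round(50 / 0.1497)
approx_train_examples = steps_per_epoch * total_train_batch_size
print(steps_per_epoch, approx_train_examples)  # 334 4008
```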