disham993 committed on
Commit 41f54e7 · verified · 1 parent: 4390624

Update README.md

Files changed (1):
  1. README.md (+42 −18)
README.md CHANGED
@@ -8,55 +8,79 @@ tags:
  datasets:
  - disham993/ElectricalDeviceFeedbackBalanced
  metrics:
- - epoch: 1.0
- - eval_f1: 0.8920010578367876
- - eval_accuracy: 0.897189349112426
- - eval_runtime: 3.7358
- - eval_samples_per_second: 361.901
- - eval_steps_per_second: 11.51
  ---

- # disham993/electrical-classification-ModernBERT-large

  ## Model description

- This model is fine-tuned from [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) for text-classification tasks.

  ## Training Data

- The model was trained on the disham993/ElectricalDeviceFeedbackBalanced dataset.

  ## Model Details
- - **Base Model:** answerdotai/ModernBERT-large
  - **Task:** text-classification
  - **Language:** en
- - **Dataset:** disham993/ElectricalDeviceFeedbackBalanced

  ## Training procedure

  ### Training hyperparameters
- [Please add your training hyperparameters here]

  ## Evaluation results

- ### Metrics\n- epoch: 1.0\n- eval_f1: 0.8920010578367876\n- eval_accuracy: 0.897189349112426\n- eval_runtime: 3.7358\n- eval_samples_per_second: 361.901\n- eval_steps_per_second: 11.51

  ## Usage

  ```python
- from transformers import AutoTokenizer, AutoModel

- tokenizer = AutoTokenizer.from_pretrained("disham993/electrical-classification-ModernBERT-large")
- model = AutoModel.from_pretrained("disham993/electrical-classification-ModernBERT-large")
  ```

  ## Limitations and bias

- [Add any known limitations or biases of the model]

  ## Training Infrastructure

- [Add details about training infrastructure used]

  ## Last update
 
  datasets:
  - disham993/ElectricalDeviceFeedbackBalanced
  metrics:
+ - epoch: 5.0
+ - eval_f1: 0.9106
+ - eval_accuracy: 0.90828
+ - eval_runtime: 3.2658
+ - eval_samples_per_second: 413.989
+ - eval_steps_per_second: 13.167
  ---

+ # electrical-classification-ModernBERT-large

  ## Model description

+ This model is fine-tuned from [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) for text classification, specifically sentiment analysis of customer feedback on electrical devices (circuit breakers, transformers, smart meters, inverters, solar panels, power strips, etc.). The model classifies sentiment into Positive, Negative, Neutral, and Mixed categories with high precision and recall, making it well suited to analyzing product reviews, customer surveys, and other feedback for actionable insights.

  ## Training Data

+ The model was trained on the [disham993/ElectricalDeviceFeedbackBalanced](https://huggingface.co/datasets/disham993/ElectricalDeviceFeedbackBalanced) dataset, which has been balanced to address class imbalance.

  ## Model Details
+ - **Base Model:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large)
  - **Task:** text-classification
  - **Language:** en
+ - **Dataset:** [disham993/ElectricalDeviceFeedbackBalanced](https://huggingface.co/datasets/disham993/ElectricalDeviceFeedbackBalanced)

  ## Training procedure

  ### Training hyperparameters
+
+ The model was fine-tuned using the following hyperparameters:
+
+ - **Evaluation Strategy:** epoch
+ - **Learning Rate:** 1e-5
+ - **Batch Size:** 32 (for both training and evaluation)
+ - **Number of Epochs:** 5
+ - **Weight Decay:** 0.01
 
  ## Evaluation results

+ The following metrics were achieved during evaluation:
+
+ - **F1 Score:** 0.9106
+ - **Accuracy:** 0.90828
+ - **eval_runtime:** 3.2658
+ - **eval_samples_per_second:** 413.989
+ - **eval_steps_per_second:** 13.167
+
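As a quick sanity check on how such numbers are derived, accuracy and F1 can be recomputed from raw predictions. The snippet below is a self-contained sketch on toy labels (not the model's actual evaluation set), assuming macro averaging for F1; the card does not state which averaging the reported `eval_f1` uses:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that exactly match the reference labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    # Per-class F1, then an unweighted mean over all classes.
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["Positive", "Negative", "Neutral", "Positive"]
y_pred = ["Positive", "Negative", "Positive", "Positive"]
print(accuracy(y_true, y_pred))  # 0.75
print(macro_f1(y_true, y_pred))  # 0.6
```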

  ## Usage

+ You can use this model for sentiment analysis of electrical device feedback as follows:
+
  ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
+
+ model_name = "disham993/electrical-classification-ModernBERT-large"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)

+ text = "The new washing machine is efficient but produces a bit of noise."
+ classification_results = nlp(text)
+ print(classification_results)
  ```

  ## Limitations and bias

+ The dataset includes synthetic data generated with Llama 3.1:8b, and despite careful optimization and prompt engineering, the model is not immune to labeling errors. Since LLM-generated data can carry inherent inaccuracies and biases, these may affect the model's performance.
+
+ This model is intended for research and educational purposes only; users are encouraged to validate results before applying them to critical applications.

  ## Training Infrastructure

+ For a complete guide covering the entire process, from data tokenization to pushing the model to the Hugging Face Hub, see the [GitHub repository](https://github.com/di37/classification-electrical-feedback-finetuning/).

  ## Last update