darkc0de committed
Commit 55be812 · verified · 1 Parent(s): e6e9979

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +12 -3
README.md CHANGED
@@ -1,16 +1,25 @@
  ---
- base_model: mlabonne/gemma-3-27b-it-abliterated
+ base_model: huihui-ai/gemma-3-27b-it-abliterated
+ language:
+ - en
  library_name: transformers
  license: gemma
  pipeline_tag: image-text-to-text
  tags:
+ - abliterated
+ - uncensored
  - llama-cpp
  - gguf-my-repo
+ extra_gated_heading: Access Gemma on Hugging Face
+ extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
+   agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
+   Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
  ---

  # darkc0de/gemma-3-27b-it-abliterated-Q5_K_M-GGUF
- This model was converted to GGUF format from [`mlabonne/gemma-3-27b-it-abliterated`](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated) for more details on the model.
+ This model was converted to GGUF format from [`huihui-ai/gemma-3-27b-it-abliterated`](https://huggingface.co/huihui-ai/gemma-3-27b-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/huihui-ai/gemma-3-27b-it-abliterated) for more details on the model.

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
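
For reference, the "Use with llama.cpp" section of GGUF-my-repo READMEs typically continues with commands along the following lines. This is a minimal sketch, not the verbatim remainder of the README; in particular, the exact `.gguf` filename passed to `--hf-file` is an assumption based on the usual Q5_K_M naming, so check the repo's file listing before running it.

```bash
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run the model straight from the Hugging Face repo with the CLI.
# NOTE: the --hf-file value below is assumed from the usual Q5_K_M naming;
# verify it against the files actually uploaded to the repo.
llama-cli --hf-repo darkc0de/gemma-3-27b-it-abliterated-Q5_K_M-GGUF \
  --hf-file gemma-3-27b-it-abliterated-q5_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve it over llama.cpp's built-in HTTP server instead.
llama-server --hf-repo darkc0de/gemma-3-27b-it-abliterated-Q5_K_M-GGUF \
  --hf-file gemma-3-27b-it-abliterated-q5_k_m.gguf \
  -c 2048
```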