---
library_name: mlx
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- moe
- mlx
- mlx-my-repo
base_model: mlx-community/LFM2-8B-A1B-8bit
---

# introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit

The model [introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit](https://huggingface.co/introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit) was converted to MLX format from [mlx-community/LFM2-8B-A1B-8bit](https://huggingface.co/mlx-community/LFM2-8B-A1B-8bit) using mlx-lm version **0.28.3**.

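For reference, conversions of this kind are typically produced with the `mlx_lm.convert` command line tool. The sketch below shows how such a repo could be re-created; the flags (`--hf-path`, `-q`, `--q-bits`, `--upload-repo`) are assumptions based on recent mlx-lm releases, not the exact command the uploader ran.

```bash
# Sketch (assumed flags): quantize the source repo to 8-bit MLX weights
# and optionally push the result to the Hub. Check `mlx_lm.convert --help`.
pip install mlx-lm
mlx_lm.convert \
    --hf-path mlx-community/LFM2-8B-A1B-8bit \
    -q --q-bits 8 \
    --upload-repo introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit
```
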
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
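
For a quick test without writing Python, mlx-lm also installs a command line generator; a minimal sketch (flag names assumed from recent mlx-lm releases):

```bash
# Sketch: one-off generation from the terminal.
mlx_lm.generate \
    --model introvoyz041/LFM2-8B-A1B-8bit-mlx-8Bit \
    --prompt "hello"
```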