---
language:
- en
- zh
- ar
- es
- de
- fr
- it
- ja
- nl
- pl
- pt
- ru
- tr
- bg
- bn
- cs
- da
- el
- fa
- fi
- hi
- hu
- id
- ko
- no
- ro
- sk
- sv
- th
- uk
- vi
- am
- az
- bo
- he
- hr
- hy
- is
- jv
- ka
- kk
- km
- ky
- lo
- mn
- mr
- ms
- my
- ne
- ps
- si
- sw
- ta
- te
- tg
- tl
- ug
- ur
- uz
- yue
base_model:
- Qwen/Qwen3-0.6B-Base
license: apache-2.0
pipeline_tag: translation
---

## LMT
- Paper: [Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs](https://arxiv.org/abs/2511.07003)
- Github: [LMT](https://github.com/NiuTrans/LMT)

**LMT-60** is a suite of **Chinese-English-centric** multilingual machine translation (MMT) models trained on **90B tokens** of mixed monolingual and bilingual data, covering **60 languages across 234 translation directions** and achieving **SOTA performance** among models with similar language coverage.
We release both the continued pre-training (CPT) and supervised fine-tuning (SFT) versions of LMT-60 in four sizes (0.6B/1.7B/4B/8B). All checkpoints are listed below:
| Models | Model Link |
|:------------|:------------|
| LMT-60-0.6B-Base | [NiuTrans/LMT-60-0.6B-Base](https://huggingface.co/NiuTrans/LMT-60-0.6B-Base) |
| LMT-60-0.6B | [NiuTrans/LMT-60-0.6B](https://huggingface.co/NiuTrans/LMT-60-0.6B) |
| LMT-60-1.7B-Base | [NiuTrans/LMT-60-1.7B-Base](https://huggingface.co/NiuTrans/LMT-60-1.7B-Base) |
| LMT-60-1.7B | [NiuTrans/LMT-60-1.7B](https://huggingface.co/NiuTrans/LMT-60-1.7B) |
| LMT-60-4B-Base | [NiuTrans/LMT-60-4B-Base](https://huggingface.co/NiuTrans/LMT-60-4B-Base) |
| LMT-60-4B | [NiuTrans/LMT-60-4B](https://huggingface.co/NiuTrans/LMT-60-4B) |
| LMT-60-8B-Base | [NiuTrans/LMT-60-8B-Base](https://huggingface.co/NiuTrans/LMT-60-8B-Base) |
| LMT-60-8B | [NiuTrans/LMT-60-8B](https://huggingface.co/NiuTrans/LMT-60-8B) |

Our supervised fine-tuning (SFT) data are released at [NiuTrans/LMT-60-sft-data](https://huggingface.co/datasets/NiuTrans/LMT-60-sft-data).

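If you want to inspect the SFT data, it can be loaded with the `datasets` library. The sketch below is a minimal example, assuming the default configuration loads directly; it makes no assumption about the split or column names and simply prints what the release contains.

```python
# Minimal sketch: load the released SFT data and inspect its structure.
# Assumes the default configuration loads directly; split and column names
# are not assumed here, they are printed for inspection.
from datasets import load_dataset

sft_data = load_dataset("NiuTrans/LMT-60-sft-data")

print(sft_data)                    # available splits and their sizes
first_split = next(iter(sft_data))
print(sft_data[first_split][0])    # one example, showing the actual fields
```
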
## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NiuTrans/LMT-60-8B"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build the translation prompt and wrap it in the chat template
prompt = "Translate the following text from English into Chinese.\nEnglish: The concept came from China where plum blossoms were the flower of choice.\nChinese: "
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate with beam search, then keep only the newly generated tokens
generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Decode the generated tokens into the translation
response = tokenizer.decode(output_ids, skip_special_tokens=True)

print("response:", response)
```
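
Beyond a single sentence, the same prompt format can be used for batched generation. The sketch below is a minimal, illustrative extension of the quickstart (left padding is already configured for this); the English sentences are placeholder examples.

```python
# Minimal batched-translation sketch, reusing the quickstart prompt format.
# The sentence list below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NiuTrans/LMT-60-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')  # left padding for batched decoding
model = AutoModelForCausalLM.from_pretrained(model_name)

sentences = [
    "The concept came from China where plum blossoms were the flower of choice.",
    "Machine translation has improved rapidly in recent years.",
]

# Wrap each sentence in the translation prompt and the chat template.
texts = []
for src in sentences:
    prompt = f"Translate the following text from English into Chinese.\nEnglish: {src}\nChinese: "
    messages = [{"role": "user", "content": prompt}]
    texts.append(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

model_inputs = tokenizer(texts, return_tensors="pt", padding=True).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512, num_beams=5, do_sample=False)

# With left padding, every prompt occupies the first input_ids.shape[1] positions,
# so the newly generated tokens start right after that offset.
prompt_len = model_inputs.input_ids.shape[1]
for sentence, seq in zip(sentences, generated_ids):
    print(sentence, "->", tokenizer.decode(seq[prompt_len:], skip_special_tokens=True))
```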

## Supported Languages

| Resource Tier | Languages |
| :---- | :---- |
| High-resource Languages (13) | Arabic (ar), English (en), Spanish (es), German (de), French (fr), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru), Turkish (tr), Chinese (zh) |
| Medium-resource Languages (18) | Bulgarian (bg), Bengali (bn), Czech (cs), Danish (da), Modern Greek (el), Persian (fa), Finnish (fi), Hindi (hi), Hungarian (hu), Indonesian (id), Korean (ko), Norwegian (no), Romanian (ro), Slovak (sk), Swedish (sv), Thai (th), Ukrainian (uk), Vietnamese (vi) |
| Low-resource Languages (29) | Amharic (am), Azerbaijani (az), Tibetan (bo), Modern Hebrew (he), Croatian (hr), Armenian (hy), Icelandic (is), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kirghiz (ky), Lao (lo), Chinese Mongolian (mn_cn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Pashto (ps), Sinhala (si), Swahili (sw), Tamil (ta), Telugu (te), Tajik (tg), Tagalog (tl), Uighur (ug), Urdu (ur), Uzbek (uz), Yue Chinese (yue) |

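The quickstart prompt spells out the source and target languages by name ("Translate the following text from English into Chinese."). A small helper such as the sketch below can build the same prompt for other supported directions; the code-to-name mapping shown is only a partial, illustrative subset of the table above, and the prompt wording simply follows the quickstart example.

```python
# Minimal sketch: build the quickstart-style translation prompt for other
# language pairs. LANG_NAMES is a partial, illustrative subset of the
# supported languages listed above.
LANG_NAMES = {
    "en": "English",
    "zh": "Chinese",
    "de": "German",
    "ja": "Japanese",
    "sw": "Swahili",
}

def build_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    src, tgt = LANG_NAMES[src_lang], LANG_NAMES[tgt_lang]
    return (
        f"Translate the following text from {src} into {tgt}.\n"
        f"{src}: {text}\n"
        f"{tgt}: "
    )

print(build_prompt("de", "ja", "Maschinelle Übersetzung wird immer besser."))
```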

## Citation

If you find our paper useful for your research, please kindly cite it:
```bibtex
@misc{luoyf2025lmt,
      title={Beyond English: Toward Inclusive and Scalable Multilingual Machine Translation with LLMs},
      author={Yingfeng Luo and Ziqiang Xu and Yuxuan Ouyang and Murun Yang and Dingyang Lin and Kaiyan Chang and Tong Zheng and Bei Li and Peinan Feng and Quan Du and Tong Xiao and Jingbo Zhu},
      year={2025},
      eprint={2511.07003},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.07003},
}
```