Add example to README

README.md CHANGED

@@ -208,6 +208,54 @@ chatbot = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-240

## Function calling with `transformers`

To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling) in the `transformers` docs for more information.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mistral-Nemo-Instruct-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_current_weather(location: str, format: str):
    """
    Get the current weather

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the user's location. (choices: ["celsius", "fahrenheit"])
    """
    pass

conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]

# Render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True,
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Tokenize without adding special tokens again (the chat template already includes them),
# and move the inputs to the model's device
inputs = tokenizer(tool_use_prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool results to the chat history so that the model can use them in its next generation. For a full tool calling example, please see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling), and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be exactly 9 alphanumeric characters.
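
Below is a minimal sketch of one such cycle, continuing the example above. The parsed tool call, its arguments, and the tool output are invented for illustration; a real application would parse the call from the model's generated text and run the actual function. The message layout follows the `transformers` chat-template conventions described in the guide linked above.

```python
import random
import string

# Hypothetical tool call, as if parsed from the model's output above
tool_call = {"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}

# Mistral tool call IDs must be exactly 9 alphanumeric characters
tool_call_id = "".join(random.choices(string.ascii_letters + string.digits, k=9))

# Record the model's tool call in the chat history...
conversation.append(
    {"role": "assistant", "tool_calls": [{"id": tool_call_id, "type": "function", "function": tool_call}]}
)

# ...then the tool's result (a made-up temperature), tagged with the same ID
conversation.append(
    {"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_weather", "content": "22"}
)

# Re-render the prompt and generate again so the model can use the result
tool_use_prompt = tokenizer.apply_chat_template(
    conversation, tools=tools, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(tool_use_prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```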

> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
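
For example, with the `pipeline` chatbot from the earlier example (a sketch; sampling must be enabled for `temperature` to take effect, and the `max_new_tokens` value is an arbitrary choice):

```python
# Sample with the recommended low temperature
chatbot(messages, do_sample=True, temperature=0.3, max_new_tokens=256)
```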