---
license: mit
task_categories:
- question-answering
- text-generation
tags:
- automation
- home
- assistant
language: ["en", "es", "fr", "de", "pl"]
pretty_name: Home Assistant Requests V2
size_categories:
- 10K<n<100K
---
# Home Assistant Requests V2 Dataset
This dataset contains a list of requests and responses for a user interacting with a personal assistant that controls an instance of [Home Assistant](https://www.home-assistant.io/).
The updated V2 of the dataset is multilingual, containing data in English, German, French, Spanish, and Polish. It also includes multiple "personalities" in which the assistant can respond, such as a formal, a sarcastic, and a friendly assistant. Lastly, the dataset has been updated to fully support modern tool-calling formats.
> NOTE: If you are viewing this dataset on HuggingFace, you can download the "small" dataset variant directly from the "Files and versions" tab.
## Assembling the dataset
The dataset is generated from the different CSV "piles". The "piles" contain different chunks of requests that are assembled into a final context that is presented to the LLM. For example, `piles/<language>/pile_of_device_names.csv` contains only names of various devices to be used as part of context as well as inserted into `piles/<language>/pile_of_templated_actions.csv` and `piles/<language>/pile_of_status_requests.csv`. The logic for assembling the final dataset from the piles is contained in [generate_data.py](./generate_data.py).
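The assembly step can be illustrated with a minimal sketch. The mini-piles, column names, and helper functions below are assumptions for illustration; the real piles live in `piles/<language>/*.csv` and the actual logic is in [generate_data.py](./generate_data.py).

```python
import csv
import io
import random

# Hypothetical mini-piles standing in for piles/<language>/pile_of_device_names.csv
# and piles/<language>/pile_of_templated_actions.csv; real column names may differ.
PILE_OF_DEVICE_NAMES = """device_name
light.kitchen_light
fan.bedroom_fan
"""

PILE_OF_TEMPLATED_ACTIONS = """template
Turn on the {device_name}
Turn off the {device_name}
"""

def load_pile(raw_csv):
    """Read a CSV pile into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def assemble_request(rng):
    """Fill one action template with a randomly chosen device name."""
    device = rng.choice(load_pile(PILE_OF_DEVICE_NAMES))["device_name"]
    template = rng.choice(load_pile(PILE_OF_TEMPLATED_ACTIONS))["template"]
    return template.format(device_name=device)

print(assemble_request(random.Random(0)))
```

In the real generator, the sampled device names also appear in the context shown to the LLM, so the model sees the same entities that the templated request refers to.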
### Prepare environment
Start by installing system dependencies:
```
sudo apt-get install python3-dev
```
Then create a Python virtual environment and install all necessary libraries:
```
python3 -m venv .generate_data
source .generate_data/bin/activate
pip3 install -r requirements.txt
```
### Generating the dataset from piles
```
python3 generate_data.py --train --test --small --language english german french spanish polish
```
Supported dataset splits are `--test`, `--train`, and `--sample`.
The train dataset size is set with one of `--small`, `--medium`, `--large`, or `--xl`.
Languages are enabled with `--language english german french spanish polish`.
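For reference, a minimal sketch of how these flags might be wired up with `argparse`. The flag names come from the usage above; the parser structure, help strings, and defaults are assumptions, not the actual code in [generate_data.py](./generate_data.py).

```python
import argparse

def build_parser():
    """Assumed CLI layout for the generator script (illustrative only)."""
    parser = argparse.ArgumentParser(
        description="Assemble the dataset from the CSV piles")
    # Dataset splits to generate
    parser.add_argument("--train", action="store_true", help="generate the train split")
    parser.add_argument("--test", action="store_true", help="generate the test split")
    parser.add_argument("--sample", action="store_true", help="generate the sample split")
    # Train dataset size: at most one of these
    size = parser.add_mutually_exclusive_group()
    for name in ("small", "medium", "large", "xl"):
        size.add_argument(f"--{name}", action="store_true",
                          help=f"generate the {name} train variant")
    # One or more languages to generate data for
    parser.add_argument("--language", nargs="+",
                        choices=["english", "german", "french", "spanish", "polish"],
                        default=["english"], help="languages to generate")
    return parser

args = build_parser().parse_args(
    ["--train", "--test", "--small", "--language", "english", "polish"])
print(args.language)
```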