# Reachy Mini conversation demo

Conversational demo for the Reachy Mini robot combining OpenAI's realtime APIs, vision pipelines, and choreographed motion libraries.

## Overview

- Real-time audio conversation loop powered by the OpenAI realtime API and `fastrtc` for low-latency streaming.
- Camera capture can route to OpenAI multimodal vision or stay on-device with SmolVLM2 local analysis.
- Layered motion system queues primary moves (dances, emotions, goto poses, breathing) while blending speech-reactive wobble and face tracking.
- Async tool dispatch integrates robot motion, camera capture, and optional facial-recognition helpers through a Gradio web UI with live transcripts.

## Installation

### Using uv

You can set up the project quickly using [uv](https://docs.astral.sh/uv/):

```bash
uv venv --python 3.12.1  # Create a virtual environment with Python 3.12.1
source .venv/bin/activate
uv sync
```

To include optional vision dependencies:

```bash
uv sync --extra local_vision      # For local PyTorch/Transformers vision
uv sync --extra yolo_vision       # For YOLO-based vision
uv sync --extra mediapipe_vision  # For MediaPipe-based vision
uv sync --extra all_vision        # For all vision features
```

You can combine extras or include dev dependencies:

```bash
uv sync --extra all_vision --group dev
```

### Using pip (tested on Ubuntu 24.04)

```bash
python -m venv .venv  # Create a virtual environment
source .venv/bin/activate
pip install -e .
```

Install optional extras depending on the feature set you need:

```bash
# Vision stacks (choose at least one if you plan to run face tracking)
pip install -e .[local_vision]
pip install -e .[yolo_vision]
pip install -e .[mediapipe_vision]
pip install -e .[all_vision]  # installs every vision extra

# Tooling for development workflows
pip install -e .[dev]
```

Some wheels (e.g. PyTorch) are large and require compatible CUDA or CPU builds, so make sure your platform matches the binaries pulled in by each extra.

## Optional dependency groups

| Extra | Purpose | Notes |
|-------|---------|-------|
| `local_vision` | Run the local VLM (SmolVLM2) through PyTorch/Transformers. | GPU recommended; ensure compatible PyTorch builds for your platform. |
| `yolo_vision` | YOLOv8 tracking via `ultralytics` and `supervision`. | CPU friendly; supports the `--head-tracker yolo` option. |
| `mediapipe_vision` | Lightweight landmark tracking with MediaPipe. | Works on CPU; enables `--head-tracker mediapipe`. |
| `all_vision` | Convenience alias installing every vision extra. | Install when you want the flexibility to experiment with every provider. |
| `dev` | Developer tooling (`pytest`, `ruff`). | Add on top of either base or `all_vision` environments. |

## Configuration

1. Copy `.env.example` to `.env`.
2. Fill in the required values, notably the OpenAI API key.

| Variable | Description |
|----------|-------------|
| `OPENAI_API_KEY` | Required. Grants access to the OpenAI realtime endpoint. |
| `MODEL_NAME` | Override the realtime model (defaults to `gpt-realtime`). |
| `HF_HOME` | Cache directory for local Hugging Face downloads. |
| `HF_TOKEN` | Optional token for Hugging Face models. |
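For reference, a minimal `.env` sketch with placeholder values (only `OPENAI_API_KEY` is required; the commented entries are the optional overrides from the table above):

```bash
# .env: placeholder values for illustration only
OPENAI_API_KEY=sk-your-key-here
# MODEL_NAME=gpt-realtime       # optional: override the realtime model
# HF_HOME=/path/to/hf-cache     # optional: Hugging Face download cache
# HF_TOKEN=hf-your-token-here   # optional: Hugging Face access token
```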
## Running the demo

Activate your virtual environment, ensure the Reachy Mini robot (or simulator) is reachable, then launch:

```bash
reachy-mini-conversation-demo
```

The app starts a Gradio UI served locally (http://127.0.0.1:7860/). When running on a headless host, use `--headless`.

With a camera attached, captured frames can be analyzed remotely through OpenAI multimodal models or locally via the YOLO/MediaPipe pipelines, depending on the extras you installed.

### CLI options

| Option | Default | Description |
|--------|---------|-------------|
| `--head-tracker {yolo,mediapipe}` | `None` | Select a face-tracking backend when a camera is available. Requires the matching optional extra. |
| `--no-camera` | `False` | Run without camera capture or face tracking. |
| `--gradio` | `False` | Launch the Gradio web UI. Without this flag, runs in console mode. Required when running in simulation mode. |
| `--debug` | `False` | Enable verbose logging for troubleshooting. |

### Examples

- Run on hardware with MediaPipe face tracking:

  ```bash
  reachy-mini-conversation-demo --head-tracker mediapipe
  ```

- Disable the camera pipeline (audio-only conversation):

  ```bash
  reachy-mini-conversation-demo --no-camera
  ```

## LLM tools exposed to the assistant

| Tool | Action | Dependencies |
|------|--------|--------------|
| `move_head` | Queue a head pose change (left/right/up/down/front). | Core install only. |
| `camera` | Capture the latest camera frame and optionally query a vision backend. | Requires the camera worker; vision analysis depends on the selected extras. |
| `head_tracking` | Enable or disable face-tracking offsets. | Camera worker with a configured head tracker. |
| `dance` | Queue a dance from `reachy_mini_dances_library`. | Core install only. |
| `stop_dance` | Clear queued dances. | Core install only. |
| `play_emotion` | Play a recorded emotion clip via Hugging Face assets. | Needs `HF_TOKEN` for the recorded emotions dataset. |
| `stop_emotion` | Clear queued emotions. | Core install only. |
| `get_person_name` | DeepFace-based recognition of the current person. | Disabled by default (`ENABLE_FACE_RECOGNITION=False`); requires `deepface` and a local face database. |
| `do_nothing` | Explicitly remain idle. | Core install only. |

## Development workflow

- Install the dev group extras: `uv sync --group dev` or `pip install -e .[dev]`.
- Run formatting and linting: `ruff check .`.
- Execute the test suite: `pytest`.
- When iterating on robot motions, keep the control loop responsive by offloading blocking work with the helpers in `tools.py` (see the sketch below).
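The sketch below illustrates that offloading pattern with plain `asyncio`; the actual helpers live in `tools.py` and may look different, and `capture_frame` is a hypothetical stand-in for any blocking call:

```python
import asyncio
import time


def capture_frame() -> bytes:
    """Hypothetical stand-in for blocking work (camera I/O, model inference, ...)."""
    time.sleep(0.5)  # simulate a slow, blocking operation
    return b"frame"


async def camera_tool() -> bytes:
    # Run the blocking call in a worker thread so the event loop
    # (and the motion control loop it services) stays responsive.
    return await asyncio.to_thread(capture_frame)


if __name__ == "__main__":
    asyncio.run(camera_tool())
```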
## License

Apache 2.0