---
license: openrail
tags:
  - robotics
  - trajectory-prediction
  - manipulation
  - computer-vision
  - time-series
pretty_name: Codatta Robotic Manipulation Trajectory
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: id
      dtype: string
    - name: total_frames
      dtype: int32
    - name: annotations
      dtype: string
    - name: trajectory_image
      dtype: image
    - name: video_path
      dtype: string
  splits:
    - name: train
      num_bytes: 39054025
      num_examples: 50
  download_size: 38738419
  dataset_size: 39054025
language:
  - en
size_categories:
  - n<1K
---

# Codatta Robotic Manipulation Trajectory (Sample)

## Dataset Summary

This dataset contains high-quality annotated trajectories of robotic gripper manipulations. It is designed to train models for fine-grained control, trajectory prediction, and object interaction tasks.

Produced by Codatta, this dataset focuses on third-person views of robotic arms performing pick-and-place and other manipulation tasks. Each sample includes the source video, a visualization of the trajectory, and a structured JSON annotation of keyframes and coordinate points.

Note: This is a sample dataset containing 50 annotated examples.

## Supported Tasks

- **Trajectory Prediction**: Predicting the path of a gripper based on visual context.
- **Keyframe Extraction**: Identifying critical moments in a manipulation task (e.g., contact, velocity change).
- **Robotic Control**: Imitation learning from human-demonstrated or teleoperated data.

## Dataset Structure

### Data Fields

- `id` (string): Unique identifier for the trajectory sequence.
- `total_frames` (int32): Total number of frames in the video sequence.
- `video_path` (string): Path to the source MP4 video file recording the manipulation action.
- `trajectory_image` (image): A JPEG preview showing the overlaid trajectory path or keyframe visualization.
- `annotations` (string): A JSON-formatted string containing the detailed coordinate data.
  - Structure: a list of keyframes, each carrying a timestamp and the 5-point gripper coordinates for that annotated frame (see the parsing sketch below).
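
The exact JSON schema is easiest to confirm by inspecting a real sample. As a rough orientation, the minimal sketch below assumes each keyframe entry carries a frame index, a timestamp, and five `[x, y]` pixel coordinates; the key names `frame_index`, `timestamp`, and `points` are illustrative assumptions, not the authoritative field names.

```python
import json

# Hypothetical annotation payload, for illustration only -- the real keys and
# values should be read from an actual `annotations` string in the dataset.
raw = """
[
  {"frame_index": 0, "timestamp": 0.0,
   "points": [[412, 233], [438, 231], [405, 198], [446, 196], [425, 172]]},
  {"frame_index": 42, "timestamp": 1.4,
   "points": [[512, 301], [538, 299], [505, 266], [546, 264], [525, 240]]}
]
"""

keyframes = json.loads(raw)
for kf in keyframes:
    x, y = kf["points"][0]  # Point 1: a fingertip (see "The 5-Point Annotation Method")
    print(kf["frame_index"], kf["timestamp"], (x, y))
```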

### Data Preview

(Hugging Face's dataset viewer will automatically render the `trajectory_image` column here.)

## Annotation Standards

The data was annotated following a strict protocol to ensure precision and consistency.

### 1. Viewpoint Scope

- **Included**: Third-person views (a fixed camera recording the robot).
- **Excluded**: First-person (eye-in-hand) views are explicitly excluded to ensure consistent coordinate mapping.

### 2. Keyframe Selection

Annotations are sparse rather than dense: instead of labeling every frame, they focus on keyframes that define the motion logic. A keyframe is defined by any of the following events:

1. **Start Frame**: The gripper first appears on screen.
2. **End Frame**: The gripper leaves the screen.
3. **Velocity Change**: Frames where the direction of motion changes abruptly, marked at the minimum-speed point (see the sketch after this list).
4. **State Change**: Frames where the gripper opens or closes.
5. **Contact**: The precise moment the gripper touches the object.
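
For intuition about the velocity-change rule, here is a minimal sketch of how such frames could be flagged from a dense 2D track of the gripper center. It is illustrative only (this dataset ships sparse keyframes, not dense tracks), and the angle threshold is an arbitrary assumption.

```python
import numpy as np

def velocity_change_frames(track, angle_threshold_deg=45.0):
    """Flag frames where the direction of motion turns sharply.

    `track` is a hypothetical (N, 2) array of per-frame gripper-center pixel
    coordinates; the dataset itself provides sparse keyframes, not dense tracks.
    """
    v = np.diff(track, axis=0)                       # per-frame displacement vectors
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    unit = v / np.clip(norms, 1e-6, None)            # unit direction of motion per step
    cos_turn = np.sum(unit[:-1] * unit[1:], axis=1)  # cosine of the turn between steps
    turn_deg = np.degrees(np.arccos(np.clip(cos_turn, -1.0, 1.0)))
    return np.where(turn_deg > angle_threshold_deg)[0] + 1  # indices of turning frames

# Toy track: moves right, then turns downward at frame 3.
track = np.array([[0, 0], [10, 0], [20, 0], [30, 0], [30, 10], [30, 20]], dtype=float)
print(velocity_change_frames(track))  # -> [3]
```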

### 3. The 5-Point Annotation Method

For every annotated keyframe, the gripper is labeled with five specific coordinate points that capture its pose and state (a minimal data-structure sketch follows the table):

| Point ID | Description | Location Detail |
| --- | --- | --- |
| Points 1 & 2 | Fingertips | Center of the bottom edge of the gripper tips. |
| Points 3 & 4 | Gripper Ends | The rearmost points of the closing area (indicating the finger direction). |
| Point 5 | Tiger's Mouth | The center of the crossbeam (base of the gripper). |
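
To make the ordering concrete, the sketch below models one keyframe's five points as a small Python structure. The attribute names, the a/b split, and the `opening_width` helper are assumptions for illustration; the authoritative point order is whatever the `annotations` JSON encodes.

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinates

@dataclass
class GripperPose:
    """Five-point gripper annotation for one keyframe (assumed ordering)."""
    fingertip_a: Point    # Point 1: center of the bottom edge of one gripper tip
    fingertip_b: Point    # Point 2: center of the bottom edge of the other tip
    gripper_end_a: Point  # Point 3: rearmost point of the closing area
    gripper_end_b: Point  # Point 4: rearmost point of the closing area
    tigers_mouth: Point   # Point 5: center of the crossbeam (base of the gripper)

    def opening_width(self) -> float:
        """Pixel distance between the fingertips -- a rough open/closed cue."""
        (x1, y1), (x2, y2) = self.fingertip_a, self.fingertip_b
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
```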

### 4. Quality Control

- **Accuracy**: All data passed a rigorous quality assurance process with a minimum accuracy rate of 95%.
- **Occlusion Handling**: If the gripper is partially occluded, points are estimated based on object geometry. Sequences where the gripper is fully occluded, or shows only a side profile without clear features, are discarded.

## Usage Example

```python
from datasets import load_dataset
import json

# Load the dataset
ds = load_dataset("Codatta/robotic-manipulation-trajectory", split="train")

# Access a sample
sample = ds[0]

# View the image
print(f"Trajectory ID: {sample['id']}")
sample['trajectory_image'].show()

# Parse annotations
annotations = json.loads(sample['annotations'])
print(f"Keyframes count: {len(annotations)}")
```