---
license: cc-by-4.0
language:
  - ur
  - en
tags:
  - TTS
  - ASR
  - Urdu
  - TextToSpeech
  - AutomaticSpeechRecognition
  - English
  - Transcribe
  - Translate
  - speech-recognition
  - urdu-speech
  - multilingual
task_categories:
  - text-to-speech
  - text-to-audio
  - translation
  - automatic-speech-recognition
  - audio-classification
size_categories:
  - 100K<n<1M
pretty_name: UrduMegaSpeech-1M
---

# UrduMegaSpeech-1M

## Dataset Summary

UrduMegaSpeech-1M is a large-scale Urdu-English parallel speech corpus designed for automatic speech recognition (ASR), text-to-speech (TTS), and speech translation tasks. This dataset contains high-quality audio recordings paired with Urdu transcriptions and English source text, along with quality metrics for each sample.

## Dataset Composition

- Language: Urdu (transcriptions), English (source text)
- Total Samples: ~1M+ audio-text pairs
- Audio: various sampling rates (resampling to 16 kHz recommended; see Technical Specifications)
- Domain: general-domain speech covering diverse topics
- Quality Metrics: LID, LASER, and SONAR scores included for quality assessment

## Use Cases

This dataset is designed for:

- 🎤 Automatic Speech Recognition (ASR) - Train models to transcribe Urdu speech
- 🔊 Text-to-Speech (TTS) - Generate natural-sounding Urdu speech
- 🌐 Speech Translation - English-to-Urdu and Urdu-to-English translation systems
- 📊 Speech Analytics - Urdu language understanding and processing
- 🧠 Multilingual Models - Cross-lingual speech applications
- 🎯 Quality Filtering - Use quality scores to select high-quality samples

## Data Fields

- audio: audio data for the Urdu speech sample
- audio_filepath: original file path reference (string)
- text: English source text (string)
- transcription: Urdu transcription of the audio (string)
- text_lid_score: language identification confidence score (string)
- laser_score: LASER alignment quality score (string)
- duration: audio duration in seconds (float)
- sonar_score: SONAR embedding quality score (float)

## Data Example

```python
{
  'audio': {...},
  'audio_filepath': '...',
  'text_lid_score': '1.00001',
  'laser_score': '1.4988528',
  'text': 'What is it that we, as a company can do...',
  'duration': 7.806,
  'transcription': 'واٹ از اٹ ایزا کمپنیو دیزینگٹ سو ففشنج اینڈ ان رسپٹیو جاب',
  'sonar_score': 0.192786
}
```

## Data Splits

The dataset is organized into training partitions for efficient loading and processing.
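
Because the corpus is large, streaming mode is a convenient way to inspect samples without downloading every partition first. A minimal sketch (streaming Parquet datasets from the Hub is generally supported by the `datasets` library):

```python
from datasets import load_dataset

# Stream the train split lazily instead of downloading it all up front
stream = load_dataset("humair025/UrduMegaSpeech", split="train", streaming=True)

# Peek at a few samples without materializing the dataset
for sample in stream.take(3):
    print(sample["transcription"], sample["duration"])
```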

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("humair025/UrduMegaSpeech")

# Load a specific split
train_data = dataset['train']

# Access a sample
sample = train_data[0]
print(f"Transcription: {sample['transcription']}")
print(f"Duration: {sample['duration']} seconds")
print(f"SONAR Score: {sample['sonar_score']}")
```

### Filtering by Quality Scores

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Filter high-quality samples based on SONAR score
high_quality = dataset.filter(lambda x: x['sonar_score'] > 0.5)

print(f"Original samples: {len(dataset)}")
print(f"High-quality samples: {len(high_quality)}")
```

### Example: Fine-tuning Whisper for Urdu ASR

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset, Audio

# Load model and processor; setting language/task makes the tokenizer
# emit the right prompt tokens for Urdu transcription
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="urdu", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Load dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Filter by duration (e.g., 2-15 seconds)
dataset = dataset.filter(lambda x: 2.0 <= x['duration'] <= 15.0)

# Whisper expects 16 kHz input; decode audio at that rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

# Preprocess: log-mel input features plus tokenized Urdu labels
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
        return_tensors="pt",
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

# Process dataset
dataset = dataset.map(prepare_dataset, remove_columns=["audio"])
```
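
The preprocessing above stops short of training. Below is a minimal continuation following the widely used Whisper fine-tuning recipe: a collator that pads audio features and labels separately (masking label padding with -100 so it is ignored by the loss), plus a `Seq2SeqTrainer`. The hyperparameters and `output_dir` are illustrative placeholders, not tuned values.

```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

class DataCollatorSpeechSeq2SeqWithPadding:
    """Pad input features and labels separately; mask label padding with -100."""
    def __init__(self, processor):
        self.processor = processor

    def __call__(self, features):
        # Pad the log-mel features with the feature extractor
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
        # Pad the label token ids, then mask padding so the loss ignores it
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
        batch["labels"] = labels_batch["input_ids"].masked_fill(
            labels_batch["attention_mask"].ne(1), -100
        )
        return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ur",  # illustrative output path
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorSpeechSeq2SeqWithPadding(processor),
)
trainer.train()
```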

### Example: Speech Translation with Quality Filtering

```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Filter high-quality samples
filtered_dataset = dataset.filter(lambda x: x['sonar_score'] > 0.6)

# Use for speech translation training
for sample in filtered_dataset:
    urdu_audio = sample['audio']
    urdu_text = sample['transcription']
    english_text = sample['text']
    # Train your speech translation model
```
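
If your translation pipeline trains from flat files rather than `datasets` objects, one simple option is to dump the aligned text pairs to JSONL. A small sketch (the output filename is arbitrary):

```python
import json

# Write aligned (Urdu transcription, English text) pairs to a JSONL file
with open("ur_en_pairs.jsonl", "w", encoding="utf-8") as f:
    for sample in filtered_dataset:
        record = {"ur": sample["transcription"], "en": sample["text"]}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```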

## Dataset Statistics

- Total Audio Hours: not tallied precisely; ~1M samples at an average of ~8 seconds each works out to roughly 2,200 hours
- Average Duration: ~8 seconds per sample
- Vocabulary: broad, general-domain Urdu vocabulary
- Quality Scores: pre-computed quality metrics for easy filtering
- Speaker Diversity: multiple speakers with varied accents

## Quality Metrics Explained

- text_lid_score: language identification confidence for the text (stored as a string; cast with float() before comparing)
- laser_score: LASER alignment quality between source and target (also stored as a string)
- sonar_score: semantic similarity score (0-1+ range, higher is better; stored as a float)

These scores allow researchers to filter and select high-quality samples based on their specific requirements.
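
Absolute cutoffs such as 0.5 are only rules of thumb; another option is to derive thresholds from the score distribution itself. A sketch (the median and the LASER cutoff below are arbitrary starting points, not validated values):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("humair025/UrduMegaSpeech", split="train")

# Derive a data-driven cutoff from the sonar_score distribution
scores = np.asarray(dataset["sonar_score"], dtype=np.float64)
threshold = float(np.quantile(scores, 0.5))  # median; tune the quantile as needed
print(f"Median sonar_score: {threshold:.3f}")

# Keep the top half by SONAR; string-typed scores need a float() cast
top_half = dataset.filter(
    lambda x: x["sonar_score"] >= threshold and float(x["laser_score"]) > 1.0
)
```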

## Licensing & Attribution

This dataset is released under the CC-BY-4.0 license.

Source: This dataset is derived from publicly available multilingual speech data (AI4Bharat).

Citation: When using this dataset, please cite:

```bibtex
@dataset{urdumegaspeech2025,
  title        = {UrduMegaSpeech-1M: A Large-Scale Urdu Speech Corpus},
  author       = {Humair, Muhammad},
  year         = {2025},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/humair025/UrduMegaSpeech},
  note         = {Processed from multilingual speech collections}
}
```

## Ethical Considerations

- This dataset is intended for research and development purposes
- Users should ensure compliance with privacy regulations when deploying models trained on this data
- The dataset reflects natural speech patterns and may contain colloquialisms
- Care should be taken to avoid bias when using this data for production systems
- Quality scores should be used to filter samples for production applications

## Limitations

- Audio quality may vary across samples
- Speaker diversity may not represent all Urdu dialects equally
- Some samples may have lower alignment scores
- Domain-specific terminology may be underrepresented
- Dataset Viewer: the Hugging Face dataset viewer may not be available due to the size and format of this dataset; please download and process locally

## Technical Specifications

- Audio Encoding: various formats (converted to a standard format upon loading)
- Sampling Rates: multiple rates (resampling to 16 kHz recommended)
- Text Encoding: UTF-8
- File Format: Parquet
- Recommended Filtering: duration between 2 and 15 seconds and sonar_score > 0.5 (see Recommended Preprocessing below)

## Recommended Preprocessing

```python
# Recommended filtering for high-quality training data
filtered = dataset.filter(
    lambda x: 2.0 <= x['duration'] <= 15.0 and x['sonar_score'] > 0.5
)
```
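
To apply the recommended 16 kHz resampling, `cast_column` with the `Audio` feature decodes each file at the target rate on access, so no eager re-encoding pass is needed. A minimal sketch:

```python
from datasets import Audio

# Decode audio at 16 kHz whenever a sample is accessed
filtered = filtered.cast_column("audio", Audio(sampling_rate=16000))

sample = filtered[0]
assert sample["audio"]["sampling_rate"] == 16000
```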

## Acknowledgments

This dataset was compiled and processed to support Urdu language technology research and development. Data sourced from AI4Bharat multilingual collections.


- Dataset Curated By: Humair Munir
- Last Updated: December 2025
- Version: 1.0