---
license: cc-by-sa-4.0
language: en
task_categories:
  - image-text-to-text
tags:
  - image-classification
  - computer-vision
  - year-prediction
arxiv: 2512.21337
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: valid
        path: data/valid-*
      - split: wiki_dataset
        path: data/wiki_dataset-*
dataset_info:
  features:
    - name: Picture
      dtype: image
    - name: Year
      dtype: int64
    - name: Longitude
      dtype: string
    - name: Latitude
      dtype: string
    - name: Building
      dtype: string
    - name: Source
      dtype: string
    - name: Views
      dtype: int64
    - name: Country
      dtype: string
    - name: Description
      dtype: string
  splits:
    - name: train
      num_bytes: 635738767.016
      num_examples: 33337
    - name: test
      num_bytes: 218467609.984
      num_examples: 11087
    - name: valid
      num_bytes: 218291058.874
      num_examples: 11122
    - name: wiki_dataset
      num_bytes: 1094932232.506
      num_examples: 57934
  download_size: 2069974230
  dataset_size: 2167429668.38
---

# YearGuessr Dataset

Project Page | Paper | GitHub

This dataset, Morris0401/Year-Guessr-Dataset, is a large-scale collection of architectural images and associated metadata for global building age estimation, treating age as an ordinal variable. It provides a benchmark for building visual recognition, cross-regional generalization, and multi-modal reasoning.

## Motivation and Background

Building age is a critical indicator for sustainability, urban planning, cultural heritage audits, and post-disaster safety assessments. However, most existing public datasets suffer from significant limitations:

1. Narrow Geographical Coverage: Most datasets are confined to a single city or country, hindering models' understanding of global architectural diversity and their generalization capabilities.
2. Coarse Age Annotation: Continuous years are often discretized into a few broad categories (e.g., by decade or century), discarding the inherent ordinal relationship between years and preventing models from capturing subtle age-related cues.
3. Lack of Multi-modal Information: Most datasets rely solely on building facade images, rarely integrating textual descriptions or precise geographical locations, which limits the potential for multi-modal learning.
4. Evaluation Metrics Mismatched to the Task: Plain classification accuracy, unlike continuous error measures or ordinality-aware metrics, cannot adequately assess the precision of age estimation.
5. Accessibility and Licensing Issues: Some datasets cannot be released in full due to copyright restrictions or large file sizes, limiting research reproducibility.

To address these challenges, we introduce YearGuessr, an innovative dataset designed to fill these gaps.

## Key Features and Contributions of YearGuessr

YearGuessr is a dataset collected from Wikipedia, offering the following core features and contributions:

- Global Scale and Coverage: Comprising 55,546 geotagged building facade images in its train/test/valid splits, spanning roughly 1000 CE to 2024 CE and covering 157 countries, making it the largest and most geographically diverse dataset for building age estimation to date. The wiki_dataset split contains the larger raw collection of 57,934 Wikipedia entries before filtering.
- Ordinal Regression Targets for Age: YearGuessr is the first benchmark dataset to explicitly frame building age estimation as an ordinal regression task, letting models exploit the continuity and ordering of years rather than treating them as unrelated classes.
- Rich Multi-modal Information: In addition to high-resolution images, each entry includes the construction year, longitude/latitude, building name, country, and a comprehensive textual description from Wikipedia (e.g., architectural style, material, purpose, historical period), providing a rich source for multi-modal learning and explainable AI.
- High-Quality Data Collection: Data was scraped from Wikipedia and automatically filtered and de-duplicated using vision-language models (such as CLIP) and LLMs, to ensure accuracy and relevance for the train/test/valid splits.
- Enabling New Research Directions:
  - More precise building age prediction, especially via ordinal regression methods.
  - Research into cross-regional generalization and zero-shot learning.
  - Multi-modal feature fusion combining images, geographical coordinates, and textual descriptions.
  - Analysis of model biases on popular versus less-known buildings.
- High Accessibility and Open Licensing: The YearGuessr dataset and associated code are publicly released on Hugging Face under the CC-BY-SA 4.0 and MIT licenses, ensuring free use, sharing, and modification by the research community.
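
As a concrete illustration of the ordinal framing, one common approach (not necessarily the one used in the accompanying paper) is extended binary classification: a year becomes a vector of cumulative binary targets over year bins. A minimal sketch, using the 1000–2024 CE range from the dataset and an assumed 25-year bin size:

```python
def year_to_ordinal_targets(year, min_year=1000, max_year=2024, bin_size=25):
    """Encode a construction year as cumulative binary targets:
    target k is 1 iff the year falls after the upper edge of bin k."""
    edges = range(min_year + bin_size, max_year + 1, bin_size)
    return [1 if year > edge else 0 for edge in edges]

# 1500 CE clears the first 19 of the 40 quarter-century thresholds
targets = year_to_ordinal_targets(1500)
print(sum(targets), len(targets))  # 19 40
```

Decoding a prediction back to a year bin is then just counting the leading 1s, which preserves the ordering between years that a flat softmax over bins would discard.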

## Supported Tasks and Evaluation Metrics

YearGuessr is primarily designed to benchmark performance on the following tasks:

- Building Age Ordinal Regression: Predicting the precise construction year of a building from its image, optionally aided by geographical coordinates and textual descriptions.
- Zero-shot Reasoning: Evaluating the inference capabilities of vision-language models (VLMs) and large language models (LLMs) in estimating building age without age-annotated training.

Recommended evaluation metrics include:

- Mean Absolute Error (MAE): The average absolute difference between predicted and ground-truth years.
- Interval Accuracy: Prediction accuracy within predefined error intervals (e.g., ±5, ±20, ±50, or ±100 years).
- Popularity-based Accuracy Analysis: Differences in model performance across buildings of varying popularity (based on Wikipedia view counts), revealing potential biases or memorization effects.
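
The first two metrics are straightforward to compute from predicted and ground-truth years; a minimal pure-Python sketch (the function names are illustrative, not taken from an official evaluation script):

```python
def mae(pred_years, true_years):
    """Mean absolute error in years."""
    errors = [abs(p - t) for p, t in zip(pred_years, true_years)]
    return sum(errors) / len(errors)

def interval_accuracy(pred_years, true_years, tolerance):
    """Fraction of predictions within +/- tolerance years of ground truth."""
    hits = [abs(p - t) <= tolerance for p, t in zip(pred_years, true_years)]
    return sum(hits) / len(hits)

pred = [1905, 1987, 1850, 2001]  # invented predictions for illustration
true = [1900, 1990, 1900, 2000]
print(mae(pred, true))  # 14.75
for tol in (5, 20, 50, 100):
    print(f"within ±{tol} years: {interval_accuracy(pred, true, tol):.2f}")
```

Popularity-based analysis would then bucket these same per-building errors by the Views field before averaging.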

## Dataset Structure

To accommodate all data splits and maintain consistency, we employ a superset of features to unify all Parquet files. Missing values are filled with `None`.

### Splits

The dataset is divided into the following splits, each stored as a Parquet file with a unified feature set:

- `train`: Training data with architectural images and metadata.
- `test`: Testing data with architectural images and metadata.
- `valid`: Validation data with architectural images and metadata.
- `wiki_dataset`: Raw Wikipedia-sourced data, including image metadata and geographical information (larger than the curated train/test/valid splits).

### Unified Features

The complete feature set across all splits is:

- `Picture`: Relative path to the image (e.g., `images/part_1/1.jpg`).
- `Year`: Construction year (integer, if available). This is the primary ordinal regression target.
- `Longitude`: Geographical longitude (string, if available).
- `Latitude`: Geographical latitude (string, if available).
- `Country`: The country where the building is located (string, if available).
- `Building`: Name or description of the building (string, if available).
- `Source`: Source URL (string, if available).
- `Views`: Number of Wikipedia page views (integer, if available), useful for popularity analysis.
- `Description`: A general textual description of the building (string, if available).
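
Because coordinates are stored as strings and any field may be missing, downstream code should parse rows defensively. A minimal sketch (`parse_record` is a hypothetical helper, and the sample row is invented for illustration):

```python
def parse_record(row):
    """Normalize one YearGuessr row: Longitude/Latitude are strings,
    and any field may be None, so convert defensively."""
    def to_float(value):
        try:
            return float(value)
        except (TypeError, ValueError):
            return None
    return {
        "year": row.get("Year"),
        "lon": to_float(row.get("Longitude")),
        "lat": to_float(row.get("Latitude")),
        "country": row.get("Country"),
        "views": row.get("Views"),
    }

# hypothetical row; real rows also carry Picture, Building, Source, Description
rec = parse_record({"Year": 1889, "Longitude": "2.2945",
                    "Latitude": "48.8584", "Country": "France", "Views": 12345})
print(rec["lon"], rec["lat"])  # 2.2945 48.8584
```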

### Additional Files

- `images_metadata.parquet`: Metadata for all images in the `images/` directory, including `image_path`, `partition`, and `filename`.
- `csv/` directory: Contains the raw CSV files (`train.csv`, `test.csv`, `valid.csv`, `wiki_dataset.csv`) for reference.

### Image Directory

- `images/`: Contains `.jpg` image files organized into subdirectories (`part_1/`, `part_2/`, ..., `part_8/`).
  - Example path: `images/part_1/1.jpg`.

## Usage

### Loading the Dataset

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("Morris0401/Year-Guessr-Dataset")
print(dataset)
```
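
Since missing values are filled with `None`, a typical first step is filtering to rows that actually carry a `Year` label. The helper below is a hypothetical sketch that works on any iterable of row dicts, including a streamed split:

```python
from itertools import islice

def labeled_rows(rows, limit=None):
    """Yield rows carrying the ordinal regression target (a non-null Year)."""
    filtered = (row for row in rows if row.get("Year") is not None)
    return islice(filtered, limit) if limit is not None else filtered

# With the real dataset (assuming access to the Hub):
#   from datasets import load_dataset
#   train = load_dataset("Morris0401/Year-Guessr-Dataset",
#                        split="train", streaming=True)
#   for row in labeled_rows(train, limit=3):
#       print(row["Year"], row["Country"])
```

Passing `streaming=True` avoids downloading the full ~2 GB dataset up front when only a few rows are needed.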

## Citation

If you find this dataset helpful, please consider citing:

```bibtex
@misc{szutu2025memorizationmultimodalordinalregression,
      title={Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models},
      author={Li-Zhong Szu-Tu and Ting-Lin Wu and Chia-Jui Chang and He Syu and Yu-Lun Liu},
      year={2025},
      eprint={2512.21337},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.21337},
}
```