
---
license: other
license_name: cc-by-sa
license_link: LICENSE
task_categories:
- text-to-image
language:
- en
tags:
- Disability
- Inclusion
- social justice
pretty_name: Blinc videos
size_categories:
- n<1K
---

This dataset is shared under CC-BY-SA, a copyleft license. Under this license, users are permitted to copy, share, and distribute the dataset in any medium or format, provided clear attribution is given to the original creators. Acceptable uses include research and scientific development (AI and machine learning training or evaluation), accessibility improvements, making public spaces more accessible, inclusive representation efforts, educational use, publications, the development of socially beneficial products and services, and responsible use by non-profit organizations, charities, academic institutions, or disability-aligned for-profit entities.

This license does not permit remixing, editing, or transforming the raw dataset, including altering image content, metadata, or individual records. While users may build models using the dataset, they are not allowed to redistribute modified versions of the dataset itself. Remixing, adaptation, or creation of derivative works from this dataset is strictly prohibited without prior written consent. Where such consent is granted, the resulting material must be licensed under the same terms as the original.

The dataset and its derivatives may not be used for surveillance (including facial recognition or predictive policing), military or defence purposes, defamation, discrimination, exploitation, or adult content generation. Users may not claim exclusive ownership over any model or system trained solely on this dataset, and all use must include visible attribution. Unattributed or harmful use constitutes a direct violation of this license.

All use must align with the principles of disability justice, inclusive representation, child safeguarding, and non-extractive innovation. Users are expected to respect the dignity, privacy, values, and choices of every individual represented in the dataset, with special care taken to protect the rights, identities, and safety of children featured in the dataset.
Any use that exposes children to harm, misrepresentation, exploitation, or digital profiling will be considered a serious breach of this license. Similarly, any use that undermines, violates, or threatens the values stated above will result in automatic termination of this license. Kindly note that attribution is mandatory. Giving proper credit is a key part of responsible use and helps ensure transparency and respect for the work behind this dataset. Any publications, tools, or outputs resulting from the use of this dataset must include the following citation: Kilimanjaro Blind Trust Africa Data Set, licensed under CC-BY-SA, a copyleft license. www.kilimanjaroblindtrust.org

Blind and Low-vision Individuals' Networks and Communities (BLINC) Dataset Card

Dataset Description

Dataset Summary

This dataset was created as part of a research collaboration project that seeks to reduce the AI divide for marginalized communities by improving the representation of people with disabilities in text-to-image model outputs. Specifically, the dataset captures important representation themes and subthemes for the community of individuals who are visually impaired through 101 images and corresponding metadata. The images are real-world photographs that primarily show individuals or groups of visually impaired people. The dataset was developed to demonstrate to AI systems/models how blind and low-vision persons define “good representation” of their community within imagery. The dataset seeks to capture the diversity of the visually impaired community by representing a wide range of experiences and contexts. It includes individuals with different levels of vision impairment (blindness, low vision, and partial sight) and showcases the use of various assistive technologies such as white canes, braille devices, and screen readers. The images also reflect a broad spectrum of activities, including playing sports, learning in school, using digital tools, farming, working in offices, attending conferences, and participating in cultural events. Finally, the dataset represents multiple age groups (children, youth, adults, and older persons) and features diverse environments such as homes, schools, workplaces, farms, urban spaces, rural areas, and conferences. The dataset was collected in five counties in Kenya (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa) and curated between April and July 2025.

Supported Tasks

text-to-image: The dataset can be used to train, evaluate, or fine-tune text-to-image generation models.

Languages

The annotations within each image are in English. The associated BCP-47 code is en.

Dataset Structure

Data Instances

This is what a typical JSON-formatted example from the dataset looks like:

```json
{
  "theme_folder": {
    "name": "theme_name",
    "description": "theme_description",
    "sub_themes": {
      "sub_theme_folder": {
        "name": "sub_theme_name",
        "description": "sub_theme_description",
        "images": [
          {
            "path": "path-to-file",
            "description": "image_description",
            "prompt": "image_prompt",
            "annotations": [
              {
                "label": "label_for_box",
                "box": { "x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5 }
              }
            ]
          }
        ]
      }
    }
  }
}
```
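As a minimal sketch, the nested theme → sub-theme → image hierarchy above could be flattened in Python as follows. This is illustrative, not an official loader; it assumes each record follows the structure shown in the example, with the per-sub-theme list of image entries keyed `images`:

```python
import json

# A sample record following the schema in the example above (values are placeholders).
sample = json.loads("""
{
  "mobility": {
    "name": "theme_name",
    "description": "theme_description",
    "sub_themes": {
      "white_cane_travel": {
        "name": "sub_theme_name",
        "description": "sub_theme_description",
        "images": [
          {
            "path": "path-to-file",
            "description": "image_description",
            "prompt": "image_prompt",
            "annotations": [
              {"label": "label_for_box",
               "box": {"x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5}}
            ]
          }
        ]
      }
    }
  }
}
""")

def iter_images(metadata):
    """Yield (theme_name, sub_theme_name, image_record) triples
    by walking the nested theme/sub-theme hierarchy."""
    for theme in metadata.values():
        for sub in theme["sub_themes"].values():
            for image in sub["images"]:
                yield theme["name"], sub["name"], image

records = list(iter_images(sample))
for theme, sub, image in records:
    print(theme, sub, image["path"], len(image["annotations"]))
```

The same walk would apply to the full metadata file, whatever its filename, since the hierarchy is uniform across themes.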

Data Fields

The metadata includes: (1) a hierarchy of themes (with name and description) and subthemes (title, name, description) that reflect the desired representation aspirations of the community across the entirety of the 400-image dataset. For each image (path) in the dataset, the metadata further includes: (2) a rationale for why an image was selected as a good instance of a representation theme (description); (3) a prompt describing the image (prompt); and (4) 1-5 text labels (label) with bounding-box annotations given by x, y, w, h dimensions.
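The example box values (x=0.25, w=0.5, etc.) suggest the bounding-box dimensions are fractions of the image width and height. Under that assumption (which the card does not state explicitly), a box could be mapped to pixel coordinates like this:

```python
def box_to_pixels(box, width, height):
    """Convert a normalized {x, y, w, h} box (assumed to be fractions of the
    image dimensions, origin at the top-left) to integer pixel coordinates
    (left, top, right, bottom)."""
    left = round(box["x"] * width)
    top = round(box["y"] * height)
    right = round((box["x"] + box["w"]) * width)
    bottom = round((box["y"] + box["h"]) * height)
    return left, top, right, bottom

# Example: the box from the dataset card applied to a 640x480 image.
print(box_to_pixels({"x": 0.25, "y": 0.25, "w": 0.5, "h": 0.5}, 640, 480))
# -> (160, 120, 480, 360)
```

Normalized coordinates keep the annotations valid regardless of how the images are resized downstream.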

Dataset Creation

Curation Rationale

The dataset was curated to support adaptation and evaluation of text-to-image generative models. The dataset is structured around five core themes, each selected through participatory input from the community to reflect important activities, characteristics, and objects that should be represented in AI-generated imagery. Each core theme is further subdivided into sub-themes, providing a hierarchical taxonomy. Within each sub-theme, the dataset includes a curated set of images and corresponding annotations that exemplify the visual and semantic characteristics of the category. This structure enables targeted evaluation of model performance across diverse conceptual domains and supports research into theme-specific generation fidelity, compositional generalisation, and prompt grounding.

Source Data

Initial Data Collection

The dataset was developed through a participatory process that combined community workshops, participant-led invitations, and digital crowdsourcing. The project team, together with a photographer, engaged participants across Nairobi, Kisumu, Mombasa, Isiolo, and Garissa, while additional images were contributed remotely through WhatsApp by participants who wanted to take part but could not be reached in person.

The process began with workshops in each county, where visually impaired individuals, caregivers, and representatives of organizations of persons with disabilities came together to discuss what “good representation” of their community should look like in imagery. During these sessions, participants co-created the themes and subthemes that guided the dataset, including education, livelihoods, daily routines, sports, cultural life, and the use of assistive technologies. Many participants noted that they had never been photographed while engaged in certain key activities, which made the process both meaningful and empowering.

Following the workshops, participants invited the project team into their homes, schools, workplaces, farms, and community spaces to take photographs and to share already existing photographs. To broaden participation, some individuals who were unable to host the project team submitted images through WhatsApp, making it possible for the dataset to include perspectives from participants in diverse areas. All images were collected with informed consent, and metadata such as county, age group, activity, assistive technology, setting, and theme was recorded alongside each photograph.

At each stage, the project team explained the purpose of the dataset, how the images would be used, and where they would be stored. Consent was sought verbally and in writing, with adaptations made to ensure accessibility. Guardians provided consent on behalf of children, while adults gave consent directly. Participants were informed that taking part was voluntary, that they could choose which images would be shared, and that they could withdraw their images from the project at any time.

Once the images were gathered, the project team reviewed them to determine which would be included. Selection was based on image quality, alignment with workshop-defined themes, and the need to ensure diversity in age, gender, activity, and setting. Any images that were blurry, repetitive, or failed to reflect the values and dignity of visually impaired persons were excluded.

Who are the source data producers?

The dataset was produced entirely by human participants. The images were created under two main conditions:

Photographer-led capture: a photographer experienced in disability-inclusive work collaborated with the project team and with visually impaired participants across five counties (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa) to capture photographs. These images were taken in environments such as homes, schools, workplaces, farms, and community spaces and were based on activities chosen by the participants themselves.

Image donation: some participants who wished to be represented but could not be reached in person contributed photographs through WhatsApp. These submissions were voluntary and treated as data donations, expanding the reach of the dataset to include more diverse voices and settings.

In addition to newly collected material, the dataset also includes a small selection of images that had been gathered during previous project activities with visually impaired individuals. These images were reviewed, repurposed, and integrated into the dataset because they aligned with the themes identified in the workshops and provided valuable examples of representation.

The primary subjects of the images are persons who are blind or have low vision, along with some caregivers, peers, or community members involved in their daily lives. While the dataset records general attributes such as age group, activity, setting, and use of assistive technology, detailed demographic information about the individuals captured in the dataset is unknown; this also applies to those who contributed images digitally. Participation was voluntary, and individuals were informed of their right to withdraw at any point. While participants were not financially compensated for contributing images, those who attended the workshops received transport reimbursements to support their participation and ensure accessibility.

Annotations

Annotation process

To structure the image library, annotators grouped and categorised images into five themes (travel, education, sports, economic activities, and assistive devices), each with 3 to 5 sub-themes (e.g., team sport, competition, individual sporting achievements).

For the individual image-level annotations, annotators were instructed to complete the following tasks for each image: (1) answer the question "Why did you select this image as a good representation for your community?"; (2) edit an auto-generated prompt to ensure its accuracy with respect to the image and in describing what is important for the community, using their preferred language; and (3) provide a set of 1-5 bounding-box annotations that relate either to (i) objects that are special or specific to the community (e.g., white cane) or (ii) people and/or animals that are important for the community (e.g., young woman who has low vision; service dog wearing an orange vest).

Who are the annotators?

Annotations were made by a single person, who acted as the project lead in the dataset generation process. Demographic or identity information is not provided.

Personal and Sensitive Information

The dataset does not include any directly identifiable personal information. However, the metadata does contain sensitive information that reflects aspects of personal identity. Specifically, the metadata and images capture characteristics such as racial or ethnic origin, gender, and disability status. These attributes are essential to the purpose of the dataset, as it was designed to showcase accurate and diverse representations of persons who are blind or have low vision. In addition, the dataset includes images of children, which increases the sensitivity of the material.

Considerations for Using the Data

Social Impact of Dataset


This dataset was created to reduce the AI divide for marginalized communities by improving how persons who are blind or have low vision are represented in text-to-image models. By making this dataset publicly available, we hope to encourage the development of inclusive technologies that reflect the lived experiences, strengths, and abilities of persons with visual impairments and uphold their dignity. Such use could positively impact society by:

• Supporting the creation of AI-generated images that portray persons with disabilities more accurately and respectfully.
• Helping educators, researchers, and advocates access realistic and representative imagery for training, awareness, and advocacy purposes.
• Enabling technology developers to design more inclusive systems and tools that do not exclude persons with disabilities from visual representation.
• Contributing to broader social inclusion by challenging stereotypes and ensuring that persons with disabilities are visible in the data that powers AI systems.

At the same time, we recognize potential risks associated with the dataset’s use:

• If applied out of context or without sensitivity, the images could unintentionally reinforce harmful stereotypes about disability through misrepresentation or misuse.
• While no directly identifying personal information is included, the dataset contains sensitive attributes such as disability status, gender, and images of children.
• The dataset could be repurposed for applications that do not align with the project’s goals, such as exploitative advertising or entertainment.

The inclusion of children with visual impairments in the dataset also presents several potential risks. These include misrepresentation, where children may be portrayed in ways that reinforce stereotypes; exploitation, where their images could be used in profit-driven or sensationalized contexts outside the intended scope of inclusive innovation; and digital profiling, where sensitive characteristics such as disability status could be inappropriately inferred or tracked. There is also the risk of indirect identifiability, particularly in smaller community settings where children may be more easily recognized even without personal identifiers. Users of the dataset are strongly encouraged to approach it with respect for the community it represents and to prioritize applications that advance inclusion and dignity.

Discussion of Biases

As with any dataset focused on a specific community, there are inherent biases in both the scope and content of this collection. The dataset primarily represents persons with visual impairments and does not capture the wider spectrum of disabilities such as physical, hearing, or cognitive disabilities. This focus was intentional to address a significant gap in AI datasets, but it also means that the dataset is not representative of all disability communities. Geographically, the dataset was collected in five out of 47 counties in Kenya (Nairobi, Kisumu, Mombasa, Isiolo, and Garissa). As a result, it may reflect cultural, social, and environmental contexts specific to Kenya and may not fully represent the experiences of visually impaired persons in other counties, regions, or countries. There are also gaps in representation within the dataset itself. While images of children, adults, and older persons are included, some subgroups are less well captured. For example, children with multiple disabilities or additional access needs are underrepresented. Similarly, certain settings and activities, such as healthcare environments, private family life, or highly specialized workplaces, are less visible compared to more general contexts like schools, homes, or community spaces. To reduce these biases, steps were taken to ensure diversity in age, gender, activity, and assistive technology use within the visually impaired community. Workshops and community engagement guided which themes and subthemes were prioritized, and participants were encouraged to suggest and contribute images that filled important gaps. Nonetheless, users of the dataset should remain aware that not all experiences or subgroups are equally represented, and the dataset should be interpreted as a partial but meaningful contribution rather than a comprehensive resource.

Additional Information

Dataset Curators

This dataset was created by Nicholas Ileve Kalovwe, Daniel Onyango, Zeina Mahmoud, Benson Masero and Olvan Omond in collaboration with Microsoft Research.

Licensing Information

This dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA 4.0) license.

Contributions

We gratefully acknowledge the support of special schools for visually impaired learners, and the teachers, parents, and guardians who facilitated the participation of children, as well as the community members whose contributions shaped the dataset both in workshops and through image sharing. We also recognize the commitment of colleagues within Kilimanjaro Blind Trust Africa who supported participant mobilization and ensured that the dataset could be responsibly shared for inclusive innovation.
