---
license: mit
---

# CHRONOBERG: Capturing Language Evolution and Temporal Awareness in Foundation Models

[🤗 Dataset](https://huggingface.co/datasets/spaul25/ChronoBerg) | [🐙 GitHub](https://github.com/paulsubarna/Chronoberg) | [📖 arXiv](https://arxiv.org/abs/2509.22360)

We introduce CHRONOBERG, a temporally structured corpus of English book texts spanning 250 years, curated from Project Gutenberg and enriched with a variety of temporal annotations. We also introduce historically calibrated affective Valence-Arousal-Dominance (VAD) lexicons to support temporally grounded interpretation. Using these lexicons, we demonstrate that modern LLM-based tools need to better situate their detection of discriminatory language and their contextualization of sentiment across time periods. We further show that language models trained sequentially on CHRONOBERG struggle to encode diachronic shifts in meaning, emphasizing the need for temporally aware training and evaluation pipelines and positioning CHRONOBERG as a scalable resource for the study of linguistic change and temporal generalization.

**Disclaimer:** This repository and dataset include language and samples that readers may find offensive.

## Dataset

Dataset Catalog:

- [x] ChronoBerg [Raw](https://huggingface.co/datasets/spaul25/ChronoBerg/tree/main/dataset): raw literary text files grouped by publication year
- [x] ChronoBerg [Pre-processed](https://huggingface.co/datasets/spaul25/ChronoBerg/tree/main/dataset): sentence-split text grouped by publication year
- [x] ChronoBerg [Annotated](https://huggingface.co/datasets/spaul25/ChronoBerg/tree/main/dataset): sentence-level valence annotations for each 50-year time interval
- [x] [Valence Lexicons](https://huggingface.co/datasets/spaul25/Chronoberg/tree/main/Lexicons)
- [x] [Dominance Lexicons](https://huggingface.co/datasets/spaul25/Chronoberg/tree/main/Lexicons)
- [x] [Arousal Lexicons](https://huggingface.co/datasets/spaul25/Chronoberg/tree/main/Lexicons)

## Load Dataset

```python
from datasets import load_dataset

Chronoberg_raw = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_raw.jsonl")                    # Raw
Chronoberg_preprocessed = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_preprocessed.jsonl")  # Pre-processed
Chronoberg_annotated = load_dataset("spaul25/Chronoberg", data_files="dataset/Chronoberg_annotated.jsonl")        # Annotated
```

**Pretrained Checkpoints**: To construct VAD lexicons on your own, we also provide Word2Vec models pretrained on the entire dataset and on time-interval-specific slices (50-year intervals) of the dataset.

| Model-Type | 1750-99 | 1800-49 | 1850-99 | 1900-49 | 1950-99 |
| --- | :---: | :---: | :---: | :---: | :---: |
| word2vec | [word2vec_1750](https://huggingface.co/datasets/spaul25/ChronoBerg/blob/main/pretrained_models/word2vec_interval_1750.model) | [word2vec_1800](https://huggingface.co/datasets/spaul25/ChronoBerg/blob/main/pretrained_models/word2vec_interval_1800.model) | [word2vec_1850](https://huggingface.co/datasets/spaul25/ChronoBerg/blob/main/pretrained_models/word2vec_interval_1850.model) | [word2vec_1900](https://huggingface.co/datasets/spaul25/ChronoBerg/blob/main/pretrained_models/word2vec_interval_1900.model) | [word2vec_1950](https://huggingface.co/datasets/spaul25/ChronoBerg/blob/main/pretrained_models/word2vec_interval_1950.model) |
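As a quick sanity check on a downloaded checkpoint, the snippet below is a minimal sketch: it fetches one interval-specific Word2Vec model from the Hub and lists the nearest neighbours of a word in that interval's embedding space. It assumes each checkpoint was saved as a single gensim `.model` file (as listed in the table above); the query word is only an illustrative choice.

```python
# Minimal sketch (assumed file layout): download one interval checkpoint and
# inspect its nearest neighbours. Requires `huggingface_hub` and `gensim`.
from huggingface_hub import hf_hub_download
from gensim.models import Word2Vec

# Path taken from the checkpoint table above; assumes the model was saved as a
# single file with gensim's `model.save()`.
path = hf_hub_download(
    repo_id="spaul25/ChronoBerg",
    repo_type="dataset",
    filename="pretrained_models/word2vec_interval_1850.model",
)

model = Word2Vec.load(path)

# Illustrative query: nearest neighbours of a word in the 1850-99 embedding space.
print(model.wv.most_similar("awful", topn=10))
```

Repeating the same query against checkpoints from different intervals gives a rough view of how a word's neighbourhood, and hence its usage, drifts over time.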
**Recommended Dataset Splits**: We also provide the training and test sets used to reproduce the LLM experiments in our [paper](https://arxiv.org/abs/2509.22360). Additional ways to produce train and test splits can be found in our [GitHub repository](https://github.com/paulsubarna/Chronoberg).

**Main Results**

Here are a few of the main results from our paper. A comparison of all continual learning strategies used to train an LLM sequentially on ChronoBerg is given below:

| Method | Perplexity | Forward Gen. | Best Case | Worst Case |
| --- | :---: | :---: | :---: | :---: |
| Sequential FT | 34% ↑ | 33% ↑ | 4.58 (1750-99) | 6.64 (1950-2000) |
| EWC | 12% ↑ | 29% ↑ | 4.65 (1800-49) | 6.77 (1950-2000) |
| LoRA | 15% ↑ | 27% ↑ | 4.48 (1850-99) | 6.19 (1950-2000) |

#### Lexical Analysis

We used our lexicons to analyze words whose valence has shifted from positive to negative, or from negative to positive, over time. A few instances of such words are shown below; a minimal lexicon-lookup sketch is given at the end of this page.

![Lexical](figures/Lexical_Analysis.png)

**How to cite us**

```bibtex
@misc{hegde2025chronobergcapturinglanguageevolution,
      title={CHRONOBERG: Capturing Language Evolution and Temporal Awareness in Foundation Models},
      author={Niharika Hegde and Subarnaduti Paul and Lars Joel-Frey and Manuel Brack and Kristian Kersting and Martin Mundt and Patrick Schramowski},
      year={2025},
      eprint={2509.22360},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.22360},
}
```
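The snippet below is a rough sketch of the kind of lookup behind the lexical analysis above: it compares a word's valence entry across two interval lexicons. The file names and the `word`/`valence` CSV layout are assumptions made purely for illustration; consult the [Lexicons folder](https://huggingface.co/datasets/spaul25/Chronoberg/tree/main/Lexicons) for the actual file names and format.

```python
# Hypothetical sketch: compare a word's valence across two 50-year interval lexicons.
# File names and the `word`/`valence` column layout are assumed, not guaranteed.
import pandas as pd
from huggingface_hub import hf_hub_download


def load_lexicon(filename: str) -> pd.DataFrame:
    """Download one interval lexicon from the dataset repo and index it by word."""
    path = hf_hub_download(
        repo_id="spaul25/Chronoberg",
        repo_type="dataset",
        filename=filename,  # e.g. "Lexicons/valence_lexicon_1800.csv" (assumed name)
    )
    return pd.read_csv(path).set_index("word")


early = load_lexicon("Lexicons/valence_lexicon_1800.csv")  # assumed file name
late = load_lexicon("Lexicons/valence_lexicon_1950.csv")   # assumed file name

word = "awful"  # illustrative query
print(f"{word}: 1800-49 valence = {early.loc[word, 'valence']:.2f}, "
      f"1950-99 valence = {late.loc[word, 'valence']:.2f}")
```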