---
license: cc-by-4.0
task_categories:
- text-classification
- feature-extraction
language:
- en
tags:
- computational-literary-studies
- narrative-shape
- semantic-novelty
- time-series
- sentence-transformers
- digital-humanities
- PG19
size_categories:
- 10K<n<100K
---

# PG19 Semantic Novelty

Paragraph-level semantic novelty curves and derived narrative-shape metrics for 28,535 books from the PG19 corpus.

## Derived Metrics

| Field | Description |
|-------|-------------|
| … | … (>1 = divergent, <1 = convergent) |
| `speed`, `volume`, `circuitousness` | Toubia et al. (2021) narrative shape metrics |
| `mean_compression_progress` | Schmidhuber (2009) compression progress proxy |

## Dataset Statistics

- **Books**: 28,535
- **Publication years**: 1531–2014
- **Mean paragraphs per book**: 1,128
- **Mean semantic novelty**: 0.4590
- **Books with novelty curves**: 28,535
- **Books with derived metrics**: 28,433

### 3-Type Distribution

| Type | Count | Description |
|------|-------|-------------|
| green (convergent) | 2,573 | Novelty decreases – ideas consolidate |
| blue (parallel) | 12,338 | Novelty stays steady |
| red (divergent) | 13,624 | Novelty increases – ideas expand |

### 8-Cluster Taxonomy

| Cluster | Count | % |
|---------|-------|---|
| Flat | 7,442 | 26.1% |
| Late Plateau | 6,540 | 22.9% |
| Early Plateau | 4,486 | 15.7% |
| U-Shape | 2,791 | 9.8% |
| Gradual Ascent | 2,638 | 9.2% |
| Steep Ascent | 2,637 | 9.2% |
| Steep Descent | 1,677 | 5.9% |
| Gradual Descent | 222 | 0.8% |

## Usage

```python
from datasets import load_dataset
import numpy as np

ds = load_dataset("wfzimmerman/pg19-semantic-novelty", split="train")

# Get a book's novelty curve
book = ds[0]
print(f"{book['title']} by {book['authors']}")
print(f"Cluster: {book['cluster_name']} ({book['curve_type_3']})")
print(f"Paragraphs: {book['paragraph_count']}, Mean novelty: {book['mean_novelty']:.4f}")

# Plot a novelty curve
import matplotlib.pyplot as plt

curve = book["novelty_curve"]
plt.plot(curve)
plt.xlabel("Paragraph")
plt.ylabel("Semantic Novelty")
plt.title(book["title"])
plt.show()

# Filter by cluster
steep_descent = ds.filter(lambda x: x["cluster_name"] == "Steep Descent")
print(f"Steep Descent books: {len(steep_descent)}")

# Use PAA for fixed-length comparison
paa_matrix = np.array([x["paa_16"] for x in ds if x["paa_16"] is not None])
print(f"PAA matrix shape: {paa_matrix.shape}")  # (N, 16)
```

## Methodology

1. **Corpus**: PG19 (Rae et al., 2020) – 28,535 English-language books from Project Gutenberg published before 1920
2. **Embeddings**: Sentence-BERT `all-mpnet-base-v2` (Reimers & Gurevych, 2019), 768 dimensions
3. **Novelty computation**: `1 - cosine_similarity(paragraph_embedding, running_centroid)`, where the centroid accumulates all prior paragraphs
4. **Clustering**: Ward-linkage hierarchical clustering on 16-segment PAA vectors, k=8
5. **Toubia metrics**: speed, volume, and circuitousness following Toubia et al. (2021)
6. **SAX encoding**: Symbolic Aggregate Approximation (Lin et al., 2003), 16 segments, 5-letter alphabet

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{zimmerman2026pg19novelty,
  title={PG19 Semantic Novelty: Paragraph-Level Information Curves for 28,000+ Books},
  author={Zimmerman, W. Frederick},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/wfzimmerman/pg19-semantic-novelty}
}
```

## Related Work

- Toubia, O., et al. (2021). How quantifying the shape of stories predicts their success. *PNAS*.
- Reagan, A. J., et al. (2016). The emotional arcs of stories. *EPJ Data Science*.
- Schmidhuber, J. (2009). Simple algorithmic theory of subjective beauty. *arXiv:0812.4360*.
- Reimers, N. & Gurevych, I. (2019). Sentence-BERT. *EMNLP*.
- Rae, J. W., et al. (2020). Compressive Transformers for Long-Range Sequence Modelling. *ICLR*.

## License

CC-BY-4.0. The underlying texts are public domain (Project Gutenberg). The novelty analysis, clustering, and derived metrics are original contributions.