---
license: cc-by-4.0
task_categories:
  - text-classification
  - feature-extraction
language:
  - en
tags:
  - computational-literary-studies
  - narrative-shape
  - semantic-novelty
  - time-series
  - sentence-transformers
  - digital-humanities
  - PG19
size_categories:
  - 10K<n<100K
pretty_name: PG19 Semantic Novelty
dataset_info:
  features:
    - name: gutenberg_id
      dtype: int64
    - name: title
      dtype: string
    - name: authors
      sequence: string
    - name: pub_year
      dtype: int64
    - name: subjects
      sequence: string
    - name: bookshelves
      sequence: string
    - name: download_count
      dtype: int64
    - name: primary_genre
      dtype: string
    - name: paragraph_count
      dtype: int64
    - name: mean_novelty
      dtype: float64
    - name: std_novelty
      dtype: float64
    - name: ti_ratio
      dtype: float64
    - name: trend_slope
      dtype: float64
    - name: mean_compression_progress
      dtype: float64
    - name: curve_type_3
      dtype: string
    - name: cluster_8
      dtype: int64
    - name: cluster_name
      dtype: string
    - name: speed
      dtype: float64
    - name: volume
      dtype: float64
    - name: circuitousness
      dtype: float64
    - name: reversal_count
      dtype: int64
    - name: sax_16_5
      dtype: string
    - name: novelty_curve
      sequence: float64
    - name: paa_16
      sequence: float64
  splits:
    - name: train
      num_examples: 28535
---

# PG19 Semantic Novelty Dataset

Paragraph-by-paragraph semantic novelty curves for 28,535 books from the PG19 corpus (Project Gutenberg, pre-1920 English literature).

## What is Semantic Novelty?

For each paragraph in a book, we compute:

`novelty(p) = 1 - cosine_similarity(embedding(p), running_centroid)`

where `embedding()` uses SBERT `all-mpnet-base-v2` (768-dimensional) and `running_centroid` is the mean of all preceding paragraph embeddings. This measures how much new information each paragraph introduces relative to everything before it.

The resulting novelty curve captures the information-delivery shape of a narrative — whether a book front-loads its ideas (convergent/green), sustains steady novelty (parallel/blue), or builds to increasingly novel content (divergent/red).
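
As an illustration, here is a minimal sketch of the computation, assuming the book has already been split into a list of paragraph strings (the exact segmentation used to build the dataset may differ):

```python
# Minimal sketch of the novelty computation; `paragraphs` is assumed to be a
# list of paragraph strings. Not the exact code used to build the dataset.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # 768-dim SBERT embeddings

def novelty_curve(paragraphs):
    emb = model.encode(paragraphs)                      # shape (P, 768)
    novelties = []
    for i in range(1, len(emb)):
        centroid = emb[:i].mean(axis=0)                 # running centroid of all prior paragraphs
        cos = emb[i] @ centroid / (np.linalg.norm(emb[i]) * np.linalg.norm(centroid))
        novelties.append(1.0 - cos)                     # novelty of paragraph i
    return np.array(novelties)                          # curve starts at the 2nd paragraph
```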

## Key Features

| Feature | Description |
|---|---|
| `novelty_curve` | Full paragraph-level novelty series (variable length, typically 100–5,000 values) |
| `paa_16` | 16-segment Piecewise Aggregate Approximation (fixed-length summary) |
| `sax_16_5` | SAX string representation (16 characters, 5-letter alphabet) |
| `cluster_8` | 8-cluster Ward-linkage taxonomy (1–8) |
| `cluster_name` | Human-readable archetype label |
| `curve_type_3` | Legacy 3-type classification (green = convergent, blue = parallel, red = divergent) |
| `ti_ratio` | Tail/initial novelty ratio (> 1 = divergent, < 1 = convergent) |
| `speed`, `volume`, `circuitousness` | Toubia et al. (2021) narrative-shape metrics |
| `mean_compression_progress` | Schmidhuber (2009) compression-progress proxy |
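
For reference, `paa_16` corresponds to a standard Piecewise Aggregate Approximation of the novelty curve; the helper below is a hypothetical sketch, not the exact code used to build the dataset:

```python
# Illustrative 16-segment PAA: split a variable-length curve into 16 roughly
# equal-width segments and average each one (hypothetical helper).
import numpy as np

def paa(curve, segments=16):
    curve = np.asarray(curve, dtype=float)
    # np.array_split tolerates lengths that are not exact multiples of `segments`
    return np.array([chunk.mean() for chunk in np.array_split(curve, segments)])
```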

## Dataset Statistics

- Books: 28,535
- Publication years: 1531–2014
- Mean paragraphs per book: 1,128
- Mean semantic novelty: 0.4590
- Books with novelty curves: 28,535
- Books with derived metrics: 28,433

### 3-Type Distribution

| Type | Count | Description |
|---|---|---|
| green (convergent) | 2,573 | Novelty decreases; ideas consolidate |
| blue (parallel) | 12,338 | Novelty stays steady |
| red (divergent) | 13,624 | Novelty increases; ideas expand |

### 8-Cluster Taxonomy

| Cluster | Count | % |
|---|---|---|
| Flat | 7,442 | 26.1% |
| Late Plateau | 6,540 | 22.9% |
| Early Plateau | 4,486 | 15.7% |
| U-Shape | 2,791 | 9.8% |
| Gradual Ascent | 2,638 | 9.2% |
| Steep Ascent | 2,637 | 9.2% |
| Steep Descent | 1,677 | 5.9% |
| Gradual Descent | 222 | 0.8% |
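
Both distributions above can be recomputed directly from the dataset fields, for example:

```python
# Recompute the curve-type and cluster distributions from the dataset fields.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("wfzimmerman/pg19-semantic-novelty", split="train")
print(Counter(ds["curve_type_3"]).most_common())
print(Counter(ds["cluster_name"]).most_common())
```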

## Usage

```python
from datasets import load_dataset
import numpy as np

ds = load_dataset("wfzimmerman/pg19-semantic-novelty", split="train")

# Get a book's novelty curve
book = ds[0]
print(f"{book['title']} by {book['authors']}")
print(f"Cluster: {book['cluster_name']} ({book['curve_type_3']})")
print(f"Paragraphs: {book['paragraph_count']}, Mean novelty: {book['mean_novelty']:.4f}")

# Plot a novelty curve
import matplotlib.pyplot as plt
curve = book["novelty_curve"]
plt.plot(curve)
plt.xlabel("Paragraph")
plt.ylabel("Semantic Novelty")
plt.title(book["title"])
plt.show()

# Filter by cluster
steep_descent = ds.filter(lambda x: x["cluster_name"] == "Steep Descent")
print(f"Steep Descent books: {len(steep_descent)}")

# Use PAA for fixed-length comparison
paa_matrix = np.array([x["paa_16"] for x in ds if x["paa_16"] is not None])
print(f"PAA matrix shape: {paa_matrix.shape}")  # (N, 16)
```

## Methodology

  1. Corpus: PG19 (Rae et al., 2020) - 28,535 English-language books from Project Gutenberg published before 1920
  2. Embeddings: Sentence-BERT all-mpnet-base-v2 (Reimers & Gurevych, 2019), 768 dimensions
  3. Novelty computation: 1 - cosine_similarity(paragraph_embedding, running_centroid) where centroid accumulates all prior paragraphs
  4. Clustering: Ward-linkage hierarchical clustering on 16-segment PAA vectors, k=8 (see the sketch after this list)
  5. Toubia metrics: Speed, volume, circuitousness following Toubia et al. (2021)
  6. SAX encoding: Lin et al. (2003) Symbolic Aggregate Approximation, 16 segments, 5-letter alphabet
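
A rough sketch of steps 4 and 6, assuming a PAA matrix built as in the Usage section; parameters mirror the description above (Ward linkage with k=8, a 5-letter SAX alphabet), but this is a simplified illustration rather than the exact pipeline code:

```python
# Sketch of steps 4 and 6, assuming `paa_matrix` of shape (N, 16) built as in
# the Usage section. Simplified illustration, not the exact pipeline code.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import norm

def ward_clusters(paa_matrix, k=8):
    Z = linkage(paa_matrix, method="ward")          # hierarchical clustering on PAA vectors
    return fcluster(Z, t=k, criterion="maxclust")   # cluster labels 1..k

def sax(paa_vector, alphabet="abcde"):
    # z-normalize the PAA vector, then map each segment to a letter using
    # Gaussian-quantile breakpoints (standard SAX construction).
    z = (paa_vector - paa_vector.mean()) / (paa_vector.std() + 1e-12)
    breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[int(np.searchsorted(breakpoints, v))] for v in z)
```

Applied to a row of `paa_matrix`, `sax(...)` yields a 16-character string comparable to `sax_16_5`.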

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{zimmerman2026pg19novelty,
  title={PG19 Semantic Novelty: Paragraph-Level Information Curves for 28,000+ Books},
  author={Zimmerman, W. Frederick},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/wfzimmerman/pg19-semantic-novelty}
}
```

## Related Work

- Toubia, O., et al. (2021). How quantifying the shape of stories predicts their success. PNAS.
- Reagan, A. J., et al. (2016). The emotional arcs of stories are dominated by six basic shapes. EPJ Data Science.
- Lin, J., et al. (2003). A symbolic representation of time series, with implications for streaming algorithms. ACM SIGMOD DMKD Workshop.
- Schmidhuber, J. (2009). Simple algorithmic theory of subjective beauty. arXiv:0812.4360.
- Reimers, N. & Gurevych, I. (2019). Sentence-BERT: Sentence embeddings using Siamese BERT-networks. EMNLP.
- Rae, J. W., et al. (2020). Compressive Transformers for Long-Range Sequence Modelling. ICLR.

## License

CC-BY-4.0. The underlying texts are public domain (Project Gutenberg). The novelty analysis, clustering, and derived metrics are original contributions.