Entity References Database

A comprehensive entity database for organizations, people, roles, and locations with embedding-based semantic search. Built from authoritative sources (GLEIF, SEC, Companies House, Wikidata) for entity linking and named entity disambiguation.

Dataset Summary

This dataset provides fast lookup and qualification of named entities using vector similarity search. It stores records from authoritative global sources with embeddings generated by google/embeddinggemma-300m (768 dimensions), stored in USearch HNSW indexes for sub-millisecond approximate nearest neighbor search.

Key Features:

  • 9.7M+ organization records from GLEIF, SEC Edgar, Companies House, and Wikidata
  • 63M+ people records including executives, politicians, athletes, artists, and more
  • 139K+ roles and 25K+ locations with hierarchical relationships
  • USearch HNSW indexes for sub-millisecond vector search (embeddings never stored in SQLite)
  • Canonical linking across sources (same entity from multiple sources linked)
  • Three-tier people search achieving 100% acc@1 on 280 queries across 12 person types

Supported Tasks

  • Entity Linking: Match extracted entity mentions to canonical database records
  • Named Entity Disambiguation: Distinguish between entities with similar names
  • Knowledge Base Population: Enrich extracted entities with identifiers and metadata

Languages

English (en)

Dataset Structure

Schema (v5)

The database uses SQLite with normalized foreign-key (FK) references (INTEGER FKs replace TEXT enums). All embeddings exist only in USearch HNSW indexes, never in SQLite.

Organizations Table

Column Type Description
id INTEGER Primary key
qid INTEGER Wikidata QID as integer (e.g., 2283 for Q2283)
name TEXT Organization name
name_normalized TEXT Normalized name (via corp-names)
source_id INTEGER FK Reference to source_types
source_identifier TEXT LEI, CIK, Company Number, etc.
region_id INTEGER FK Reference to locations
entity_type_id INTEGER FK Reference to organization_types
from_date TEXT Founding/registration date (ISO format)
to_date TEXT Dissolution date (ISO format)
canon_id INTEGER ID of canonical record
canon_size INTEGER Size of canonical group
record JSON Full source record (stripped in lite)

People Table

Column Type Description
id INTEGER Primary key
qid INTEGER Wikidata QID as integer
name TEXT Display name
name_normalized TEXT Normalized name (via corp-names)
source_id INTEGER FK Reference to source_types
source_identifier TEXT QID, Owner CIK, Person number
country_id INTEGER FK Reference to locations
person_type_id INTEGER FK Reference to people_types
known_for_role_id INTEGER FK Reference to roles
known_for_org TEXT Organization name
known_for_org_id INTEGER FK Reference to organizations
from_date TEXT Role start date (ISO format)
to_date TEXT Role end date (ISO format)
birth_date TEXT Date of birth (ISO format)
death_date TEXT Date of death (ISO format)
canon_id INTEGER ID of canonical record
canon_size INTEGER Size of canonical group
record JSON Full source record (stripped in lite)

Roles Table

Column Type Description
id INTEGER Primary key
qid INTEGER Wikidata QID (e.g., 484876 for CEO Q484876)
name TEXT Role name (e.g., "Chief Executive Officer")
name_normalized TEXT Normalized name
source_id INTEGER FK Reference to source_types
canon_id INTEGER ID of canonical role

Locations Table

Column Type Description
id INTEGER Primary key
qid INTEGER Wikidata QID (e.g., 30 for USA Q30)
name TEXT Location name
name_normalized TEXT Normalized name
source_id INTEGER FK Reference to source_types
source_identifier TEXT ISO code (e.g., "US", "CA")
parent_ids TEXT JSON Parent location IDs in hierarchy
location_type_id INTEGER FK Reference to location_types

USearch HNSW Indexes

All embeddings are generated on-the-fly during index building and stored only in versioned USearch HNSW index files:

File Dimensions Description
organizations_usearch_v5.bin 768 Organization embeddings
people_usearch_v5.bin 768 Composite person embeddings (name|role|org as 3×256-dim segments)
people_identity_usearch_v5.bin 256 Name-only person embeddings (Matryoshka-truncated)

The composite people index only includes people with org associations. Name, role, and org are embedded separately, independently L2-normalized, weighted (name=8, role=1, org=4), and concatenated into a 768-dim vector for AND-style matching.
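The weighting-and-concatenation scheme above can be sketched in plain Python. The helper names and toy vectors here are illustrative only; the real 256-dim segments come from embeddinggemma-300m.

```python
import math

def l2_normalize(vec):
    """L2-normalize a vector, leaving zero vectors unchanged."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def compose_person_vector(name_vec, role_vec, org_vec,
                          weights=(8.0, 1.0, 4.0)):
    """Concatenate three independently normalized, weighted 256-dim
    segments into one 768-dim vector (name=8, role=1, org=4)."""
    out = []
    for vec, w in zip((name_vec, role_vec, org_vec), weights):
        out.extend(w * x for x in l2_normalize(vec))
    return out

# Toy 256-dim segments standing in for real embeddings
name = [1.0] * 256
role = [1.0] * 256
org = [1.0] * 256
composite = compose_person_vector(name, role, org)
```

Because each segment is normalized before weighting, the name segment dominates cosine similarity, which is what makes the index behave like an AND over name, role, and org.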

Metadata Table

Table Columns
db_info key TEXT PRIMARY KEY, value TEXT NOT NULL

Contains schema_version = 5 for version detection.
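Version detection is a single query against db_info. This sketch builds an in-memory stand-in for the table rather than opening the real database file:

```python
import sqlite3

# In-memory stand-in for the db_info metadata table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE db_info (key TEXT PRIMARY KEY, value TEXT NOT NULL)")
conn.execute("INSERT INTO db_info (key, value) VALUES ('schema_version', '5')")

# Read schema_version the way a client might for version detection
row = conn.execute(
    "SELECT value FROM db_info WHERE key = 'schema_version'"
).fetchone()
schema_version = int(row[0]) if row else None
```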

Enum Lookup Tables

Table Values
source_types gleif, sec_edgar, companies_house, wikidata, pycountry
people_types executive, politician, government, military, legal, professional, academic, artist, media, athlete, journalist, activist, unknown
organization_types business, fund, branch, nonprofit, ngo, foundation, government, international_org, political_party, trade_union, educational, research, healthcare, media, sports, religious, unknown
simplified_location_types continent, country, subdivision, city, district, historic, other
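Resolving an INTEGER FK against one of these lookup tables is a plain join. The rows below are invented for illustration, but the column layout follows the schema above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_types (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE organizations (
    id INTEGER PRIMARY KEY,
    name TEXT,
    source_id INTEGER REFERENCES source_types(id));
INSERT INTO source_types (id, name) VALUES (1, 'gleif'), (2, 'sec_edgar');
INSERT INTO organizations (id, name, source_id)
    VALUES (1, 'Example Corp', 2);
""")

# Resolve the source_id FK to its human-readable enum value
row = conn.execute("""
    SELECT o.name, s.name
    FROM organizations o
    JOIN source_types s ON s.id = o.source_id
""").fetchone()
```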

Data Splits

Config Contents
entities-v5-lite.db Core fields, record content stripped, name_normalized kept
entities-v5.db Full records with source metadata
organizations_usearch_v5.bin USearch HNSW index for organization search
people_usearch_v5.bin Composite USearch HNSW index for people search
people_identity_usearch_v5.bin Identity USearch HNSW index for people name-only search

The lite database plus the USearch indexes is recommended for most use cases.

Dataset Creation

Source Data

Organizations

Source Records Identifier Coverage
Companies House 5.5M Company Number UK registered companies
GLEIF 2.6M LEI (Legal Entity Identifier) Global companies with LEI
Wikidata 1.5M QID Notable companies worldwide
SEC Edgar 73K CIK (Central Index Key) All SEC filers

People

Source Records Identifier Coverage
Companies House 27.5M Person number UK company officers
Wikidata 36M QID Notable people with English Wikipedia articles


Embedding Model

Property Value
Model google/embeddinggemma-300m
Dimensions 768 (256 with Matryoshka truncation)
Framework sentence-transformers
Size ~300M parameters
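Matryoshka truncation for the 256-dim identity index can be sketched as keeping the leading dimensions and re-normalizing. The re-normalization step is our assumption from standard Matryoshka practice; the toy input vector is illustrative:

```python
import math

def truncate_matryoshka(embedding, dims=256):
    """Keep the leading dims of a Matryoshka-style embedding and
    re-normalize so cosine similarity stays meaningful at the
    shorter length."""
    head = embedding[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head] if norm > 0 else head

# Stand-in 768-dim vector (the real one comes from embeddinggemma-300m)
full = [float(i % 7 + 1) for i in range(768)]
identity_vec = truncate_matryoshka(full)
```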

Canonicalization

Records are linked across sources based on:

Organizations:

  1. Same LEI (globally unique)
  2. Same ticker symbol
  3. Same CIK
  4. Same normalized name + region

People:

  1. Same Wikidata QID
  2. Same normalized name + same organization
  3. Same normalized name + overlapping date ranges

Source priority: wikidata > sec_edgar > companies_house
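The linking rules above amount to union-find over records that share any strong key. This is a toy sketch with invented record fields, not the actual canonicalize implementation:

```python
from collections import defaultdict

def canonicalize(records):
    """Group records that share any strong key (here: lei, ticker, cik)
    into canonical groups via union-find."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Bucket record indexes by each (key, value) pair they carry
    by_key = defaultdict(list)
    for idx, rec in enumerate(records):
        for key in ("lei", "ticker", "cik"):
            if rec.get(key):
                by_key[(key, rec[key])].append(idx)

    # Merge every bucket into one group
    for idxs in by_key.values():
        for other in idxs[1:]:
            union(idxs[0], other)

    groups = defaultdict(list)
    for idx in range(len(records)):
        groups[find(idx)].append(idx)
    return list(groups.values())

records = [
    {"name": "Acme Inc", "lei": "LEI123"},
    {"name": "ACME INC", "lei": "LEI123", "ticker": "ACME"},
    {"name": "Acme", "ticker": "ACME", "cik": "0001"},
    {"name": "Other Co", "cik": "0002"},
]
groups = canonicalize(records)
```

Note that linking is transitive: the first three records merge into one group even though records 0 and 2 share no key directly.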

Usage

Installation

pip install corp-entity-db

Download

# Download lite version + USearch indexes (recommended)
corp-entity-db download

# Download full version + USearch indexes
corp-entity-db download --full

Storage location: ~/.cache/corp-extractor/entities-v5.db (or entities-v5-lite.db for lite)

Search

# Search organizations
corp-entity-db search "Microsoft"

# Search people (composite embedding + name fallback + identity fallback)
corp-entity-db search-people "Tim Cook" --role CEO --org Apple

# Search roles
corp-entity-db search-roles "CEO"

# Search locations
corp-entity-db search-locations "California"

# Check database status
corp-entity-db status

Python API

from corp_entity_db import OrganizationDatabase, CompanyEmbedder, get_database_path

# Search organizations
db = OrganizationDatabase(get_database_path())
embedder = CompanyEmbedder()
matches = db.search(embedder.embed("Microsoft"), top_k=10)
for record, score in matches:
    print(f"{record.name} ({record.entity_type}) - score: {score:.3f}")

# Search people (composite embeddings + name fallback + identity fallback)
from corp_entity_db import PersonDatabase, get_person_database
person_db = get_person_database()
query_emb = embedder.embed_composite_person("Tim Cook", role="CEO", org="Apple")
matches = person_db.search(
    query_emb, top_k=5, query_name="Tim Cook",
    embedder=embedder, query_role="CEO", query_org="Apple",
)

Technical Details

Vector Search Performance

Using USearch HNSW indexes with int8 quantization:

Database Size Search Time Memory
100K records <1ms ~200MB
1M records <5ms ~1GB
10M records <10ms ~8GB
50M records <20ms ~40GB

Similarity Thresholds

Score Interpretation
> 0.85 Strong match (likely same entity)
0.70 - 0.85 Good match (probable same entity)
0.55 - 0.70 Moderate match (may need verification)
< 0.55 Weak match (likely different entity)
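The bands above map naturally onto a small helper; the band labels here are shorthand for the interpretations in the table:

```python
def interpret_score(score):
    """Map a cosine similarity score to the qualitative bands above."""
    if score > 0.85:
        return "strong"    # likely same entity
    if score >= 0.70:
        return "good"      # probable same entity
    if score >= 0.55:
        return "moderate"  # may need verification
    return "weak"          # likely different entity
```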

Building from Source

# Import data sources
corp-entity-db import-gleif --download
corp-entity-db import-sec --download
corp-entity-db import-companies-house
corp-entity-db import-people --all
corp-entity-db import-wikidata-dump --download --limit 50000

# Post-import: build USearch indexes, VACUUM
corp-entity-db post-import

# Link equivalent records
corp-entity-db canonicalize

# Create lite version for deployment
corp-entity-db create-lite ~/.cache/corp-extractor/entities-v5.db

Wikidata Dump Import (Recommended for Large Imports)

# Download and import from Wikidata dump (~100GB)
corp-entity-db import-wikidata-dump --download --limit 50000

# Import only people
corp-entity-db import-wikidata-dump --download --people --no-orgs

# Import only locations
corp-entity-db import-wikidata-dump --dump dump.json.bz2 --locations --no-people --no-orgs

Build extras (optional):

pip install "corp-entity-db[build]"  # Adds orjson + indexed_bzip2

The dump import uses a 3-thread parallel pipeline (reader → embedder → writer) and creates multiple records per person (one per position+org combination). Canonicalization runs automatically at the end. Supports .bz2, .zst, and .gz compressed dumps.
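The reader → embedder → writer pipeline can be sketched with two bounded queues. This toy version moves integers instead of dump records, and the function names are ours, not the package's:

```python
import queue
import threading

def run_pipeline(items, embed):
    """Three-stage pipeline: reader feeds items, embedder transforms
    them, writer collects results. Bounded queues apply backpressure."""
    to_embed = queue.Queue(maxsize=100)
    to_write = queue.Queue(maxsize=100)
    written = []
    SENTINEL = object()  # end-of-stream marker

    def reader():
        for item in items:
            to_embed.put(item)
        to_embed.put(SENTINEL)

    def embedder():
        while (item := to_embed.get()) is not SENTINEL:
            to_write.put((item, embed(item)))
        to_write.put(SENTINEL)

    def writer():
        while (pair := to_write.get()) is not SENTINEL:
            written.append(pair)

    threads = [threading.Thread(target=f) for f in (reader, embedder, writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return written

out = run_pipeline(range(5), embed=lambda x: x * 10)
```

With a single thread per stage and FIFO queues, output order matches input order; in the real import the embedder stage is the GPU-bound bottleneck the queues buffer around.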

Considerations for Using the Data

Social Impact

This dataset enables entity linking for NLP applications. Users should be aware that:

  • Organization and people records may be incomplete or outdated
  • Historic people (deceased) are included with death_date field
  • Not all notable entities are covered

Biases

  • Coverage is weighted toward English-speaking countries (US, UK) due to source availability
  • Wikidata coverage depends on Wikipedia notability criteria
  • SEC and Companies House data is limited to their respective jurisdictions

Limitations

  • Embedding similarity is not perfect for entity disambiguation
  • Updates require re-importing from source data

License

Apache 2.0

Citation

If you use this dataset, please cite:

@dataset{entity_references_2025,
  title = {Entity References Database},
  author = {Corp-o-Rate-Community},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/Corp-o-Rate-Community/entity-references}
}

Dataset Card Authors

Corp-o-Rate-Community

Dataset Card Contact

Open an issue on the GitHub repository for questions or feedback.
