# Dataset Card for Person vs. Company Names Classification
This is a large-scale, highly curated dataset containing ~7.5 million examples, designed specifically for training a binary text classification model. The primary task is to distinguish between a person's full name and a company/organization name with high precision.
## Dataset Details
### Dataset Description
This dataset was built to train a robust "Privacy Gatekeeper" classifier. It is designed to handle real-world database inconsistencies (like swapped first/last names) and difficult edge cases (like companies named after people).
The data has undergone a rigorous cleaning process using NER models, manual curation, and synthetic augmentation to ensure the model learns semantic patterns rather than simple heuristics.
- Curated by: ele-sage
- Language(s) (NLP): English, French
- License: MIT
- Task: Binary Classification (`0`: Person, `1`: Company)
## Uses
### Direct Use
This dataset is intended for training models to identify if a string represents a legal entity (Company) or a human being (Person). It is particularly optimized for:
- Database Cleaning: Standardizing mixed columns of names.
```python
from datasets import load_dataset

dataset = load_dataset("ele-sage/person-company-names-classification")
print(dataset["train"][0])
# {'text': 'Entreprise Shopicar Inc.', 'label': 1}
```
## Dataset Structure
The dataset is provided in a train / test split (approx 95/5). Each record contains:
- `text` (string): The cleaned, title-cased name.
- `label` (integer):
  - `0`: Person (e.g., "Jean Tremblay")
  - `1`: Company (e.g., "Tremblay Construction Inc.")
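As a minimal sketch of this schema, a record can be validated like so (the helper name is ours, not part of the dataset tooling):

```python
def validate_record(record):
    """Check a record against the schema above: a non-empty 'text'
    string and an integer 'label' that is 0 (Person) or 1 (Company)."""
    text = record.get("text")
    label = record.get("label")
    return isinstance(text, str) and text.strip() != "" and label in (0, 1)

print(validate_record({"text": "Jean Tremblay", "label": 0}))            # True
print(validate_record({"text": "Tremblay Construction Inc.", "label": 1}))  # True
print(validate_record({"text": "", "label": 2}))                         # False
```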
## Dataset Creation
### Curation Rationale
Standard NER datasets often lack the specific edge cases found in administrative databases. A simple model might learn that "starts with a digit" means Company, or "looks like a name" means Person. This dataset was engineered to break those lazy patterns by:
- Removing "easy" signals: Drastically reducing numbered companies.
- Adding "hard" signals: Generating synthetic companies using real person names (Hard Negatives).
### Source Data & Processing
The dataset combines two distinct sources, processed through a multi-stage pipeline.
#### 1. Person Names (Label 0)
- Source: A large dataset of ~3.5M names (Facebook leak origin).
- Filtering Pipeline:
- Character Validation: Removed all entries containing characters outside standard English/French diacritics (removed Cyrillic, Chinese, Emojis, etc.).
- Semantic Blacklisting (NER/POS): Used a Named Entity Recognition (NER) and Part-of-Speech model to analyze all words. Identified common non-name tokens (verbs, objects, junk).
- Blacklist Cleanup: Filtered the dataset using the final curated blacklist.
- Formatting:
  - 75% `FirstName LastName`
  - 25% `LastName FirstName` (to force the model to learn word semantics rather than just position).
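The character-validation and formatting steps above can be sketched as follows. This is a hypothetical reconstruction, not the actual pipeline code: the exact allowed-character set and helper names are assumptions.

```python
import random
import re

# Allow ASCII letters plus common French diacritics, spaces, hyphens,
# apostrophes and periods; reject anything else (Cyrillic, CJK, emoji, ...).
VALID_NAME = re.compile(r"^[A-Za-zÀ-ÖØ-öø-ÿ' .\-]+$")

def format_person(first, last, rng=random):
    """Emit 'First Last' 75% of the time and 'Last First' 25% of the
    time, so the model cannot rely on word position alone."""
    if rng.random() < 0.75:
        return f"{first} {last}"
    return f"{last} {first}"

names = [("Jean", "Tremblay"), ("Amélie", "Léveillé"), ("Иван", "Петров")]
kept = [(f, l) for f, l in names if VALID_NAME.match(f) and VALID_NAME.match(l)]
print(kept)  # the Cyrillic entry is filtered out
```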
#### 2. Company Names (Label 1)
- Source: Quebec Enterprise Register (public government data).
- Downsampling Numbered Companies:
- Raw data contained millions of entries like "9123-4567 QUEBEC INC".
- Action: These were downsampled to ~30,000 examples.
- Reasoning: To prevent the model from overfitting on digits. We want the model to read the text, not just check for numbers.
- Augmentation (Hard Negatives):
- Generated ~300,000 synthetic company names by combining real Person Names with business suffixes.
- Examples: "Jean Tremblay" (Person) vs "Tremblay Plumbing" (Company).
- Reasoning: To teach the model that a valid person's name can be part of a company name, forcing it to pay attention to the suffix.
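The two company-side steps can be sketched as below. This is an illustrative reconstruction under stated assumptions: the numbered-company pattern, suffix list, and function names are ours, not the original pipeline's.

```python
import random
import re

# Numbered shell companies like '9123-4567 Quebec Inc'.
NUMBERED = re.compile(r"^\d{4}-\d{4} Quebec Inc\.?$", re.IGNORECASE)

def downsample_numbered(companies, keep=30_000, rng=random):
    """Keep all real-name companies; cap numbered ones at `keep` examples."""
    numbered = [c for c in companies if NUMBERED.match(c)]
    named = [c for c in companies if not NUMBERED.match(c)]
    if len(numbered) > keep:
        numbered = rng.sample(numbered, keep)
    return named + numbered

# Hard negatives: attach business suffixes to real surnames.
SUFFIXES = ["Plumbing", "Construction Inc.", "& Fils", "Consulting"]

def make_hard_negative(last_name, rng=random):
    return f"{last_name} {rng.choice(SUFFIXES)}"

print(make_hard_negative("Tremblay"))  # e.g. 'Tremblay Plumbing'
```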
#### 3. Final Processing
- Merging: All sources combined.
- Cleaning: Applied Title Case and removed excess whitespace.
- Deduping: Removed exact string duplicates.
- Splitting: Stratified split into Train/Test.
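The dedupe-and-split stage might look like this minimal sketch (the function name and exact split mechanics are assumptions; only the behavior described above is taken from the card):

```python
import random

def dedupe_and_split(records, test_frac=0.05, seed=0):
    """Exact-string dedupe, then a stratified ~95/5 train/test split
    so both labels keep their ratio in each split."""
    seen, unique = set(), []
    for text, label in records:
        if text not in seen:
            seen.add(text)
            unique.append((text, label))

    rng = random.Random(seed)
    train, test = [], []
    for wanted in (0, 1):  # split each class independently (stratification)
        group = [r for r in unique if r[1] == wanted]
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

train, test = dedupe_and_split([("Jean Tremblay", 0), ("Jar Inc.", 1), ("Jean Tremblay", 0)])
print(len(train), len(test))  # → 2 0 (duplicate removed; tiny set yields no test rows)
```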
### Annotations
Labels were inferred from the source provenance:
- Curated Person Data → `0`
- Curated Company Data (Real + Synthetic) → `1`
## Bias, Risks, and Limitations
- Geographic Bias: The dataset is heavily skewed towards Quebec/Canada naming conventions (French/English mix).
- Synthetic Data: The "Hard Negative" companies are synthetically generated. While they follow realistic patterns, they may not represent real legal entities.
- Ambiguity: Certain strings (e.g., "Marshall & Co") are inherently ambiguous without further context. The model is trained to treat these as Companies (Label 1).