PyLate model based on BAAI/bge-small-en-v1.5
This is a PyLate model finetuned from BAAI/bge-small-en-v1.5 on the msmarco-10m-triplets dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
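For intuition, MaxSim scores a query against a document by taking, for each query token embedding, its best dot-product match among the document's token embeddings, and summing these maxima. A minimal sketch in PyTorch, with shapes matching this model's 32-token queries, 300-token documents, and 128-dimensional outputs (random tensors stand in for real embeddings):

```python
import torch

# Illustrative token embeddings: 32 query tokens and 300 document tokens,
# each represented by a 128-dimensional vector as this model outputs.
query_embeddings = torch.randn(32, 128)
document_embeddings = torch.randn(300, 128)

# MaxSim: for every query token, take its best-matching document token,
# then sum those maxima into a single relevance score.
similarity_matrix = query_embeddings @ document_embeddings.T  # (32, 300)
score = similarity_matrix.max(dim=1).values.sum()
```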
Model Details
Model Description
- Model Type: PyLate model
- Base model: BAAI/bge-small-en-v1.5
- Document Length: 300 tokens
- Query Length: 32 tokens
- Output Dimensionality: 128 dimensions
- Similarity Function: MaxSim
- Training Dataset: msmarco-10m-triplets
Model Sources
- Documentation: PyLate Documentation
- Repository: PyLate on GitHub
- Hugging Face: PyLate models on Hugging Face
Full Model Architecture
```
ColBERT(
  (0): Transformer({'max_seq_length': 300, 'do_lower_case': True, 'architecture': 'BertModel'})
  (1): Dense({'in_features': 384, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity', 'use_residual': False})
)
```
Usage
First install the PyLate library:
```bash
pip install -U pylate
```
Retrieval
Use this model with PyLate to index and retrieve documents. The index uses FastPLAID for efficient similarity search.
Indexing documents
Load the ColBERT model and initialize the PLAID index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="pylate_model_id",
)

# Step 2: Initialize the PLAID index
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if one exists
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and re-encode the documents every time. Once an index has been created and populated, you can reuse it later by simply loading it:

```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.PLAID(
    index_folder="pylate-index",
    index_name="index",
)
```
Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search, encode the queries, and retrieve the top-k documents to get the matching ids and relevance scores:

```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries, not documents
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
```
Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass the queries and documents to rerank:
```python
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="pylate_model_id",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
```
Evaluation
Metrics
ColBERTTriplet
- Evaluated with `pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator`
| Metric | Value |
|---|---|
| accuracy | 0.991 |
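To run a comparable measurement yourself, here is a hedged sketch, assuming the evaluator mirrors the anchors/positives/negatives interface of sentence-transformers' TripletEvaluator (the triplets below are hypothetical, not from the dataset):

```python
from pylate import models
from pylate.evaluation.colbert_triplet import ColBERTTripletEvaluator

model = models.ColBERT(model_name_or_path="pylate_model_id")

# Hypothetical triplets; the anchors/positives/negatives signature is an
# assumption based on the sentence-transformers evaluator it mirrors.
evaluator = ColBERTTripletEvaluator(
    anchors=["when was betsy ross born"],
    positives=["Betsy Ross was born in Philadelphia on January 1, 1752."],
    negatives=["Katharine Ross was born on January 29, 1940 in Hollywood."],
)
print(evaluator(model))  # accuracy = fraction of triplets where the positive outscores the negative
```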
Training Details
Training Dataset
msmarco-10m-triplets
- Dataset: msmarco-10m-triplets at 8c5139a
- Size: 9,998,000 training samples
- Columns: `query`, `positive`, and `negative`
- Approximate statistics based on the first 1000 samples:
|  | query | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens |
- Samples:
| query | positive | negative |
|---|---|---|
| what kind of carbohydrates can i eat in a gluten free diet? | What Can I Eat That is Gluten-Free? Even though going gluten-free can be difficult, you still have many food choices! Focus on eating a variety of fruits, vegetables, low-fat dairy products (those that do not have gluten-containing additives), beans, eggs, nuts, and lean meat, poultry, and fish. There are still many healthy whole grains and starchy carbohydrate foods to choose from that do not contain gluten: Amaranth. Arrowroot. | Gluten-free crust option upon request. While we try hard to maintain the integrity of our gluten free crust, please be aware that it does run the risk of exposure to wheat-based products. Due to the risk of cross contamination, MOD DOES NOT RECOMMEND this pizza for those with celiac disease or other gluten allergies. Feeling Inspired? Express Yourself Through Pizza |
| remsen area code | Remsen, NY Area Codes are. Remsen, NY is currently using two area codes which are area codes 315 and 680. In addition to Remsen, NY area code information read more details about area code 315, area code 680 and New York area codes. Remsen, NY is located in Oneida County and observes the Eastern Time Zone. | 313 Area Code. AreaCode.org is an area code finder with detailed information on the 313 area code including 313 area code map. Major cities like Dearborn within area code 313 are also listed on this page. |
| when was betsy ross born | Early Life. Betsy Ross, best known for making the first American flag, was born Elizabeth Griscom in Philadelphia, Pennsylvania, on January 1, 1752. A fourth-generation American, and the great-granddaughter of a carpenter who had arrived in New Jersey in 1680 from England, Betsy was the eighth of 17 children.ynopsis. Betsy Ross, a fourth-generation America born in 1752 in Philadelphia, Pennsylvania, apprenticed with an upholsterer before irrevocably splitting with her family to marry outside the Quaker religion. She and her husband John Ross started their own upholstery business. | Katharine Ross (I) Katharine Juliet Ross was born on January 29, 1940 in Hollywood, California, to Katharine W. (Hall) and Dudley T. Ross. Her father, who also worked for the Associated Press, was away in the US Navy when she was born. |
- Loss: `pylate.losses.contrastive.Contrastive`
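For reference, a contrastive objective of this kind is typically the softmax cross-entropy over MaxSim scores with in-batch negatives; whether PyLate's `Contrastive` loss matches this formulation exactly is an assumption:

$$\mathcal{L} = -\log \frac{\exp\big(s(q, d^{+})\big)}{\exp\big(s(q, d^{+})\big) + \sum_{d^{-}} \exp\big(s(q, d^{-})\big)}$$

where $s(q, d)$ is the MaxSim score between query $q$ and document $d$, $d^{+}$ is the positive passage, and the $d^{-}$ are the negatives (the mined negative plus, typically, the other in-batch documents).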
Evaluation Dataset
msmarco-10m-triplets
- Dataset: msmarco-10m-triplets at 8c5139a
- Size: 2,000 evaluation samples
- Columns: `query`, `positive`, and `negative`
- Approximate statistics based on the first 1000 samples:
|  | query | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens | min: 32 tokens, mean: 32.0 tokens, max: 32 tokens |
- Samples:
| query | positive | negative |
|---|---|---|
| pikas are closely related to which typeb of animal | The pika is a small-sized mammal that is found across the Northern Hemisphere. Despite their rodent-like appearance, pikas are actually closely related to rabbits and hares. Pikas are most commonly identified by their small, rounded body and lack of tail. Pikas prefer the colder climates and are generally found in mountainous regions and rocky areas where there tend to be fewer predators.ikas defend their territory by whistling to one another, and their large, rounded ears come in useful to hear the calls from competing pikas. Pikas are herbivorous animals and the pika therefore has a diet based on vegetation. | Alpacas are very closely related to llamas. They are both from a group of four species known as South American Camelids. The llama is approximately twice the size of an alpaca with banana shaped ears and is principally used as a pack animal. Alpacas are exclusively bred as fleece animals in Australia. |
| when can we see northern lights in norway | The Northern Lights can appear at any time, but they usually grace the sky between 6 o’clock in the evening and 1 o’clock in the morning. 1 It is rare to see the Northern Lights before 18. 00/6pm, even during the dark months. 2 The highest frequency is around 22. 00–23. 3 If you see the Northern Lights at 19. | Transfer points on the Northern lights & Norway in a nutshell® trip. Oslo: Arrival/departure by plane to/from Oslo Airport Gardermoen, 28 mi./45 km north of city center. Transport by airport train or airport bus. Tromsø: Arrival/departure by plane to/from Tromsø Airport, 1.8 mi./3 km west of city center. |
| what games do markiplier play | List of Games. Markiplier is a professional gamer, who is best known for playing horror-themed video games. Along with many other types of games, including, but not limited to: flash games, indie point-and-click games and adventure games. | Stop wasting your time for playing games when you can play games and be paid for it. Be the one of the game testers and start earning money from something that makes you happy. Visit http://goo.gl/pT87xF, become a game tester today and get paid to play video games. Felisha · 1 year ago. |
- Loss: `pylate.losses.contrastive.Contrastive`
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 196
- `per_device_eval_batch_size`: 196
- `learning_rate`: 3e-05
- `max_grad_norm`: 10.0
- `num_train_epochs`: 0
- `max_steps`: 50000
- `warmup_ratio`: 0.01
- `bf16`: True
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `eval_on_start`: True
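For context, here is a minimal sketch of how a run with these non-default hyperparameters might be wired up. It follows PyLate's documented training flow with the sentence-transformers trainer, but the dataset path and the collator/loss wiring are assumptions, not the exact script used for this model:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

# Start from the base checkpoint named above.
model = models.ColBERT(model_name_or_path="BAAI/bge-small-en-v1.5")

# Hypothetical dataset id; substitute the actual Hub path of the
# msmarco-10m-triplets dataset referenced in this card.
train_dataset = load_dataset("msmarco-10m-triplets", split="train")

# Non-default hyperparameters from the list above; evaluation and
# torch.compile flags are omitted here for brevity.
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    per_device_train_batch_size=196,
    learning_rate=3e-5,
    max_steps=50_000,
    warmup_ratio=0.01,
    max_grad_norm=10.0,
    bf16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Contrastive(model=model),
    # The collator wiring is an assumption based on PyLate's documentation.
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()
```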
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 196
- `per_device_eval_batch_size`: 196
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 10.0
- `num_train_epochs`: 0
- `max_steps`: 50000
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.01
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: True
- `torch_compile_backend`: inductor
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: True
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}