Pringled committed · Commit daf7a44 · verified · 1 Parent(s): 7710840

fix: Missing get_input_embeddings / set_input_embeddings on NomicBertModel


NomicBertModel doesn't implement `get_input_embeddings()` or `set_input_embeddings()`, so the transformers fallback in `EmbeddingAccessMixin` tries to resolve them automatically. This fails because:
- `_input_embed_layer` defaults to `embed_tokens`, but NomicBERT uses `word_embeddings`
- `base_model_prefix = "model"`, but `__init__` creates `self.embeddings`, not `self.model`

Adding these two methods fixes this.
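
For reference, a minimal sketch of the fallback path those two bullets describe, paraphrased from this commit message rather than the literal transformers source:

```python
# Sketch of the automatic resolution described above -- paraphrased,
# not the exact transformers implementation.
def get_input_embeddings(self):
    # Step 1: look up the conventional attribute name. The default,
    # "embed_tokens", does not exist on NomicBertModel, which keeps its
    # embeddings under `self.embeddings.word_embeddings`.
    name = getattr(self, "_input_embed_layer", "embed_tokens")
    if hasattr(self, name):
        return getattr(self, name)
    # Step 2: delegate to the base model. `base_model_prefix` is
    # "model", but NomicBertModel's __init__ only creates
    # `self.embeddings`, so `self.model` does not exist either.
    base_model = getattr(self, self.base_model_prefix, None)
    if base_model is not None and base_model is not self:
        return base_model.get_input_embeddings()
    raise NotImplementedError(
        f"Cannot automatically resolve input embeddings for {type(self).__name__}"
    )
```

Both lookups miss for NomicBERT, hence the explicit overrides below.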

Files changed (1)
  1. modeling_hf_nomic_bert.py +6 -0
modeling_hf_nomic_bert.py CHANGED

@@ -1890,6 +1890,12 @@ class NomicBertModel(NomicBertPreTrainedModel):
 
         self.apply(partial(_init_weights, initializer_range=config.initializer_range))
 
+    def get_input_embeddings(self):
+        return self.embeddings.word_embeddings
+
+    def set_input_embeddings(self, value):
+        self.embeddings.word_embeddings = value
+
     def forward(
         self,
         input_ids=None,
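
With the patch applied, the standard accessors resolve without hitting the fallback. A quick check, assuming a checkpoint that ships this custom code (the repo id below is a placeholder):

```python
import torch.nn as nn
from transformers import AutoModel

# Placeholder repo id -- substitute the actual NomicBERT checkpoint
# that ships this modeling_hf_nomic_bert.py.
model = AutoModel.from_pretrained("org/nomic-bert-checkpoint", trust_remote_code=True)

emb = model.get_input_embeddings()
print(type(emb))  # the nn.Embedding backing self.embeddings.word_embeddings

# set_input_embeddings also works now, e.g. when swapping in a resized table:
model.set_input_embeddings(nn.Embedding(emb.num_embeddings, emb.embedding_dim))
```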