Make sure `_init_weigths` is implemented for {self.__class__}
Package: transformers

Exception Class:
NotImplementedError
Raise code
"""
"""
return None # Overwrite for models with output embeddings
def _init_weights(self, module):
"""
Initialize the weights. This method should be overridden by derived class.
"""
raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")
def tie_weights(self):
"""
Tie the weights between the input embeddings and the output embeddings.
If the :obj:`torchscript` flag is set in the configuration, can't handle parameter sharing so we are cloning
the weights instead. """
Comment explaining raise
Initialize the weights. This method should be overridden by derived class.
Links to the raise (1)
https://github.com/huggingface/transformers/blob/bd9871657bb9500a9f4437a873db6df5f1ae6dbb/src/transformers/modeling_utils.py#L517

Ways to fix
def _init_weights(self, module):
    """
    Initialize the weights. This method should be overridden by derived class.
    """
    raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")
`_init_weights` is a stub on `PreTrainedModel` that is meant to be overridden by each derived model class. The base-class version does nothing but raise, so whatever module you pass to it, you get this error.
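One way to fix it is to subclass `PreTrainedModel` and provide your own `_init_weights`. The sketch below is only illustrative: the class name `MyModel` and the normal-distribution initialization scheme are assumptions for the example, not part of the library.

import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig

class MyModel(PreTrainedModel):  # hypothetical subclass, for illustration only
    def _init_weights(self, module):
        # Example scheme: draw embedding/linear weights from N(0, 0.02).
        if isinstance(module, (nn.Linear, nn.Embedding)):
            module.weight.data.normal_(mean=0.0, std=0.02)

config = PretrainedConfig()
model = MyModel(config)
model._init_weights(nn.Embedding(2, 3))  # no longer raises NotImplementedError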
Error to reproduce:
from transformers import PreTrainedModel, PretrainedConfig
import torch
import torch.nn as nn

# FloatTensor containing pretrained weights
weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
embedding = nn.Embedding.from_pretrained(weight)

# Instantiate the base class directly with a bare configuration.
config = PretrainedConfig()
model = PreTrainedModel(config)
model._init_weights(embedding)  # raises NotImplementedError
Other Transformers models each provide their own `_init_weights`, which `init_weights()` applies to the model's sub-modules when the model is built; that is why the base class insists that every derived class implements it.
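For reference, here is a sketch of the kind of `_init_weights` a concrete model provides, modeled on the BERT-style initializer. The class name `BertStyleModel` is made up for illustration, and `initializer_range` is only available on configs that define it.

import torch.nn as nn
from transformers import PreTrainedModel

class BertStyleModel(PreTrainedModel):  # illustrative name, not a real library class
    def _init_weights(self, module):
        """Initialize the weights (sketch of a BERT-style scheme)."""
        if isinstance(module, (nn.Linear, nn.Embedding)):
            # std comes from the model config; BERT-like configs expose initializer_range
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)
        if isinstance(module, nn.Linear) and module.bias is not None:
            module.bias.data.zero_()

In this version of the library, `init_weights()` applies `_init_weights` to every sub-module of the model, so any class derived from `PreTrainedModel` that is actually built, trained, or loaded needs an implementation along these lines.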