
Make sure `_init_weigths` is implemented for {self.__class__}

Package: transformers
Exception Class: NotImplementedError

Raise code

""" 
        """
        return None  # Overwrite for models with output embeddings

    def _init_weights(self, module):
        """
        Initialize the weights. This method should be overridden by derived class.
        """
        raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")

    def tie_weights(self):
        """
        Tie the weights between the input embeddings and the output embeddings.

        If the :obj:`torchscript` flag is set in the configuration, can't handle parameter sharing so we are cloning
        the weights instead. """
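
For context, the cloning that the `tie_weights` docstring mentions happens in the `_tie_or_clone_weights` helper. The sketch below is a paraphrase of that logic, not verbatim library code; the standalone function name and signature are assumptions for illustration:

import torch.nn as nn

def tie_or_clone_weights(output_embeddings, input_embeddings, torchscript):
    # TorchScript cannot handle two modules sharing one Parameter,
    # so under torchscript the weights are cloned instead of tied.
    if torchscript:
        output_embeddings.weight = nn.Parameter(input_embeddings.weight.clone())
    else:
        # Plain PyTorch: both modules point at the same Parameter object.
        output_embeddings.weight = input_embeddings.weight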

Comment explaining raise

Initialize the weights. This method should be overridden by derived class.


Ways to fix

def _init_weights(self, module):
    """
    Initialize the weights. This method should be overridden by derived class.
    """
    raise NotImplementedError(f"Make sure `_init_weigths` is implemented for {self.__class__}")

`_init_weights` is meant to be overridden by every derived model class. The base `PreTrainedModel` implementation raises unconditionally, so no matter what module you pass in, calling it on the base class gives this error. The practical fix is sketched below.
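
A minimal sketch of a subclass that provides its own `_init_weights`; the class name `MyModel`, its layers, and the initialization scheme are illustrative assumptions, not part of the library:

import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig

class MyModel(PreTrainedModel):
    config_class = PretrainedConfig

    def __init__(self, config):
        super().__init__(config)
        self.embedding = nn.Embedding(10, 4)

    def _init_weights(self, module):
        # Illustrative scheme: normal initialization for embedding weights.
        if isinstance(module, nn.Embedding):
            module.weight.data.normal_(mean=0.0, std=0.02)

config = PretrainedConfig()
model = MyModel(config)
model._init_weights(model.embedding)  # no longer raises NotImplementedError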

Error to reproduce:

import torch
import torch.nn as nn
from transformers import PreTrainedModel, PretrainedConfig

# FloatTensor containing pretrained weights
weight = torch.FloatTensor([[12.33], [45.16]])
embedding = nn.Embedding.from_pretrained(weight)

# Instantiate the base class directly (no download involved).
config = PretrainedConfig()
model = PreTrainedModel(config)

# Raises NotImplementedError: the base class never implements _init_weights.
model._init_weights(embedding)

Concrete transformers models (BERT, GPT-2, and so on) each ship their own `_init_weights` override; the raise in the base class exists to catch subclasses that forget to implement it.
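
For reference, a typical override follows the pattern used by BERT-style models: branch on the module type and initialize each kind of layer appropriately. The version below is modeled on that pattern and meant to be dropped into a `PreTrainedModel` subclass; the exact branches and the 0.02 fallback are assumptions:

import torch.nn as nn

def _init_weights(self, module):
    """Method for a PreTrainedModel subclass: initialize one submodule."""
    std = getattr(self.config, "initializer_range", 0.02)
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=std)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=std)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        # LayerNorm starts as identity: weight 1, bias 0.
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)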

Answered Jul 15, 2021 by anonim (13.0k)
