
SyncBatchNorm number of input channels should be non-zero

Package: torch (GitHub stars: 50580)
Exception Class: ValueError

Raise code

    def _check_input_dim(self, input):
        if input.dim() < 2:
            raise ValueError(
                "expected at least 2D input (got {}D input)".format(input.dim())
            )

    def _check_non_zero_input_channels(self, input):
        if input.size(1) == 0:
            raise ValueError(
                "SyncBatchNorm number of input channels should be non-zero"
            )

    def forward(self, input: Tensor) -> Tensor:
        # currently only GPU input is supported
        if not input.is_cuda:
            raise ValueError("SyncBatchNorm expected input tensor to be on GPU")

        self._check_input_dim(input)
        self._check_non_zero_input_channels(input)

Ways to fix


SyncBatchNorm is a variant of batch normalization used for multi-GPU training: it synchronizes the batch statistics (mean and variance) across all participating devices, whereas standard batch normalization only normalizes the data within each device (GPU).
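In practice, SyncBatchNorm layers are usually created from an existing model with torch.nn.SyncBatchNorm.convert_sync_batchnorm. A minimal sketch (the small Sequential model here is just an illustrative assumption):

import torch
from torch import nn

# Illustrative model containing ordinary BatchNorm layers
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)

# convert_sync_batchnorm replaces every BatchNorm*d layer with SyncBatchNorm;
# statistics are then synchronized across processes once a distributed
# process group has been initialized.
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(sync_model)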

Error code:

from torch import nn
import torch, os
from torch.distributed import init_process_group

item = torch.randn(0, 0).cuda()  # 2D tensor whose channel dimension (dim 1) has size 0

# Initialize the default process group (single process)
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8001'
init_process_group("gloo", world_size=1, rank=0)

m = nn.SyncBatchNorm(100).cuda()
m.forward(item)  # raises ValueError: SyncBatchNorm number of input channels should be non-zero

The forward method takes a tensor and checks that its channel dimension (size(1)) is non-zero. Here item has shape (0, 0), so the channel dimension is 0 and the ValueError is raised.
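For illustration, the two shape checks from the raise code can be reproduced directly on the CPU (no GPU is needed just to inspect the shape):

import torch

item = torch.randn(0, 0)
print(item.dim())    # 2 -> passes the "at least 2D" check
print(item.size(1))  # 0 -> fails the non-zero channel check, hence the ValueError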

Fix code:

from torch import nn
import torch, os
from torch.distributed import init_process_group

t = torch.randn(2, 100).cuda()  # <---- shape (2, 100): non-zero channel dim that matches num_features=100 below

# Initialize the default process group (single process)
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '8001'
init_process_group("gloo", world_size=1, rank=0)

m = nn.SyncBatchNorm(100).cuda()
m.forward(t)
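Note that calling the module directly as m(t) is the more idiomatic equivalent of m.forward(t). As an extra precaution against empty batches coming out of a data pipeline, inputs can also be validated before they reach the layer; the helper below is a hypothetical sketch that mirrors SyncBatchNorm's own checks, not part of torch:

def check_bn_input(x, num_features):
    # Hypothetical guard mirroring SyncBatchNorm's checks, plus a channel-count
    # consistency check, so bad inputs fail with a clearer message.
    if x.dim() < 2:
        raise ValueError("expected at least 2D input, got shape {}".format(tuple(x.shape)))
    if x.size(1) == 0:
        raise ValueError("input has zero channels: shape {}".format(tuple(x.shape)))
    if x.size(1) != num_features:
        raise ValueError("input has {} channels but the layer expects {}".format(x.size(1), num_features))

check_bn_input(t, 100)  # passes for the fixed tensor above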
Jul 03, 2021, answered by anonim (13.0k)
