Hello all,

I am trying to use a different save file here, one generated for CULane. I have changed parameters.py and test.py to point to the new checkpoint, from lane_agent.load_weights(640, "tensor(0.2298)") to lane_agent.load_weights(296, "tensor(1.6947)"). However, I get a size mismatch error when running test.py in mode 2.
(tdata2) C:\Users\...\PINet>python test.py -f images/BB005_20200224_061813_fog.jpg
Traceback (most recent call last):
File "test.py", line 458, in <module>
tester = PINet_Tester()
File "test.py", line 38, in __init__
lane_agent.load_weights(296, "tensor(1.6947)")
File "C:\Users\...\PINet\agent.py", line 302, in load_weights
), False
File "C:\Users\...\Anaconda3\envs\tdata2\lib\site-packages\torch\nn\modules\module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for lane_detection_network:
size mismatch for layer1.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer1.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer2.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer3.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
size mismatch for layer4.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
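A quick way to see exactly which parameters disagree is to diff the shape maps of the checkpoint and the current model. Below is a minimal sketch with plain tuples standing in for tensor shapes; in practice the two dicts would come from `torch.load(path)` and `model.state_dict()` (with shapes read via `.shape`), and `diff_state_dicts` is an illustrative helper, not part of PINet:

```python
def diff_state_dicts(ckpt_shapes, model_shapes):
    """Return {name: (checkpoint_shape, model_shape)} for every parameter
    present in both dicts whose shapes differ."""
    return {
        name: (shape, model_shapes[name])
        for name, shape in ckpt_shapes.items()
        if name in model_shapes and model_shapes[name] != shape
    }

# Example mirroring the error above: the checkpoint stores 3x3 kernels
# where the current model expects 1x1 kernels.
ckpt = {"layer1.layer1.down1.conv1.cbr_unit.0.weight": (32, 128, 3, 3)}
model = {"layer1.layer1.down1.conv1.cbr_unit.0.weight": (32, 128, 1, 1)}
print(diff_state_dicts(ckpt, model))
```

With real state dicts this prints every mismatched layer at once, which makes it easy to tell a systematic architecture difference (as here) from a one-off naming problem.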
Could someone help me with this?

I have tried converting the input image to grayscale and using 4 hourglass blocks by modifying test.py (lines 52 and 129) and hourglass_network.py, but this does not fix the error.
---modifications in test.py---
(line 52)
def test_image(self, filename, output_filename=None, threshold=0.81, color=None):
    test_image = cv2.imread(filename)
    test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
[...]
(line 129)
elif p.mode == 2:  # check model with a picture
    #test_image = cv2.imread(p.test_root_url+"clips/0530/1492720840345996040_0/20.jpg")
    test_image = cv2.imread(filename)
    test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
---modifications in hourglass_network.py---
class lane_detection_network(nn.Module):
    def __init__(self):
        super(lane_detection_network, self).__init__()
PINet and PINet_new use different network architectures, so the weight file from the PINet_new repository does not work in PINet. Also, because both networks are designed for RGB images, you need to modify the resizing network if you want to use grayscale input.

Thank you!
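On the grayscale point: a common way to adapt a conv layer trained on RGB input is to average its weights over the three input channels rather than retrain from scratch. The sketch below uses plain nested lists standing in for tensors to keep it self-contained; in PyTorch the same idea is roughly `weight.mean(dim=1, keepdim=True)` applied to the first conv's weight. `rgb_conv_to_gray` is an illustrative helper, not part of either repository:

```python
def rgb_conv_to_gray(weight):
    """Collapse a conv weight of shape [out][3][kh][kw] to [out][1][kh][kw]
    by averaging the R, G, B input-channel planes."""
    gray_weight = []
    for filt in weight:                      # filt: three [kh][kw] planes
        r, g, b = filt
        kh, kw = len(r), len(r[0])
        plane = [[(r[i][j] + g[i][j] + b[i][j]) / 3.0 for j in range(kw)]
                 for i in range(kh)]
        gray_weight.append([plane])          # keep in-channel axis, now size 1
    return gray_weight

# One output filter with a 1x1 kernel: channels 0.0, 1.0, 2.0 average to 1.0.
w = [[[[0.0]], [[1.0]], [[2.0]]]]
print(rgb_conv_to_gray(w))                   # [[[[1.0]]]]
```

This only handles the first layer's input channels; as noted above, the rest of the resizing network still has to be adjusted consistently.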