testing a different savefile #45

Open

oipa opened this issue Dec 15, 2020 · 2 comments
oipa commented Dec 15, 2020

Hello all,

I am trying to use a different savefile, the one generated for CULane, here. I changed parameters.py and test.py to point to the new checkpoint, from `lane_agent.load_weights(640, "tensor(0.2298)")` to `lane_agent.load_weights(296, "tensor(1.6947)")`. However, I get a size-mismatch error when running test.py in mode 2.

```
(tdata2) C:\Users\...\PINet>python test.py -f images/BB005_20200224_061813_fog.jpg
Traceback (most recent call last):
  File "test.py", line 458, in <module>
    tester = PINet_Tester()
  File "test.py", line 38, in __init__
    lane_agent.load_weights(296, "tensor(1.6947)")
  File "C:\Users\...\PINet\agent.py", line 302, in load_weights
    ), False
  File "C:\Users\...\Anaconda3\envs\tdata2\lib\site-packages\torch\nn\modules\module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for lane_detection_network:
    size mismatch for layer1.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer1.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer2.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer3.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.down1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.down2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.down3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.down4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.residual1.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.residual2.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.residual3.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
    size mismatch for layer4.layer1.residual4.conv1.cbr_unit.0.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 128, 1, 1]).
```
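Every mismatched entry follows the same pattern: the checkpoint holds 3×3 convolution kernels where the current model expects 1×1, which points to a different architecture rather than just a wrong filename. Before calling `load_state_dict`, the disagreeing parameters can be listed by comparing shapes directly (a minimal sketch; `find_shape_mismatches` is a hypothetical helper, and with PyTorch the two dictionaries would come from `torch.load(...)` and `model.state_dict()`):

```python
def find_shape_mismatches(checkpoint_shapes, model_shapes):
    """Return (name, checkpoint_shape, model_shape) for every parameter
    present in both dicts whose shapes disagree."""
    return [
        (name, ckpt_shape, model_shapes[name])
        for name, ckpt_shape in checkpoint_shapes.items()
        if name in model_shapes and model_shapes[name] != ckpt_shape
    ]

# With PyTorch these would be built as:
#   ckpt_shapes  = {k: tuple(v.shape) for k, v in torch.load(path).items()}
#   model_shapes = {k: tuple(v.shape) for k, v in net.state_dict().items()}
ckpt = {"layer1.layer1.down1.conv1.cbr_unit.0.weight": (32, 128, 3, 3)}
model = {"layer1.layer1.down1.conv1.cbr_unit.0.weight": (32, 128, 1, 1)}
print(find_shape_mismatches(ckpt, model))
```

Running this on the two state dicts shows at a glance whether every mismatch is a kernel-size difference (an architecture change) or whether channel counts differ too.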

Could someone help me with this?

I have tried converting the input image to grayscale and including 4 hourglass blocks by modifying test.py (lines 52 and 129) and hourglass_network.py, but that doesn't fix the error.

---modifications in test.py---

```python
# line 52
def test_image(self, filename, output_filename=None, threshold=0.81, color=None):
    test_image = cv2.imread(filename)
    test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
```
[...]
```python
# line 129
elif p.mode == 2:  # check model with a picture
    # test_image = cv2.imread(p.test_root_url+"clips/0530/1492720840345996040_0/20.jpg")
    test_image = cv2.imread(filename)
    test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
```

---modifications in hourglass_network.py---

```python
class lane_detection_network(nn.Module):
    def __init__(self):
        super(lane_detection_network, self).__init__()

        self.resizing = resize_layer(3, 128)

        # feature extraction
        self.layer1 = hourglass_block(128, 128)
        self.layer2 = hourglass_block(128, 128)
        self.layer3 = hourglass_block(128, 128)  # Olatz
        self.layer4 = hourglass_block(128, 128)  # Olatz

    def forward(self, inputs):
        # feature extraction
        out = self.resizing(inputs)
        result1, out = self.layer1(out)
        result2, out = self.layer2(out)
        result3, out = self.layer3(out)  # Olatz
        result4, out = self.layer4(out)  # Olatz

        return [result1, result2, result3, result4]
```
koyeongmin (Owner) commented
PINet and PINet_new use different network architectures, so the weight file from the PINet_new repository does not work in PINet.
Also, because both networks are designed for RGB images, you need to modify the resizing network if you want to use grayscale input.
Thank you!
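If modifying the architecture is not desirable, another way to feed grayscale frames to the RGB-trained network is to replicate the single channel three times before inference, leaving `resize_layer(3, 128)` untouched (a sketch; `gray_to_three_channel` is a hypothetical helper, not part of the repository):

```python
import numpy as np

def gray_to_three_channel(gray):
    """Stack an (H, W) grayscale image into (H, W, 3) so a network
    trained on RGB input accepts it without architecture changes."""
    if gray.ndim != 2:
        raise ValueError("expected a single-channel (H, W) image")
    return np.repeat(gray[:, :, None], 3, axis=2)

img = np.zeros((4, 4), dtype=np.uint8)
print(gray_to_three_channel(img).shape)  # (4, 4, 3)
```

OpenCV's `cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)` performs the same conversion, so the existing `cv2.cvtColor(..., cv2.COLOR_BGR2GRAY)` call in test.py could simply be followed by it.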

oipa (Author) commented Dec 16, 2020

> PINet and PINet_new use different network architectures, so the weight file from the PINet_new repository does not work in PINet.
> Also, because both networks are designed for RGB images, you need to modify the resizing network if you want to use grayscale input.
> Thank you!

Thanks!
