This repository has been archived by the owner on Jul 2, 2023. It is now read-only.

The learning rate is too small #54

Closed
youngfly11 opened this issue Sep 22, 2017 · 5 comments
@youngfly11

I found that when you train the FCN8s model, the learning rate is very small (1e-14), while I remember it is set to 1e-4 in the original FCN paper. I am a little confused.
Can you give me an answer? Thank you in advance.

@wkentaro
Owner

@wkentaro wkentaro self-assigned this Sep 23, 2017
@youngfly11
Author

Thank you for your answer. The loss does not decrease when I train FCN8s on the CamVid dataset, so I am confused. I have another question: do you use a pretrained model or not?

@wkentaro
Owner

Yeah, I use the pretrained FCN16s model: https://github.com/wkentaro/pytorch-fcn/blob/master/examples/voc/train_fcn8s.py#L76
which is required to train FCN8s.
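The idea of warm-starting FCN8s from FCN16s can be sketched abstractly: copy every parameter whose name (and shape) also exists in the source model, and leave the layers that are new in FCN8s at their fresh initialization. This is a minimal, dependency-free sketch using plain dicts and lists as stand-ins for state dicts and tensors; the layer names (`conv1.weight`, `score_pool2.weight`, etc.) are illustrative, not the library's exact names.

```python
# Sketch: warm-start a target model's state from a source model's state.
# Parameters present in both (same name, same size) are copied; parameters
# unique to the target (e.g. layers new in FCN8s) keep their fresh init.
def warm_start(target_state, source_state):
    """Return a new state dict: target entries overwritten by matching source entries."""
    merged = dict(target_state)
    for name, value in source_state.items():
        if name in merged and len(merged[name]) == len(value):
            merged[name] = value
    return merged

# Toy stand-ins for FCN16s / FCN8s state dicts (lists instead of tensors).
fcn16s = {"conv1.weight": [1.0, 2.0], "score_fr.weight": [3.0]}
fcn8s = {"conv1.weight": [0.0, 0.0], "score_fr.weight": [0.0],
         "score_pool2.weight": [0.0]}  # hypothetical layer new in FCN8s

init = warm_start(fcn8s, fcn16s)
assert init["conv1.weight"] == [1.0, 2.0]    # copied from the source model
assert init["score_pool2.weight"] == [0.0]   # untouched fresh init
```

In the real code, the linked training script does this copy with actual PyTorch modules before fine-tuning FCN8s.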

@youngfly11
Author

Thank you so much. Can I directly use the pretrained FCN8s model? I remember you give a URL for downloading fcn8s_from_caffe.pth in FCN8s.py.

@wkentaro
Owner

You can do it like below:

import torch
import torchfcn

model = torchfcn.FCN8s()
# download() fetches the pretrained weights and returns the local file path
pretrained_model = model.download()
model.load_state_dict(torch.load(pretrained_model))
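The `download()` call above follows a common download-and-cache pattern: fetch the checkpoint once, then reuse the local copy on later calls. A minimal, dependency-free sketch of that pattern (the URL, cache directory, and `fetch` callable here are hypothetical stand-ins for the real HTTP download):

```python
# Sketch of the download-and-cache pattern: fetch a remote file once,
# return the cached local path on every subsequent call.
import os
import tempfile

def cached_download(url, cache_dir, fetch):
    """Return a local path for `url`, invoking `fetch(url, path)` only if not cached."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):
        fetch(url, path)  # stands in for the real HTTP download
    return path

# Demo with a stub fetcher that records calls and writes a marker file.
calls = []
def fake_fetch(url, path):
    calls.append(url)
    with open(path, "w") as f:
        f.write("checkpoint bytes")

cache = tempfile.mkdtemp()
p1 = cached_download("http://example.com/fcn8s_from_caffe.pth", cache, fake_fetch)
p2 = cached_download("http://example.com/fcn8s_from_caffe.pth", cache, fake_fetch)
assert p1 == p2 and len(calls) == 1  # second call hits the cache
```

The returned path is then passed to torch.load, exactly as in the snippet above.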

@wkentaro wkentaro pinned this issue Sep 4, 2019