XFeat + LightGlue #34
Comments
That's great! Looking forward to your work.
Hey guys, I just released a version of the LightGlue matcher, please check it out in the README.
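A minimal usage sketch of the released matcher, assuming the repo's `torch.hub` entry point and the `match_lighterglue` method (the file `match_lighterglue.py` is mentioned later in this thread); exact signatures may differ from the current code, so treat this as illustrative:

```python
import cv2
import torch

# Load XFeat from the accelerated_features hub entry point
xfeat = torch.hub.load('verlab/accelerated_features', 'XFeat',
                       pretrained=True, top_k=4096)

im0 = cv2.imread('image0.jpg')
im1 = cv2.imread('image1.jpg')

# Detect keypoints and compute 64-d descriptors for each image
out0 = xfeat.detectAndCompute(im0, top_k=4096)[0]
out1 = xfeat.detectAndCompute(im1, top_k=4096)[0]

# The matcher needs the image resolution (width, height)
out0.update({'image_size': (im0.shape[1], im0.shape[0])})
out1.update({'image_size': (im1.shape[1], im1.shape[0])})

# Match with the lighter LightGlue head; returns matched keypoint arrays
mkpts0, mkpts1 = xfeat.match_lighterglue(out0, out1)
```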
Hello author, didn't you say that XFeat's performance is better than SuperPoint's? Why does the data this time show SuperPoint as stronger?
Hi @muchluv525, please note that we trained a smaller and faster version of LightGlue. Beyond that, there are still a few reasons why SuperPoint + LightGlue might still be better than XFeat + LG (full size).
And the number of layers is just 6. BTW, will you upload the training code of LightGlue for XFeat?
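For context on what a 6-layer variant looks like: the upstream LightGlue architecture exposes the transformer depth through its config, so such a model can be sketched roughly as below. Parameter names follow the `lightglue` package's defaults; the exact values used for the XFeat variant are assumptions, not the released configuration:

```python
from lightglue import LightGlue

# Sketch of a reduced LightGlue: 6 attention blocks instead of the
# default 9. The dims shown are illustrative guesses, not the
# released XFeat+LighterGlue config.
matcher = LightGlue(
    features=None,      # no preset feature config; set dims manually
    n_layers=6,         # fewer self/cross-attention layers -> faster
    input_dim=64,       # XFeat descriptors are 64-dimensional
    descriptor_dim=64,
).eval()
```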
I see, thank you for your answer. I'm a beginner and look forward to a more detailed explanation of XFeat.
@guipotje Have you tried training XFeat and LightGlue end-to-end?
Hello @noahzn, I haven't tried to train it end-to-end. It might deliver some improvements when backpropagating through the descriptors, as mentioned in SuperGlue's paper (Section 5.4). However, it might also lead to less generalization to different scenes.
Hello, thank you very much for your great work. I am curious about the modifications made to the LightGlue network structure to achieve the balance between inference accuracy and speed mentioned in the README. Will this part of the code be made publicly available? I checked the match_lighterglue.py file, but it does not provide more information on this aspect.
Hi @guipotje, I might try end-to-end training; do you have any ideas about the implementation? Since XFeat and LightGlue use different homography code for training, I'm wondering if it's possible to keep each model's own random-homography code but optimize the two networks together. This would require XFeat to return a loss, which we then add to LightGlue's loss and backpropagate together. Do you think this is the minimal-effort approach to end-to-end training of the two networks?
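A minimal sketch of the scheme described above, assuming both models are plain `torch.nn.Module` instances and using hypothetical `xfeat_loss` / `lightglue_loss` helpers as stand-ins for each repo's actual training losses:

```python
import torch

# One optimizer over both networks' parameters, so gradients from the
# combined loss update XFeat and LightGlue jointly.
optimizer = torch.optim.Adam(
    list(xfeat.parameters()) + list(lightglue.parameters()), lr=1e-4
)

for batch in loader:  # each batch: an image pair + its ground-truth warp H
    feats0 = xfeat(batch['image0'])
    feats1 = xfeat(batch['image1'])

    # Keypoint/descriptor supervision (hypothetical helper)
    loss_xf = xfeat_loss(feats0, feats1, batch['H'])

    # Matching supervision on the SAME feature tensors, so the graph
    # connects LightGlue's loss back through XFeat's descriptors
    # (the "backprop through descriptors" idea from SuperGlue Sec. 5.4)
    pred = lightglue({'image0': feats0, 'image1': feats1})
    loss_lg = lightglue_loss(pred, batch['H'])

    loss = loss_xf + loss_lg  # single backward pass through both nets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```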
Hello everyone,
I'm training some LightGlue variations (looking for a neat trade-off between model size and accuracy) and will update the repo with the model and weights in the next few weeks!
You can follow this issue if you are interested.
Best,
Guilherme