Support convolution with `valid` padding. #3804

Conversation
Hi @vivekkhandelwal1 and @qingyunqu, tagging you as potential reviewers since you've contributed to the convolution lowering. Can you please take a look at this PR or add other reviewers? Thanks!

Hello @vivekkhandelwal1, any thoughts on this PR? Also, I am a bit puzzled about the CI failure as I cannot reproduce it locally. Any suggestions on debugging it? Thanks!

Hi @sahas3, sorry for the late reply. Why did you close this PR? Don't you need it now?

Ah sorry @vivekkhandelwal1, I thought I commented when I closed the PR. Looks like some PyTorch behavior has changed which is now leaving the convolution op with …

Yeah, the error description that you gave means that the op …

Opened a new PR with these changes in #3883.
Convolution created with `valid` padding produces the `aten.convolution` op in the following fashion (a repro sketch is given below).
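For instance, a minimal PyTorch snippet that creates such a convolution (the module and tensor shapes here are illustrative assumptions, not taken from this PR) looks like this; lowering it through torch-mlir yields the `aten.convolution` op in question:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # padding="valid" means no padding is applied to the input.
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding="valid")

    def forward(self, x):
        return self.conv(x)

m = M()
x = torch.randn(1, 3, 32, 32)
print(m(x).shape)  # torch.Size([1, 8, 30, 30]): each spatial dim shrinks by kernel_size - 1
```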
Note that the `padding` input to `aten.convolution` is a 1-element list, whereas the lowerings expect its length to equal the number of spatial dims in the input. This results in hitting the assertion in https://github.com/sahas3/torch-mlir/blob/dc7a1ff7d9134758128a637dca976f72c2366e59/lib/Conversion/TorchToLinalg/Utils.cpp#L78 for the `TorchToLinalg` pass. The failure modes for lowering to `tosa` and `stablehlo` are different but stem from the same root cause.
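One way to make the lowerings tolerate this, sketched in Python rather than the actual C++ lowering code (the helper name `expand_padding` is hypothetical, not from this PR), is to broadcast a 1-element padding list to the number of spatial dims before the length assertion:

```python
def expand_padding(padding, num_spatial_dims):
    # A 1-element padding list applies the same value to every spatial
    # dim, so replicate it to the expected length; otherwise the lengths
    # must already match.
    if len(padding) == 1:
        return padding * num_spatial_dims
    assert len(padding) == num_spatial_dims
    return padding

print(expand_padding([0], 2))     # [0, 0] -- the `valid` padding case
print(expand_padding([1, 2], 2))  # [1, 2] -- already per-dim, unchanged
```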