
Support convolution with valid padding. #3804

Closed
Conversation

sahas3 (Contributor) commented Oct 18, 2024

A convolution created with `valid` padding produces the `aten.convolution` op in the following fashion:

```mlir
module {
  func.func @main(%arg0: !torch.vtensor<[1,64,57],f32>) -> !torch.vtensor<[1,64,57],f32> attributes {torch.assume_strict_symbolic_shapes} {
    %false = torch.constant.bool false
    %int1 = torch.constant.int 1
    %0 = torch.vtensor.literal(dense<0.536443591> : tensor<1xf32>) : !torch.vtensor<[1],f32>
    %1 = torch.vtensor.literal(dense<-7.486820e-03> : tensor<1x1x1x1xf32>) : !torch.vtensor<[1,1,1,1],f32>
    %int0 = torch.constant.int 0
    %2 = torch.aten.unsqueeze %arg0, %int0 : !torch.vtensor<[1,64,57],f32>, !torch.int -> !torch.vtensor<[1,1,64,57],f32>
    %3 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
    %4 = torch.prim.ListConstruct %int0 : (!torch.int) -> !torch.list<int>
    %5 = torch.prim.ListConstruct %int1, %int1 : (!torch.int, !torch.int) -> !torch.list<int>
    %6 = torch.prim.ListConstruct %int0 : (!torch.int) -> !torch.list<int>
    %7 = torch.aten.convolution %2, %1, %0, %3, %4, %5, %false, %6, %int1 : !torch.vtensor<[1,1,64,57],f32>, !torch.vtensor<[1,1,1,1],f32>, !torch.vtensor<[1],f32>, !torch.list<int>, !torch.list<int>, !torch.list<int>, !torch.bool, !torch.list<int>, !torch.int -> !torch.vtensor<[1,1,64,57],f32>
    %8 = torch.aten.squeeze.dim %7, %int0 : !torch.vtensor<[1,1,64,57],f32>, !torch.int -> !torch.vtensor<[1,64,57],f32>
    return %8 : !torch.vtensor<[1,64,57],f32>
  }
}
```
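
A sketch of PyTorch code that could produce IR like the above, using torch-mlir's `fx.export_and_import` entry point (the exact model here is a guess reconstructed from the IR, and per the later comments this export behavior changed in newer PyTorch):

```python
import torch
from torch_mlir import fx  # torch-mlir's FX importer

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # 'valid' padding means no padding; PyTorch records it as a string
        # rather than an explicit per-dimension padding list.
        self.conv = torch.nn.Conv2d(1, 1, kernel_size=1, padding="valid")

    def forward(self, x):
        # x is an unbatched (C, H, W) input, which would explain the
        # unsqueeze/squeeze pair around the convolution in the IR.
        return self.conv(x)

module = fx.export_and_import(Model(), torch.randn(1, 64, 57))
print(module)
```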

Note that the padding input to `aten.convolution` is a 1-element list, whereas the lowerings expect its length to match the number of spatial dims of the input. This results in hitting the assertion at https://github.com/sahas3/torch-mlir/blob/dc7a1ff7d9134758128a637dca976f72c2366e59/lib/Conversion/TorchToLinalg/Utils.cpp#L78 in the TorchToLinalg pass. The failure modes for the TOSA and StableHLO lowerings are different, but they stem from the same root cause.
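
One way to handle this is to broadcast a 1-element padding list across all spatial dims before the per-dimension logic runs. A minimal Python sketch of that idea (the helper name is made up for illustration; this is not torch-mlir's actual C++ code):

```python
def expand_padding(padding, num_spatial_dims):
    """Broadcast a single padding value to one entry per spatial dim."""
    if len(padding) == 1:
        return padding * num_spatial_dims
    # Otherwise the list must already be per-dimension.
    assert len(padding) == num_spatial_dims, "unexpected padding length"
    return padding

assert expand_padding([0], 2) == [0, 0]     # the 'valid' case from the IR above
assert expand_padding([1, 2], 2) == [1, 2]  # already per-dimension
```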

sahas3 (Contributor, Author) commented Oct 22, 2024

Hi @vivekkhandelwal1 and @qingyunqu tagging you as potential reviewers since you've contributed to the convolution lowering. Can you please take a look at this PR or add other reviewers? Thanks!

sahas3 (Contributor, Author) commented Nov 8, 2024

Hello @vivekkhandelwal1, any thoughts on this PR? Also, I'm a bit puzzled by the CI failure, as I cannot reproduce it locally. Any suggestions for debugging it? Thanks!

sahas3 closed this on Nov 12, 2024
vivekkhandelwal1 (Collaborator) commented
Hi @sahas3, sorry for the late reply. Why did you close this PR? Do you no longer need it?

sahas3 (Contributor, Author) commented Nov 12, 2024

Ah sorry @vivekkhandelwal1, I thought I had commented when I closed the PR.

Looks like some PyTorch behavior has changed, and a convolution with `same` or `valid` padding is now exported as `torch.aten.conv2d.padding`. During `fx.export_and_import` this op is left as `torch.operator "torch.aten.conv2d.padding"`, causing the "Lowering TorchFX IR -> Torch Backend IR" pipeline to fail. I think this can be handled by first adding `torch.aten.conv2d.padding` to `torch_ods_gen.py` so that lowering to Torch Backend IR doesn't fail, and then decomposing `torch.aten.conv2d.padding` into `torch.aten.conv2d`. Is that the correct way forward?
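
For intuition, here is a plain-PyTorch sketch of what such a decomposition would compute (illustrative only, not torch-mlir's DecomposeComplexOps code; the `same` branch assumes the symmetric-padding case):

```python
import torch.nn.functional as F

def conv2d_padding_decomposed(x, weight, bias, stride, padding, dilation, groups):
    if padding == "valid":
        # 'valid' is simply zero padding.
        pad = [0, 0]
    elif padding == "same":
        # 'same' keeps the input's spatial size: total padding per dim is
        # dilation * (kernel - 1); assumed even here so it splits symmetrically.
        pad = [d * (k - 1) // 2 for d, k in zip(dilation, weight.shape[2:])]
    else:
        raise ValueError(f"unexpected padding string: {padding}")
    return F.conv2d(x, weight, bias, stride, pad, dilation, groups)
```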

vivekkhandelwal1 (Collaborator) commented
> Ah sorry @vivekkhandelwal1, I thought I had commented when I closed the PR.
>
> Looks like some PyTorch behavior has changed, and a convolution with `same` or `valid` padding is now exported as `torch.aten.conv2d.padding`. [...] Is that the correct way forward?

Yeah, the error description you gave means that the op `torch.aten.conv2d.padding` is missing a torch-mlir lowering. You'll have to add the op's lowering or a decomposition to support this. You can follow the steps here: https://github.com/llvm/torch-mlir/blob/main/docs/add_ops.md, or let me know if you face any issues in doing this.
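
Per those steps, the registration in `torch_ods_gen.py` is an `emit` line carrying the op's TorchScript signature. A rough sketch (the signature follows ATen's `conv2d.padding` overload; the exact placement in the file is an assumption):

```python
# Alongside the other convolution ops in torch_ods_gen.py (assumed location):
emit(
    "aten::conv2d.padding : (Tensor, Tensor, Tensor?, int[], str, int[], int) -> (Tensor)"
)
```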

sahas3 (Contributor, Author) commented Nov 19, 2024

> Yeah, the error description you gave means that the op `torch.aten.conv2d.padding` is missing a torch-mlir lowering. You'll have to add the op's lowering or a decomposition to support this. You can follow the steps here: https://github.com/llvm/torch-mlir/blob/main/docs/add_ops.md, or let me know if you face any issues in doing this.

Opened a new PR with these changes: #3883.
