[RFC] Apply the trait NoMemoryEffect to most ReadOnly ops #3891

Open
zjgarvey wants to merge 2 commits into base: main
Conversation

zjgarvey
Collaborator

As I understand it, the ReadOnly trait implies that an operation does not modify the memory backing its input values.

This is similar to NoMemoryEffect, with a few exceptions. Ops like prim.RaiseException and prim.Print are ReadOnly, but still have observable effects: terminating the program and printing, respectively.

There may be other ReadOnly ops with side effects. As a hypothetical example, there might be a comparison op that checks whether two tensors have the same type, prints some debug info, and also returns a boolean. In case such an op is discovered, I've added a keyword argument to the emit_op function so that an operation can be explicitly labeled has_memory_effects=True, which prevents the NoMemoryEffect trait from being added to its tablegen definition.
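A minimal sketch of the opt-out logic (simplified; the real emit_op in torch-mlir's torch_ods_gen.py takes an emitter and the full op signature, so the function name and parameters below are illustrative only):

```python
# Illustrative sketch, not the actual emit_op signature: decide which
# traits an op definition receives. A ReadOnly op gets NoMemoryEffect
# by default, unless the caller opts out with has_memory_effects=True.
def compute_traits(is_read_only: bool, has_memory_effects: bool = False) -> list:
    traits = []
    if is_read_only:
        traits.append("ReadOnly")
        if not has_memory_effects:
            # Safe default: ReadOnly ops with no other effects are
            # eligible for DCE and CSE.
            traits.append("NoMemoryEffect")
    return traits
```

So a ReadOnly op with a known side effect (e.g. printing) would be emitted with has_memory_effects=True and keep only the ReadOnly trait.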

My primary motivation for this change is to remove the need for one-off RemoveUnused patterns for ops we want to delete when dead, but adding this trait also has the very appealing effect of dramatically simplifying torch-IR for some models. For ONNX models specifically, it is not uncommon to see patterns that repeatedly call "onnx.Shape" on the exact same tensor, only to extract a different element from the shape each time. That redundant IR doesn't get CSE'd until conversion to a core MLIR dialect like linalg/tensor/arith.
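As an illustration (hypothetical, simplified IR, not taken from an actual model), two identical read-only shape queries like the ones below cannot be merged by CSE today, because without NoMemoryEffect the pass must conservatively assume each op may touch memory:

```mlir
// Hypothetical torch-IR: both ops query the shape of the same tensor.
// With NoMemoryEffect on the op, CSE can replace %shape1 with %shape0.
%shape0 = torch.aten.size %arg0 : !torch.vtensor<[2,3],f32> -> !torch.list<int>
%shape1 = torch.aten.size %arg0 : !torch.vtensor<[2,3],f32> -> !torch.list<int>
```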

@zjgarvey zjgarvey marked this pull request as draft November 22, 2024 23:15
@zjgarvey
Collaborator Author

zjgarvey commented Nov 22, 2024

I'm wondering whether it would be more appropriate to label ops more carefully with specific MemoryEffects. For example, should NoMemoryEffect also imply that an op doesn't allocate memory for a new output tensor, or write to that output tensor?

Please give some feedback if you have opinions on the best way forward here.

@zjgarvey zjgarvey marked this pull request as ready for review November 23, 2024 00:08
@IanWood1
Copy link
Contributor

> I'm wondering if it is more appropriate to label ops more carefully with specific MemoryEffects? Perhaps NoMemoryEffect would imply that it also doesn't allocate memory for a new output tensor, or write to the new output tensor?
>
> Please give some feedback if you have some opinions on what would be the best way forward on this.

I looked into this a while ago, and one thing I found is that several ops can throw exceptions in ways that aren't as obvious as prim.RaiseException (e.g. torch.aminmax). But I'm not sure whether respecting these errors is a goal of torch-mlir, so it may not be a problem.

Member

@kuhar kuhar left a comment


I skimmed it and it looks fine to me. I'd probably put all of these standard properties in a new def and give them a name instead of repeating them everywhere. That should make it easier to update if you realize you need to tweak things later on. For example, see how Pure is defined here: https://github.com/llvm/llvm-project/blob/132de3a71f581dcb008a124d52c83ccca8158d98/mlir/include/mlir/Interfaces/SideEffectInterfaces.td#L144C1-L146C60
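Sketched out, that grouping might look something like this (the bundle name is illustrative, not from the patch; Pure in upstream MLIR is defined the same way as a TraitList):

```tablegen
// Hypothetical: name the repeated trait combination once, then attach
// the bundle to each op definition instead of listing both traits.
def TorchReadOnlyPure : TraitList<[ReadOnly, NoMemoryEffect]>;
```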


Edit:

Actually, I don't think you need ReadOnly on most/any of these. If they only operate on tensors, there's no memory being accessed. Printing, however, would be considered a side effect.


Edit 2:

I don't know where ReadOnly comes from; I don't see it in MLIR. Is this a torch thing?

@kuhar
Member

kuhar commented Nov 23, 2024

> I'm wondering if it is more appropriate to label ops more carefully with specific MemoryEffects? Perhaps NoMemoryEffect would imply that it also doesn't allocate memory for a new output tensor, or write to the new output tensor?

Tensors are not considered memory. See https://mlir.llvm.org/docs/Rationale/SideEffectsAndSpeculation/.
