Process: Add documentation for labels, current and proposed #533
Conversation
Thanks @inexorabletash!
I dropped some unused labels, did some renames and proposed some changes to this PR accordingly. This doc will be very helpful for the group.
Given that it already warrants special consideration, we should decide and document how new operator proposals should be labeled. Ignoring operation/operator spelling for the moment, is that operation set (because it claims the current op set is incomplete and proposes an addition to it), operator specific (because it's about specific new ops), or a new label, e.g. "new operator"? If we expect this to be common, we could introduce an issue template that automatically applies labels, which would ease the triage burden.
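As a sketch of the issue-template idea above: GitHub issue forms can apply labels automatically on submission. The file path, form fields, and label name below are assumptions for illustration, not names the group has decided on.

```yaml
# .github/ISSUE_TEMPLATE/new-operator.yml (hypothetical path and label name)
name: New operator proposal
description: Propose adding an operator to the WebNN op set
labels: ["operator specific"]  # applied automatically when the issue is filed
body:
  - type: textarea
    id: use-case
    attributes:
      label: Use case
      description: Why should this operator be added to the op set?
    validations:
      required: true
```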
(force-pushed from 7caa3b1 to 00e8818)
How about if we settle on the short and sweet "opset" and "op" to sidestep the naming discussion, while still allowing us to filter the global from the specific?
I worry that "op" without a qualifier like "specific" could end up being applied to everything relating to ops. I guess that's why we have docs. I'll defer to others who know the space better than I do re: "opset" vs. "op set" vs. "operator set" etc.
IssueTriage.md (outdated):

> TODO: Consider removing these, or fit these into the documentation below.
> - **enhancement** - unclear if this is about the spec text, the proposed API, etc. Prefer **feature request** etc.
I did some further triage, and the issues with enhancement are only a few. We should probably drop this label entirely.
A good process question: do we actually want to tag "the normative spec text has a flaw" with bug, or is that considered the default?
Tagging "the normative spec text has a flaw" with bug sounds good to me. I think "lack of normative spec text" is also a bug, for example #283.
There was a request for a new webgpu interop label, so I created one; @inexorabletash, please reflect it in this PR.
MLBuffer is also useful for the chained inference use case. Should we also create a tag for it?
(force-pushed from 00e8818 to 117cf22)
Added! (I don't get the nice formatting of labels in comments, it seems.) And renamed the section to "Workstreams", but ideas welcome. Also removed the normative suggestion and added a link to the Process doc. New suggestion: drop cr and v2 and use Milestones instead, which have nice progress indicators.
Thanks, this would be a good feature-specific label for consideration. @inexorabletash, would it make sense to use a combo of a type: feature request + feature: "name" label for significant new features? This suggests we'd have a few feature-specific labels for bigger features. Are there good experiences from spec-land with this?
No open issues are associated with the cr label (we've addressed them!), so I dropped https://github.com/webmachinelearning/webnn/labels/cr 👋 The v2 label is quite popular. It means "work deferred to the next version", so this collection becomes a mixed bag. It is something we should look at from time to time, especially after each major spec milestone, to plan for the next.
What is the best example of the GH Milestones feature used well in a spec project, so we could learn from it and evaluate whether this feature would help the broader group? I used this feature for some spec projects a long time ago, so I'm asking for a more recent view on how things have improved.
Was looking at #531 and thinking: do we want to label proposed feature removals, the reverse of "feature request"? On the impact scale they're similar. Proposal: expand the semantics of "feature request" to repurpose it as follows: "suggestion for an addition to or removal from the proposed API". Feature removal is a feature.
My concern here is that a feature request is (in most cases) purely additive. It can be deferred to "v.next", allowed to linger indefinitely, wait for use cases, etc. A removal, on the other hand, needs to consider compatibility, probably needs to happen within a version, acknowledge that use cases have been de-prioritized, etc. A feature addition that doesn't happen is neutral for the long-term maintenance of implementations, but a feature removal that doesn't happen implies long-term work for implementations, tests, interop, etc. So I agree that in terms of impact on the spec itself they're similar, but the attention needed and the urgency seem quite different. I'd really hate for a suggested removal to not get the attention it deserves.
For both feature-specific labels and milestones - I'm asking around. I have anecdotes but not data. :)
(force-pushed from 117cf22 to ab35078)
From asking around, more anecdotes:
In personal projects I like using Milestones to indicate when an issue is "accepted" as a requirement for the current release or explicitly "deferred" to a future release; issues with no Milestone are undecided (so: action is required). Using labels is fine, but you don't get the nice bar graphs.

The current situation is that we have a handful of v2 issues, but (1) IMHO many of those should be considered blockers for TR, and (2) anything not marked v2 is ambiguous. Ultimately it all comes down to this: I want a clear view of what work remains and what next action is needed, so I'll be happy with anything that helps with that!

Labels for "significant new features" SGTM, on a case-by-case basis, as has happened for webgpu interop.
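For context on the "nice bar graphs": a GitHub milestone's progress bar simply reflects the fraction of the milestone's issues that are closed. A minimal sketch of that calculation (illustrative only, not GitHub's actual code):

```python
def milestone_progress(closed_issues: int, open_issues: int) -> float:
    """Completion fraction as shown by a milestone's progress bar:
    closed / (closed + open), or 0.0 for an empty milestone."""
    total = closed_issues + open_issues
    return closed_issues / total if total else 0.0

# Example: a milestone with 3 closed and 1 open issue is 75% complete.
print(f"{milestone_progress(3, 1):.0%}")
```

A plain label like v2 gives you the set of deferred issues, but no built-in indicator like this of how close a release is to done.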
(force-pushed from ab35078 to 6b92e7f)
(force-pushed from 6b92e7f to f82db60)
I dropped the proposals for "samples" and "interop" as I couldn't identify enough proposed issues in each category for it to make sense. |
Created the operator specific label and populated it per the triage in #533 (comment).
(force-pushed from 6b6453a to 71ed611)
Co-authored-by: Anssi Kostiainen <[email protected]>
(force-pushed from 71ed611 to 0d566e0)
I've updated https://github.com/webmachinelearning/webnn/labels to match. A few notes:
@inexorabletash, do you also feel like we could merge this soon? Please check things are cohesive, e.g. remove "help wanted".
* Removed TODO re: "help wanted" - label deleted
* Removed TODO re: "needs PR"/"has PR" and "needs WPT"/"has WPT" - needs more discussion
I just went through and removed usage of "enhancement" (there were only 4). #448 is an example of an issue that I'm not sure how to label; in prose I'd say "the spec is missing normative text to explain how something works". Should that just be "bug"? I think if we delete the "enhancement" label (and remove the TODO here) then this is good to merge, and we can iterate further, e.g. with guidance for issues like #448.
* Remove TODOs
TODOs removed - merge at your leisure.
For review - I left lots of notes which should get resolved before this ever merges. Editors - please make edits directly if you can!
Before merging we should also:
* create a docs/ directory to capture these files