This repository has been archived by the owner on Jun 6, 2024. It is now read-only.

Fix author-inherent, update weight #47

Merged: 3 commits merged on Apr 18, 2022

Conversation

@4meta5 (Contributor) commented Apr 7, 2022:

Follow up:

- update weight as per #41 (comment) (MOON-1613)

@@ -117,7 +117,7 @@ pub mod pallet {
 /// but before transactions are executed.
 // This should go into on_post_inherents when it is ready https://github.com/paritytech/substrate/pull/10128
 // TODO better weight. For now we just set a somewhat conservative fudge factor
-#[pallet::weight((10 * T::DbWeight::get().write, DispatchClass::Mandatory))]
+#[pallet::weight((3 * T::DbWeight::get().read + 11 * T::DbWeight::get().write, DispatchClass::Mandatory))]
Contributor commented:
I understand that this is a fudge factor, but it seems extremely high. What's the rationale for these numbers?

@4meta5 (Contributor, Author) commented Apr 8, 2022:

It was 10 writes before and now it is 11 writes. The HighestSlotSeen added 1 more write.

I do not know why the fudge factor was 7 writes; it isn't benchmarked, so the author (probably @JoshOrndorff) was presumably being conservative (and just estimated all reads + writes + computation as 10 writes).
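
For reference, here is a minimal sketch (not code from this PR) of what the annotation amounts to, assuming T::DbWeight resolves to the stock RocksDbWeight constants and the pre-weights-v2 u64 Weight type:

// Hypothetical illustration only: the 3-read / 11-write counts are the
// conservative fudge factor discussed above, not benchmarked values.
use frame_support::traits::Get;
use frame_support::weights::{constants::RocksDbWeight, Weight};

fn conservative_inherent_weight() -> Weight {
    3 * RocksDbWeight::get().read + 11 * RocksDbWeight::get().write
}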

Contributor commented:
Yeah, it was me that added that fudge factor, and it was exactly as Amar says. I just wanted something quick and safe that was better than 0.

Contributor commented:
This is tricky because it calls the SlotBeacon, which can have any implementation it wants. I think this problem has been solved in Substrate land. It's the problem of weighing functionality that is abstracted behind a trait. How do they do it for all the pallets that depend on Currency?

@4meta5 (Contributor, Author) commented Apr 8, 2022:

It's true that I was assuming T::SlotBeacon::slot() is 1 read. I think that is what most implementations will do in practice.

> This is tricky because it calls the SlotBeacon, which can have any implementation it wants.

@girazoki told me that in that case it's best for the trait that fetches the value to return (ReturnValue, WeightConsumed). So we'd need to change the SlotBeacon trait to return this tuple and change the impl to report the weight it consumed, too.
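
A rough sketch of that idea (hypothetical, not what this PR implements): the trait name, method, and one-read implementation below are illustrative stand-ins for the real SlotBeacon and whatever its implementations actually read.

// Hypothetical sketch of the (ReturnValue, WeightConsumed) pattern; the real
// SlotBeacon trait and its implementations would need an equivalent change.
use frame_support::traits::Get;
use frame_support::weights::{constants::RocksDbWeight, Weight};

pub trait WeighedSlotBeacon {
    /// Return the current slot together with the weight spent obtaining it.
    fn slot_with_weight() -> (u32, Weight);
}

/// Illustrative implementation that charges a single DB read for the slot.
pub struct OneReadBeacon;
impl WeighedSlotBeacon for OneReadBeacon {
    fn slot_with_weight() -> (u32, Weight) {
        // In a real implementation the slot would come from storage or a
        // relay-chain state proof; 0 here is a placeholder.
        let slot: u32 = 0;
        (slot, RocksDbWeight::get().read)
    }
}

The inherent could then add the weight reported by the beacon to its own DbWeight-based estimate instead of hard-coding one read.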

Collaborator commented:
I am in the process of benchmarking this; I think I have almost everything set up except for paritytech/cumulus#1187.

@@ -117,7 +117,7 @@ pub mod pallet {
 /// but before transactions are executed.
 // This should go into on_post_inherents when it is ready https://github.com/paritytech/substrate/pull/10128
 // TODO better weight. For now we just set a somewhat conservative fudge factor
-#[pallet::weight((10 * T::DbWeight::get().write, DispatchClass::Mandatory))]
+#[pallet::weight((3 * T::DbWeight::get().read + 11 * T::DbWeight::get().write, DispatchClass::Mandatory))]
Contributor Author commented:
Suggested change:
-#[pallet::weight((3 * T::DbWeight::get().read + 11 * T::DbWeight::get().write, DispatchClass::Mandatory))]
+#[pallet::weight((10 * T::DbWeight::get().write, DispatchClass::Mandatory))]

I'll revert the weight change so the objective optimization can get in.

dont change benchmarking in this PR