Follow up work on `TransactionExtension` - fix weights and clean up `UncheckedExtrinsic` #6418
Conversation
bot bench-all substrate -v PIPELINE_SCRIPTS_REF=george/substrate-ext
@georgepisaltu https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7726182 was started for your command.
Looks good to me, but one comment
// Minimum execution time: 3_876_000 picoseconds.
Weight::from_parts(4_160_000, 3509)
// Minimum execution time: 3_388_000 picoseconds.
Weight::from_parts(3_577_000, 3509)
How did you manage to not count the BlockHash storage read?
I thought we would write something like:
diff --git a/substrate/frame/system/benchmarking/src/extensions.rs b/substrate/frame/system/benchmarking/src/extensions.rs
index 2c7ffb56227..d9564feb6d9 100644
--- a/substrate/frame/system/benchmarking/src/extensions.rs
+++ b/substrate/frame/system/benchmarking/src/extensions.rs
@@ -53,6 +53,9 @@ mod benchmarks {
let caller = whitelisted_caller();
let info = DispatchInfo { call_weight: Weight::zero(), ..Default::default() };
let call: T::RuntimeCall = frame_system::Call::remark { remark: vec![] }.into();
+ frame_benchmarking::benchmarking::add_to_whitelist(
+ frame_system::BlockHash::<T>::hashed_key_for(BlockNumberFor::<T>::from(0u32)).into(),
+ );
#[block]
{
I did it with `whitelisted_caller` since it was the only read the bench did, but this is more granular, so I replaced it with your suggestion.
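For reference, here is a minimal sketch (outside any benchmark harness, with illustrative function names not taken from the PR) contrasting the two approaches discussed here: the coarse `whitelisted_caller` setup versus explicitly whitelisting the single `BlockHash` entry the bench reads.

```rust
// Sketch only: the function names are illustrative, not code from this PR.
use frame_benchmarking::{benchmarking::add_to_whitelist, whitelisted_caller};
use frame_system::pallet_prelude::BlockNumberFor;

// Coarse approach: use the framework-provided caller account, whose storage
// key is typically pre-whitelisted in the runtime's benchmark whitelist, so
// the bench's single account read is not charged to the measured weight.
fn coarse_setup<T: frame_system::Config>() -> T::AccountId {
    whitelisted_caller()
}

// Granular approach (the suggestion above): whitelist exactly the
// `BlockHash(0)` entry the extension reads, leaving every other read
// accounted for.
fn granular_setup<T: frame_system::Config>() {
    add_to_whitelist(
        frame_system::BlockHash::<T>::hashed_key_for(BlockNumberFor::<T>::from(0u32)).into(),
    );
}
```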
OK, both implementations are good to me.
Why is my suggestion wrong? There is still a read for the block hash here.
I reran the benchmark in this PR #6818 and it seems OK.
//! WASM-EXECUTION: `Compiled`, CHAIN: `Some("dev")`, DB CACHE: `1024`
// Executed Command:
// ./target/debug/substrate-node
Lol these weights were def wrong.
Yes, it's a new and currently unused extension introduced in #3685, and those weights were generated on my machine just to make sure the benchmarks work.
.saturating_add(Weight::from_parts(155_577_516, 0).saturating_mul(v.into()))
.saturating_add(T::DbWeight::get().reads(9010_u64))
.saturating_add(T::DbWeight::get().reads((9_u64).saturating_mul(v.into())))
.saturating_add(T::DbWeight::get().writes(7008_u64))
Good find, this is indeed an error: https://github.com/paritytech/polkadot-sdk/pull/6025/files#r1839635371
I didn't review the weights carefully; I will do another review.
A PR is opened: #6463
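Not from the PR, just a hedged illustration (with made-up numbers) of what a generated weight function normally separates: DB reads that are fixed per call versus reads that scale with a benchmark component `v`. A constant term as large as `reads(9010)` is a red flag that a component-scaled term was folded into the base weight during the v2 migration.

```rust
// Illustrative numbers only; not the actual staking weights.
use frame_support::{traits::Get, weights::Weight};

fn example_weight<T: frame_system::Config>(v: u32) -> Weight {
    // Base ref-time / proof-size measured by the benchmark.
    Weight::from_parts(50_000_000, 10_000)
        // Ref-time that grows with the component `v`.
        .saturating_add(Weight::from_parts(1_000_000, 0).saturating_mul(v.into()))
        // Reads performed once per call, regardless of `v`.
        .saturating_add(T::DbWeight::get().reads(10_u64))
        // Reads performed for each of the `v` items.
        .saturating_add(T::DbWeight::get().reads(9_u64.saturating_mul(v.into())))
}
```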
I normally use this tool when comparing weights (I have to run it locally when using forks). It has a CLI and a web version and outputs proof-size.pdf and ref-time.pdf. The URL would be this for the self-hosted version.
The bridge modifications look good
Follow up work on `TransactionExtension` - fix weights and clean up `UncheckedExtrinsic` (#6418)

Follow up to #3685
Partially fixes #6403

The main PR introduced bare support for the new extension version byte as well as extension weights and benchmarking. This PR:
- Removes the redundant extension version byte from the signed v4 extrinsic, previously unused and defaulted to 0.
- Adds the extension version byte to the inherited implication passed to `General` transactions.
- Whitelists the `pallet_authorship::Author`, `frame_system::Digest` and `pallet_transaction_payment::NextFeeMultiplier` storage items as they are read multiple times by extensions for each transaction, but are hot in memory and currently overestimate the weight.
- Whitelists the benchmark caller for `CheckEra` and `CheckGenesis` as the reads are performed for every transaction and overestimate the weight.
- Updates the umbrella frame weight template to work with the system extension changes.
- Plans on re-running the benchmarks at least for the `frame_system` extensions.

Signed-off-by: georgepisaltu <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: gui <[email protected]>
ae4b68b
This PR is a backport of #6418. For context, `TransactionExtension`, introduced in #3685, is part of the `stable2412` release, and this PR brings important fixes and quality of life improvements. Doing the backport now allows us to not break the interface later.

Opened against #6473 as the changes in the original PR were made on top of the changes in this backport.

Signed-off-by: georgepisaltu <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
* Migrate pallet-transaction-storage and pallet-indices to benchmark v2 (paritytech#6290) Part of: paritytech#6202 --------- Co-authored-by: Giuseppe Re <[email protected]> Co-authored-by: GitHub Action <[email protected]> * fix prospective-parachains best backable chain reversion bug (paritytech#6417) Kudos to @EclesioMeloJunior for noticing it Also added a regression test for it. The existing unit test was exercising only the case where the full chain is reverted --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> * Remove network starter that is no longer needed (paritytech#6400) # Description This seems to be an old artifact of the long closed paritytech/substrate#6827 that I noticed when working on related code earlier. ## Integration `NetworkStarter` was removed, simply remove its usage: ```diff -let (network, system_rpc_tx, tx_handler_controller, start_network, sync_service) = +let (network, system_rpc_tx, tx_handler_controller, sync_service) = build_network(BuildNetworkParams { ... -start_network.start_network(); ``` ## Review Notes Changes are trivial, the only reason for this to not be accepted is if it is desired to not start network automatically for whatever reason, in which case the description of network starter needs to change. # Checklist * [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above. * [ ] My PR follows the [labeling requirements]( https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process ) of this project (at minimum one label for `T` required) * External contributors: ask maintainers to put the right label on your PR. --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> * `fatxpool`: size limits implemented (paritytech#6262) This PR adds size-limits to the fork-aware transaction pool. **Review Notes** - Existing [`TrackedMap`](https://github.com/paritytech/polkadot-sdk/blob/58fd5ae4ce883f42c360e3ad4a5df7d2258b42fe/substrate/client/transaction-pool/src/graph/tracked_map.rs#L33-L41) is used in internal mempool to track the size of extrinsics: https://github.com/paritytech/polkadot-sdk/blob/58fd5ae4ce883f42c360e3ad4a5df7d2258b42fe/substrate/client/transaction-pool/src/graph/tracked_map.rs#L33-L41 - In this PR, I also removed the logic that kept transactions in the `tx_mem_pool` if they were immediately dropped by the views. Initially, I implemented this as an improvement: if there was available space in the _mempool_ and all views dropped the transaction upon submission, the transaction would still be retained in the _mempool_. However, upon further consideration, I decided to remove this functionality to reduce unnecessary complexity. Now, when all views drop a transaction during submission, it is automatically rejected, with the `submit/submit_and_watch` call returning `ImmediatelyDropped`. Closes: paritytech#5476 --------- Co-authored-by: Sebastian Kunert <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> * pallet-membership: Do not verify the `MembershipChanged` in bechmarks (paritytech#6439) There is no need to verify in the `pallet-membership` benchmark that the `MemembershipChanged` implementation works as the pallet thinks it should work. If you for example set it to `()`, `get_prime()` will always return `None`. TLDR: Remove the checks of `MembershipChanged` in the benchmarks to support any kind of implementation. 
--------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> * add FeeManager to pallet xcm (paritytech#5363) Closes paritytech#2082 change send xcm to use `xcm::executor::FeeManager` to determine if the sender should be charged. I had to change the `FeeManager` of the penpal config to ensure the same test behaviour as before. For the other tests, I'm using the `FeeManager` from the `xcm::executor::FeeManager` as this one is used to check if the fee can be waived on the charge fees method. --------- Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: GitHub Action <[email protected]> * Use relay chain block number in the broker pallet instead of block number (paritytech#5656) Based on paritytech#3331 Related to paritytech#3268 Implements migrations with customizable block number to relay height number translation function. Adds block to relay height migration code for rococo and westend. --------- Co-authored-by: DavidK <[email protected]> Co-authored-by: Kian Paimani <[email protected]> * migrate pallet-nft-fractionalization to benchmarking v2 syntax (paritytech#6301) Migrates pallet-nft-fractionalization to benchmarking v2 syntax. Part of: * paritytech#6202 --------- Co-authored-by: Giuseppe Re <[email protected]> Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> * [pallet-revive] adjust fee dry-run calculation (paritytech#6393) - Fix bare_eth_transact so that it estimate more precisely the transaction fee - Add some context to the build.rs to make it easier to troubleshoot errors - Add TransactionBuilder for the RPC tests. - Improve error message, proxy rpc error from the node and handle reverted error message - Add logs in ReceiptInfo --------- Co-authored-by: GitHub Action <[email protected]> * NoOp Impl Polling Trait (paritytech#5311) Adds NoOp implementation for the `Polling` trait and updates benchmarks in `pallet-ranked-collective`. --------- Co-authored-by: Oliver Tale-Yazdi <[email protected]> * Migrate pallet-child-bounties benchmark to v2 (paritytech#6310) Part of: - paritytech#6202. --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Giuseppe Re <[email protected]> * Introduce `ConstUint` to make dependent types in `DefaultConfig` more adaptable (paritytech#6425) # Description Resolves paritytech#6193 This PR introduces `ConstUint` as a replacement for existing constant getter types like `ConstU8`, `ConstU16`, etc., providing a more flexible and unified approach. ## Integration This update is backward compatible, so developers can choose to adopt `ConstUint` in new implementations or continue using the existing types as needed. ## Review Notes `ConstUint` is a convenient alternative to `ConstU8`, `ConstU16`, and similar types, particularly useful for configuring `DefaultConfig` in pallets. It enables configuring the underlying integer for a specific type without the need to update all dependent types, offering enhanced flexibility in type management. # Checklist * [x] My PR includes a detailed description as outlined in the "Description" and its two subsections above. * [ ] My PR follows the [labeling requirements]( https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process ) of this project (at minimum one label for `T` required) * External contributors: ask maintainers to put the right label on your PR. 
* [ ] I have made corresponding changes to the documentation (if applicable) * [ ] I have added tests that prove my fix is effective or that my feature works (if applicable) * Use type alias for transactions (paritytech#6431) Very tiny change that helps with debugging of transactions propagation by referring to the same type alias not only at receiving side, but also on the sending size for symmetry * [Release|CI/CD] Fix audiences changelog template (paritytech#6444) This PR addresses an issue mentioned [here](paritytech#6424 (comment)). The problem was that when the prdoc file has two audiences, but only one description like in [prdoc_5660](https://github.com/paritytech/polkadot-sdk/blob/master/prdoc/1.16.0/pr_5660.prdoc) it was ignored by the template. * XCMv5: add ExecuteWithOrigin instruction (paritytech#6304) Added `ExecuteWithOrigin` instruction according to the old XCM RFC 38: polkadot-fellows/xcm-format#38. This instruction allows you to descend or clear while going back again. ## TODO - [x] Implementation - [x] Unit tests - [x] Integration tests - [x] Benchmarks - [x] PRDoc ## Future work Modify `WithComputedOrigin` barrier to allow, for example, fees to be paid with a descendant origin using this instruction. --------- Signed-off-by: Adrian Catangiu <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: Andrii <[email protected]> Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: Joseph Zhao <[email protected]> Co-authored-by: Nazar Mokrynskyi <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Shawn Tabrizi <[email protected]> Co-authored-by: command-bot <> * rpc server: fix host filter for localhost on ipv6 (paritytech#6454) This PR fixes an issue that I discovered using connecting to the RPC via localhost using cURL, where cURL tries to connect to via ipv6 before ipv4 when querying `localhost` which messed up the http host filter whereas it would connect to the address `[::1]::9944 host_header: localhost:9944` but the ipv6 interface only whitelisted `[::1]:9944` which this fixes. So let's whitelist all localhost interfaces to avoid such weird edge-cases. ### Behavior before this PR ```bash $ polkadot --chain westend-dev & $ curl -v \ -H 'Content-Type: application/json' \ -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \ http://localhost:9944 * Host localhost:9944 was resolved. * IPv6: ::1 * IPv4: 127.0.0.1 * Trying [::1]:9944... * Connected to localhost (::1) port 9944 > POST / HTTP/1.1 > Host: localhost:9944 > User-Agent: curl/8.5.0 > Accept: */* > Content-Type: application/json > Content-Length: 50 > < HTTP/1.1 403 Forbidden < content-type: text/plain < content-length: 41 < date: Tue, 12 Nov 2024 13:03:49 GMT < Provided Host header is not whitelisted. * Connection #0 to host localhost left intact ``` ### Behavior after this PR ```bash $ polkadot --chain westend-dev & ➜ wasm-tests (update-artifacts-1731284930) ✗ curl -v \ -H 'Content-Type: application/json' \ -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \ http://localhost:9944 * Host localhost:9944 was resolved. * IPv6: ::1 * IPv4: 127.0.0.1 * Trying [::1]:9944... 
* Connected to localhost (::1) port 9944 > POST / HTTP/1.1 > Host: localhost:9944 > User-Agent: curl/8.5.0 > Accept: */* > Content-Type: application/json > Content-Length: 50 > < HTTP/1.1 200 OK < content-type: application/json; charset=utf-8 < vary: origin, access-control-request-method, access-control-request-headers < content-length: 54 < date: Tue, 12 Nov 2024 13:02:57 GMT < * Connection #0 to host localhost left intact {"jsonrpc":"2.0","id":"id","result":"Parity Polkadot"}% ``` --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: command-bot <> * [pallet-revive] eth-rpc fixes (paritytech#6453) - Breaking down the integration-test into multiple tests - Fix tx hash to use expected keccak-256 - Add option to ethers.js example to connect to westend and use a private key --------- Co-authored-by: GitHub Action <[email protected]> * Remove debug message about pruning active leaves (paritytech#6440) # Description The debug message was added to identify a potential memory leak. However, recent observations show that pruning works as expected. Therefore, it is best to remove this line, as it generates quite annoying logs. ## Integration Doesn't affect downstream projects. --------- Co-authored-by: GitHub Action <[email protected]> * [Tx ext stage 2: 1/4] Add `TransactionSource` as argument in `TransactionExtension::validate` (paritytech#6323) ## Meta This PR is part of 4 PR: * paritytech#6323 * paritytech#6324 * paritytech#6325 * paritytech#6326 ## Description One goal of transaction extension is to get rid or unsigned transactions. But unsigned transaction validation has access to the `TransactionSource`. The source is used for unsigned transactions that the node trust and don't want to pay upfront. Instead of using transaction source we could do: the transaction is valid if it is signed by the block author, conceptually it should work, but it doesn't look so easy. This PR add `TransactionSource` to the validate function for transaction extensions * remove pallet::getter from pallet-staking (paritytech#6184) # Description Part of paritytech#3326 Removes all pallet::getter occurrences from pallet-staking and replaces them with explicit implementations. Adds tests to verify that retrieval of affected entities works as expected so via storage::getter. ## Review Notes 1. Traits added to the `derive` attribute are used in tests (either directly or indirectly). 2. The getters had to be placed in a separate impl block since the other one is annotated with `#[pallet::call]` and that requires `#[pallet::call_index(0)]` annotation on each function in that block. So I thought it's better to separate them. --------- Co-authored-by: Dónal Murray <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> * Refactor pallet `society` (paritytech#6367) - [x] Removing `without_storage_info` and adding bounds on the stored types for pallet `society` - issue paritytech#6289 - [x] Migrating to benchmarking V2 - paritytech#6202 --------- Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Muharem <[email protected]> * frame-benchmarking: Use correct components for pallet instances (paritytech#6435) When using multiple instances of the same pallet, each instance was executed with the components of all instances. While actually each instance should only be executed with the components generated for the particular instance. The problem here was that in the runtime only the pallet-name was used to determine if a certain pallet should be benchmarked. 
When using instances, the pallet name is the same for both of these instances. The solution is to also take the instance name into account. The fix requires to change the `Benchmark` runtime api to also take the `instance`. The node side is written in a backwards compatible way to also support runtimes which do not yet support the `instance` parameter. --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: clangenb <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> * Get rid of `libp2p` dependency in `sc-authority-discovery` (paritytech#5842) ## Issue paritytech#4859 ## Description This PR removes `libp2p` types in authority-discovery, and replace them with network backend agnostic types from `sc-network-types`. The `sc-network` interface is therefore updated accordingly. --------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]> * backing: improve session buffering for runtime information (paritytech#6284) ## Issue [[paritytech#3421] backing: improve session buffering for runtime information](paritytech#3421) ## Description In the current implementation of the backing module, certain pieces of information, which remain unchanged throughout a session, are fetched multiple times via runtime API calls. The goal of this task was to introduce a local cache to store such session-stable information and perform the runtime API call only once per session. This PR implements caching specifically for the validators list, node features, executor parameters, minimum backing votes threshold, and validator-to-group mapping, which were previously fetched from the runtime or computed each time `PerRelayParentState` was built. Now, this information is cached and reused within the session. ## TODO * [X] Create a separate struct for per-session caches; * [X] Cache validators list; * [X] Cache node features; * [X] Cache executor parameters; * [X] Cache minimum backing votes threshold; * [X] Cache validator-to-group mapping; * [X] Update tests to reflect these changes; * [X] Add prdoc. ## For the next PR Cache validator groups and any other session-stable data (if present). * Add litep2p network protocol benches (paritytech#6455) # Description Add support to run networking protocol benchmarks with litep2p backend. Now we can compare the work of both libp2p and litep2p backends for notifications and request-response protocols. Next step: extract worker initialization from the benchmark loop. ### Example run on local machine <img width="916" alt="image" src="https://github.com/user-attachments/assets/6bb9f90a-76a4-417e-b9d3-db27aa8a356f"> ## Integration Does not affect downstream projects. ## Review Notes https://github.com/paritytech/polkadot-sdk/blob/d4d9502538e8a940b809ecc77843af3cea101e19/substrate/client/network/src/litep2p/service.rs#L510-L520 This method should be implemented to run request benchmarks. 
--------- Co-authored-by: GitHub Action <[email protected]> * Fixed bridges zombienet tests because of removed NetworkId::Rococo/Westend from xcm::v5 (paritytech#6465) Closes: paritytech#6449 * Fix staking benchmark (paritytech#6463) Found by @ggwpez Fix staking benchmark, error was introduced when migrating to v2: paritytech#6025 --------- Co-authored-by: GitHub Action <[email protected]> * add pipeline to build runtimes * Follow up work on `TransactionExtension` - fix weights and clean up `UncheckedExtrinsic` (paritytech#6418) Follow up to paritytech#3685 Partially fixes paritytech#6403 The main PR introduced bare support for the new extension version byte as well as extension weights and benchmarking. This PR: - Removes the redundant extension version byte from the signed v4 extrinsic, previously unused and defaulted to 0. - Adds the extension version byte to the inherited implication passed to `General` transactions. - Whitelists the `pallet_authorship::Author`, `frame_system::Digest` and `pallet_transaction_payment::NextFeeMultiplier` storage items as they are read multiple times by extensions for each transaction, but are hot in memory and currently overestimate the weight. - Whitelists the benchmark caller for `CheckEra` and `CheckGenesis` as the reads are performed for every transaction and overestimate the weight. - Updates the umbrella frame weight template to work with the system extension changes. - Plans on re-running the benchmarks at least for the `frame_system` extensions. --------- Signed-off-by: georgepisaltu <[email protected]> Co-authored-by: command-bot <> Co-authored-by: gui <[email protected]> * feat: add workflow to test readme generation (paritytech#6359) # Description Created a workflow to search for README.docify.md in the repo, and run cargo build --features generate-readme in the dir of the file (assuming it is related to a crate). If the git diff shows some output for the README.md, then the file update wasn't pushed on the branch, and the workflow fails. Closes paritytech#6331 ## Integration Downstream projects that want to adopt this README checking workflow should: 1. Copy the `.github/workflows/readme-check.yml` file to their repository 2. Ensure any `README.docify.md` files in their project follow the expected format 3. Implement the `generate-readme` feature flag in their Cargo.toml if not already present ## Review Notes This PR adds a GitHub Actions workflow that automatically verifies README.md files are up-to-date with their corresponding README.docify.md sources. 
Key implementation details: - The workflow runs on both PRs and pushes to main - It finds all `README.docify.md` files recursively in the repository - For each file found: - Builds the project with `--features generate-readme` in that directory - Checks if the README.md has any uncommitted changes - Fails if any README.md is out of sync --------- Co-authored-by: Alexander Samusev <[email protected]> Co-authored-by: Iulian Barbu <[email protected]> * [pallet-revive] set logs_bloom (paritytech#6460) Set the logs_bloom in the transaction receipt --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Cyrill Leutwiler <[email protected]> * Support more types in TypeWithDefault (paritytech#6411) # Description When using `TypeWithDefault<u32, ..>` as the default nonce provider to overcome the [replay attack](https://wiki.polkadot.network/docs/transaction-attacks#replay-attack) issue, it fails to compile due to `TypeWithDefault<u32, ..>: TryFrom<u64>` is not satisfied (which is required by trait `BaseArithmetic`). This is because the blanket implementation `TryFrom<U> for T where U: Into<T>` only impl `TryFrom<u16>` and `TryFrom<u8>` for `u32` since `u32` only impl `Into` for `u16` and `u8` but not `u64`. This PR fixes the issue by adding `TryFrom<u16/u32/u64/u128>` and `From<u8/u16/u32/u64/u128>` impl (using macro) for `TypeWithDefault<u8/u16/u32/u64/u128, ..>` and removing the blanket impl (otherwise the compiler will complain about conflicting impl), such that `TypeWithDefault<u8/u16/u32/u64/u128, ..>: AtLeast8/16/32Bit` is satisfied. ## Integration This PR adds support to more types to be used with `TypeWithDefault`, existing code that used `u64` with `TypeWithDefault` should not be affected, an unit test is added to ensure that. ## Review Notes This PR simply makes `TypeWithDefault<u8/u16/u32/u64/u128, ..>: AtLeast8/16/32Bit` satisfied --------- Signed-off-by: linning <[email protected]> * [pallet-revive] use evm decimals in call host fn (paritytech#6466) This PR update the pallet to use the EVM 18 decimal balance in contracts call and host functions instead of the native balance. It also updates the js example to add the piggy-bank solidity contract that expose the problem --------- Co-authored-by: GitHub Action <[email protected]> * network/litep2p: Update litep2p network backend to version 0.8.1 (paritytech#6484) This PR updates the litep2p backend to version 0.8.1 from 0.8.0. - Check the [litep2p updates forum post](https://forum.polkadot.network/t/litep2p-network-backend-updates/9973/3) for performance dashboards. - Check [litep2p release notes](paritytech/litep2p#288) The v0.8.1 release includes key fixes that enhance the stability and performance of the litep2p library. The focus is on long-running stability and improvements to polling mechanisms. ### Long Running Stability Improvements This issue caused long-running nodes to reject all incoming connections, impacting overall stability. Addressed a bug in the connection limits functionality that incorrectly tracked connections due for rejection. This issue caused an artificial increase in inbound peers, which were not being properly removed from the connection limit count. This fix ensures more accurate tracking and management of peer connections [paritytech#286](paritytech/litep2p#286). 
### Polling implementation fixes This release provides multiple fixes to the polling mechanism, improving how connections and events are processed: - Resolved an overflow issue in TransportContext’s polling index for streams, preventing potential crashes ([paritytech#283](paritytech/litep2p#283)). - Fixed a delay in the manager’s poll_next function that prevented immediate polling of newly added futures ([paritytech#287](paritytech/litep2p#287)). - Corrected an issue where the listener did not return Poll::Ready(None) when it was closed, ensuring proper signal handling ([paritytech#285](paritytech/litep2p#285)). ### Fixed - manager: Fix connection limits tracking of rejected connections ([paritytech#286](paritytech/litep2p#286)) - transport: Fix waking up on filtered events from `poll_next` ([paritytech#287](paritytech/litep2p#287)) - transports: Fix missing Poll::Ready(None) event from listener ([paritytech#285](paritytech/litep2p#285)) - manager: Avoid overflow on stream implementation for `TransportContext` ([paritytech#283](paritytech/litep2p#283)) - manager: Log when polling returns Ready(None) ([paritytech#284](paritytech/litep2p#284)) ### Testing Done Started kusama nodes running side by side with a higher number of inbound and outbound connections (500). We previously tested with peers bounded at 50. This testing filtered out the fixes included in the latest release. With this high connection testing setup, litep2p outperforms libp2p in almost every domain, from performance to the warnings / errors encountered while operating the nodes. TLDR: this is the version we need to test on kusama validators next - Litep2p Repo | Count | Level | Triage report -|-|-|- polkadot-sdk | 409 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Peer disconnected with inflight after backoffs. Banned, disconnecting. ) litep2p | 128 | warn | Refusing to add known address that corresponds to a different peer ID litep2p | 54 | warn | inbound identify substream opened for peer who doesn't exist polkadot-sdk | 7 | error | 💔 Called `on_validated_block_announce` with a bad peer ID .* polkadot-sdk | 1 | warn | ❌ Error while dialing .*: .* polkadot-sdk | 1 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Invalid justification. Banned, disconnecting. ) - Libp2p Repo | Count | Level | Triage report -|-|-|- polkadot-sdk | 1023 | warn | 💔 Ignored block \(#.* -- .*\) announcement from .* because all validation slots are occupied. polkadot-sdk | 472 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Unsupported protocol. Banned, disconnecting. ) polkadot-sdk | 379 | error | 💔 Called `on_validated_block_announce` with a bad peer ID .* polkadot-sdk | 163 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Invalid justification. Banned, disconnecting. ) polkadot-sdk | 116 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Peer disconnected with inflight after backoffs. Banned, disconnecting. ) polkadot-sdk | 83 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Same block request multiple times. Banned, disconnecting. ) polkadot-sdk | 4 | warn | Re-finalized block #.* \(.*\) in the canonical chain, current best finalized is #.* polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Genesis mismatch. Banned, disconnecting. ) polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned, disconnecting. ( Not requested block data. Banned, disconnecting. 
) polkadot-sdk | 2 | warn | Can't listen on .* because: .* polkadot-sdk | 1 | warn | ❌ Error while dialing .*: .* --------- Signed-off-by: Alexandru Vasile <[email protected]> * sp-trie: minor fix to avoid possible panic during node decoding (paritytech#6486) # Description This PR is a simple fix consisting of adding a check to the process of decoding nodes of a storage proof to avoid panicking when receiving badly-constructed proofs, returning an error instead. This would close paritytech#6485 ## Integration No changes have to be done downstream, and as such the version bump should be minor. --------- Co-authored-by: Bastian Köcher <[email protected]> * migrate pallet-nomination-pool-benchmarking to benchmarking syntax v2 (paritytech#6302) Migrates pallet-nomination-pool-benchmarking to benchmarking syntax v2. Part of: * paritytech#6202 --------- Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Giuseppe Re <[email protected]> * Migrate some pallets to benchmark v2 (paritytech#6311) Part of paritytech#6202 --------- Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Giuseppe Re <[email protected]> * Mention that account might still be required in doc for feeless if. (paritytech#6490) Co-authored-by: Bastian Köcher <[email protected]> * Pure state sync refactoring (part-1) (paritytech#6249) This pure refactoring of state sync is preparing for paritytech#4. As the rough plan in paritytech#4 (comment), there will be two PRs for the state sync refactoring. This first PR focuses on isolating the function `process_state_key_values()` as the central point for storing received state data in memory. This function will later be adapted to forward the state data directly to the DB layer for persistent sync. A follow-up PR will handle the encapsulation of `StateSyncMetadata` to support this persistent storage. Although there are many commits in this PR, each commit is small and intentionally incremental to facilitate a smoother review, please review them commit by commit. Each commit should represent an equivalent rewrite of the existing logic, with one exception paritytech@bb447b2, which has a slight deviation from the original but is correct IMHO. Please give this commit special attention during the review. * [WIP][ci] Add worfklow stopper (paritytech#4551) PR to implements workflow stopper - a custom solution to stop all workflows if one of a required jobs failed. Previously we had the same solution in GitLab and it saved a lot of compute. Because GitHub doesn't have one united pipeline and instead it has multiple workflows something like this has to be implemented. cc paritytech/ci_cd#939 * Remove `ProspectiveParachainsMode` usage in backing subsystem (paritytech#6215) Since async backing parameters runtime api is released on all networks the code in backing subsystem can be simplified by removing the usages of `ProspectiveParachainsMode` and keeping only the branches of the code under `ProspectiveParachainsMode::Enabled`. The PR does that and reworks the tests in mod.rs to use async backing. It's a preparation for paritytech#5079 --------- Co-authored-by: Alin Dima <[email protected]> Co-authored-by: command-bot <> * sp-runtime: Be a little bit more functional :D (paritytech#6526) Co-authored-by: GitHub Action <[email protected]> * `TransactionPool` API uses `async_trait` (paritytech#6528) This PR refactors `TransactionPool` API to use `async_trait`, replacing the` Pin<Box<...>>` pattern. 
This should improve readability and maintainability. The change is not altering any functionality. --------- Co-authored-by: GitHub Action <[email protected]> * sp-trie: correctly avoid panicking when decoding bad compact proofs (paritytech#6502) # Description Opening another PR because I added a test to check for my fix pushed in paritytech#6486 and realized that for some reason I completely forgot how to code and did not fix the underlying issue, since out-of-bounds indexing could still happen even with the check I added. This one should fix that and, as an added bonus, has a simple test used as an integrity check to make sure future changes don't accidently revert this fix. Now `sp-trie` should definitely not panic when faced with bad `CompactProof`s. Sorry about that 😅 This, like paritytech#6486, is related to issue paritytech#6485 ## Integration No changes have to be done downstream, and as such the version bump should be minor. --------- Co-authored-by: Bastian Köcher <[email protected]> * [pallet-revive] Update delegate_call to accept address and weight (paritytech#6111) Enhance the `delegate_call` function to accept an `address` target parameter instead of a `code_hash`. This allows direct identification of the target contract using the provided address. Additionally, introduce parameters for specifying a customizable `ref_time` limit and `proof_size` limit, thereby improving flexibility and control during contract interactions. --------- Co-authored-by: Alexander Theißen <[email protected]> * Fix metrics not shutting down if there are open connections (paritytech#6220) Fix prometheus metrics not shutting down if there are open connections. I fixed the same issue in the past but it broke again after a dependecy upgrade. See also: paritytech#1637 * Validator Re-Enabling (paritytech#5724) Aims to implement Stage 3 of Validator Disbling as outlined here: paritytech#4359 Features: - [x] New Disabling Strategy (Staking level) - [x] Re-enabling logic (Session level) - [x] More generic disabling decision output - [x] New Disabling Events Testing & Security: - [x] Unit tests - [x] Mock tests - [x] Try-runtime checks - [x] Try-runtime tested on westend snap - [x] Try-runtime CI tests - [ ] Re-enabling Zombienet Test (?) - [ ] SRLabs Audit Closes paritytech#4745 Closes paritytech#2418 --------- Co-authored-by: ordian <[email protected]> Co-authored-by: Ankan <[email protected]> Co-authored-by: Tsvetomir Dimitrov <[email protected]> * Migrate pallet-democracy benchmarks to benchmark v2 syntax (paritytech#6509) # Description Migrates pallet-democracy benchmarks to benchmark v2 syntax This is Part of paritytech#6202 --------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: command-bot <> Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]> * Forward logging directives to Polkadot workers (paritytech#6534) This pull request forward all the logging directives given to the node via `RUST_LOG` or `-l` to the workers, instead of only forwarding `RUST_LOG`. --------- Co-authored-by: GitHub Action <[email protected]> * Support block gap created by fast sync (paritytech#5703) This is part 2 of paritytech#5406 (comment), properly handling the block gap generated during fast sync. Although paritytech#5406 remains unresolved due to the known issues in paritytech#5663, I decided to open up this PR earlier than later to speed up the overall progress. I've tested the fast sync locally with this PR, and it appears to be functioning well. 
(I was doing a fast sync from a discontinued archive node locally, thus the issue highlighted in paritytech#5663 (comment) was bypassed exactly.) Once the edge cases in paritytech#5663 are addressed, we can move forward by removing the body attribute from the LightState block request and complete the work on paritytech#5406. The changes in this PR are incremental, so reviewing commit by commit should provide the best clarity. cc @dmitry-markin --------- Co-authored-by: Bastian Köcher <[email protected]> * Pure state sync refactoring (part-2) (paritytech#6521) This PR is the second part of the pure state sync refactoring, encapsulating `StateSyncMetadata` as a separate entity. Now it's pretty straightforward what changes are needed for the persistent state sync as observed in the struct `StateSync`: - `state`: redirect directly to the DB layer instead of being accumulated in the memory. - `metadata`: handle the state sync metadata on disk whenever the state is forwarded to the DB, resume an ongoing state sync on a restart, etc. --------- Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]> * Add and test events in `pallet-conviction-voting` (paritytech#6544) # Description paritytech#4613 introduced events for `pallet_conviction_voting::{vote, remove_vote, remove_other_vote}`. However: 1. it did not include `unlock` 2. the pallet's unit tests were missing an update ## Integration N/A ## Review Notes This is as paritytech#6261 was, so it is a trivial change. * Increase default trie cache size to 1GiB (paritytech#6546) The default trie cache size before was set to `64MiB`, which is quite low to achieve real speed ups. `1GiB` should be a reasonable number as the requirements for validators/collators/full nodes are much higher when it comes to minimum memory requirements. Also the cache will not use `1GiB` from the start and fills over time. The setting can be changed by setting `--trie-cache-size BYTE_SIZE`. --------- Co-authored-by: GitHub Action <[email protected]> * Bridges testing improvements (paritytech#6536) This PR includes: - Refactored integrity tests to support standalone deployment of `pallet-bridge-messages`. - Refactored the `open_and_close_bridge_works` test case to support multiple scenarios, such as: 1. A local chain opening a bridge. 2. Sibling parachains opening a bridge. 3. The relay chain opening a bridge. - Previously, we added instance support for `pallet-bridge-relayer` but overlooked updating the `DeliveryConfirmationPaymentsAdapter`. --------- Co-authored-by: GitHub Action <[email protected]> * Migrate pallet-scheduler benchmark to v2 (paritytech#6292) Part of: - paritytech#6202. --------- Signed-off-by: Xavier Lau <[email protected]> Co-authored-by: Giuseppe Re <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> * Removes constraint in `BlockNumberProvider` from treasury (paritytech#6522) paritytech#3970 updated the treasury pallet to support relay chain block number provider. 
However, it added a constraint to the BlockNumberProvider to have the same block number type as frame_system: ```rust type BlockNumberProvider: BlockNumberProvider<BlockNumber = BlockNumberFor<Self>>; ``` This PR removes that constraint as suggested by @gui1117 * add profile * exclude trigger on push --------- Signed-off-by: Adrian Catangiu <[email protected]> Signed-off-by: georgepisaltu <[email protected]> Signed-off-by: linning <[email protected]> Signed-off-by: Alexandru Vasile <[email protected]> Signed-off-by: Xavier Lau <[email protected]> Co-authored-by: Joseph Zhao <[email protected]> Co-authored-by: Giuseppe Re <[email protected]> Co-authored-by: GitHub Action <[email protected]> Co-authored-by: Alin Dima <[email protected]> Co-authored-by: Bastian Köcher <[email protected]> Co-authored-by: Nazar Mokrynskyi <[email protected]> Co-authored-by: Michal Kucharczyk <[email protected]> Co-authored-by: Sebastian Kunert <[email protected]> Co-authored-by: Adrian Catangiu <[email protected]> Co-authored-by: jpserrat <[email protected]> Co-authored-by: davidk-pt <[email protected]> Co-authored-by: DavidK <[email protected]> Co-authored-by: Kian Paimani <[email protected]> Co-authored-by: clangenb <[email protected]> Co-authored-by: PG Herveou <[email protected]> Co-authored-by: Doordashcon <[email protected]> Co-authored-by: Oliver Tale-Yazdi <[email protected]> Co-authored-by: Xavier Lau <[email protected]> Co-authored-by: Jeeyong Um <[email protected]> Co-authored-by: Francisco Aguirre <[email protected]> Co-authored-by: Andrii <[email protected]> Co-authored-by: Branislav Kontur <[email protected]> Co-authored-by: Shawn Tabrizi <[email protected]> Co-authored-by: Niklas Adolfsson <[email protected]> Co-authored-by: Andrei Eres <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Michał Gil <[email protected]> Co-authored-by: Dónal Murray <[email protected]> Co-authored-by: Muharem <[email protected]> Co-authored-by: Kazunobu Ndong <[email protected]> Co-authored-by: Dmitry Markin <[email protected]> Co-authored-by: Alexandru Vasile <[email protected]> Co-authored-by: Stephane Gurgenidze <[email protected]> Co-authored-by: georgepisaltu <[email protected]> Co-authored-by: Viraj Bhartiya <[email protected]> Co-authored-by: Alexander Samusev <[email protected]> Co-authored-by: Iulian Barbu <[email protected]> Co-authored-by: Cyrill Leutwiler <[email protected]> Co-authored-by: NingLin-P <[email protected]> Co-authored-by: Tobi Demeco <[email protected]> Co-authored-by: Guillaume Thiolliere <[email protected]> Co-authored-by: Liu-Cheng Xu <[email protected]> Co-authored-by: Tsvetomir Dimitrov <[email protected]> Co-authored-by: Ermal Kaleci <[email protected]> Co-authored-by: Alexander Theißen <[email protected]> Co-authored-by: tmpolaczyk <[email protected]> Co-authored-by: Maciej <[email protected]> Co-authored-by: ordian <[email protected]> Co-authored-by: Ankan <[email protected]> Co-authored-by: Alexandre R. Baldé <[email protected]> Co-authored-by: Xavier Lau <[email protected]> Co-authored-by: gupnik <[email protected]>
Follow up to #3685
Partially fixes #6403
The main PR introduced bare support for the new extension version byte as well as extension weights and benchmarking.
This PR:
- Removes the redundant extension version byte from the signed v4 extrinsic, previously unused and defaulted to 0.
- Adds the extension version byte to the inherited implication passed to `General` transactions.
- Whitelists the `pallet_authorship::Author`, `frame_system::Digest` and `pallet_transaction_payment::NextFeeMultiplier` storage items as they are read multiple times by extensions for each transaction, but are hot in memory and currently overestimate the weight (see the sketch after this list).
- Whitelists the benchmark caller for `CheckEra` and `CheckGenesis` as the reads are performed for every transaction and overestimate the weight.
- Updates the umbrella frame weight template to work with the system extension changes.
- Plans on re-running the benchmarks at least for the `frame_system` extensions.
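As a rough sketch of the storage-whitelisting idea above (not code from this PR; the pallet prefix strings are assumptions matching the usual `construct_runtime!` names, and the function name is illustrative), the hot storage values can be excluded from DB-read accounting like this:

```rust
// Sketch only: pallet prefix names are assumptions, not taken from the PR.
use frame_benchmarking::benchmarking::add_to_whitelist;
use frame_support::storage::storage_prefix;

fn whitelist_hot_extension_storage() {
    // `Author`, `Digest` and `NextFeeMultiplier` are read by the extensions
    // for every transaction but stay hot in memory, so charging each read as
    // a cold DB access would overestimate the extension weights.
    for (pallet, item) in [
        (&b"Authorship"[..], &b"Author"[..]),
        (&b"System"[..], &b"Digest"[..]),
        (&b"TransactionPayment"[..], &b"NextFeeMultiplier"[..]),
    ] {
        // For a `StorageValue`, the full key is just twox128(pallet) ++ twox128(item).
        add_to_whitelist(storage_prefix(pallet, item).to_vec().into());
    }
}
```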