If you ever want to force the indexer to re-index some past transactions and manually update `last_processed_block` in `crawler_state` where `name = 'transactions'`, then, because the insert does not use `on_conflict_do_nothing`, you can put the indexer into an infinite loop.
Example:
First, update the `crawler_state` to "rewind history":
```
namada-indexer=# UPDATE crawler_state set last_processed_block = 38 where name = 'transactions';
UPDATE 1
```
Now, run the `transactions` crawler:
```
2024-12-06T18:02:07.412603Z INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:07.423771Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"

2024-12-06T18:02:10.686714Z INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:10.690671Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"

2024-12-06T18:02:14.555980Z INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:14.564664Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"

... forever looping, never recovering ...
```
In this case, I'd rather it simply ignore the constraint violation (since already-indexed transactions should never change on-chain, we should not `UPDATE` them) and move on without failing (see the first sketch below).
I think this scenario can also occur if you were running two instances of the `transactions` crawler simultaneously (which you might want to do for HA purposes); see the second sketch below.
Originally posted by @joel-u410 in #81 (comment)
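As a rough illustration of the requested change, here is a minimal sketch using Diesel's `.on_conflict_do_nothing()`. The table layout, struct, and function names are assumptions for the example (the real `wrapper_transactions` table has more columns), not the indexer's actual definitions:

```rust
use diesel::prelude::*;

// Illustrative schema; only the primary key matters for this example.
diesel::table! {
    wrapper_transactions (id) {
        id -> Text,
        block_height -> Int4,
    }
}

#[derive(Insertable)]
#[diesel(table_name = wrapper_transactions)]
struct NewWrapperTx {
    id: String,
    block_height: i32,
}

fn insert_wrapper_txs(
    conn: &mut PgConnection,
    txs: &[NewWrapperTx],
) -> QueryResult<usize> {
    diesel::insert_into(wrapper_transactions::table)
        .values(txs)
        // Rows whose primary key already exists are skipped silently instead
        // of raising "duplicate key value violates unique constraint", so
        // re-crawling an already-indexed block becomes a no-op.
        .on_conflict_do_nothing()
        .execute(conn)
}
```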
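Continuing the same assumptions, the crawler loop could then treat a zero-row insert as "already indexed" and still advance its checkpoint, which is also what makes two concurrent crawler instances safe: whichever instance loses the race inserts nothing and keeps moving. `crawler_state` is simplified here to the two columns the example needs, and `process_block` is a hypothetical helper, not the crawler's real entry point:

```rust
// Simplified checkpoint table; the real crawler_state has more columns.
diesel::table! {
    crawler_state (name) {
        name -> Text,
        last_processed_block -> Int4,
    }
}

fn process_block(
    conn: &mut PgConnection,
    block: i32,
    txs: &[NewWrapperTx],
) -> QueryResult<()> {
    // With the idempotent insert, a duplicate block inserts zero rows
    // instead of erroring, so a rewound (or racing) crawler keeps going.
    insert_wrapper_txs(conn, txs)?;
    diesel::update(crawler_state::table.find("transactions"))
        .set(crawler_state::last_processed_block.eq(block))
        .execute(conn)?;
    Ok(())
}
```

In a real deployment you would presumably also want the insert and the checkpoint update wrapped in a single transaction, so a crash between the two cannot advance `last_processed_block` past rows that were never written.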