
[transactions] Ignore conflicts when inserting #182

Open
joel-u410 opened this issue Dec 6, 2024 · 1 comment

@joel-u410 (Contributor)

If you ever want to force the indexer to re-index some past transactions by manually updating last_processed_block in crawler_state where name = 'transactions', then without on_conflict_do_nothing you can put the indexer into an infinite loop.

Example:

First, update the crawler_state to "rewind history":

namada-indexer=# UPDATE crawler_state set last_processed_block = 38 where name = 'transactions';
UPDATE 1

Now, run the transactions crawler:

2024-12-06T18:02:07.412603Z  INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:07.423771Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"
2024-12-06T18:02:10.686714Z  INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:10.690671Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"
2024-12-06T18:02:14.555980Z  INFO transactions: Queried block successfully wrapper_txs=1 inner_txs=1 block=39
2024-12-06T18:02:14.564664Z ERROR shared::error: Database error reason=Failed to insert wrapper transactions in db

Caused by:
    duplicate key value violates unique constraint "wrapper_transactions_pkey"

... forever looping, never recovering ...

In this case, I'd rather it simply ignore the constraint violation (since already-indexed transactions should never change on-chain, we should not UPDATE them) and move on without failing.
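
For reference, a minimal sketch of what that could look like if the insert goes through Diesel (the names wrapper_transactions, WrapperTransactionInsertDb, and insert_wrapper_transactions below are assumptions for illustration, not the actual namada-indexer code):

use diesel::prelude::*;
use diesel::PgConnection;

// Assumed schema/model names, for illustration only.
use crate::schema::wrapper_transactions;
use crate::transactions::WrapperTransactionInsertDb;

pub fn insert_wrapper_transactions(
    conn: &mut PgConnection,
    txs: &[WrapperTransactionInsertDb],
) -> QueryResult<usize> {
    diesel::insert_into(wrapper_transactions::table)
        .values(txs)
        // Rows that would violate wrapper_transactions_pkey are skipped
        // instead of failing the whole batch.
        .on_conflict_do_nothing()
        .execute(conn)
}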

I think this scenario can also occur if you run two instances of the transactions crawler simultaneously (which you might want to do for HA purposes).

Originally posted by @joel-u410 in #81 (comment)

@mateuszjasiuk (Collaborator)

Right! I'm just curious whether we might be hiding a real error in some cases 🤔
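
(For context: one way to narrow that risk, again only a sketch reusing the assumed names above and a hypothetical primary key column id, is to target the conflict clause at the primary key rather than swallowing every conflict, so violations of any other constraint still surface as errors.)

diesel::insert_into(wrapper_transactions::table)
    .values(txs)
    // Only conflicts on the primary key are ignored; other constraint
    // violations still fail the insert.
    .on_conflict(wrapper_transactions::id)
    .do_nothing()
    .execute(conn)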
