[WIP] Refactor gossip protocol #308

Draft · wants to merge 112 commits into base: main

Commits (112)
63a316b
Split message to gossip and fiber
contrun Nov 14, 2024
b08bb66
Add gossip message definition
contrun Nov 15, 2024
703df22
Fix types.rs for newer gossip message
contrun Nov 15, 2024
2af5ab5
Fix compilation error because message change
contrun Nov 15, 2024
9ca595d
Register gossip protocol
contrun Nov 15, 2024
2b5223f
Rename FiberBroadcastMessage to BroadcastMessage
contrun Nov 18, 2024
b0cfb5a
Use separate response type in fiber gossip protocol
contrun Nov 18, 2024
654db4d
Use channel outpoint to query all gossip messages
contrun Nov 18, 2024
fa7b3c1
Add missing comments for QueryBroadcastMessagesResult
contrun Nov 18, 2024
c533ad1
Fix compilation because message protocol change
contrun Nov 18, 2024
3c36de3
Use struct for gossip message if possible
contrun Nov 18, 2024
3353246
Relocate gossip only data structure
contrun Nov 18, 2024
164bba1
Remove message queue while syncing
contrun Nov 18, 2024
7c96a2d
Add Cursor type
contrun Nov 18, 2024
3ced134
Add network actor event GossipMessage
contrun Nov 19, 2024
59490f3
Test Channel{Announcement,Update} cursor serde
contrun Nov 19, 2024
fd0ba19
Add separate gossip protocol handler
contrun Nov 19, 2024
6d84648
Receive service control in pre_start for gossip actor
contrun Nov 19, 2024
e8292e3
Implement some gossip message processing
contrun Nov 19, 2024
a832ef6
Remove some gossip message command in network actor
contrun Nov 19, 2024
c7f98c5
Add types BroadcastMessageQueryFlags BroadcastMessageWithTimestamp
contrun Nov 20, 2024
3181b48
Add gossip message store
contrun Nov 20, 2024
8b60e78
Implement gossip message store
contrun Nov 20, 2024
5437693
Use GossipMessageStore in gossip actor
contrun Nov 20, 2024
b6345e6
Avoid overwriting channel updates of the same outpoint
contrun Nov 20, 2024
aa964be
Fix BroadcastMessageQueryFlags values
contrun Nov 20, 2024
a3878aa
Process BroadcastMessagesFilterResult
contrun Nov 21, 2024
cefccfd
Handle more gossip messages
contrun Nov 21, 2024
183160f
Remove graph syncer
contrun Nov 21, 2024
6d94ebc
Remove syncing status
contrun Nov 21, 2024
afb2033
Remove get current block number
contrun Nov 21, 2024
95b7108
Mock sending gossip message
contrun Nov 21, 2024
1fc73dd
Start the syncing process
contrun Nov 21, 2024
75e0ae6
Refactor graph for the new gossip message store
contrun Nov 22, 2024
9ba61fb
Fix tests compilation
contrun Nov 22, 2024
9953b04
Add actor message to process a single broadcast message
contrun Nov 22, 2024
f963f88
Add chain_hash to QueryBroadcastMessages
contrun Nov 22, 2024
82387ab
Add iterator function for GossipMessageStore
contrun Nov 22, 2024
648de45
Add methods to save refined broadcast messages
contrun Nov 23, 2024
bd37e88
Implement get_{nodes,channels}_with_params
contrun Nov 23, 2024
97148ab
Fix graph tests compilation
contrun Nov 23, 2024
5160163
Fix batch not committed in store
contrun Nov 23, 2024
977feb8
Fix graph store tests
contrun Nov 23, 2024
864f8bf
Add tests for saving channel updates
contrun Nov 23, 2024
5242732
Fix gossip message store get_broadcast_messages_iter
contrun Nov 23, 2024
e2a1b38
Fix gossip message store get_broadcast_messages
contrun Nov 23, 2024
c460d4e
Fix potential infinite loop because cursor not updated
contrun Nov 23, 2024
73ae939
Fix graph tests because outdated message
contrun Nov 23, 2024
d109d19
Don't use identical gossip actor name
contrun Nov 23, 2024
e586f9e
Fix wrong channel update direction implementation
contrun Nov 23, 2024
c666fea
Fix timestamp issue in tests
contrun Nov 23, 2024
711a585
Fix test_sync_node_announcement_version
contrun Nov 23, 2024
948a514
Fix test_connect_to_peers_with_mutual_channel_on_restart
contrun Nov 24, 2024
34dd7bd
Persist peer id to pubkey map
contrun Nov 24, 2024
05b750c
Use smaller network maintenance interval for tests
contrun Nov 24, 2024
b4dd4c3
Remove outdated test helper functions
contrun Nov 24, 2024
07c21b1
Fix channel update timestamp
contrun Nov 24, 2024
6917331
Broadcast messages to peers who sent filter
contrun Nov 24, 2024
e1eadce
Fix some comments because of refactoring
contrun Nov 24, 2024
7c4c7ea
Avoid verifying broadcast message multiple times
contrun Nov 24, 2024
08175d5
Use old peer state while sending gossip messages
contrun Nov 25, 2024
a3fe47a
Ignore existing while adding peers for filters
contrun Nov 26, 2024
188f760
Save and send queries for missing messages
contrun Nov 26, 2024
3e0db13
Remove sent messages cache in gossip
contrun Nov 27, 2024
8bf8722
Add gossip syncing actor
contrun Nov 28, 2024
4477356
Fix returning new broadcast message in test memory store
contrun Nov 28, 2024
b8da14c
Bookkeep syncing status only in syncer
contrun Nov 28, 2024
3ecf5d5
Remove pending_broadcast_messages in gossip actor
contrun Nov 28, 2024
fbe2447
Reject broadcast messages from the far future
contrun Nov 28, 2024
3a817bd
Only broadcast messages when we have related messages in store
contrun Nov 28, 2024
7d9a15e
Broadcast many messages in one go
contrun Dec 2, 2024
cd5410e
Add SubscribableGossipMessageStore trait
contrun Dec 2, 2024
b9e230e
Remove save_broadcast_message from GossipMessageStore trait
contrun Dec 2, 2024
f1e6f01
Implement SubscribableGossipMessageStore
contrun Dec 3, 2024
a386fe5
Make SubscribableGossipMessageStore independent from GossipMessageStore
contrun Dec 3, 2024
2324acb
Subscribe gossip message store updates for network graph
contrun Dec 3, 2024
44c6e63
Update comments
contrun Dec 3, 2024
70c0414
Fix not connecting to announced nodes
contrun Dec 4, 2024
1041361
Simplify implementation of creating a channel
contrun Dec 5, 2024
d5ad36c
Add message saving notification
contrun Dec 5, 2024
7078158
Fix condition in checking new get request
contrun Dec 5, 2024
be28389
Add a function to go back some time for cursor
contrun Dec 5, 2024
9045f3f
Remove after cursor from peer state
contrun Dec 5, 2024
2ff09ff
Use dedicated actor for active syncing
contrun Dec 5, 2024
d2cabb0
Fix test channel update version
contrun Dec 5, 2024
250b25b
More properly guarantee block identical timestamp
contrun Dec 5, 2024
9f5a434
Read private key from fiber config in tests
contrun Dec 6, 2024
44173dc
Fix starting active syncing condition
contrun Dec 6, 2024
0f2d1da
Encapsulate network graph in test network node
contrun Dec 6, 2024
52fb671
Fix using the wrong channel update in certain test
contrun Dec 6, 2024
e036507
Fix missing gossip message updates for store
contrun Dec 6, 2024
de09c81
Make gossip store maintenance interval configurable
contrun Dec 6, 2024
c49878e
Simplify initial loading of store subscribers
contrun Dec 6, 2024
af9acf2
Add subscription id for better debugging info
contrun Dec 6, 2024
47d6613
Fix returned gossip messages not ordered
contrun Dec 6, 2024
b837783
Correctly update last cursor in gossip store tick handler
contrun Dec 6, 2024
58a9e44
Ignore my own node announcement while sampling addresses from graph
contrun Dec 6, 2024
5d5659a
Fix channel update of different nodes
contrun Dec 6, 2024
f936f42
Fix a few syncing test cases
contrun Dec 6, 2024
d8cceeb
Merge remote-tracking branch 'nervosnetwork/main' into refactor-gossi…
contrun Dec 10, 2024
14f2869
Merge remote-tracking branch 'nervosnetwork/main' into refactor-gossi…
contrun Dec 10, 2024
ff65a51
Avoid skipping some messages in graph tests
contrun Dec 10, 2024
d396ca4
Remove git merge temp files
contrun Dec 10, 2024
f8bb4c0
Fix get broadcast messages implementation
contrun Dec 10, 2024
17bd3be
Fix broadcast messages not sent
contrun Dec 10, 2024
620e31b
Update graph while building payment path
contrun Dec 10, 2024
37d9291
Fix flaky test test_remove_tlc_with_expiry_error
contrun Dec 10, 2024
0b9d9c1
Remove unused code
contrun Dec 10, 2024
0154c6c
Remove duplicated expiry addition
contrun Dec 11, 2024
37624f7
Remove lagged messages in gossip message store actor
contrun Dec 11, 2024
8d057e9
Update comments for store updates subscription
contrun Dec 11, 2024
f79c059
Order messages by dependency relationship
contrun Dec 11, 2024
73 changes: 49 additions & 24 deletions src/ckb/actor.rs
@@ -1,5 +1,5 @@
use ckb_sdk::{rpc::ResponseFormatGetter, CkbRpcClient, RpcError};
use ckb_types::{core::TransactionView, packed, prelude::*};
use ckb_types::{core::TransactionView, packed, prelude::*, H256};
use ractor::{
concurrency::{sleep, Duration},
Actor, ActorProcessingErr, ActorRef, RpcReplyPort,
@@ -25,19 +25,6 @@ pub struct TraceTxRequest {
pub confirmations: u64,
}

#[derive(Debug)]
pub enum CkbChainMessage {
Fund(
FundingTx,
FundingRequest,
RpcReplyPort<Result<FundingTx, FundingError>>,
),
Sign(FundingTx, RpcReplyPort<Result<FundingTx, FundingError>>),
SendTx(TransactionView, RpcReplyPort<Result<(), RpcError>>),
TraceTx(TraceTxRequest, RpcReplyPort<TraceTxResponse>),
GetCurrentBlockNumber((), RpcReplyPort<Result<u64, RpcError>>),
}

#[derive(Debug)]
pub struct TraceTxResponse {
pub tx: Option<ckb_jsonrpc_types::TransactionView>,
@@ -53,6 +40,39 @@ impl TraceTxResponse {
}
}

#[derive(Debug, Clone)]
pub struct GetBlockTimestampRequest {
block_hash: H256,
}

impl GetBlockTimestampRequest {
pub fn from_block_hash(block_hash: H256) -> Self {
Self { block_hash }
}

pub fn block_hash(&self) -> H256 {
self.block_hash.clone()
}
}

pub type GetBlockTimestampResponse = u64;

#[derive(Debug)]
pub enum CkbChainMessage {
Fund(
FundingTx,
FundingRequest,
RpcReplyPort<Result<FundingTx, FundingError>>,
),
Sign(FundingTx, RpcReplyPort<Result<FundingTx, FundingError>>),
SendTx(TransactionView, RpcReplyPort<Result<(), RpcError>>),
TraceTx(TraceTxRequest, RpcReplyPort<TraceTxResponse>),
GetBlockTimestamp(
GetBlockTimestampRequest,
RpcReplyPort<Result<Option<GetBlockTimestampResponse>, RpcError>>,
),
}

#[ractor::async_trait]
impl Actor for CkbChainActor {
type Msg = CkbChainMessage;
@@ -89,17 +109,8 @@ impl Actor for CkbChainActor {
message: Self::Msg,
state: &mut Self::State,
) -> Result<(), ActorProcessingErr> {
use CkbChainMessage::{Fund, GetCurrentBlockNumber, SendTx, Sign, TraceTx};
use CkbChainMessage::{Fund, SendTx, Sign, TraceTx};
match message {
GetCurrentBlockNumber(_, reply) => {
// Have to use block_in_place here, see https://github.com/seanmonstar/reqwest/issues/1017.
let result = tokio::task::block_in_place(move || {
CkbRpcClient::new(&state.config.rpc_url)
.get_tip_block_number()
.map(|x| x.value())
});
let _ = reply.send(result);
}
Fund(tx, request, reply_port) => {
let context = state.build_funding_context(&request);
if !reply_port.is_closed() {
@@ -251,6 +262,20 @@ impl Actor for CkbChainActor {
}
}
}
CkbChainMessage::GetBlockTimestamp(
GetBlockTimestampRequest { block_hash },
reply_port,
) => {
let rpc_url = state.config.rpc_url.clone();
tokio::task::block_in_place(move || {
let ckb_client = CkbRpcClient::new(&rpc_url);
let _ = reply_port.send(
ckb_client
.get_block(block_hash)
.map(|x| x.map(|x| x.header.inner.timestamp.into())),
);
});
}
}
Ok(())
}
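
A minimal sketch of how a caller might use the new CkbChainMessage::GetBlockTimestamp variant through ractor's request/reply API. The helper name, the timeout, and the treat-every-failure-as-None policy are assumptions for illustration, not code from this PR:

use ractor::{concurrency::Duration, ActorRef};
use ckb_types::H256;

use crate::ckb::{CkbChainMessage, GetBlockTimestampRequest, GetBlockTimestampResponse};

// Hypothetical helper: ask the chain actor for a block header's timestamp.
// The reply carries Ok(None) when the CKB node does not know the block.
async fn query_block_timestamp(
    chain: &ActorRef<CkbChainMessage>,
    block_hash: H256,
) -> Option<GetBlockTimestampResponse> {
    let request = GetBlockTimestampRequest::from_block_hash(block_hash);
    match chain
        .call(
            // `call` appends a fresh RpcReplyPort to the message and awaits the reply.
            |reply_port| CkbChainMessage::GetBlockTimestamp(request, reply_port),
            Some(Duration::from_secs(5)),
        )
        .await
    {
        Ok(ractor::rpc::CallResult::Success(Ok(timestamp))) => timestamp,
        // Timeout, dropped reply port, RPC error, or dead actor: treat the block as unknown.
        _ => None,
    }
}
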
8 changes: 4 additions & 4 deletions src/ckb/config.rs
@@ -148,7 +148,7 @@ serde_with::serde_conv!(
);

#[serde_as]
#[derive(Serialize, Deserialize, Debug, Clone, Default, PartialEq)]
#[derive(Serialize, Deserialize, Debug, Clone, Default, Eq, PartialEq, Hash)]
Collaborator comment:
The Eq and Hash derives added in this file are not necessary.

Suggested clean-up:

diff --git a/src/ckb/config.rs b/src/ckb/config.rs
index 9ddded8..d0e318d 100644
--- a/src/ckb/config.rs
+++ b/src/ckb/config.rs
@@ -148,7 +148,7 @@ serde_with::serde_conv!(
 );
 
 #[serde_as]
-#[derive(Serialize, Deserialize, Debug, Clone, Default, Eq, PartialEq, Hash)]
+#[derive(Serialize, Deserialize, Debug, Clone, Default, PartialEq)]
 pub struct UdtScript {
     pub code_hash: H256,
     #[serde_as(as = "ScriptHashTypeWrapper")]
@@ -158,7 +158,7 @@ pub struct UdtScript {
 }
 
 #[serde_as]
-#[derive(Serialize, Deserialize, Clone, Debug, Eq, PartialEq, Hash)]
+#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
 pub struct UdtCellDep {
     #[serde_as(as = "DepTypeWrapper")]
     pub dep_type: DepType,
@@ -166,7 +166,7 @@ pub struct UdtCellDep {
     pub index: u32,
 }
 
-#[derive(Serialize, Deserialize, Clone, Debug, Default, Eq, PartialEq, Hash)]
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
 pub struct UdtArgInfo {
     pub name: String,
     pub script: UdtScript,
@@ -174,7 +174,7 @@ pub struct UdtArgInfo {
     pub cell_deps: Vec<UdtCellDep>,
 }
 
-#[derive(Serialize, Deserialize, Clone, Debug, Default, Eq, PartialEq, Hash)]
+#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
 pub struct UdtCfgInfos(pub Vec<UdtArgInfo>);
 
 impl FromStr for UdtCfgInfos {
diff --git a/src/fiber/types.rs b/src/fiber/types.rs
index 16b08ec..a651290 100644
--- a/src/fiber/types.rs
+++ b/src/fiber/types.rs
@@ -1498,7 +1498,7 @@ impl TryFrom<molecule_fiber::AnnouncementSignatures> for AnnouncementSignatures
     }
 }
 
-#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq, Hash)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
 pub struct NodeAnnouncement {
     // Signature to this message, may be empty the message is not signed yet.
     pub signature: Option<EcdsaSignature>,
@@ -2274,7 +2274,7 @@ impl TryFrom<molecule_gossip::GossipMessage> for GossipMessage {
     }
 }
 
-#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq, Hash)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
 pub enum BroadcastMessage {
     NodeAnnouncement(NodeAnnouncement),
     ChannelAnnouncement(ChannelAnnouncement),

pub struct UdtScript {
pub code_hash: H256,
#[serde_as(as = "ScriptHashTypeWrapper")]
@@ -158,23 +158,23 @@ pub struct UdtScript {
}

#[serde_as]
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
#[derive(Serialize, Deserialize, Clone, Debug, Eq, PartialEq, Hash)]
pub struct UdtCellDep {
#[serde_as(as = "DepTypeWrapper")]
pub dep_type: DepType,
pub tx_hash: H256,
pub index: u32,
}

#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
#[derive(Serialize, Deserialize, Clone, Debug, Default, Eq, PartialEq, Hash)]
pub struct UdtArgInfo {
pub name: String,
pub script: UdtScript,
pub auto_accept_amount: Option<u128>,
pub cell_deps: Vec<UdtCellDep>,
}

#[derive(Serialize, Deserialize, Clone, Debug, Default, PartialEq)]
#[derive(Serialize, Deserialize, Clone, Debug, Default, Eq, PartialEq, Hash)]
pub struct UdtCfgInfos(pub Vec<UdtArgInfo>);

impl FromStr for UdtCfgInfos {
5 changes: 4 additions & 1 deletion src/ckb/mod.rs
@@ -2,7 +2,10 @@ mod actor;
mod error;
mod funding;

pub use actor::{CkbChainActor, CkbChainMessage, TraceTxRequest, TraceTxResponse};
pub use actor::{
CkbChainActor, CkbChainMessage, GetBlockTimestampRequest, GetBlockTimestampResponse,
TraceTxRequest, TraceTxResponse,
};
pub use config::{CkbConfig, DEFAULT_CKB_BASE_DIR_NAME};
pub use error::{CkbChainError, FundingError};
pub use funding::{FundingRequest, FundingTx};
47 changes: 38 additions & 9 deletions src/ckb/tests/test_utils.rs
@@ -6,14 +6,19 @@ use ckb_types::{
core::{DepType, TransactionView},
packed::{CellDep, CellOutput, OutPoint, Script, Transaction},
prelude::{Builder, Entity, IntoTransactionView, Pack, PackVec, Unpack},
H256,
};
use once_cell::sync::Lazy;
use once_cell::sync::{Lazy, OnceCell};
use std::{collections::HashMap, sync::Arc, sync::RwLock};

use crate::ckb::{
config::UdtCfgInfos,
contracts::{Contract, ContractsContext, ContractsInfo},
TraceTxRequest, TraceTxResponse,
use tokio::sync::RwLock as TokioRwLock;

use crate::{
ckb::{
config::UdtCfgInfos,
contracts::{Contract, ContractsContext, ContractsInfo},
TraceTxRequest, TraceTxResponse,
},
now_timestamp_as_millis_u64,
};

use crate::ckb::CkbChainMessage;
@@ -308,9 +313,6 @@ impl Actor for MockChainActor {
debug!("MockChainActor received message: {:?}", message);
use CkbChainMessage::*;
match message {
GetCurrentBlockNumber(_, reply) => {
let _ = reply.send(Ok(0));
}
Fund(tx, request, reply_port) => {
let mut fulfilled_tx = tx.clone();
let outputs = fulfilled_tx
@@ -490,6 +492,33 @@ impl Actor for MockChainActor {
}
};
}
GetBlockTimestamp(request, rpc_reply_port) => {
// The problem with channel announcements is that each node queries the block timestamp
// and uses it as the channel announcement timestamp.
// Guaranteeing that the block timestamp is the same across all nodes is important,
// because if node A has a greater channel announcement timestamp than node B, then when
// A requests broadcast messages after this channel announcement timestamp, B will return
// the channel announcement, but for A it is not a later broadcast message. This process
// would cause an infinite loop.
// So here we create a static lock shared across all nodes, and we use this lock to
// guarantee that the block timestamp is the same across all nodes.
static BLOCK_TIMESTAMP: OnceCell<TokioRwLock<HashMap<H256, u64>>> = OnceCell::new();
BLOCK_TIMESTAMP.get_or_init(|| TokioRwLock::new(HashMap::new()));
let timestamp = *BLOCK_TIMESTAMP
.get()
.unwrap()
.write()
.await
.entry(request.block_hash())
.or_insert(now_timestamp_as_millis_u64());

debug!(
"Get block timestamp: block_hash: {:?}, timestamp: {}",
request.block_hash(),
timestamp
);
let _ = rpc_reply_port.send(Ok(Some(timestamp)));
}
}
Ok(())
}
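
The shared-timestamp trick above, in isolation: a self-contained sketch (hypothetical names, not helpers from this PR) of the OnceCell-plus-RwLock pattern that pins the first timestamp observed for each block hash, so every mock node answers with the same value and the feedback loop described in the comment cannot start:

use std::collections::HashMap;

use once_cell::sync::OnceCell;
use tokio::sync::RwLock;

// Process-wide map shared by every mock chain actor in the test binary.
static BLOCK_TIMESTAMPS: OnceCell<RwLock<HashMap<[u8; 32], u64>>> = OnceCell::new();

// The first caller to ask about a block hash fixes its timestamp; every later
// caller (i.e. every other mock node) gets exactly the same value back.
async fn shared_block_timestamp(block_hash: [u8; 32], now_ms: u64) -> u64 {
    let map = BLOCK_TIMESTAMPS.get_or_init(|| RwLock::new(HashMap::new()));
    *map.write().await.entry(block_hash).or_insert(now_ms)
}
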
7 changes: 7 additions & 0 deletions src/errors.rs
@@ -1,3 +1,4 @@
use ckb_sdk::RpcError;
use ractor::{MessagingErr, SpawnErr};
use tentacle::{error::SendErrorKind, secio::PeerId};
use thiserror::Error;
@@ -48,8 +49,14 @@ pub enum Error {
InvalidPeerMessage(String),
#[error("Onion packet error: {0}")]
InvalidOnionPacket(crate::fiber::types::Error),
#[error("Ckb Rpc error: {0}")]
CkbRpcError(RpcError),
#[error("Database error: {0}")]
DBInternalError(String),
#[error("Internal error: {0}")]
InternalError(anyhow::Error),
#[error("Invalid chain hash: {0} (expecting {1})")]
InvalidChainHash(Hash256, Hash256),
}

pub type Result<T> = std::result::Result<T, Error>;
62 changes: 30 additions & 32 deletions src/fiber/channel.rs
@@ -7,8 +7,10 @@ use crate::{
fiber::{
fee::calculate_tlc_forward_fee,
network::{get_chain_hash, SendOnionPacketCommand},
serde_utils::PubNonceAsBytes,
types::{ChannelUpdate, PeeledPaymentOnionPacket, TlcErr, TlcErrPacket, TlcErrorCode},
types::{
BroadcastMessage, ChannelUpdate, PeeledPaymentOnionPacket, TlcErr, TlcErrPacket,
TlcErrorCode,
},
},
invoice::{CkbInvoice, CkbInvoiceStatus, InvoiceStore},
now_timestamp_as_millis_u64,
@@ -70,7 +72,7 @@ use super::{
hash_algorithm::HashAlgorithm,
key::blake2b_hash_with_salt,
network::FiberMessageWithPeerId,
serde_utils::EntityHex,
serde_utils::{EntityHex, PubNonceAsBytes},
types::{
AcceptChannel, AddTlc, ChannelAnnouncement, ChannelReady, ClosingSigned, CommitmentSigned,
EcdsaSignature, FiberChannelMessage, FiberMessage, Hash256, OpenChannel,
@@ -90,6 +92,14 @@ pub const FUNDING_CELL_WITNESS_LEN: usize = 16 + 32 + 64;
// is funded or not.
pub const INITIAL_COMMITMENT_NUMBER: u64 = 0;

// Whether we are receiving a channel update from node1 or node2.
// If the flag is set, it means the channel update is from node2, otherwise it is from node1.
pub const MESSAGE_OF_NODE1_FLAG: u32 = 0;

// Whether we are receiving a channel update from node1 or node2.
// If the flag is set, it means the channel update is from node2, otherwise it is from node1.
pub const MESSAGE_OF_NODE2_FLAG: u32 = 1;
Collaborator comment on lines +95 to +101:
These comments are not right.

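If the intent is that these constants are the two possible values of the direction bit in a ChannelUpdate's message_flags (an assumption about the semantics, not text from this PR), a reworded version could read:

// Direction bit of a ChannelUpdate's message_flags: the value is
// MESSAGE_OF_NODE1_FLAG (0) when the update is signed by node1 and
// MESSAGE_OF_NODE2_FLAG (1) when it is signed by node2, with the nodes
// ordered as in the corresponding ChannelAnnouncement.
pub const MESSAGE_OF_NODE1_FLAG: u32 = 0;
pub const MESSAGE_OF_NODE2_FLAG: u32 = 1;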

// The channel is disabled, and no more tlcs can be added to the channel.
pub const CHANNEL_DISABLED_FLAG: u32 = 1;

@@ -1630,10 +1640,9 @@ where

self.network
.send_message(NetworkActorMessage::new_command(
NetworkActorCommand::ProccessChannelUpdate(
self.get_remote_peer_id(),
update,
),
NetworkActorCommand::BroadcastMessages(vec![
BroadcastMessage::ChannelUpdate(update),
]),
))
.expect(ASSUME_NETWORK_ACTOR_ALIVE);

@@ -3272,6 +3281,11 @@ impl ChannelActorState {
.get_unsigned_channel_update_message()
.expect("public channel can generate channel update message");
f(&mut channel_update);
debug!(
"Generated channel update message for channel {:?}: {:?}",
&self.get_id(),
&channel_update
);
let node_signature =
sign_network_message(network.clone(), channel_update.message_to_sign())
.await
@@ -3312,10 +3326,9 @@ impl ChannelActorState {

network
.send_message(NetworkActorMessage::new_command(
NetworkActorCommand::ProccessChannelUpdate(
self.get_remote_peer_id(),
NetworkActorCommand::BroadcastMessages(vec![BroadcastMessage::ChannelUpdate(
channel_update,
),
)]),
))
.expect(ASSUME_NETWORK_ACTOR_ALIVE);
}
@@ -3360,7 +3373,7 @@ impl ChannelActorState {
) => Some(ChannelUpdate::new_unsigned(
Default::default(),
self.must_get_funding_transaction_outpoint(),
std::time::UNIX_EPOCH.elapsed().expect("Duration since unix epoch").as_secs(),
now_timestamp_as_millis_u64(),
message_flags,
0,
expiry_delta,
@@ -5241,30 +5254,15 @@ impl ChannelActorState {
self.on_channel_ready(network).await;

debug!(
"Broadcasting channel announcement message {:?}",
&channel_announcement,
"Broadcasting channel announcement {:?} and channel update {:?}",
&channel_announcement, &channel_update
);
network
.send_message(NetworkActorMessage::new_command(
NetworkActorCommand::ProcessChannelAnnouncement(
self.get_remote_peer_id(),
self.get_funding_transaction_block_number(),
self.get_funding_transaction_index(),
channel_announcement,
),
))
.expect(ASSUME_NETWORK_ACTOR_ALIVE);
debug!(
"Broadcasting channel update message to peers: {:?}",
&channel_update
);

network
.send_message(NetworkActorMessage::new_command(
NetworkActorCommand::ProccessChannelUpdate(
self.get_remote_peer_id(),
channel_update,
),
NetworkActorCommand::BroadcastMessages(vec![
BroadcastMessage::ChannelAnnouncement(channel_announcement),
BroadcastMessage::ChannelUpdate(channel_update),
]),
))
.expect(ASSUME_NETWORK_ACTOR_ALIVE);
}