pub struct UtreexoNode<Chain: ChainBackend, Context = RunningNode> {
pub(crate) common: NodeCommon<Chain>,
pub(crate) context: Context,
}
The main node that operates while florestad is up.
UtreexoNode aims to be modular where Chain can be any implementation
of a ChainBackend.
Context refers to which state the UtreexoNode is in: RunningNode, SyncNode, or
ChainSelector. It defaults to RunningNode, which automatically transitions
between contexts.
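The Context parameter follows the type-state pattern: the node's shared data lives in one place, while the marker type selects which methods are available, and finishing one phase hands back a node in the next one. A minimal, self-contained sketch of the idea (hypothetical names; not the crate's actual types):

```rust
/// Data shared by every context (stands in for `NodeCommon<Chain>`).
struct Common {
    height: u32,
}

/// Marker types standing in for `SyncNode` and `RunningNode`.
struct Syncing;
struct Running;

/// Simplified stand-in for `UtreexoNode<Chain, Context>`.
struct Node<Context> {
    common: Common,
    _context: Context,
}

impl Node<Syncing> {
    /// Only available while syncing; finishing the sync consumes the
    /// node and hands back a running node with the same shared data.
    fn finish_sync(self) -> Node<Running> {
        Node { common: self.common, _context: Running }
    }
}

impl Node<Running> {
    /// Only available once the node is running.
    fn height(&self) -> u32 {
        self.common.height
    }
}
```

The compiler enforces the transition: `height` simply doesn't exist on `Node<Syncing>`, so calling a running-only method during sync is a type error rather than a runtime check.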
Fields
common: NodeCommon<Chain>
context: Context
Implementations
impl<T, Chain> UtreexoNode<Chain, T>
where
    T: 'static + Default + NodeContext,
    Chain: ChainBackend + 'static,
    WireError: From<Chain::Error>,
pub(crate) fn request_blocks( &mut self, blocks: Vec<BlockHash>, ) -> Result<(), WireError>
pub(crate) fn request_block_proof( &mut self, block: Block, peer: u32, ) -> Result<(), WireError>
pub(crate) fn attach_proof( &mut self, uproof: UtreexoProof, peer: u32, ) -> Result<(), WireError>
pub(crate) fn ask_for_missed_proofs(&mut self) -> Result<(), WireError>
Asks all utreexo peers for proofs of blocks that we have but haven’t received proofs for yet, and that have no GetProofs request inflight. This may happen when a peer disconnects while we don’t have any other utreexo peers to redo the request with.
pub(crate) fn process_pending_blocks(&mut self) -> Result<(), WireError>
Processes ready blocks in order, stopping at the tip or the first missing block/proof. Call again when new blocks or proofs arrive.
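As a rough illustration (simplified stand-in types, not the actual implementation), the in-order processing with an early stop can be sketched as: advance the validation index through consecutive heights as long as both the block and its proof have arrived, and return control as soon as one is missing.

```rust
use std::collections::HashMap;

/// Sketch: advance the validation index through consecutively ready
/// blocks, stopping at the tip or at the first height whose block or
/// proof hasn't arrived yet. Returns the heights that were connected.
fn process_ready(
    validation_index: &mut u32,
    tip: u32,
    blocks: &HashMap<u32, &'static str>, // height -> block (placeholder)
    proofs: &HashMap<u32, &'static str>, // height -> proof (placeholder)
) -> Vec<u32> {
    let mut connected = Vec::new();
    while *validation_index < tip {
        let next = *validation_index + 1;
        // Stop at the first missing block or proof; the caller invokes
        // us again when new blocks or proofs arrive.
        if !blocks.contains_key(&next) || !proofs.contains_key(&next) {
            break;
        }
        *validation_index = next;
        connected.push(next);
    }
    connected
}
```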
fn process_block(
    &mut self,
    block_height: u32,
    block_hash: BlockHash,
) -> Result<(), WireError>
Actually process a block that is ready to be processed.
This function will take the next block in our chain, process its proof and validate it. If everything is correct, it will connect the block to our chain.
fn block_validation_err(e: BlockchainError) -> Option<BlockValidationErrors>
Returns the inner BlockValidationErrors of this chain error, if any.
fn handle_validation_errors(
    &mut self,
    e: BlockValidationErrors,
    block: Block,
    block_peer: u32,
    utreexo_peer: u32,
) -> Option<u32>
Handles the different block validation errors that can happen when connecting a block.
Returns the id of the peer that caused this error, since the error could be block- or utreexo-related.
impl<Chain> UtreexoNode<Chain, ChainSelector>
where
    Chain: ChainBackend + 'static,
    WireError: From<Chain::Error>,
    Chain::Error: From<UtreexoLeafError>,
async fn handle_headers(
    &mut self,
    peer: u32,
    headers: Vec<Header>,
) -> Result<(), WireError>
This function is called every time we get a Headers message from a peer.
It will validate the headers and add them to our chain, if they are valid.
If we get an empty headers message, we’ll check what to do next, depending on
our current state. We may poke our peers to see if they have an alternative tip,
or we may just finish the IBD, if no one has an alternative tip.
fn parse_acc(acc: Vec<u8>) -> Result<Stump, WireError>
Takes a serialized accumulator and parses it into a Stump
async fn grab_both_peers_version(
    &mut self,
    peer1: u32,
    peer2: u32,
    block_hash: BlockHash,
    block_height: u32,
) -> Result<(Option<Vec<u8>>, Option<Vec<u8>>), WireError>
Sends a request to two peers and waits for their responses
This function will send a GetUtreexoState request to two peers and wait for their
response. If both peers respond, it will return the accumulator from both peers.
If only one peer responds, it will return the accumulator from that peer and None
for the other. If no peer responds, it will return None for both.
We use this during the cut-and-choose protocol, to find where they disagree.
async fn find_who_is_lying(
    &mut self,
    peer1: u32,
    peer2: u32,
) -> Result<PeerCheck, WireError>
Find which peer is lying about what the accumulator state is at a given point
This function will ask peers their accumulator for a given block, and check whether they agree or not. If they don’t, we cut the search in half and keep looking for the fork point. Once we find the last agreed accumulator, we ask for the block and proof that comes after it, update the accumulator from that point, and find who is lying.
If successful returns the PeerCheck enum, representing whether peers are:
- Lying
- Unresponsive
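The fork-point search can be illustrated with a self-contained sketch: assuming both peers agree on the starting accumulator (e.g. at genesis), binary-search for the last height where their claimed accumulators still match. The types here are placeholders (plain u64 values standing in for real accumulators):

```rust
/// Sketch of the cut-and-choose search: given each peer's claimed
/// accumulator at every height, find the last height where both peers
/// agree. The real protocol then fetches the block and proof that come
/// after this height, updates the accumulator locally, and compares
/// the result against each peer's claim to find who is lying.
///
/// Assumes `peer1[0] == peer2[0]` (both agree on the starting point).
fn last_agreed_height(peer1: &[u64], peer2: &[u64]) -> usize {
    let (mut lo, mut hi) = (0, peer1.len() - 1);
    while lo < hi {
        // Bias the midpoint up so the loop always makes progress.
        let mid = (lo + hi + 1) / 2;
        if peer1[mid] == peer2[mid] {
            lo = mid; // still agreeing: the fork point is at or after mid
        } else {
            hi = mid - 1; // already disagreeing: the fork point is before mid
        }
    }
    lo
}
```

Each probe halves the search range, so finding the fork point costs O(log n) accumulator requests instead of walking every height.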
async fn get_block_and_proof(
    &mut self,
    peer: u32,
    block_hash: BlockHash,
) -> Result<InflightBlock, WireError>
Requests a block and its proof from a peer
If you need to see a peer’s version of a given block, you can use this method to request a block from a specific peer.
fn update_acc(
    &self,
    acc: Stump,
    block: &Block,
    proof: Proof,
    leaf_data: &[CompactLeafData],
    height: u32,
) -> Result<Stump, WireError>
Updates a Stump with the data from a block and its proof
async fn find_accumulator_for_block(
    &mut self,
    height: u32,
    hash: BlockHash,
) -> Result<Stump, WireError>
Finds the accumulator for one block
This method will find what the accumulator looks like for a block with (height, hash). Check out this post to learn how the cut-and-choose protocol works
async fn empty_headers_message(&mut self, peer: u32) -> Result<(), WireError>
If we get an empty headers message, our next action depends on which state we are in:
- If we are downloading headers for the first time, this means we’ve just finished and should go to the next phase
- If we are checking whether our peers have an alternative tip, this peer has sent all the blocks they have. Once all peers have finished, we just pick the most-PoW chain among all the chains we got
async fn is_our_chain_invalid( &mut self, other_tip: BlockHash, ) -> Result<(), WireError>
fn ban_peers_on_tip(&mut self, tip: BlockHash) -> Result<(), WireError>
async fn check_tips(&mut self) -> Result<(), WireError>
fn request_headers(&mut self, tip: BlockHash) -> Result<(), WireError>
Ask for headers, given a tip
This function will send a getheaders request to our peers, assuming each
peer is following a chain that contains tip. We use this in case some of
our peers are in a fork, so we can learn about all blocks in that fork and
compare the candidate chains to pick the best one.
fn poke_peers(&self) -> Result<(), WireError>
Sends a getheaders to all our peers
After we download all blocks from one peer, we ask our peers if they agree with our sync peer on what is the best chain. If they are in a fork, we’ll download that fork and compare with our own chain. We should always pick the most PoW one.
pub async fn run(&mut self) -> Result<(), WireError>
fn can_start_headers_sync(&self) -> bool
Whether we have enough peers to start downloading headers
async fn maintenance_tick(&mut self) -> Result<LoopControl, WireError>
Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.
Returns LoopControl::Break if we need to break the main ChainSelector loop, either
because the kill signal was set or because the header chain is synced.
async fn find_accumulator_for_block_step( &mut self, block: BlockHash, height: u32, ) -> Result<FindAccResult, WireError>
async fn handle_notification( &mut self, notification: NodeNotification, ) -> Result<(), WireError>
async fn handle_peer_notification( &mut self, notification: PeerMessages, peer: u32, time: Instant, ) -> Result<(), WireError>
impl<T, Chain> UtreexoNode<Chain, T>
where
    T: 'static + Default + NodeContext,
    Chain: ChainBackend + 'static,
    WireError: From<Chain::Error>,
pub(crate) fn create_connection(
    &mut self,
    conn_kind: ConnectionKind,
) -> Result<(), WireError>
Create a new outgoing connection, selecting an appropriate peer address.
If a fixed peer is set via the --connect CLI argument, its connection
kind will always be coerced to ConnectionKind::Manual. Otherwise,
an address is selected from the AddressMan based on the required
[ServiceFlags] for the given connection_kind.
If no address is available and the kind is not ConnectionKind::Manual,
hardcoded addresses are loaded into the AddressMan as a fallback.
pub(crate) fn open_feeler_connection(&mut self) -> Result<(), WireError>
pub(crate) fn open_connection(
    &mut self,
    kind: ConnectionKind,
    peer_id: usize,
    peer_address: LocalAddress,
    allow_v1_fallback: bool,
) -> Result<(), WireError>
Creates a new outgoing connection with address.
kind may or may not be a ConnectionKind::Feeler, a special connection type
that is used to learn about good peers but is not kept after the handshake
(the others are ConnectionKind::Regular, ConnectionKind::Manual and ConnectionKind::Extra).
We will always try to open a V2 connection first. If the allow_v1_fallback is set,
we may retry the connection with the old V1 protocol if the V2 connection fails.
We don’t open the connection here, we create a Peer actor that will try to open
a connection with the given address and kind. If it succeeds, it will send a
PeerMessages::Ready to the node after handshaking.
pub(crate) async fn open_non_proxy_connection(
    kind: ConnectionKind,
    peer_address: LocalAddress,
    requests_rx: UnboundedReceiver<NodeRequest>,
    peer_id_count: u32,
    mempool: Arc<Mutex<Mempool>>,
    network: Network,
    node_tx: UnboundedSender<NodeNotification>,
    our_user_agent: String,
    our_best_block: u32,
    allow_v1_fallback: bool,
) -> Result<(), WireError>
Opens a new connection that doesn’t require a proxy and includes the functionalities of create_outbound_connection.
pub(crate) async fn open_proxy_connection(
    proxy: SocketAddr,
    kind: ConnectionKind,
    mempool: Arc<Mutex<Mempool>>,
    network: Network,
    node_tx: UnboundedSender<NodeNotification>,
    peer_address: LocalAddress,
    requests_rx: UnboundedReceiver<NodeRequest>,
    peer_id_count: u32,
    our_user_agent: String,
    our_best_block: u32,
    allow_v1_fallback: bool,
) -> Result<(), WireError>
Opens a connection through a SOCKS5 proxy
pub(crate) fn resolve_connect_host(
    address: &str,
    default_port: u16,
) -> Result<LocalAddress, AddrParseError>
Resolves a string address into a LocalAddress
This function expects an address in the format <address>[:<port>] and returns a
usable LocalAddress. The address can be an IPv4 address, an IPv6 address, or a hostname.
Hostnames are resolved using the system’s DNS resolver, returning an IP address. Errors if
the provided address is invalid or we can’t resolve it.
TODO: Allow for non-clearnet addresses like onion services and i2p.
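A rough sketch of this resolution logic using only the standard library (a hypothetical helper, not the crate's actual resolve_connect_host, which returns a LocalAddress):

```rust
use std::net::{IpAddr, SocketAddr, ToSocketAddrs};

/// Resolve "<address>[:<port>]" into a socket address. Literal IPs are
/// parsed directly; anything else is treated as a hostname and resolved
/// through the system DNS resolver.
fn resolve_host(address: &str, default_port: u16) -> std::io::Result<SocketAddr> {
    // Try a literal "ip:port" first (this also handles "[v6]:port").
    if let Ok(sock) = address.parse::<SocketAddr>() {
        return Ok(sock);
    }
    // Then a bare literal IP (v4 or v6), using the default port.
    if let Ok(ip) = address.parse::<IpAddr>() {
        return Ok(SocketAddr::new(ip, default_port));
    }
    // Otherwise resolve as a hostname, appending the default port if absent.
    let candidate = if address.contains(':') {
        address.to_string()
    } else {
        format!("{address}:{default_port}")
    };
    candidate
        .to_socket_addrs()?
        .next()
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::NotFound, "no address found"))
}
```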
pub(crate) fn get_port(network: Network) -> u16
pub(crate) fn get_peers_from_dns(&self) -> Result<(), WireError>
Fetch peers from DNS seeds, sending a NodeNotification with found ones. Returns
immediately after spawning a background blocking task that performs the work.
fn maybe_ask_dns_seed_for_addresses(&mut self)
Check whether it’s necessary to request more addresses from DNS seeds.
Perform another address request from DNS seeds if we still don’t have enough addresses
on the AddressMan and the last address request from DNS seeds was over 2 minutes ago.
fn maybe_use_hardcoded_addresses(&mut self)
If we don’t have any peers, we use the hardcoded addresses.
This is only done if we haven’t had any peers for a long time, or we
can’t find a utreexo peer in a context where we need one. This function
won’t do anything if --connect was used
pub(crate) fn init_peers(&mut self) -> Result<(), WireError>
pub(crate) fn maybe_open_connection( &mut self, required_service: ServiceFlags, ) -> Result<(), WireError>
pub(crate) fn maybe_open_connection_with_added_peers( &mut self, ) -> Result<(), WireError>
impl<T, Chain> UtreexoNode<Chain, T>
where
    T: 'static + Default + NodeContext,
    Chain: ChainBackend + 'static,
    WireError: From<Chain::Error>,
fn choose_peer_by_latency(
    &self,
    service: ServiceFlags,
) -> Option<(&u32, &LocalPeerView)>
Picks a Ready peer supporting service, biased toward lower message latency.
Each candidate weight is computed as lowest_time / time_i. For instance, if we have two
candidates with latencies of 50ms and 100ms, weights are 1.0 and 0.5 respectively, and the
probability of being chosen is 2/3 and 1/3.
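The weight math from the paragraph above can be sketched in a few lines (illustrative only; the real method also filters candidates by ServiceFlags and readiness before sampling):

```rust
/// Each candidate gets weight lowest_time / time_i; the selection
/// probability is its weight divided by the sum of all weights.
fn selection_probabilities(latencies_ms: &[f64]) -> Vec<f64> {
    let lowest = latencies_ms.iter().cloned().fold(f64::INFINITY, f64::min);
    let weights: Vec<f64> = latencies_ms.iter().map(|t| lowest / t).collect();
    let total: f64 = weights.iter().sum();
    weights.into_iter().map(|w| w / total).collect()
}
```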
pub(crate) fn connected_peers(&self) -> usize
Returns how many connected peers we have.
This function will only count peers that completed handshake and are ready to be used.
pub(crate) fn send_to_fast_peer(
    &self,
    request: NodeRequest,
    required_service: ServiceFlags,
) -> Result<u32, WireError>
Sends a request to an initialized peer that supports required_service, chosen via a
latency-weighted distribution (lower latency => more likely).
Returns an error if no ready peer has required_service or if sending the request failed.
pub(crate) fn send_to_random_peer( &mut self, req: NodeRequest, required_service: ServiceFlags, ) -> Result<u32, WireError>
pub(crate) fn send_to_peer( &self, peer_id: u32, req: NodeRequest, ) -> Result<(), WireError>
pub(crate) fn broadcast_to_peers(&mut self, request: NodeRequest)
Sends the same request to all connected peers
This function is best-effort: some peers may not receive the request if they are disconnected or if there is an error sending it. We intentionally don’t propagate errors to the caller, since doing so would force an early return from the function, preventing us from sending the request to the peers that come after the first erroring one.
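The design choice above (no early return on a failed send) can be sketched with stand-in channel types (hypothetical simplification; the real code sends NodeRequest to peer actors):

```rust
use std::sync::mpsc::Sender;

/// Best-effort broadcast: send to every peer and ignore individual
/// failures, so one broken channel can't stop the request from
/// reaching the peers after it. Returns how many sends succeeded.
fn broadcast(peers: &[Sender<String>], request: &str) -> usize {
    let mut delivered = 0;
    for peer in peers {
        // An early `?` here would skip every peer after the first error.
        if peer.send(request.to_string()).is_ok() {
            delivered += 1;
        }
    }
    delivered
}
```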
pub(crate) fn ask_for_addresses(&mut self) -> Result<(), WireError>
fn is_peer_good(peer: &LocalPeerView, needs: ServiceFlags) -> bool
pub(crate) fn handle_peer_ready( &mut self, peer: u32, version: Version, ) -> Result<(), WireError>
pub(crate) fn handle_notfound_msg(
    &mut self,
    inv: Inventory,
) -> Result<(), WireError>
Handles a NOTFOUND inventory by completing any matching inflight user request with None.
pub(crate) fn handle_tx_msg(&mut self, tx: Transaction) -> Result<(), WireError>
Handles an incoming mempool transaction by completing any matching inflight user request.
pub(crate) fn handle_peer_msg_common(
    &mut self,
    msg: PeerMessages,
    peer: u32,
) -> Result<Option<PeerMessages>, WireError>
Handles peer messages where behavior is common to all node contexts, returning Some only
for peer messages that require context-specific handling.
pub(crate) fn handle_disconnection( &mut self, peer: u32, idx: usize, ) -> Result<(), WireError>
pub(crate) fn increase_banscore(
    &mut self,
    peer_id: u32,
    factor: u32,
) -> Result<(), WireError>
Increases the “banscore” of a peer.
This is an always-increasing number that, if it reaches our max_banscore setting,
will cause the peer to be banned for one BANTIME.
The amount of each increment is given by factor, which is calibrated for each
misbehaving action a peer may incur in.
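A minimal sketch of the banscore bookkeeping (hypothetical field names; the real method lives on the node and takes a peer id):

```rust
/// The score only ever increases; crossing `max_banscore` means the
/// peer should be disconnected and banned for one BANTIME.
struct PeerScore {
    banscore: u32,
    max_banscore: u32,
}

impl PeerScore {
    /// Bump the score by `factor`; returns true when the peer has
    /// reached the ban threshold and should be banned.
    fn increase(&mut self, factor: u32) -> bool {
        self.banscore = self.banscore.saturating_add(factor);
        self.banscore >= self.max_banscore
    }
}
```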
pub(crate) fn disconnect_and_ban(&mut self, peer: u32) -> Result<(), WireError>
Disconnects a peer and bans it for T::BAN_TIME.
pub(crate) fn check_for_timeout(&mut self) -> Result<(), WireError>
Checks whether some of our inflight requests have timed out.
This function will check if any of our inflight requests have timed out; if so, it removes them from the inflight list and increases the banscore of the peer the request was sent to. It will also resend the request to another peer.
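The sweep can be sketched with simplified stand-in types (string request keys instead of real InflightRequests, and an explicit `now` to keep it deterministic):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Drop every inflight request older than `timeout` and return the
/// peers to penalize; the caller would then bump each peer's banscore
/// and resend the request to another peer.
fn sweep_timeouts(
    inflight: &mut HashMap<&'static str, (u32, Instant)>, // request -> (peer, sent_at)
    now: Instant,
    timeout: Duration,
) -> Vec<u32> {
    // Collect the timed-out keys first, then remove them, because we
    // can't mutate the map while iterating over it.
    let timed_out: Vec<&'static str> = inflight
        .iter()
        .filter(|(_, (_, sent))| now.duration_since(*sent) >= timeout)
        .map(|(req, _)| *req)
        .collect();
    timed_out
        .into_iter()
        .map(|req| inflight.remove(req).expect("key was just found").0)
        .collect()
}
```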
pub(crate) fn handle_addresses_from_peer( &mut self, peer: u32, addresses: Vec<AddrV2Message>, ) -> Result<(), WireError>
pub(crate) fn redo_inflight_request( &mut self, req: &InflightRequests, ) -> Result<(), WireError>
pub(crate) fn save_peers(&self) -> Result<(), WireError>
pub(crate) fn save_utreexo_peers(&self) -> Result<(), WireError>
Saves the utreexo peers to disk so we can reconnect with them later
pub(crate) fn register_message_time(
    &mut self,
    notification: &PeerMessages,
    peer: u32,
    read_at: Instant,
) -> Option<()>
Registers a message on self.inflights and records the time taken to respond to it.
We need this information for two purposes:
- To calculate the average time taken to respond to messages from peers, which we use to select the fastest peer when sending requests.
- If the metrics feature is enabled, we record the time taken for all peers on a histogram and expose it as a Prometheus metric.
pub(crate) fn update_peer_metrics(&self)
pub(crate) fn has_utreexo_peers(&self) -> bool
pub(crate) fn has_compact_filters_peer(&self) -> bool
pub(crate) fn get_peer_info(&self, peer_id: &u32) -> Option<PeerInfo>
pub(crate) fn to_addr_v2(&self, addr: IpAddr) -> AddrV2
Helper function to convert an IpAddr to an AddrV2
This is a bit of a hack while rust-bitcoin doesn’t have a From or Into
implementation between IpAddr and AddrV2
pub fn handle_addnode_add_peer(
    &mut self,
    addr: IpAddr,
    port: u16,
    v2_transport: bool,
) -> Result<(), WireError>
Handles addnode-RPC Add requests, adding a new peer to the added_peers list. This means
the peer is marked as a “manually added peer”. We then try to connect to it, or retry later.
pub fn handle_addnode_remove_peer(
    &mut self,
    addr: IpAddr,
    port: u16,
) -> Result<(), WireError>
Handles remove node requests, removing a peer from the node.
Removes a node from the added_peers list but does not
disconnect the node if it was already connected. It only ensures
that the node is no longer treated as a manually added node
(i.e., it won’t be reconnected if disconnected).
If someone wants to disconnect a peer, it should be done using
disconnectnode.
pub fn handle_disconnect_peer(
    &mut self,
    addr: IpAddr,
    port: u16,
) -> Result<(), WireError>
Handles the node request for immediate disconnection from a peer.
pub fn handle_addnode_onetry_peer(
    &mut self,
    addr: IpAddr,
    port: u16,
    v2_transport: bool,
) -> Result<(), WireError>
Handles addnode onetry requests by trying once to connect to the given address and port. If successful, it will add the node to the peers list, but not to the added_peers list (i.e., it won’t be reconnected if disconnected).
impl<Chain> UtreexoNode<Chain, RunningNode>
where
    Chain: ThreadSafeChain + Clone,
    WireError: From<Chain::Error>,
    Chain::Error: From<UtreexoLeafError>,
fn send_addresses(&mut self) -> Result<(), WireError>
pub async fn catch_up(self) -> Result<Self, WireError>
Every time we restart the node, we’ll be a few blocks behind the tip. This function will start a sync node that will request, download and validate all blocks from the last validation index to the tip. This function will block until the sync node is finished.
On the first startup, if we use either assumeutreexo or PoW fraud proofs, this function will only download the blocks that come after the assumed one. For PoW fraud proofs, this means the last 100 blocks; for assumeutreexo, it means every block after the hard-coded value in the config file.
fn check_connections(&mut self) -> Result<(), WireError>
This function is called periodically to check if we have:
- 10 connections
- At least one utreexo peer
- At least one compact filters peer
If we are missing the special peers but have 10 connections, we should disconnect one random peer and try to connect to a utreexo and a compact filters peer.
pub fn backfill(&self, done_flag: Sender<()>) -> Result<bool, WireError>
If either PoW fraud proofs or assumeutreexo are enabled, we will “skip” IBD for all historical blocks. This allows us to start the node faster, making it usable in a few minutes. If you still want to validate all blocks, you can enable the backfill option.
This function will spawn a background task that will download and validate all blocks that got assumed. After completion, the task will shutdown and the node will continue running normally. If we ever assume an invalid chain, the node will halt and catch fire.
pub async fn run(self, stop_signal: Sender<()>)
async fn maintenance_tick(&mut self) -> LoopControl
Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.
Returns LoopControl::Break if we need to stop the node due to the kill signal being set.
fn download_filters(&mut self) -> Result<(), WireError>
fn ask_missed_block(&mut self) -> Result<(), WireError>
fn get_peer_score(&self, peer: u32) -> u32
If we think our tip is stale, we may disconnect one peer and try to get a new one. In this process, if the extra peer gives us a new block, we should drop one of our already connected peers to keep the number of connections stable. This function decides which peer to drop based on whether they’ve timely inv-ed us about the last 6 blocks.
fn check_for_stale_tip(&mut self) -> Result<(), WireError>
This function checks how much time has passed since our last tip update; if it’s been more than 15 minutes, we try to update it.
fn handle_new_block( &mut self, block: BlockHash, peer: u32, ) -> Result<(), WireError>
async fn handle_notification( &mut self, notification: NodeNotification, ) -> Result<(), WireError>
impl<Chain> UtreexoNode<Chain, SyncNode>
Node methods for a UtreexoNode where its Context is a SyncNode.
See node for more information.
fn get_blocks_to_download(&mut self)
Computes the next blocks to request, and sends a GETDATA request
We send block requests in batches of four, and we can always have two such batches inflight. Therefore, we can have at most eight inflight blocks.
This function sends exactly one GETDATA, and therefore asks for four blocks. It will compute the next blocks we need, given our tip, validation index, inflight requests and cached blocks. We then select a random peer and send the request.
TODO: Be smarter when selecting peers, e.g. by taking into consideration already inflight blocks and latency.
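The batching arithmetic above can be sketched as follows (constant names are illustrative, taken from the prose rather than the source; real code tracks inflight requests and cached blocks in richer types):

```rust
use std::collections::HashSet;

/// One GETDATA asks for `BATCH` blocks, and at most `MAX_INFLIGHT`
/// blocks may be inflight overall, so at most two batches can be
/// outstanding at once.
const BATCH: usize = 4;
const MAX_INFLIGHT: usize = 8;

/// Compute the next heights to request, skipping blocks that are
/// already inflight or cached. Returns an empty batch when both
/// inflight slots are taken.
fn next_batch(
    validation_index: u32,
    tip: u32,
    inflight: &HashSet<u32>,
    cached: &HashSet<u32>,
) -> Vec<u32> {
    if inflight.len() + BATCH > MAX_INFLIGHT {
        return Vec::new(); // no room for another full batch
    }
    (validation_index + 1..=tip)
        .filter(|h| !inflight.contains(h) && !cached.contains(h))
        .take(BATCH)
        .collect()
}
```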
fn ask_for_missed_blocks(&mut self) -> Result<(), WireError>
fn check_connections(&mut self) -> Result<(), WireError>
This function will periodically check our connections, to ensure that:
- we have enough utreexo peers to download proofs from (at least 2)
- we have enough peers to download blocks from (at most MAX_OUTGOING_PEERS)
- none of our peers are too slow and potentially stalling our block download (TODO)
pub async fn run(self, done_cb: impl FnOnce(&Chain)) -> Self
Starts the sync node by updating the last block requested and starting the main loop. The loop performs the following tasks, in order:
- Receives messages from our peers through the node_tx channel.
- Handles the received message.
- Checks if the kill signal is set; if so, breaks the loop.
- Checks if the chain is in IBD and disables it if it’s not (e.g. if the chain is synced).
- Checks if our tip is obsolete and requests a new one, creating a new connection.
- Handles timeouts for inflight requests.
- If we’re low on inflight requests, asks for new blocks to validate.
async fn maintenance_tick(&mut self) -> LoopControl
Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.
Returns LoopControl::Break if we need to break the main SyncNode loop, either because
the kill signal was set or because the chain is synced.
async fn handle_message(
    &mut self,
    msg: NodeNotification,
) -> Result<(), WireError>
Process a message from a peer and handle it accordingly between the variants of PeerMessages.
impl<T, Chain> UtreexoNode<Chain, T>
where
    T: 'static + Default + NodeContext,
    Chain: ChainBackend + 'static,
    WireError: From<Chain::Error>,
pub fn get_handle(&self) -> NodeInterface
Returns a handle to the node interface that we can use to request data from our node. This struct is thread safe, so we can use it from multiple threads and have multiple handles. It also doesn’t require a mutable reference to the node, or any synchronization mechanism.
fn handle_get_peer_info(&self, responder: Sender<NodeResponse>)
Handles getpeerinfo requests, returning a list of all connected peers and some useful information about them.
pub(crate) async fn perform_user_request(
    &mut self,
    user_req: UserRequest,
    responder: Sender<NodeResponse>,
)
Actually perform the user request
These are requests made by some consumer of floresta-wire using the NodeInterface, and may
be a mempool transaction, a block, or a connection request.
pub(crate) fn check_is_user_block_and_reply(
    &mut self,
    block: Block,
) -> Result<Option<Block>, WireError>
Check if this block request is made by a user through the user interface and answer it back to the user if so.
This function will return the given block if it isn’t a user request. This is to avoid cloning the block.