Struct UtreexoNode

Source
pub struct UtreexoNode<Chain: ChainBackend, Context = RunningNode> {
    pub(crate) common: NodeCommon<Chain>,
    pub(crate) context: Context,
}

The main node that operates while florestad is up.

UtreexoNode aims to be modular where Chain can be any implementation of a ChainBackend.

Context refers to which state the UtreexoNode is in, one of RunningNode, SyncNode, or ChainSelector. It defaults to RunningNode, which automatically transitions between contexts.
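The context parameter follows the typestate pattern: behavior is selected by a (usually zero-sized) type parameter, and the node is converted between contexts while keeping its shared state. A minimal, self-contained sketch, with illustrative names (Node, best_height, into_context, select_chain are not floresta's actual API):

```rust
// Typestate sketch: the same data, different behavior per Context type.
#[derive(Default)]
struct ChainSelector;
#[derive(Default)]
struct SyncNode;
#[derive(Default)]
struct RunningNode;

struct Node<Context = RunningNode> {
    best_height: u32,
    context: Context,
}

impl<C> Node<C> {
    // Transition to another context, carrying the shared state over.
    fn into_context<N: Default>(self) -> Node<N> {
        Node { best_height: self.best_height, context: N::default() }
    }
}

impl Node<ChainSelector> {
    // Only a ChainSelector node exposes this; afterwards we sync blocks.
    fn select_chain(mut self) -> Node<SyncNode> {
        self.best_height = 100; // pretend we settled on a header chain
        self.into_context()
    }
}

fn main() {
    let node = Node { best_height: 0, context: ChainSelector };
    let node: Node<SyncNode> = node.select_chain();
    println!("header chain selected up to height {}", node.best_height);
}
```

Because the wrong-context methods simply don't exist on the type, misuse is a compile error rather than a runtime check.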

Fields§

§common: NodeCommon<Chain>
§context: Context

Implementations§

Source§

impl<T, Chain> UtreexoNode<Chain, T>
where T: 'static + Default + NodeContext, Chain: ChainBackend + 'static, WireError: From<Chain::Error>,

Source

pub(crate) fn request_blocks( &mut self, blocks: Vec<BlockHash>, ) -> Result<(), WireError>

Source

pub(crate) fn request_block_proof( &mut self, block: Block, peer: u32, ) -> Result<(), WireError>

Source

pub(crate) fn attach_proof( &mut self, uproof: UtreexoProof, peer: u32, ) -> Result<(), WireError>

Source

pub(crate) fn ask_for_missed_proofs(&mut self) -> Result<(), WireError>

Asks all utreexo peers for proofs of blocks that we have but haven’t received proofs for yet and don’t have any GetProofs inflight. This can happen if a peer disconnects while we have no other utreexo peers to retry the request with.

Source

pub(crate) fn process_pending_blocks(&mut self) -> Result<(), WireError>
where Chain::Error: From<UtreexoLeafError>,

Processes ready blocks in order, stopping at the tip or the first missing block/proof. Call again when new blocks or proofs arrive.
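A rough sketch of that in-order processing, assuming a simple map of downloaded blocks keyed by height (process_pending, validation_index and the string block stand-ins are hypothetical, not floresta's types):

```rust
use std::collections::HashMap;

// In-order processing sketch: blocks may arrive out of order, but validation
// is sequential, so we connect blocks starting right after the validation
// index and stop at the first gap.
fn process_pending<'a>(
    validation_index: &mut u32,
    pending: &mut HashMap<u32, &'a str>,
) -> Vec<&'a str> {
    let mut connected = Vec::new();
    // The next needed height is validation_index + 1; continue while ready.
    while let Some(block) = pending.remove(&(*validation_index + 1)) {
        *validation_index += 1; // proof verified and block connected here
        connected.push(block);
    }
    connected
}

fn main() {
    // Blocks 1, 2 and 4 have arrived; 3 is still missing.
    let mut pending = HashMap::from([(2, "block2"), (4, "block4"), (1, "block1")]);
    let mut validation_index = 0;
    let connected = process_pending(&mut validation_index, &mut pending);
    // Connects 1 and 2, then stops; call again once block 3 arrives.
    println!("{connected:?}, validation index now {validation_index}");
}
```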

Source

fn process_block( &mut self, block_height: u32, block_hash: BlockHash, ) -> Result<(), WireError>
where Chain::Error: From<UtreexoLeafError>,

Actually process a block that is ready to be processed.

This function will take the next block in our chain, process its proof and validate it. If everything is correct, it will connect the block to our chain.

Source

fn block_validation_err(e: BlockchainError) -> Option<BlockValidationErrors>

Returns the inner BlockValidationErrors of this chain error, if any.

Source

fn handle_validation_errors( &mut self, e: BlockValidationErrors, block: Block, block_peer: u32, utreexo_peer: u32, ) -> Option<u32>

Handles the different block validation errors that can happen when connecting a block.

Returns the peer id that caused this error, since it could be block or utreexo-related.

Source§

impl<Chain> UtreexoNode<Chain, ChainSelector>
where Chain: ChainBackend + 'static, WireError: From<Chain::Error>, Chain::Error: From<UtreexoLeafError>,

Source

async fn handle_headers( &mut self, peer: u32, headers: Vec<Header>, ) -> Result<(), WireError>

This function is called every time we get a Headers message from a peer. It validates the headers and adds them to our chain, if they are valid. If we get an empty headers message, we decide what to do next based on our current state: we may poke our peers to see if they have an alternative tip, or simply finish the IBD if no one has an alternative tip.

Source

fn parse_acc(acc: Vec<u8>) -> Result<Stump, WireError>

Takes a serialized accumulator and parses it into a Stump

Source

async fn grab_both_peers_version( &mut self, peer1: u32, peer2: u32, block_hash: BlockHash, block_height: u32, ) -> Result<(Option<Vec<u8>>, Option<Vec<u8>>), WireError>

Sends a request to two peers and waits for their responses

This function will send a GetUtreexoState request to two peers and wait for their response. If both peers respond, it will return the accumulator from both peers. If only one peer responds, it will return the accumulator from that peer and None for the other. If no peer responds, it will return None for both. We use this during the cut-and-choose protocol, to find where they disagree.

Source

async fn find_who_is_lying( &mut self, peer1: u32, peer2: u32, ) -> Result<PeerCheck, WireError>

Find which peer is lying about what the accumulator state is at a given point

This function will ask peers their accumulator for a given block, and check whether they agree or not. If they don’t, we cut the search in half and keep looking for the fork point. Once we find the last agreed accumulator, we ask for the block and proof that comes after it, update the accumulator from that point, and find who is lying.

If successful returns the PeerCheck enum, representing whether peers are:

  • Lying
  • Unresponsive
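The fork-point search described above can be sketched as a binary search. This is illustrative only: last_agreed_height and the closures standing in for GetUtreexoState replies are hypothetical, and we assume both peers agree at genesis:

```rust
// Cut-and-choose sketch: given two peers that disagree at the tip, binary
// search for the last height where they still agree. `acc1`/`acc2` stand in
// for each peer's claimed accumulator at a height.
fn last_agreed_height(
    acc1: impl Fn(u32) -> u64,
    acc2: impl Fn(u32) -> u64,
    tip: u32,
) -> u32 {
    // Invariant: they agree at `lo` (genesis) and disagree at `hi` (tip).
    let (mut lo, mut hi) = (0u32, tip);
    while hi - lo > 1 {
        let mid = lo + (hi - lo) / 2;
        if acc1(mid) == acc2(mid) { lo = mid } else { hi = mid }
    }
    lo
}

fn main() {
    // An honest peer, and a peer whose state diverges from height 700 on.
    let honest = |h: u32| h as u64;
    let liar = |h: u32| if h < 700 { h as u64 } else { h as u64 + 1 };
    println!("last agreed height: {}", last_agreed_height(honest, liar, 1000));
}
```

Once the last agreed height is found, the real protocol fetches the block and proof right after it, updates the accumulator from that point, and checks which peer's claim matches.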
Source

async fn get_block_and_proof( &mut self, peer: u32, block_hash: BlockHash, ) -> Result<InflightBlock, WireError>

Requests a block and its proof from a peer

If you need to see a peer’s version of a given block, you can use this method to request a block from a specific peer.

Source

fn update_acc( &self, acc: Stump, block: &Block, proof: Proof, leaf_data: &[CompactLeafData], height: u32, ) -> Result<Stump, WireError>

Updates a Stump with the data from a block and its proof

Source

async fn find_accumulator_for_block( &mut self, height: u32, hash: BlockHash, ) -> Result<Stump, WireError>

Finds the accumulator for one block

This method will find what the accumulator looks like for a block with (height, hash). Check out this post to learn how the cut-and-choose protocol works.

Source

async fn empty_headers_message(&mut self, peer: u32) -> Result<(), WireError>

If we get an empty headers message, our next action depends on which state we are in:

  • If we are downloading headers for the first time, this means we’ve just finished and should go to the next phase
  • If we are checking whether our peers have an alternative tip, this peer has sent all blocks they have. Once all peers have finished, we just pick the most-PoW chain among all the chains we got
Source

async fn is_our_chain_invalid( &mut self, other_tip: BlockHash, ) -> Result<(), WireError>

Source

fn ban_peers_on_tip(&mut self, tip: BlockHash) -> Result<(), WireError>

Source

async fn check_tips(&mut self) -> Result<(), WireError>

Source

fn request_headers(&mut self, tip: BlockHash) -> Result<(), WireError>

Ask for headers, given a tip

This function will send a getheaders request to our peers, assuming the peer is following a chain that contains tip. We use this in case some of our peers are on a fork, so we can learn about all blocks in that fork and compare the candidate chains to pick the best one.

Source

fn poke_peers(&self) -> Result<(), WireError>

Sends a getheaders to all our peers

After we download all blocks from one peer, we ask our peers whether they agree with our sync peer about the best chain. If they are on a fork, we’ll download that fork and compare it with our own chain. We should always pick the one with the most PoW.

Source

pub async fn run(&mut self) -> Result<(), WireError>

Source

fn can_start_headers_sync(&self) -> bool

Whether we have enough peers to start downloading headers

Source

async fn maintenance_tick(&mut self) -> Result<LoopControl, WireError>

Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.

Returns LoopControl::Break if we need to break the main ChainSelector loop, either because the kill signal was set or because the header chain is synced.

Source

async fn find_accumulator_for_block_step( &mut self, block: BlockHash, height: u32, ) -> Result<FindAccResult, WireError>

Source

async fn handle_notification( &mut self, notification: NodeNotification, ) -> Result<(), WireError>

Source

async fn handle_peer_notification( &mut self, notification: PeerMessages, peer: u32, time: Instant, ) -> Result<(), WireError>

Source§

impl<T, Chain> UtreexoNode<Chain, T>
where T: 'static + Default + NodeContext, Chain: ChainBackend + 'static, WireError: From<Chain::Error>,

Source

pub(crate) fn create_connection( &mut self, conn_kind: ConnectionKind, ) -> Result<(), WireError>

Create a new outgoing connection, selecting an appropriate peer address.

If a fixed peer is set via the --connect CLI argument, its connection kind will always be coerced to ConnectionKind::Manual. Otherwise, an address is selected from the AddressMan based on the required ServiceFlags for the given conn_kind.

If no address is available and the kind is not ConnectionKind::Manual, hardcoded addresses are loaded into the AddressMan as a fallback.

Source

pub(crate) fn open_feeler_connection(&mut self) -> Result<(), WireError>

Source

pub(crate) fn open_connection( &mut self, kind: ConnectionKind, peer_id: usize, peer_address: LocalAddress, allow_v1_fallback: bool, ) -> Result<(), WireError>

Creates a new outgoing connection with address.

kind may or may not be a ConnectionKind::Feeler, a special connection type used to learn about good peers but not kept after the handshake (the other kinds are ConnectionKind::Regular, ConnectionKind::Manual and ConnectionKind::Extra).

We will always try to open a V2 connection first. If allow_v1_fallback is set, we may retry with the old V1 protocol if the V2 connection fails. We don’t open the connection here; we create a Peer actor that will try to open a connection with the given address and kind. If it succeeds, it will send a PeerMessages::Ready to the node after handshaking.

Source

pub(crate) async fn open_non_proxy_connection( kind: ConnectionKind, peer_address: LocalAddress, requests_rx: UnboundedReceiver<NodeRequest>, peer_id_count: u32, mempool: Arc<Mutex<Mempool>>, network: Network, node_tx: UnboundedSender<NodeNotification>, our_user_agent: String, our_best_block: u32, allow_v1_fallback: bool, ) -> Result<(), WireError>

Opens a new connection that doesn’t require a proxy and includes the functionalities of create_outbound_connection.

Source

pub(crate) async fn open_proxy_connection( proxy: SocketAddr, kind: ConnectionKind, mempool: Arc<Mutex<Mempool>>, network: Network, node_tx: UnboundedSender<NodeNotification>, peer_address: LocalAddress, requests_rx: UnboundedReceiver<NodeRequest>, peer_id_count: u32, our_user_agent: String, our_best_block: u32, allow_v1_fallback: bool, ) -> Result<(), WireError>

Opens a connection through a socks5 interface

Source

pub(crate) fn resolve_connect_host( address: &str, default_port: u16, ) -> Result<LocalAddress, AddrParseError>

Resolves a string address into a LocalAddress

This function should get an address in the format <address>[<:port>] and return a usable LocalAddress. The address can be an IPv4 address, an IPv6 address, or a hostname. Hostnames are resolved using the system’s DNS resolver, returning an IP address. Errors if the provided address is invalid or can’t be resolved.

TODO: Allow for non-clearnet addresses like onion services and i2p.
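A sketch of this resolution logic using only the standard library. Note the differences from the real function: this resolve_connect_host returns a std SocketAddr rather than a LocalAddress, and it is an illustration, not floresta's implementation:

```rust
use std::net::{IpAddr, SocketAddr, ToSocketAddrs};

// Resolve "<address>[<:port>]" into a socket address, filling in the
// network's default port when none is given.
fn resolve_connect_host(address: &str, default_port: u16) -> std::io::Result<SocketAddr> {
    // A bare IP (v4 or v6) gets the default port directly; this also keeps
    // unbracketed IPv6 like "::1" from being misread as host:port.
    if let Ok(ip) = address.parse::<IpAddr>() {
        return Ok(SocketAddr::new(ip, default_port));
    }
    // Otherwise treat it as host[:port]; append the default port if missing.
    let with_port = if address.contains(':') {
        address.to_string()
    } else {
        format!("{address}:{default_port}")
    };
    with_port
        .to_socket_addrs()? // system DNS resolver handles hostnames
        .next()
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::NotFound, "could not resolve"))
}

fn main() {
    println!("{:?}", resolve_connect_host("127.0.0.1", 8333));
    println!("{:?}", resolve_connect_host("127.0.0.1:18333", 8333));
}
```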

Source

pub(crate) fn get_port(network: Network) -> u16

Source

pub(crate) fn get_peers_from_dns(&self) -> Result<(), WireError>

Fetch peers from DNS seeds, sending a NodeNotification with found ones. Returns immediately after spawning a background blocking task that performs the work.

Source

fn maybe_ask_dns_seed_for_addresses(&mut self)

Check whether it’s necessary to request more addresses from DNS seeds.

Perform another address request from DNS seeds if we still don’t have enough addresses on the AddressMan and the last address request from DNS seeds was over 2 minutes ago.

Source

fn maybe_use_hardcoded_addresses(&mut self)

If we don’t have any peers, we use the hardcoded addresses.

This is only done if we haven’t had any peers for a long time, or if we can’t find a utreexo peer in a context where we need one. This function won’t do anything if --connect was used.

Source

pub(crate) fn init_peers(&mut self) -> Result<(), WireError>

Source

pub(crate) fn maybe_open_connection( &mut self, required_service: ServiceFlags, ) -> Result<(), WireError>

Source

pub(crate) fn maybe_open_connection_with_added_peers( &mut self, ) -> Result<(), WireError>

Source§

impl<T, Chain> UtreexoNode<Chain, T>
where T: 'static + Default + NodeContext, Chain: ChainBackend + 'static, WireError: From<Chain::Error>,

Source

fn choose_peer_by_latency( &self, service: ServiceFlags, ) -> Option<(&u32, &LocalPeerView)>

Picks a Ready peer supporting service, biased toward lower message latency.

Each candidate’s weight is computed as lowest_time / time_i. For instance, with two candidates at 50ms and 100ms latency, the weights are 1.0 and 0.5 respectively, so the probabilities of being chosen are 2/3 and 1/3.
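The weighting above can be reproduced in a few lines (latency_weights is a hypothetical helper, not the crate's code):

```rust
// Latency-weighting sketch: the fastest candidate gets weight 1.0 and slower
// candidates proportionally less (lowest_time / time_i).
fn latency_weights(latencies_ms: &[f64]) -> Vec<f64> {
    let lowest = latencies_ms.iter().cloned().fold(f64::INFINITY, f64::min);
    latencies_ms.iter().map(|t| lowest / t).collect()
}

fn main() {
    let weights = latency_weights(&[50.0, 100.0]);
    let total: f64 = weights.iter().sum();
    // Normalizing the weights gives the selection probabilities: 2/3 and 1/3.
    let probs: Vec<f64> = weights.iter().map(|w| w / total).collect();
    println!("weights {weights:?} -> probabilities {probs:?}");
}
```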

Source

pub(crate) fn connected_peers(&self) -> usize

Returns how many connected peers we have.

This function will only count peers that completed handshake and are ready to be used.

Source

pub(crate) fn send_to_fast_peer( &self, request: NodeRequest, required_service: ServiceFlags, ) -> Result<u32, WireError>

Sends a request to an initialized peer that supports required_service, chosen via a latency-weighted distribution (lower latency => more likely).

Returns an error if no ready peer has required_service or if sending the request failed.

Source

pub(crate) fn send_to_random_peer( &mut self, req: NodeRequest, required_service: ServiceFlags, ) -> Result<u32, WireError>

Source

pub(crate) fn send_to_peer( &self, peer_id: u32, req: NodeRequest, ) -> Result<(), WireError>

Source

pub(crate) fn broadcast_to_peers(&mut self, request: NodeRequest)

Sends the same request to all connected peers

This function is best-effort: some peers may not receive the request if they are disconnected or if there is an error sending it. We intentionally don’t propagate errors to the caller, as that would force an early return from the function, preventing us from sending the request to the peers that come after the first erroring one.
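A minimal sketch of that best-effort loop, using std channels as stand-in peer connections (broadcast and the delivery count are illustrative):

```rust
use std::sync::mpsc;

// Best-effort broadcast: try every peer and count deliveries instead of
// returning early on the first failing send.
fn broadcast(peers: &[mpsc::Sender<String>], request: &str) -> usize {
    let mut delivered = 0;
    for peer in peers {
        // A failed send (peer disconnected) is ignored, not propagated:
        // propagating would skip every peer after the first failing one.
        if peer.send(request.to_string()).is_ok() {
            delivered += 1;
        }
    }
    delivered
}

fn main() {
    let (tx1, rx1) = mpsc::channel();
    let (tx2, rx2) = mpsc::channel::<String>();
    drop(rx2); // peer 2 disconnected: its channel is closed
    let n = broadcast(&[tx1, tx2], "inv");
    println!("delivered to {n} peer(s); peer 1 got {:?}", rx1.recv().unwrap());
}
```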

Source

pub(crate) fn ask_for_addresses(&mut self) -> Result<(), WireError>

Source

fn is_peer_good(peer: &LocalPeerView, needs: ServiceFlags) -> bool

Source

pub(crate) fn handle_peer_ready( &mut self, peer: u32, version: Version, ) -> Result<(), WireError>

Source

pub(crate) fn handle_notfound_msg( &mut self, inv: Inventory, ) -> Result<(), WireError>

Handles a NOTFOUND inventory by completing any matching inflight user request with None.

Source

pub(crate) fn handle_tx_msg(&mut self, tx: Transaction) -> Result<(), WireError>

Handles an incoming mempool transaction by completing any matching inflight user request.

Source

pub(crate) fn handle_peer_msg_common( &mut self, msg: PeerMessages, peer: u32, ) -> Result<Option<PeerMessages>, WireError>

Handles peer messages where behavior is common to all node contexts, returning Some only for peer messages that require context-specific handling.

Source

pub(crate) fn handle_disconnection( &mut self, peer: u32, idx: usize, ) -> Result<(), WireError>

Source

pub(crate) fn increase_banscore( &mut self, peer_id: u32, factor: u32, ) -> Result<(), WireError>

Increases the “banscore” of a peer.

This is an always-increasing number that, if it reaches our max_banscore setting, will cause the peer to be banned for one BANTIME. The amount of each increment is given by factor, which is calibrated for each misbehaving action a peer may incur.
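The mechanics can be sketched as follows (PeerScore, increase_banscore and the thresholds are illustrative, not the crate's types):

```rust
// Banscore sketch: a monotonically increasing counter per peer; crossing
// max_banscore means the peer should be disconnected and banned for BANTIME.
struct PeerScore {
    banscore: u32,
}

fn increase_banscore(peer: &mut PeerScore, factor: u32, max_banscore: u32) -> bool {
    peer.banscore = peer.banscore.saturating_add(factor);
    peer.banscore >= max_banscore // true: time to disconnect and ban
}

fn main() {
    let mut peer = PeerScore { banscore: 0 };
    // Small infractions (factor 5) accumulate slowly; a serious one
    // (factor 100) can cross the threshold in one step.
    let banned = increase_banscore(&mut peer, 5, 100);
    println!("after minor infraction: score {}, banned {banned}", peer.banscore);
    let banned = increase_banscore(&mut peer, 100, 100);
    println!("after major infraction: score {}, banned {banned}", peer.banscore);
}
```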

Source

pub(crate) fn disconnect_and_ban(&mut self, peer: u32) -> Result<(), WireError>

Disconnects a peer and bans it for T::BAN_TIME.

Source

pub(crate) fn check_for_timeout(&mut self) -> Result<(), WireError>

Checks whether some of our inflight requests have timed out.

This function checks whether any of our inflight requests have timed out. If so, it removes them from the inflight list and increases the banscore of the peer the request was sent to. It also resends the request to another peer.
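A simplified sketch of that timeout sweep. The map of request name to (peer, age) is illustrative; the real code tracks InflightRequests with timestamps, from which the age would be derived:

```rust
use std::collections::HashMap;
use std::time::Duration;

// Timeout sweep: requests whose age exceeds the timeout are removed; the
// caller would then bump the slow peer's banscore and resend elsewhere.
fn check_for_timeout(
    inflight: &mut HashMap<&'static str, (u32, Duration)>, // request -> (peer, age)
    timeout: Duration,
) -> Vec<(&'static str, u32)> {
    let timed_out: Vec<_> = inflight
        .iter()
        .filter(|(_, (_, age))| *age > timeout)
        .map(|(req, (peer, _))| (*req, *peer))
        .collect();
    for (req, _peer) in &timed_out {
        inflight.remove(req);
    }
    timed_out // each of these gets resent to another peer
}

fn main() {
    let mut inflight = HashMap::new();
    inflight.insert("block 123", (7u32, Duration::from_secs(120)));
    inflight.insert("headers", (3u32, Duration::from_secs(5)));
    let expired = check_for_timeout(&mut inflight, Duration::from_secs(60));
    println!("timed out: {expired:?}; still inflight: {}", inflight.len());
}
```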

Source

pub(crate) fn handle_addresses_from_peer( &mut self, peer: u32, addresses: Vec<AddrV2Message>, ) -> Result<(), WireError>

Source

pub(crate) fn redo_inflight_request( &mut self, req: &InflightRequests, ) -> Result<(), WireError>

Source

pub(crate) fn save_peers(&self) -> Result<(), WireError>

Source

pub(crate) fn save_utreexo_peers(&self) -> Result<(), WireError>

Saves the utreexo peers to disk so we can reconnect with them later

Source

pub(crate) fn register_message_time( &mut self, notification: &PeerMessages, peer: u32, read_at: Instant, ) -> Option<()>

Register a message on self.inflights and record the time taken to respond to it.

We need this information for two purposes:

  1. To calculate the average time taken to respond to messages from peers, which we use to select the fastest peer when sending requests.
  2. If the metrics feature is enabled, we record the response times of all peers in a histogram and expose it as a Prometheus metric.
Source

pub(crate) fn update_peer_metrics(&self)

Source

pub(crate) fn has_utreexo_peers(&self) -> bool

Source

pub(crate) fn has_compact_filters_peer(&self) -> bool

Source

pub(crate) fn get_peer_info(&self, peer_id: &u32) -> Option<PeerInfo>

Source

pub(crate) fn to_addr_v2(&self, addr: IpAddr) -> AddrV2

Helper function to resolve an IpAddr into an AddrV2. This is a bit of a hack while rust-bitcoin doesn’t have a From/Into conversion between IpAddr and AddrV2.

Source

pub fn handle_addnode_add_peer( &mut self, addr: IpAddr, port: u16, v2_transport: bool, ) -> Result<(), WireError>

Handles addnode-RPC Add requests, adding a new peer to the added_peers list. This means the peer is marked as a “manually added peer”. We then try to connect to it, or retry later.

Source

pub fn handle_addnode_remove_peer( &mut self, addr: IpAddr, port: u16, ) -> Result<(), WireError>

Handles remove node requests, removing a peer from the node.

Removes a node from the added_peers list but does not disconnect the node if it was already connected. It only ensures that the node is no longer treated as a manually added node (i.e., it won’t be reconnected if disconnected).

If someone wants to disconnect a peer, it should be done using disconnectnode.

Source

pub fn handle_disconnect_peer( &mut self, addr: IpAddr, port: u16, ) -> Result<(), WireError>

Handles the node request for immediate disconnection from a peer.

Source

pub fn handle_addnode_onetry_peer( &mut self, addr: IpAddr, port: u16, v2_transport: bool, ) -> Result<(), WireError>

Handles addnode onetry requests: this will try to connect to the given address and port once. If successful, it adds the node to the peers list, but not to the added_peers list (i.e., it won’t be reconnected if disconnected).

Source§

impl<Chain> UtreexoNode<Chain, RunningNode>
where Chain: ThreadSafeChain + Clone, WireError: From<Chain::Error>, Chain::Error: From<UtreexoLeafError>,

Source

fn send_addresses(&mut self) -> Result<(), WireError>

Source

pub async fn catch_up(self) -> Result<Self, WireError>

Every time we restart the node, we’ll be a few blocks behind the tip. This function will start a sync node that will request, download and validate all blocks from the last validation index to the tip. This function will block until the sync node is finished.

On the first startup, if we use either assumeutreexo or PoW fraud proofs, this function will only download the blocks that come after the assumed one. For PoW fraud proofs, this means the last 100 blocks; for assumeutreexo, however many blocks come after the hard-coded value in the config file.

Source

fn check_connections(&mut self) -> Result<(), WireError>

This function is called periodically to check if we have:

  • 10 connections
  • At least one utreexo peer
  • At least one compact filters peer

If we are missing the special peers but have 10 connections, we should disconnect one random peer and try to connect to a utreexo and a compact filters peer.

Source

pub fn backfill(&self, done_flag: Sender<()>) -> Result<bool, WireError>

If either PoW fraud proofs or assumeutreexo are enabled, we will “skip” IBD for all historical blocks. This allows us to start the node faster, making it usable within a few minutes. If you still want to validate all blocks, you can enable the backfill option.

This function will spawn a background task that will download and validate all blocks that got assumed. After completion, the task will shutdown and the node will continue running normally. If we ever assume an invalid chain, the node will halt and catch fire.

Source

pub async fn run(self, stop_signal: Sender<()>)

Source

async fn maintenance_tick(&mut self) -> LoopControl

Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.

Returns LoopControl::Break if we need to stop the node due to the kill signal being set.

Source

fn download_filters(&mut self) -> Result<(), WireError>

Source

fn ask_missed_block(&mut self) -> Result<(), WireError>

Source

fn get_peer_score(&self, peer: u32) -> u32

If we think our tip is stale, we may disconnect one peer and try to get a new one. In this process, if the extra peer gives us a new block, we should drop one of our already connected peers to keep the number of connections stable. This function decides which peer to drop based on whether they’ve timely inv-ed us about the last 6 blocks.

Source

fn check_for_stale_tip(&mut self) -> Result<(), WireError>

This function checks how much time has passed since our last tip update; if it’s been more than 15 minutes, we try to update the tip.
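A sketch of the staleness check, assuming the caller tracks how long ago the tip last moved (tip_is_stale and STALE_TIP_AGE are hypothetical names):

```rust
use std::time::Duration;

// If the tip hasn't moved for more than 15 minutes, the node would poke its
// peers and possibly open an extra connection to find new blocks.
const STALE_TIP_AGE: Duration = Duration::from_secs(15 * 60);

fn tip_is_stale(since_last_tip_update: Duration) -> bool {
    since_last_tip_update > STALE_TIP_AGE
}

fn main() {
    println!("{}", tip_is_stale(Duration::from_secs(60)));      // recent tip
    println!("{}", tip_is_stale(Duration::from_secs(16 * 60))); // stale tip
}
```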

Source

fn handle_new_block( &mut self, block: BlockHash, peer: u32, ) -> Result<(), WireError>

Source

async fn handle_notification( &mut self, notification: NodeNotification, ) -> Result<(), WireError>

Source§

impl<Chain> UtreexoNode<Chain, SyncNode>
where Chain: ThreadSafeChain, WireError: From<Chain::Error>, Chain::Error: From<UtreexoLeafError>,

Node methods for a UtreexoNode where its Context is a SyncNode. See node for more information.

Source

fn get_blocks_to_download(&mut self)

Computes the next blocks to request, and sends a GETDATA request

We send block requests in batches of four, and we can have at most two such batches inflight. Therefore, we can have at most eight inflight blocks.

This function sends exactly one GETDATA, asking for four blocks. It computes the next blocks we need, given our tip, validation index, inflight requests and cached blocks. We then select a random peer and send the request.

TODO: Be smarter when selecting peers to send to, e.g. by taking into consideration already-inflight blocks and latency.
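The batching arithmetic can be sketched as follows (next_batch and the constants are illustrative, not floresta's code):

```rust
// Batch sizing: four blocks per GETDATA, at most two batches inflight,
// therefore at most eight inflight blocks.
const BATCH_SIZE: usize = 4;
const MAX_INFLIGHT: usize = 2 * BATCH_SIZE;

fn next_batch(tip: u32, last_requested: u32, inflight: usize) -> Vec<u32> {
    if inflight + BATCH_SIZE > MAX_INFLIGHT {
        return Vec::new(); // both batch slots are already in use
    }
    // The next heights we still need, capped at the tip.
    (last_requested + 1..=tip).take(BATCH_SIZE).collect()
}

fn main() {
    // One batch of four inflight: we may request one more batch.
    println!("{:?}", next_batch(1000, 8, 4));
    // Eight blocks inflight: we must wait for responses first.
    println!("{:?}", next_batch(1000, 12, 8));
    // Near the tip, the batch is simply shorter.
    println!("{:?}", next_batch(1000, 998, 0));
}
```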

Source

fn ask_for_missed_blocks(&mut self) -> Result<(), WireError>

Source

fn check_connections(&mut self) -> Result<(), WireError>

This function will periodically check our connections, to ensure that:

  • we have enough utreexo peers to download proofs from (at least 2)
  • we have enough peers to download blocks from (at most MAX_OUTGOING_PEERS)
  • no peers are too slow, potentially stalling our block download (TODO)
Source

pub async fn run(self, done_cb: impl FnOnce(&Chain)) -> Self

Starts the sync node by updating the last block requested and starting the main loop. Each iteration of the loop performs the following tasks, in order:

  • Receives messages from our peers through the node_tx channel
  • Handles the received message
  • Checks if the kill signal is set; if so, breaks the loop
  • Checks if the chain is in IBD and disables it if it’s not (e.g. if the chain is synced)
  • Checks if our tip is obsolete and requests a new one, creating a new connection
  • Handles timeouts for inflight requests
  • If we’re low on inflights, requests new blocks to validate

Source

async fn maintenance_tick(&mut self) -> LoopControl

Performs the periodic maintenance tasks, including checking for the cancel signal, peer connections, and inflight request timeouts.

Returns LoopControl::Break if we need to break the main SyncNode loop, either because the kill signal was set or because the chain is synced.

Source

async fn handle_message( &mut self, msg: NodeNotification, ) -> Result<(), WireError>

Processes a message from a peer, dispatching on the PeerMessages variant.

Source§

impl<T, Chain> UtreexoNode<Chain, T>
where T: 'static + Default + NodeContext, Chain: ChainBackend + 'static, WireError: From<Chain::Error>,

Source

pub fn get_handle(&self) -> NodeInterface

Returns a handle to the node interface that we can use to request data from our node. This struct is thread safe, so we can use it from multiple threads and have multiple handles. It also doesn’t require a mutable reference to the node, or any synchronization mechanism.
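A sketch of why such a handle needs no mutable reference to the node: the shared state lives behind an Arc, so handles are cheap to clone and safe across threads. This toy NodeInterface with a single best_height field is a stand-in, not the real interface:

```rust
use std::sync::{Arc, Mutex};

// Cloneable, thread-safe handle: every clone shares the same node state.
#[derive(Clone)]
struct NodeInterface {
    best_height: Arc<Mutex<u32>>,
}

impl NodeInterface {
    fn get_height(&self) -> u32 {
        *self.best_height.lock().unwrap()
    }
}

fn main() {
    let height = Arc::new(Mutex::new(0u32));
    let handle = NodeInterface { best_height: height.clone() };
    let another = handle.clone(); // multiple handles are fine
    *height.lock().unwrap() = 42; // the node advances its tip
    println!("{} {}", handle.get_height(), another.get_height());
}
```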

Source

fn handle_get_peer_info(&self, responder: Sender<NodeResponse>)

Handles getpeerinfo requests, returning a list of all connected peers and some useful information about them.

Source

pub(crate) async fn perform_user_request( &mut self, user_req: UserRequest, responder: Sender<NodeResponse>, )

Actually perform the user request

These are requests made by some consumer of floresta-wire using the NodeInterface, and may be a mempool transaction, a block, or a connection request.

Source

pub(crate) fn check_is_user_block_and_reply( &mut self, block: Block, ) -> Result<Option<Block>, WireError>

Check if this block request is made by a user through the user interface and answer it back to the user if so.

This function will return the given block if it isn’t a user request. This is to avoid cloning the block.

Source§

impl<T, Chain> UtreexoNode<Chain, T>
where T: 'static + Default + NodeContext, Chain: ChainBackend + 'static, WireError: From<Chain::Error>,

Source

pub fn new( config: UtreexoNodeConfig, chain: Chain, mempool: Arc<Mutex<Mempool>>, block_filters: Option<Arc<NetworkFilters<FlatFiltersStore>>>, kill_signal: Arc<RwLock<bool>>, address_man: AddressMan, ) -> Result<Self, WireError>

Source

pub(crate) fn shutdown(&mut self)

Trait Implementations§

Source§

impl<Chain: ChainBackend, T> Deref for UtreexoNode<Chain, T>

Source§

type Target = NodeCommon<Chain>

The resulting type after dereferencing.
Source§

fn deref(&self) -> &Self::Target

Dereferences the value.
Source§

impl<T, Chain: ChainBackend> DerefMut for UtreexoNode<Chain, T>

Source§

fn deref_mut(&mut self) -> &mut Self::Target

Mutably dereferences the value.

Auto Trait Implementations§

§

impl<Chain, Context> Freeze for UtreexoNode<Chain, Context>
where Context: Freeze, Chain: Freeze,

§

impl<Chain, Context = RunningNode> !RefUnwindSafe for UtreexoNode<Chain, Context>

§

impl<Chain, Context> Send for UtreexoNode<Chain, Context>
where Context: Send, Chain: Send,

§

impl<Chain, Context> Sync for UtreexoNode<Chain, Context>
where Context: Sync, Chain: Sync,

§

impl<Chain, Context> Unpin for UtreexoNode<Chain, Context>
where Context: Unpin, Chain: Unpin,

§

impl<Chain, Context = RunningNode> !UnwindSafe for UtreexoNode<Chain, Context>

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

§

impl<T> Instrument for T

§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided [Span], returning an Instrumented wrapper. Read more
§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<P, T> Receiver for P
where P: Deref<Target = T> + ?Sized, T: ?Sized,

Source§

type Target = T

🔬This is a nightly-only experimental API. (arbitrary_self_types)
The target type on which the method may be called.
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

§

fn vzip(self) -> V

§

impl<T> WithSubscriber for T

§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a [WithDispatch] wrapper. Read more
§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a [WithDispatch] wrapper. Read more