Struct ParquetDataCatalog

pub struct ParquetDataCatalog { /* private fields */ }

A high-performance data catalog for storing and retrieving financial market data using Apache Parquet format.

The ParquetDataCatalog provides a comprehensive solution for managing large volumes of financial market data with efficient storage, querying, and consolidation capabilities. It supports various object store backends including local filesystems, AWS S3, and other cloud storage providers.
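
A typical workflow, sketched under the assumption that quote data is already in memory (the path is illustrative):

use std::path::PathBuf;
use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let mut catalog = ParquetDataCatalog::new(
    PathBuf::from("/tmp/posei_trader"),
    None, None, None, None,
);

// Write quotes, letting their timestamps define the file's range.
let quotes: Vec<QuoteTick> = vec![/* quote data */];
catalog.write_to_parquet(quotes, None, None)?;

// Merge small files, then query the consolidated data back.
catalog.consolidate_catalog(None, None, None)?;
let result = catalog.query::<QuoteTick>(None, None, None, None)?;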

§Features

  • Efficient Storage: Uses Apache Parquet format with configurable compression
  • Object Store Backend: Supports multiple storage backends through the object_store crate
  • Time-based Organization: Organizes data by timestamp ranges for optimal query performance
  • Data Validation: Ensures timestamp ordering and interval consistency
  • Consolidation: Merges multiple files to reduce storage overhead and improve query speed
  • Type Safety: Strongly typed data handling with compile-time guarantees

§Data Organization

Data is organized hierarchically by data type and instrument:

  • data/{data_type}/{instrument_id}/{start_ts}-{end_ts}.parquet
  • Files are named with their timestamp ranges for efficient range queries
  • Intervals are validated to be disjoint to prevent data overlap
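
For example, one day of quote data for a hypothetical BTCUSD instrument would be stored at a path like:

data/quotes/BTCUSD/1609459200000000000-1609545600000000000.parquet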

§Performance Considerations

  • Batch Size: Controls memory usage during data processing
  • Compression: SNAPPY compression provides good balance of speed and size
  • Row Group Size: Affects query performance and memory usage
  • File Consolidation: Reduces the number of files for better query performance
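
A sketch of tuning these settings through the constructor; the import path for Compression is an assumption (the parquet crate's parquet::basic::Compression):

use std::path::PathBuf;
use parquet::basic::Compression;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

// Larger batches and row groups suit bulk, scan-heavy workloads.
let catalog = ParquetDataCatalog::new(
    PathBuf::from("/tmp/posei_trader"),
    None,                       // no storage options
    Some(10_000),               // batch size
    Some(Compression::SNAPPY),  // explicit, same as the default
    Some(10_000),               // max rows per row group
);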

Implementations§

impl ParquetDataCatalog

pub fn new(
    base_path: PathBuf,
    storage_options: Option<HashMap<String, String>>,
    batch_size: Option<usize>,
    compression: Option<Compression>,
    max_row_group_size: Option<usize>,
) -> Self

Creates a new ParquetDataCatalog instance from a local file path.

This is a convenience constructor that converts a local path to a URI format and delegates to Self::from_uri.

§Parameters
  • base_path: The base directory path for data storage.
  • storage_options: Optional HashMap containing storage-specific configuration options.
  • batch_size: Number of records to process in each batch (default: 5000).
  • compression: Parquet compression algorithm (default: SNAPPY).
  • max_row_group_size: Maximum rows per Parquet row group (default: 5000).
§Panics

Panics if the path cannot be converted to a valid URI or if the object store cannot be created from the path.

§Examples
use std::path::PathBuf;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(
    PathBuf::from("/tmp/posei_trader"),
    None,        // no storage options
    Some(1000),  // smaller batch size
    None,        // default compression
    None,        // default row group size
);

pub fn from_uri(
    uri: &str,
    storage_options: Option<HashMap<String, String>>,
    batch_size: Option<usize>,
    compression: Option<Compression>,
    max_row_group_size: Option<usize>,
) -> Result<Self>

Creates a new ParquetDataCatalog instance from a URI with optional storage options.

Supports various URI schemes, including local file paths and the cloud storage backends provided by the object_store crate.

§Supported URI Schemes
  • AWS S3: s3://bucket/path
  • Google Cloud Storage: gs://bucket/path or gcs://bucket/path
  • Azure Blob Storage: azure://account/container/path or abfs://container@account.dfs.core.windows.net/path
  • HTTP/WebDAV: http:// or https://
  • Local files: file://path or plain paths
§Parameters
  • uri: The URI for the data storage location.
  • storage_options: Optional HashMap containing storage-specific configuration options:
    • For S3: endpoint_url, region, access_key_id, secret_access_key, session_token, etc.
    • For GCS: service_account_path, service_account_key, project_id, etc.
    • For Azure: account_name, account_key, sas_token, etc.
  • batch_size: Number of records to process in each batch (default: 5000).
  • compression: Parquet compression algorithm (default: SNAPPY).
  • max_row_group_size: Maximum rows per Parquet row group (default: 5000).
§Errors

Returns an error if:

  • The URI format is invalid or unsupported.
  • The object store cannot be created or accessed.
  • Authentication fails for cloud storage backends.
§Examples
use std::collections::HashMap;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

// Local filesystem
let local_catalog = ParquetDataCatalog::from_uri(
    "/tmp/posei_trader",
    None, None, None, None
)?;

// S3 bucket
let s3_catalog = ParquetDataCatalog::from_uri(
    "s3://my-bucket/nautilus-data",
    None, None, None, None
)?;

// Google Cloud Storage
let gcs_catalog = ParquetDataCatalog::from_uri(
    "gs://my-bucket/nautilus-data",
    None, None, None, None
)?;

// Azure Blob Storage
let azure_catalog = ParquetDataCatalog::from_uri(
    "azure://account/container/nautilus-data",
    None, None, None, None
)?;

// S3 with custom endpoint and credentials
let mut storage_options = HashMap::new();
storage_options.insert("endpoint_url".to_string(), "https://my-s3-endpoint.com".to_string());
storage_options.insert("access_key_id".to_string(), "my-key".to_string());
storage_options.insert("secret_access_key".to_string(), "my-secret".to_string());

let s3_catalog = ParquetDataCatalog::from_uri(
    "s3://my-bucket/nautilus-data",
    Some(storage_options),
    None, None, None,
)?;

pub fn write_data_enum(
    &self,
    data: Vec<Data>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<()>

Writes mixed data types to the catalog by separating them into type-specific collections.

This method takes a heterogeneous collection of market data and separates it by type, then writes each type to its appropriate location in the catalog. This is useful when processing mixed data streams or bulk data imports.

§Parameters
  • data: A vector of mixed Data enum variants.
  • start: Optional start timestamp to override the data’s natural range.
  • end: Optional end timestamp to override the data’s natural range.
§Notes
  • Data is automatically sorted by type before writing.
  • Each data type is written to its own directory structure.
  • Instrument data handling is not yet implemented (TODO).
§Examples
use nautilus_model::data::Data;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);
let mixed_data: Vec<Data> = vec![/* mixed data types */];

catalog.write_data_enum(mixed_data, None, None)?;

pub fn write_to_parquet<T>(
    &self,
    data: Vec<T>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<PathBuf>
where
    T: HasTsInit + EncodeToRecordBatch + CatalogPathPrefix,

Writes typed data to a Parquet file in the catalog.

This is the core method for persisting market data to the catalog. It handles data validation, batching, compression, and ensures proper file organization with timestamp-based naming.

§Type Parameters
  • T: The data type to write, must implement required traits for serialization and cataloging.
§Parameters
  • data: Vector of data records to write (must be in ascending timestamp order).
  • start: Optional start timestamp to override the natural data range.
  • end: Optional end timestamp to override the natural data range.
§Returns

Returns the PathBuf of the created file, or an empty path if no data was provided.

§Errors

This function will return an error if:

  • Data serialization to Arrow record batches fails
  • Object store write operations fail
  • File path construction fails
  • Timestamp interval validation fails after writing
§Panics

Panics if:

  • Data timestamps are not in ascending order
  • Record batches are empty after conversion
  • Required metadata is missing from the schema
§Examples
use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);
let quotes: Vec<QuoteTick> = vec![/* quote data */];

let path = catalog.write_to_parquet(quotes, None, None)?;
println!("Data written to: {:?}", path);

pub fn write_to_json<T>(
    &self,
    data: Vec<T>,
    path: Option<PathBuf>,
    write_metadata: bool,
) -> Result<PathBuf>
where
    T: HasTsInit + Serialize + CatalogPathPrefix + EncodeToRecordBatch,

Writes typed data to a JSON file in the catalog.

This method provides an alternative to Parquet format for data export and debugging. JSON files are human-readable but less efficient for large datasets.

§Type Parameters
  • T: The data type to write, must implement serialization and cataloging traits.
§Parameters
  • data: Vector of data records to write (must be in ascending timestamp order).
  • path: Optional custom directory path (defaults to catalog’s standard structure).
  • write_metadata: Whether to write a separate metadata file alongside the data.
§Returns

Returns the PathBuf of the created JSON file.

§Errors

This function will return an error if:

  • JSON serialization fails
  • Object store write operations fail
  • File path construction fails
§Panics

Panics if data timestamps are not in ascending order.

§Examples
use std::path::PathBuf;
use nautilus_model::data::TradeTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);
let trades: Vec<TradeTick> = vec![/* trade data */];

let path = catalog.write_to_json(
    trades,
    Some(PathBuf::from("/custom/path")),
    true  // write metadata
)?;

pub fn data_to_record_batches<T>(
    &self,
    data: Vec<T>,
) -> Result<Vec<RecordBatch>>
where
    T: HasTsInit + EncodeToRecordBatch,

Converts data into Arrow record batches for Parquet serialization.

This method chunks the data according to the configured batch size and converts each chunk into an Arrow record batch with appropriate metadata.

§Type Parameters
  • T: The data type to convert, must implement required encoding traits.
§Parameters
  • data: Vector of data records to convert
§Returns

Returns a vector of Arrow RecordBatch instances ready for Parquet serialization.

§Errors

Returns an error if record batch encoding fails for any chunk.
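
§Examples

A minimal sketch in the style of the other examples (the catalog setup and quote data are illustrative):

use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);
let quotes: Vec<QuoteTick> = vec![/* quote data */];

// Chunks follow the configured batch size (default: 5000 records).
let batches = catalog.data_to_record_batches(quotes)?;
println!("Encoded {} record batches", batches.len());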

pub fn extend_file_name(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
    start: UnixNanos,
    end: UnixNanos,
) -> Result<()>

Extends the timestamp range of an existing Parquet file by renaming it.

This method finds an existing file that is adjacent to the specified time range and renames it to include the new range. This is useful when appending data that extends the time coverage of existing files.

§Parameters
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
  • start: Start timestamp of the new range to extend to.
  • end: End timestamp of the new range to extend to.
§Returns

Returns Ok(()) on success, or an error if the operation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed.
  • No adjacent file is found to extend.
  • File rename operations fail.
  • Interval validation fails after extension.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;

let catalog = ParquetDataCatalog::new(/* ... */);

// Extend a file's range backwards or forwards
catalog.extend_file_name(
    "quotes",
    Some("BTCUSD".to_string()),
    UnixNanos::from(1609459200000000000),
    UnixNanos::from(1609545600000000000)
)?;

pub fn consolidate_catalog(
    &self,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    ensure_contiguous_files: Option<bool>,
) -> Result<()>

Consolidates all data files in the catalog by merging multiple files into single files per directory.

This method finds all leaf data directories in the catalog and consolidates the Parquet files within each directory. Consolidation improves query performance by reducing the number of files that need to be read and can also reduce storage overhead.

§Parameters
  • start: Optional start timestamp to limit consolidation to files within this range.
  • end: Optional end timestamp to limit consolidation to files within this range.
  • ensure_contiguous_files: Whether to validate that consolidated intervals are contiguous (default: true).
§Returns

Returns Ok(()) on success, or an error if consolidation fails for any directory.

§Errors

This function will return an error if:

  • Directory listing fails.
  • File consolidation operations fail.
  • Interval validation fails (when ensure_contiguous_files is true).
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;

let catalog = ParquetDataCatalog::new(/* ... */);

// Consolidate all files in the catalog
catalog.consolidate_catalog(None, None, None)?;

// Consolidate only files within a specific time range
catalog.consolidate_catalog(
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000)),
    Some(true)
)?;

pub fn consolidate_data(
    &self,
    type_name: &str,
    instrument_id: Option<String>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    ensure_contiguous_files: Option<bool>,
) -> Result<()>

Consolidates data files for a specific data type and instrument.

This method consolidates Parquet files within a specific directory (defined by data type and optional instrument ID) by merging multiple files into a single file. This improves query performance and can reduce storage overhead.

§Parameters
  • type_name: The data type directory name (e.g., “quotes”, “trades”, “bars”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
  • start: Optional start timestamp to limit consolidation to files within this range.
  • end: Optional end timestamp to limit consolidation to files within this range.
  • ensure_contiguous_files: Whether to validate that consolidated intervals are contiguous (default: true).
§Returns

Returns Ok(()) on success, or an error if consolidation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed
  • File consolidation operations fail
  • Interval validation fails (when ensure_contiguous_files is true)
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;

let catalog = ParquetDataCatalog::new(/* ... */);

// Consolidate all quote files for a specific instrument
catalog.consolidate_data(
    "quotes",
    Some("BTCUSD".to_string()),
    None,
    None,
    None
)?;

// Consolidate trade files within a time range
catalog.consolidate_data(
    "trades",
    None,
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000)),
    Some(true)
)?;

pub fn reset_catalog_file_names(&self) -> Result<()>

Resets the filenames of all Parquet files in the catalog to match their actual content timestamps.

This method scans all leaf data directories in the catalog and renames files based on the actual timestamp range of their content. This is useful when files have been modified or when filename conventions have changed.

§Returns

Returns Ok(()) on success, or an error if the operation fails.

§Errors

This function will return an error if:

  • Directory listing fails
  • File metadata reading fails
  • File rename operations fail
  • Interval validation fails after renaming
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

// Reset all filenames in the catalog
catalog.reset_catalog_file_names()?;

pub fn reset_data_file_names(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<()>

Resets the filenames of Parquet files for a specific data type and instrument ID.

This method renames files in a specific directory based on the actual timestamp range of their content. This is useful for correcting filenames after data modifications or when filename conventions have changed.

§Parameters
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns

Returns Ok(()) on success, or an error if the operation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed
  • File metadata reading fails
  • File rename operations fail
  • Interval validation fails after renaming
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

// Reset filenames for all quote files
catalog.reset_data_file_names("quotes", None)?;

// Reset filenames for a specific instrument's trade files
catalog.reset_data_file_names("trades", Some("BTCUSD".to_string()))?;

pub fn find_leaf_data_directories(&self) -> Result<Vec<String>>

Finds all leaf data directories in the catalog.

A leaf directory is one that contains data files but no subdirectories. This method is used to identify directories that can be processed for consolidation or other operations.

§Returns

Returns a vector of directory path strings representing leaf directories, or an error if directory traversal fails.

§Errors

This function will return an error if:

  • Object store listing operations fail
  • Directory structure cannot be analyzed
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

let leaf_dirs = catalog.find_leaf_data_directories()?;
for dir in leaf_dirs {
    println!("Found leaf directory: {}", dir);
}

pub fn query<T>(
    &mut self,
    instrument_ids: Option<Vec<String>>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    where_clause: Option<&str>,
) -> Result<QueryResult>
where
    T: DecodeDataFromRecordBatch + CatalogPathPrefix,

Queries data of type T loaded in the catalog, optionally filtered by instrument IDs, a start/end timestamp range, and a where clause.
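
§Examples

A usage sketch (the instrument ID is illustrative); note that this method takes &mut self:

use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let mut catalog = ParquetDataCatalog::new(/* ... */);

// Query all quotes for one instrument; `None` filters are unrestricted.
let result = catalog.query::<QuoteTick>(
    Some(vec!["BTCUSD".to_string()]),
    None,  // start
    None,  // end
    None,  // where clause
)?;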

pub fn query_files(
    &self,
    data_cls: &str,
    instrument_ids: Option<Vec<String>>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<Vec<String>>

Queries all Parquet files for a specific data type and optional instrument IDs.

This method finds all Parquet files that match the specified criteria and returns their full URIs. The files are filtered by data type, instrument IDs (if provided), and timestamp range (if provided).

§Parameters
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_ids: Optional list of instrument IDs to filter by.
  • start: Optional start timestamp to filter files by their time range.
  • end: Optional end timestamp to filter files by their time range.
§Returns

Returns a vector of file URI strings that match the query criteria, or an error if the query fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed.
  • Object store listing operations fail.
  • URI reconstruction fails.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;

let catalog = ParquetDataCatalog::new(/* ... */);

// Query all quote files
let files = catalog.query_files("quotes", None, None, None)?;

// Query trade files for specific instruments within a time range
let files = catalog.query_files(
    "trades",
    Some(vec!["BTCUSD".to_string(), "ETHUSD".to_string()]),
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000))
)?;

pub fn get_missing_intervals_for_request(
    &self,
    start: u64,
    end: u64,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Vec<(u64, u64)>>

Finds the missing time intervals for a specific data type and instrument ID.

This method compares a requested time range against the existing data coverage and returns the gaps that need to be filled. This is useful for determining what data needs to be fetched or backfilled.

§Parameters
  • start: Start timestamp of the requested range (Unix nanoseconds).
  • end: End timestamp of the requested range (Unix nanoseconds).
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns

Returns a vector of (start, end) tuples representing the missing intervals, or an error if the operation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed
  • Interval retrieval fails
  • Gap calculation fails
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

// Find missing intervals for quote data
let missing = catalog.get_missing_intervals_for_request(
    1609459200000000000,  // start
    1609545600000000000,  // end
    "quotes",
    Some("BTCUSD".to_string())
)?;

for (start, end) in missing {
    println!("Missing data from {} to {}", start, end);
}

pub fn query_last_timestamp(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Option<u64>>

Gets the last (most recent) timestamp for a specific data type and instrument ID.

This method finds the latest timestamp covered by existing data files for the specified data type and instrument. This is useful for determining the most recent data available or for incremental data updates.

§Parameters
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns

Returns Some(timestamp) if data exists, None if no data is found, or an error if the operation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed
  • Interval retrieval fails
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

// Get the last timestamp for quote data
if let Some(last_ts) = catalog.query_last_timestamp("quotes", Some("BTCUSD".to_string()))? {
    println!("Last quote timestamp: {}", last_ts);
} else {
    println!("No quote data found");
}

pub fn get_intervals(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Vec<(u64, u64)>>

Gets the time intervals covered by Parquet files for a specific data type and instrument ID.

This method returns all time intervals covered by existing data files for the specified data type and instrument. The intervals are sorted by start time and represent the complete data coverage available.

§Parameters
  • data_cls: The data type directory name (e.g., “quotes”, “trades”).
  • instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns

Returns a vector of (start, end) tuples representing the covered intervals, sorted by start time, or an error if the operation fails.

§Errors

This function will return an error if:

  • The directory path cannot be constructed.
  • Directory listing fails.
  • Filename parsing fails.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);

// Get all intervals for quote data
let intervals = catalog.get_intervals("quotes", Some("BTCUSD".to_string()))?;
for (start, end) in intervals {
    println!("Data available from {} to {}", start, end);
}

Trait Implementations§

impl Debug for ParquetDataCatalog

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.
