pub struct ParquetDataCatalog { /* private fields */ }
A high-performance data catalog for storing and retrieving financial market data using Apache Parquet format.
The ParquetDataCatalog provides a comprehensive solution for managing large volumes of financial market data with efficient storage, querying, and consolidation capabilities. It supports various object store backends including local filesystems, AWS S3, and other cloud storage providers.
§Features
- Efficient Storage: Uses Apache Parquet format with configurable compression
- Object Store Backend: Supports multiple storage backends through the object_store crate
- Time-based Organization: Organizes data by timestamp ranges for optimal query performance
- Data Validation: Ensures timestamp ordering and interval consistency
- Consolidation: Merges multiple files to reduce storage overhead and improve query speed
- Type Safety: Strongly typed data handling with compile-time guarantees
§Data Organization
Data is organized hierarchically by data type and instrument:
data/{data_type}/{instrument_id}/{start_ts}-{end_ts}.parquet
- Files are named with their timestamp ranges for efficient range queries
- Intervals are validated to be disjoint to prevent data overlap
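For example, a consolidated quotes file for an instrument such as BTCUSD (names illustrative) covering one day would be stored as:
data/quotes/BTCUSD/1609459200000000000-1609545600000000000.parquet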
§Performance Considerations
- Batch Size: Controls memory usage during data processing
- Compression: SNAPPY compression provides good balance of speed and size
- Row Group Size: Affects query performance and memory usage
- File Consolidation: Reduces the number of files for better query performance
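As a rough sketch of how these knobs map onto the constructor (the Compression import path is an assumption about how the Parquet types are exposed; adjust to your dependency tree):

use std::path::PathBuf;
use parquet::basic::Compression; // assumed import path for the Parquet compression enum
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

// Larger batches and row groups reduce file and row-group counts at the cost of memory.
let catalog = ParquetDataCatalog::new(
    PathBuf::from("/tmp/posei_trader"),
    None,                      // no storage options
    Some(10_000),              // batch size: rows encoded per record batch
    Some(Compression::SNAPPY), // explicit codec (SNAPPY is also the default)
    Some(10_000),              // max rows per Parquet row group
);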
Implementations§
impl ParquetDataCatalog
pub fn new(
    base_path: PathBuf,
    storage_options: Option<HashMap<String, String>>,
    batch_size: Option<usize>,
    compression: Option<Compression>,
    max_row_group_size: Option<usize>,
) -> Self
Creates a new ParquetDataCatalog instance from a local file path.
This is a convenience constructor that converts a local path to a URI format and delegates to Self::from_uri.
§Parameters
- base_path: The base directory path for data storage.
- storage_options: Optional HashMap containing storage-specific configuration options.
- batch_size: Number of records to process in each batch (default: 5000).
- compression: Parquet compression algorithm (default: SNAPPY).
- max_row_group_size: Maximum rows per Parquet row group (default: 5000).
§Panics
Panics if the path cannot be converted to a valid URI or if the object store cannot be created from the path.
§Examples
use std::path::PathBuf;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(
    PathBuf::from("/tmp/posei_trader"),
    None,       // no storage options
    Some(1000), // smaller batch size
    None,       // default compression
    None,       // default row group size
);
pub fn from_uri(
    uri: &str,
    storage_options: Option<HashMap<String, String>>,
    batch_size: Option<usize>,
    compression: Option<Compression>,
    max_row_group_size: Option<usize>,
) -> Result<Self>
Creates a new ParquetDataCatalog instance from a URI with optional storage options.
Supports various URI schemes including local file paths and multiple cloud storage backends supported by the object_store crate.
§Supported URI Schemes
- AWS S3: s3://bucket/path
- Google Cloud Storage: gs://bucket/path or gcs://bucket/path
- Azure Blob Storage: azure://account/container/path or abfs://container@account.dfs.core.windows.net/path
- HTTP/WebDAV: http:// or https://
- Local files: file://path or plain paths
§Parameters
- uri: The URI for the data storage location.
- storage_options: Optional HashMap containing storage-specific configuration options:
  - For S3: endpoint_url, region, access_key_id, secret_access_key, session_token, etc.
  - For GCS: service_account_path, service_account_key, project_id, etc.
  - For Azure: account_name, account_key, sas_token, etc.
- batch_size: Number of records to process in each batch (default: 5000).
- compression: Parquet compression algorithm (default: SNAPPY).
- max_row_group_size: Maximum rows per Parquet row group (default: 5000).
§Errors
Returns an error if:
- The URI format is invalid or unsupported.
- The object store cannot be created or accessed.
- Authentication fails for cloud storage backends.
§Examples
use std::collections::HashMap;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

// Local filesystem
let local_catalog = ParquetDataCatalog::from_uri(
    "/tmp/posei_trader",
    None, None, None, None
)?;

// S3 bucket
let s3_catalog = ParquetDataCatalog::from_uri(
    "s3://my-bucket/nautilus-data",
    None, None, None, None
)?;

// Google Cloud Storage
let gcs_catalog = ParquetDataCatalog::from_uri(
    "gs://my-bucket/nautilus-data",
    None, None, None, None
)?;

// Azure Blob Storage
let azure_catalog = ParquetDataCatalog::from_uri(
    "azure://account/container/nautilus-data",
    None, None, None, None
)?;

// S3 with custom endpoint and credentials
let mut storage_options = HashMap::new();
storage_options.insert("endpoint_url".to_string(), "https://my-s3-endpoint.com".to_string());
storage_options.insert("access_key_id".to_string(), "my-key".to_string());
storage_options.insert("secret_access_key".to_string(), "my-secret".to_string());

let s3_catalog = ParquetDataCatalog::from_uri(
    "s3://my-bucket/nautilus-data",
    Some(storage_options),
    None, None, None,
)?;
pub fn write_data_enum(
    &self,
    data: Vec<Data>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<()>
Writes mixed data types to the catalog by separating them into type-specific collections.
This method takes a heterogeneous collection of market data and separates it by type, then writes each type to its appropriate location in the catalog. This is useful when processing mixed data streams or bulk data imports.
§Parameters
- data: A vector of mixed [Data] enum variants.
- start: Optional start timestamp to override the data’s natural range.
- end: Optional end timestamp to override the data’s natural range.
§Notes
- Data is automatically sorted by type before writing.
- Each data type is written to its own directory structure.
- Instrument data handling is not yet implemented (TODO).
§Examples
use nautilus_model::data::Data;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
let mixed_data: Vec<Data> = vec![/* mixed data types */];
catalog.write_data_enum(mixed_data, None, None)?;
pub fn write_to_parquet<T>(
    &self,
    data: Vec<T>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<PathBuf>
where
    T: HasTsInit + EncodeToRecordBatch + CatalogPathPrefix,
Writes typed data to a Parquet file in the catalog.
This is the core method for persisting market data to the catalog. It handles data validation, batching, compression, and ensures proper file organization with timestamp-based naming.
§Type Parameters
- T: The data type to write; must implement the required traits for serialization and cataloging.
§Parameters
- data: Vector of data records to write (must be in ascending timestamp order).
- start: Optional start timestamp to override the natural data range.
- end: Optional end timestamp to override the natural data range.
§Returns
Returns the PathBuf of the created file, or an empty path if no data was provided.
§Errors
This function will return an error if:
- Data serialization to Arrow record batches fails
- Object store write operations fail
- File path construction fails
- Timestamp interval validation fails after writing
§Panics
Panics if:
- Data timestamps are not in ascending order
- Record batches are empty after conversion
- Required metadata is missing from the schema
§Examples
use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
let quotes: Vec<QuoteTick> = vec![/* quote data */];
let path = catalog.write_to_parquet(quotes, None, None)?;
println!("Data written to: {:?}", path);
pub fn write_to_json<T>(
    &self,
    data: Vec<T>,
    path: Option<PathBuf>,
    write_metadata: bool,
) -> Result<PathBuf>
where
    T: HasTsInit + Serialize + CatalogPathPrefix + EncodeToRecordBatch,
Writes typed data to a JSON file in the catalog.
This method provides an alternative to Parquet format for data export and debugging. JSON files are human-readable but less efficient for large datasets.
§Type Parameters
- T: The data type to write; must implement serialization and cataloging traits.
§Parameters
- data: Vector of data records to write (must be in ascending timestamp order).
- path: Optional custom directory path (defaults to the catalog’s standard structure).
- write_metadata: Whether to write a separate metadata file alongside the data.
§Returns
Returns the PathBuf of the created JSON file.
§Errors
This function will return an error if:
- JSON serialization fails
- Object store write operations fail
- File path construction fails
§Panics
Panics if data timestamps are not in ascending order.
§Examples
use std::path::PathBuf;
use nautilus_model::data::TradeTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
let trades: Vec<TradeTick> = vec![/* trade data */];
let path = catalog.write_to_json(
    trades,
    Some(PathBuf::from("/custom/path")),
    true, // write metadata
)?;
pub fn data_to_record_batches<T>(
    &self,
    data: Vec<T>,
) -> Result<Vec<RecordBatch>>
where
    T: HasTsInit + EncodeToRecordBatch,
Converts data into Arrow record batches for Parquet serialization.
This method chunks the data according to the configured batch size and converts each chunk into an Arrow record batch with appropriate metadata.
§Type Parameters
- T: The data type to convert; must implement the required encoding traits.
§Parameters
- data: Vector of data records to convert.
§Returns
Returns a vector of Arrow [RecordBatch] instances ready for Parquet serialization.
§Errors
Returns an error if record batch encoding fails for any chunk.
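§Examples
A usage sketch in the style of the other examples (data construction elided):

use nautilus_model::data::QuoteTick;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let catalog = ParquetDataCatalog::new(/* ... */);
let quotes: Vec<QuoteTick> = vec![/* quote data */];

// Each chunk of `batch_size` records becomes one Arrow RecordBatch.
let batches = catalog.data_to_record_batches(quotes)?;
println!("Encoded {} record batches", batches.len());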
pub fn extend_file_name(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
    start: UnixNanos,
    end: UnixNanos,
) -> Result<()>
Extends the timestamp range of an existing Parquet file by renaming it.
This method finds an existing file that is adjacent to the specified time range and renames it to include the new range. This is useful when appending data that extends the time coverage of existing files.
§Parameters
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
- start: Start timestamp of the new range to extend to.
- end: End timestamp of the new range to extend to.
§Returns
Returns Ok(()) on success, or an error if the operation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed.
- No adjacent file is found to extend.
- File rename operations fail.
- Interval validation fails after extension.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;
let catalog = ParquetDataCatalog::new(/* ... */);
// Extend a file's range backwards or forwards
catalog.extend_file_name(
    "quotes",
    Some("BTCUSD".to_string()),
    UnixNanos::from(1609459200000000000),
    UnixNanos::from(1609545600000000000)
)?;
pub fn consolidate_catalog(
    &self,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    ensure_contiguous_files: Option<bool>,
) -> Result<()>
Consolidates all data files in the catalog by merging multiple files into single files per directory.
This method finds all leaf data directories in the catalog and consolidates the Parquet files within each directory. Consolidation improves query performance by reducing the number of files that need to be read and can also reduce storage overhead.
§Parameters
- start: Optional start timestamp to limit consolidation to files within this range.
- end: Optional end timestamp to limit consolidation to files within this range.
- ensure_contiguous_files: Whether to validate that consolidated intervals are contiguous (default: true).
§Returns
Returns Ok(()) on success, or an error if consolidation fails for any directory.
§Errors
This function will return an error if:
- Directory listing fails.
- File consolidation operations fail.
- Interval validation fails (when ensure_contiguous_files is true).
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;
let catalog = ParquetDataCatalog::new(/* ... */);
// Consolidate all files in the catalog
catalog.consolidate_catalog(None, None, None)?;
// Consolidate only files within a specific time range
catalog.consolidate_catalog(
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000)),
    Some(true)
)?;
pub fn consolidate_data(
    &self,
    type_name: &str,
    instrument_id: Option<String>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    ensure_contiguous_files: Option<bool>,
) -> Result<()>
Consolidates data files for a specific data type and instrument.
This method consolidates Parquet files within a specific directory (defined by data type and optional instrument ID) by merging multiple files into a single file. This improves query performance and can reduce storage overhead.
§Parameters
- type_name: The data type directory name (e.g., “quotes”, “trades”, “bars”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
- start: Optional start timestamp to limit consolidation to files within this range.
- end: Optional end timestamp to limit consolidation to files within this range.
- ensure_contiguous_files: Whether to validate that consolidated intervals are contiguous (default: true).
§Returns
Returns Ok(()) on success, or an error if consolidation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed.
- File consolidation operations fail.
- Interval validation fails (when ensure_contiguous_files is true).
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;
let catalog = ParquetDataCatalog::new(/* ... */);
// Consolidate all quote files for a specific instrument
catalog.consolidate_data(
    "quotes",
    Some("BTCUSD".to_string()),
    None,
    None,
    None
)?;

// Consolidate trade files within a time range
catalog.consolidate_data(
    "trades",
    None,
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000)),
    Some(true)
)?;
pub fn reset_catalog_file_names(&self) -> Result<()>
Resets the filenames of all Parquet files in the catalog to match their actual content timestamps.
This method scans all leaf data directories in the catalog and renames files based on the actual timestamp range of their content. This is useful when files have been modified or when filename conventions have changed.
§Returns
Returns Ok(()) on success, or an error if the operation fails.
§Errors
This function will return an error if:
- Directory listing fails
- File metadata reading fails
- File rename operations fail
- Interval validation fails after renaming
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
// Reset all filenames in the catalog
catalog.reset_catalog_file_names()?;
pub fn reset_data_file_names(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<()>
Resets the filenames of Parquet files for a specific data type and instrument ID.
This method renames files in a specific directory based on the actual timestamp range of their content. This is useful for correcting filenames after data modifications or when filename conventions have changed.
§Parameters
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns
Returns Ok(()) on success, or an error if the operation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed
- File metadata reading fails
- File rename operations fail
- Interval validation fails after renaming
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
// Reset filenames for all quote files
catalog.reset_data_file_names("quotes", None)?;
// Reset filenames for a specific instrument's trade files
catalog.reset_data_file_names("trades", Some("BTCUSD".to_string()))?;
pub fn find_leaf_data_directories(&self) -> Result<Vec<String>>
Finds all leaf data directories in the catalog.
A leaf directory is one that contains data files but no subdirectories. This method is used to identify directories that can be processed for consolidation or other operations.
§Returns
Returns a vector of directory path strings representing leaf directories, or an error if directory traversal fails.
§Errors
This function will return an error if:
- Object store listing operations fail
- Directory structure cannot be analyzed
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
let leaf_dirs = catalog.find_leaf_data_directories()?;
for dir in leaf_dirs {
    println!("Found leaf directory: {}", dir);
}
pub fn query<T>(
    &mut self,
    instrument_ids: Option<Vec<String>>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
    where_clause: Option<&str>,
) -> Result<QueryResult>
where
    T: DecodeDataFromRecordBatch + CatalogPathPrefix,
Queries data of type T from the catalog, optionally filtered by instrument IDs, a timestamp range, and an additional where_clause expression, returning a QueryResult with the matching data.
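§Examples
A sketch of a typed query, mirroring the other examples (instrument ID and timestamps are illustrative):

use nautilus_model::data::QuoteTick;
use nautilus_core::UnixNanos;
use nautilus_persistence::backend::catalog::ParquetDataCatalog;

let mut catalog = ParquetDataCatalog::new(/* ... */);

// Query quotes for one instrument within a time range; no extra filter clause.
let result = catalog.query::<QuoteTick>(
    Some(vec!["BTCUSD".to_string()]),
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000)),
    None,
)?;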
pub fn query_files(
    &self,
    data_cls: &str,
    instrument_ids: Option<Vec<String>>,
    start: Option<UnixNanos>,
    end: Option<UnixNanos>,
) -> Result<Vec<String>>
Queries all Parquet files for a specific data type and optional instrument IDs.
This method finds all Parquet files that match the specified criteria and returns their full URIs. The files are filtered by data type, instrument IDs (if provided), and timestamp range (if provided).
§Parameters
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_ids: Optional list of instrument IDs to filter by.
- start: Optional start timestamp to filter files by their time range.
- end: Optional end timestamp to filter files by their time range.
§Returns
Returns a vector of file URI strings that match the query criteria, or an error if the query fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed.
- Object store listing operations fail.
- URI reconstruction fails.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
use nautilus_core::UnixNanos;
let catalog = ParquetDataCatalog::new(/* ... */);
// Query all quote files
let files = catalog.query_files("quotes", None, None, None)?;
// Query trade files for specific instruments within a time range
let files = catalog.query_files(
    "trades",
    Some(vec!["BTCUSD".to_string(), "ETHUSD".to_string()]),
    Some(UnixNanos::from(1609459200000000000)),
    Some(UnixNanos::from(1609545600000000000))
)?;
pub fn get_missing_intervals_for_request(
    &self,
    start: u64,
    end: u64,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Vec<(u64, u64)>>
Finds the missing time intervals for a specific data type and instrument ID.
This method compares a requested time range against the existing data coverage and returns the gaps that need to be filled. This is useful for determining what data needs to be fetched or backfilled.
§Parameters
- start: Start timestamp of the requested range (Unix nanoseconds).
- end: End timestamp of the requested range (Unix nanoseconds).
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns
Returns a vector of (start, end) tuples representing the missing intervals, or an error if the operation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed
- Interval retrieval fails
- Gap calculation fails
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
// Find missing intervals for quote data
let missing = catalog.get_missing_intervals_for_request(
    1609459200000000000, // start
    1609545600000000000, // end
    "quotes",
    Some("BTCUSD".to_string())
)?;

for (start, end) in missing {
    println!("Missing data from {} to {}", start, end);
}
pub fn query_last_timestamp(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Option<u64>>
Gets the last (most recent) timestamp for a specific data type and instrument ID.
This method finds the latest timestamp covered by existing data files for the specified data type and instrument. This is useful for determining the most recent data available or for incremental data updates.
§Parameters
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns
Returns Some(timestamp) if data exists, None if no data is found, or an error if the operation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed
- Interval retrieval fails
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
// Get the last timestamp for quote data
if let Some(last_ts) = catalog.query_last_timestamp("quotes", Some("BTCUSD".to_string()))? {
    println!("Last quote timestamp: {}", last_ts);
} else {
    println!("No quote data found");
}
pub fn get_intervals(
    &self,
    data_cls: &str,
    instrument_id: Option<String>,
) -> Result<Vec<(u64, u64)>>
Gets the time intervals covered by Parquet files for a specific data type and instrument ID.
This method returns all time intervals covered by existing data files for the specified data type and instrument. The intervals are sorted by start time and represent the complete data coverage available.
§Parameters
- data_cls: The data type directory name (e.g., “quotes”, “trades”).
- instrument_id: Optional instrument ID to target a specific instrument’s data.
§Returns
Returns a vector of (start, end) tuples representing the covered intervals, sorted by start time, or an error if the operation fails.
§Errors
This function will return an error if:
- The directory path cannot be constructed.
- Directory listing fails.
- Filename parsing fails.
§Examples
use nautilus_persistence::backend::catalog::ParquetDataCatalog;
let catalog = ParquetDataCatalog::new(/* ... */);
// Get all intervals for quote data
let intervals = catalog.get_intervals("quotes", Some("BTCUSD".to_string()))?;
for (start, end) in intervals {
    println!("Data available from {} to {}", start, end);
}
Auto Trait Implementations§
impl Freeze for ParquetDataCatalog
impl !RefUnwindSafe for ParquetDataCatalog
impl Send for ParquetDataCatalog
impl Sync for ParquetDataCatalog
impl Unpin for ParquetDataCatalog
impl !UnwindSafe for ParquetDataCatalog
Blanket Implementations§
Source§impl<T> BorrowMut<T> for Twhere
T: ?Sized,
impl<T> BorrowMut<T> for Twhere
T: ?Sized,
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
§impl<T> Instrument for T
impl<T> Instrument for T
§fn instrument(self, span: Span) -> Instrumented<Self>
fn instrument(self, span: Span) -> Instrumented<Self>
§fn in_current_span(self) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
self
into a Left
variant of Either<Self, Self>
if into_left
is true
.
Converts self
into a Right
variant of Either<Self, Self>
otherwise. Read moreSource§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
self
into a Left
variant of Either<Self, Self>
if into_left(&self)
returns true
.
Converts self
into a Right
variant of Either<Self, Self>
otherwise. Read more