pub struct Parser<L> { /* private fields */ }
Parses JSON text at a syntax level.
A Parser wraps any value that implements the lexical::Analyzer trait in a lightweight,
stream-oriented parsing layer that understands JSON syntax.
§Maximum nesting level
Every parser has a configurable maximum nesting level. The default maximum
level is 128, but this can be raised or lowered either on construction using
with_max_level or after construction with set_max_level.
§Performance considerations
Parser is a very lightweight type. Its performance, allocation behavior, and memory
consumption are almost entirely determined by the wrapped lexical analyzer, meaning a parser is
as performant as its underlying lexer implementation.
The next method only triggers allocation in two scenarios:
- The underlying lexical analyzer’s next method allocates.
- A { or [ causes the parser’s current nesting level to exceed 128 (which is only possible if the maximum level is set to a higher value).
The next method’s companion methods, next_non_white and next_meaningful, have the same
underlying behavior as next.
The content method has the same performance characteristics as the underlying lexer’s
content method.
No other method of Parser allocates.
§Memory considerations
The content method passes through the underlying lexical analyzer’s content. The content
value may contain references to internal buffers that will not be deallocated until the content
value is dropped. Refer to the specific lexical analyzer’s documentation for more.
§Continuous parsing
A Parser is designed to parse a single complete JSON text, after which it expects the end of
the input stream, Token::Eof.
Some use cases involve parsing a continuous stream of JSON texts one after the other (a.k.a.
JSON streaming). This use case includes newline-delimited formats like
NDJSON and JSONL as well as other formats. It is also the natural input format for tools like
the jq command-line JSON processor.
To parse a continuous stream of JSON texts, unwrap the inner lexical analyzer at the end of each
JSON text using into_inner and construct a fresh parser using the same lexer for the next
JSON text in the stream.
§Examples
Create a parser with new:
// Create the parser by wrapping a lexical analyzer.
let lexer = FixedAnalyzer::new(&b"[1, 2, 3]"[..]);
let mut parser = Parser::new(lexer);
// Use the parser ...
Convert back to the underlying lexical analyzer at any time with into_inner:
let parser = Parser::new(FixedAnalyzer::new(&b"[1, 2, 3]"[..]));
let lexer = parser.into_inner();
Create a parser with a very high maximum nesting level:
// Create the parser by wrapping a lexical analyzer.
let lexer = FixedAnalyzer::new(&b"[1, 2, 3]"[..]);
let mut parser = Parser::with_max_level(lexer, 1_000_000);
// Use the parser ...
Verify the syntax of a JSON text.
let mut parser = Parser::new(FixedAnalyzer::new(r#"{"key": [1, 2,]}"#.as_bytes()));
let result = loop {
    match parser.next() {
        Token::Eof => break Ok(()),
        Token::Err => break Err(parser.err()),
        _ => (),
    }
};
assert_eq!(
    "syntax error: expected value but got ] at line 1, column 15 (offset: 14)",
    format!("{}", result.unwrap_err()),
);
Skip insignificant whitespace and unnecessary punctuation.
let mut parser = Parser::new(FixedAnalyzer::new(r#"{"key": [1, 2]}"#.as_bytes()));
let mut significant = Vec::new();
loop {
    match parser.next_meaningful() {
        Token::Eof | Token::Err => break,
        t => significant.push((t, parser.content().literal().to_string())),
    }
}
// Whitespace is skipped by `next_meaningful`, as are the : and , punctuation characters that
// are required by the JSON specification, but redundant when trying to make sense of the parsed
// text.
assert_eq!(
    [
        (Token::ObjBegin, "{"),
        (Token::Str, r#""key""#),
        (Token::ArrBegin, "["),
        (Token::Num, "1"),
        (Token::Num, "2"),
        (Token::ArrEnd, "]"),
        (Token::ObjEnd, "}"),
    ].into_iter().map(|(t, s)| (t, s.to_string())).collect::<Vec<_>>(),
    significant,
);
§Implementations
impl<L> Parser<L>
pub fn new(lexer: L) -> Self
Constructs a new parser wrapping an underlying lexical analyzer.
The lexer can be unwrapped using into_inner.
Use with_max_level to construct a new parser with a specific
maximum nesting level.
§Example
// Create the parser by wrapping a lexical analyzer.
let lexer = FixedAnalyzer::new(&b"[1, 2, 3]"[..]);
let mut parser = Parser::new(lexer);
// Use the parser ...
pub fn next(&mut self) -> Token
Returns the next syntactically valid lexical token.
If a lexical or syntax error is detected, returns Token::Err and the specific error can
be obtained from err. Otherwise, returns the next non-error token and the
token content can be obtained from content.
§Example
let mut parser = FixedAnalyzer::new(&b"{"[..]).into_parser();
assert_eq!(Token::ObjBegin, parser.next());
assert_eq!(Token::Err, parser.next());
let err = parser.err();
assert_eq!(
    "syntax error: expected object member name or } but got EOF at line 1, column 2 (offset: 1)",
    format!("{err}")
);
pub fn next_non_white(&mut self) -> Token
Returns the next syntactically valid non-whitespace token, i.e. like next but skipping
whitespace.
This is a convenience method to simplify parsing in use cases where whitespace should be discarded.
See also next_meaningful.
§Example
Pretty-print some JSON text.
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::{Error, Parser}};
fn pretty_print(json_text: &str) -> Result<String, Error> {
    let mut parser = Parser::new(FixedAnalyzer::new(json_text.as_bytes()));
    let mut pretty = String::new();
    let indent = |pretty: &mut String, level: usize| {
        pretty.push_str(&" ".repeat(level * 2));
    };
    loop {
        let token = parser.next_non_white();
        match token {
            Token::Eof => break Ok(pretty),
            Token::Err => break Err(parser.err()),
            Token::ObjBegin | Token::ArrBegin => {
                pretty.push_str(token.static_content().unwrap());
                pretty.push('\n');
                indent(&mut pretty, parser.level());
            },
            Token::ObjEnd | Token::ArrEnd => {
                pretty.push('\n');
                indent(&mut pretty, parser.level());
                pretty.push_str(token.static_content().unwrap());
            },
            Token::NameSep => pretty.push_str(": "),
            Token::ValueSep => {
                pretty.push_str(",\n");
                indent(&mut pretty, parser.level());
            },
            _ => pretty.push_str(parser.content().literal()),
        }
    }
}
let expect = r#"{
  "foo": "bar",
  "baz": [
    1,
    2
  ]
}"#;
let actual = pretty_print(r#"{"foo":"bar","baz":[1,2]}"#).unwrap();
assert_eq!(expect, actual);
pub fn next_meaningful(&mut self) -> Token
Returns the next syntactically-valid meaningful lexical token.
This method skips whitespace like next_non_white but also skips past the following
meaningless punctuation characters:
- : (Token::NameSep);
- , (Token::ValueSep).
The colon : and comma , are meaningless because, even though they are required by the
JSON spec (and sometimes necessary for tokenization), they don’t add any meaning to
the stream of lexical tokens.
§Example
Consider the following example JSON text: {"foo": "baz", "bar": "qux"}. This text contains
an object value with two members named “foo” and “bar”. Since the parser already ensures the
text is syntactically valid, the consumer does not benefit from receiving the colon and
comma tokens (or the whitespace). When the parser is inside an object, the members will
always come in pairs: the first element is a Token::Str containing the name, and the
second element is the stream of tokens that comprise the value. In the case of the given
example text, the values are the string tokens “baz” and “qux”.
let mut parser = FixedAnalyzer::new(r#"{"foo": "baz", "bar": "qux"}"#.as_bytes()).into_parser();
assert_eq!(Token::ObjBegin, parser.next_meaningful());
assert_eq!(Token::Str, parser.next_meaningful());
assert_eq!(Token::Str, parser.next_meaningful());
assert_eq!(Token::Str, parser.next_meaningful());
assert_eq!(Token::Str, parser.next_meaningful());
assert_eq!(Token::ObjEnd, parser.next_meaningful());
assert_eq!(Token::Eof, parser.next_meaningful());
pub fn next_end(&mut self) -> Token
Returns the token that ends the current structured value, skipping all content within it.
If the parser is currently inside an array or object, this method consumes tokens until the
matching ] or } is reached at the same nesting level, and returns that end token
(Token::ArrEnd or Token::ObjEnd).
If the parser is not inside a structured value, this method consumes tokens until
Token::Eof is reached.
§Examples
Skip the contents of a nested array.
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::Parser};
let mut parser = Parser::new(FixedAnalyzer::new(r#"{"a": [1, 2, 3], "b": true}"#.as_bytes()));
assert_eq!(Token::ObjBegin, parser.next_meaningful()); // {
assert_eq!(Token::Str, parser.next_meaningful()); // "a"
assert_eq!(Token::ArrBegin, parser.next_meaningful()); // [
assert_eq!(Token::ArrEnd, parser.next_end()); // skip 1, 2, 3 and return ]
assert_eq!(1, parser.level());
assert_eq!(Token::Str, parser.next_meaningful()); // "b"
assert_eq!(Token::LitTrue, parser.next_meaningful()); // true
assert_eq!(Token::ObjEnd, parser.next_meaningful()); // }
assert_eq!(Token::Eof, parser.next_meaningful());
Skip an entire top-level value when not inside a structured value.
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::Parser};
let mut parser = Parser::new(FixedAnalyzer::new(r#"[1, 2, 3]"#.as_bytes()));
assert_eq!(Token::Eof, parser.next_end());
pub fn content(&self) -> L::Content
Fetches the text content for the current non-error token.
The current token is the token most recently returned by next, next_non_white, or
next_meaningful.
This method does not allocate unless the underlying lexical analyzer’s try_content
method allocates.
§Panics
Panics if the current token is Token::Err.
§Example
let mut parser = FixedAnalyzer::new(&b"[ 1, 2]"[..]).into_parser();
assert_eq!(Token::ArrBegin, parser.next());
assert_eq!(Token::Num, parser.next_non_white());
assert_eq!("1", parser.content().literal());
assert_eq!(Token::Num, parser.next_meaningful());
assert_eq!("2", parser.content().literal());
pub fn err(&self) -> Error
Fetches the error value associated with the current error token.
The current token is the token most recently returned by next, next_non_white, or
next_meaningful.
§Panics
Panics if the current token is not Token::Err.
§Example
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::ErrorKind};
let mut parser = FixedAnalyzer::new(&b"{]}"[..]).into_parser();
assert_eq!(Token::ObjBegin, parser.next());
assert_eq!(Token::Err, parser.next());
assert!(matches!(
    parser.err().kind(),
    ErrorKind::Syntax { context: _, token: Token::ArrEnd },
));
pub fn pos(&self) -> &Pos
Returns the position of the current lexical token.
The current lexical token is the token last returned by next, next_non_white, or
next_meaningful.
pub fn try_content(&self) -> Result<L::Content, Error>
Fetches the content or error associated with the current token.
The current token is the token most recently returned by next, next_non_white, or
next_meaningful.
If the current token is Token::Err, an Err result is returned. Otherwise, an Ok
result containing the text content of the recognized lexical token is returned.
This method does not allocate unless the underlying lexical analyzer’s try_content
method allocates.
§Examples
An Ok value is returned as long as the parser isn’t in an error state.
let mut parser = FixedAnalyzer::new(&b"[123"[..]).into_parser();
assert_eq!(Token::ArrBegin, parser.next());
assert_eq!(Token::Num, parser.next());
assert!(matches!(parser.try_content(), Ok(c) if c.literal() == "123"));
Once the parser detects an error, it will return an Err value describing the error.
use bufjson::{Pos, lexical::{Token, fixed::FixedAnalyzer}, syntax::ErrorKind};
let mut parser = FixedAnalyzer::new(&b"[123"[..]).into_parser();
assert_eq!(Token::ArrBegin, parser.next());
assert_eq!(Token::Num, parser.next());
assert_eq!(Token::Err, parser.next());
let error_kind = parser.try_content().unwrap_err().kind().clone();
assert!(matches!(error_kind, ErrorKind::Syntax { context: _, token: Token::Eof }));
pub fn context(&self) -> &Context
Returns the current parse context, which includes the nesting state and next expected token.
§Examples
Before observing any tokens, there is no nesting and the parser expects any valid JSON value.
use bufjson::{lexical::fixed::FixedAnalyzer, syntax::Expect};
let mut parser = FixedAnalyzer::new(&b"\"hello\""[..]).into_parser();
assert_eq!(0, parser.context().level());
assert_eq!(Expect::Value, parser.context().expect());
After observing an object start token, the nesting level increases to 1 and the parser now expects either a string (containing the first member name) or an object end token.
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::{Expect, StructKind}};
let mut parser = FixedAnalyzer::new(&b"{"[..]).into_parser();
assert_eq!(Token::ObjBegin, parser.next());
assert_eq!(1, parser.context().level());
assert_eq!(StructKind::Obj, parser.context().struct_kind().unwrap());
assert_eq!(Expect::ObjNameOrEnd, parser.context().expect());
pub fn level(&self) -> usize
Returns the current nesting level of the parse.
This is a convenience method that returns the level of the parse context obtainable via the
context method.
§Example
use bufjson::lexical::{Token, fixed::FixedAnalyzer};
let mut parser = FixedAnalyzer::new(&b"[{}]"[..]).into_parser();
assert_eq!(0, parser.level());
assert_eq!(Token::ArrBegin, parser.next());
assert_eq!(1, parser.level());
assert_eq!(Token::ObjBegin, parser.next());
assert_eq!(2, parser.level());
assert_eq!(Token::ObjEnd, parser.next());
assert_eq!(1, parser.level());
assert_eq!(Token::ArrEnd, parser.next());
assert_eq!(0, parser.level());
pub fn max_level(&self) -> usize
Returns the maximum nesting level the parser will allow.
When the parser’s current nesting level has reached the maximum level, the
start of an array or object will trigger a Level error.
The maximum nesting level can be set at construction time via with_max_level or after
construction with set_max_level.
§Default
The default value is 128.
When the maximum nesting level is set at the default value or lower, the parser will never allocate (apart from any allocations performed by the underlying lexer).
§Purpose
The maximum nesting level places a limit on the number of allocations the parser performs to maintain the bookkeeping data structure that tracks the current nesting level. This is useful for controlling performance and for protecting the parser from malicious or degenerate inputs.
For example, consider a 1 GB stream of JSON data consisting only of { left brace
characters. If the maximum nesting level is set to 1,000,000,000 then the parser would,
after several allocations and reallocations, end up with a 125 MB block of memory to
track the nesting level. This is almost certainly a malicious, or at the very minimum
erroneous, input, and it could easily bring down a multi-tenant system like a web server.
The maximum nesting level allows problematic inputs of this type to be detected early,
before they cause an impact.
pub fn set_max_level(&mut self, max_level: usize)
Sets the maximum nesting level the parser will allow.
The current value is returned by max_level. It can also be set at construction time
using with_max_level.
§Panics
Panics if the current nesting level exceeds the new maximum level.
§Examples
Set the maximum level to the highest possible value to effectively remove all nesting limits.
let mut parser = FixedAnalyzer::new(&b"\"hello\""[..]).into_parser();
parser.set_max_level(usize::MAX);
Set the maximum level to 0 to disable nesting entirely.
use bufjson::{lexical::{Token, fixed::FixedAnalyzer}, syntax::{Error, ErrorKind}};
fn parse_primitive(json_text: &str) -> Result<(Token, String), Error> {
    let mut parser = FixedAnalyzer::new(json_text.as_bytes()).into_parser();
    parser.set_max_level(0); // Disable all nesting.
    let token = parser.next_meaningful();
    Ok((token, parser.try_content()?.literal().to_string()))
}
// Flat primitive values can still be parsed.
assert_eq!((Token::LitTrue, "true".to_string()), parse_primitive("true").unwrap());
assert_eq!((Token::Num, "123".to_string()), parse_primitive("\n 123").unwrap());
// Arrays and objects will produce a nesting error because we have set max level to 0.
let err = parse_primitive("[]").unwrap_err();
assert!(matches!(err.kind(), ErrorKind::Level { level: 1, token: Token::ArrBegin }));
pub fn with_max_level(lexer: L, max_level: usize) -> Self
Constructs a new parser with the given maximum nesting level.
This is a convenience method that combines new with set_max_level.
pub fn into_inner(self) -> L
Returns the contained lexical analyzer, consuming the self value.
§Examples
let mut parser = FixedAnalyzer::new(&b"{]"[..]).into_parser();
// Read next token from parser.
assert_eq!(Token::ObjBegin, parser.next());
// Unwrap the lexical analyzer.
let mut lexer = parser.into_inner();
// Read next token from lexer (this would cause a syntax error if it was read from a
// parser).
assert_eq!(Token::ArrEnd, lexer.next());
§Auto Trait Implementations
impl<L> Freeze for Parser<L> where L: Freeze
impl<L> !RefUnwindSafe for Parser<L>
impl<L> Send for Parser<L> where L: Send
impl<L> Sync for Parser<L> where L: Sync
impl<L> Unpin for Parser<L> where L: Unpin
impl<L> UnsafeUnpin for Parser<L> where L: UnsafeUnpin
impl<L> !UnwindSafe for Parser<L>
§Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> FmtForward for T
impl<T> Pipe for T where T: ?Sized
impl<T> Tap for T