buffer_redux/lib.rs
1// Original implementation Copyright 2013 The Rust Project Developers <https://github.com/rust-lang>
2// Original source file: https://github.com/rust-lang/rust/blob/master/src/libstd/io/buffered.P
3// Additions copyright 2016-2018 Austin Bonander <austin.bonander@gmail.com>
4
5//! Drop-in replacements for buffered I/O types in `std::io`.
6//!
7//! These replacements retain the method names/signatures and implemented traits of their stdlib
8//! counterparts, making replacement as simple as swapping the import of the type:
9//!
10//! #### `BufReader`:
11//! ```notest
12//! - use std::io::BufReader;
13//! + use buffer_redux::BufReader;
14//! ```
15//! #### `BufWriter`:
16//! ```notest
17//! - use std::io::BufWriter;
18//! + use buffer_redux::BufWriter;
19//! ```
20//! #### `LineWriter`:
21//! ```notest
22//! - use std::io::LineWriter;
23//! + use buffer_redux::LineWriter;
24//! ```
25//!
26//! ### More Direct Control
27//! All replacement types provide methods to:
28//!
29//! * Increase the capacity of the buffer
30//! * Get the number of available bytes as well as the total capacity of the buffer
31//! * Consume the wrapper without losing data
32//!
33//! `BufReader` provides methods to:
34//!
35//! * Access the buffer through an `&`-reference without performing I/O
36//! * Force unconditional reads into the buffer
37//! * Get a `Read` adapter which empties the buffer and then pulls from the inner reader directly
38//! * Shuffle bytes down to the beginning of the buffer to make room for more reading
39//! * Get inner reader and trimmed buffer with the remaining data
40//!
41//! `BufWriter` and `LineWriter` provides methods to:
42//!
43//! * Flush the buffer and unwrap the inner writer unconditionally.
44//! * Get the inner writer and trimmed buffer with the unflushed data.
45//!
46//! ### More Sensible and Customizable Buffering Behavior
47//! Tune the behavior of the buffer to your specific use-case using the types in the
48//! [`policy` module]:
49//!
50//! * Refine `BufReader`'s behavior by implementing the [`ReaderPolicy` trait] or use
51//! an existing implementation like [`MinBuffered`] to ensure the buffer always contains
52//! a minimum number of bytes (until the underlying reader is empty).
53//!
54//! * Refine `BufWriter`'s behavior by implementing the [`WriterPolicy` trait]
55//! or use an existing implementation like [`FlushOn`] to flush when a particular byte
56//! appears in the buffer (used to implement [`LineWriter`]).
57//!
58//! [`policy` module]: policy
59//! [`ReaderPolicy` trait]: policy::ReaderPolicy
60//! [`MinBuffered`]: policy::MinBuffered
61//! [`WriterPolicy`]: policy::WriterPolicy
62//! [`FlushOn`]: policy::FlushOn
63//! [`LineWriter`]: LineWriter
64//!
65//! ### Making Room
66//! The buffered types of this crate and their `std::io` counterparts, by default, use `Box<[u8]>`
67//! as their buffer types ([`Buffer`](Buffer) is included as well since it is used internally
68//! by the other types in this crate).
69//!
70//! When one of these types inserts bytes into its buffer, via `BufRead::fill_buf()` (implicitly
71//! called by `Read::read()`) in `BufReader`'s case or `Write::write()` in `BufWriter`'s case,
72//! the entire buffer is provided to be read/written into and the number of bytes written is saved.
73//! The read/written data then resides in the `[0 .. bytes_inserted]` slice of the buffer.
74//!
75//! When bytes are consumed from the buffer, via `BufRead::consume()` or `Write::flush()`,
76//! the number of bytes consumed is added to the start of the slice such that the remaining
77//! data resides in the `[bytes_consumed .. bytes_inserted]` slice of the buffer.
78//!
79//! The `std::io` buffered types, and their counterparts in this crate with their default policies,
80//! don't have to deal with partially filled buffers as `BufReader` only reads when empty and
81//! `BufWriter` only flushes when full.
82//!
83//! However, because the replacements in this crate are capable of reading on-demand and flushing
84//! less than a full buffer, they can run out of room in their buffers to read/write data into even
85//! though there is technically free space, because this free space is at the head of the buffer
86//! where reading into it would cause the data in the buffer to become non-contiguous.
87//!
88//! This isn't technically a problem as the buffer could operate like `VecDeque` in `std` and return
89//! both slices at once, but this would not fit all use-cases: the `Read::fill_buf()` interface only
90//! allows one slice to be returned at a time so the older data would need to be completely consumed
91//! before the newer data can be returned; `BufWriter` could support it as the `Write` interface
92//! doesn't make an opinion on how the buffer works, but because the data would be non-contiguous
93//! it would require two flushes to get it all, which could degrade performance.
94//!
95//! The obvious solution, then, is to move the existing data down to the beginning of the buffer
96//! when there is no more room at the end so that more reads/writes into the buffer can be issued.
97//! This works, and may suit some use-cases where the amount of data left is small and thus copying
98//! it would be inexpensive, but it is non-optimal. However, this option is provided
99//! as the `.make_room()` methods, and is utilized by [`policy::MinBuffered`](policy::MinBuffered)
100//! and [`policy::FlushExact`](policy::FlushExact).
101//!
102//! ### Ringbuffers / `slice-deque` Feature
103//! Instead of moving data, however, it is also possible to use virtual-memory tricks to
104//! allocate a ringbuffer that loops around on itself in memory and thus is always contiguous,
105//! as described in [the Wikipedia article on Ringbuffers][ringbuf-wikipedia].
106//!
107//! This is the exact trick used by [the `slice-deque` crate](https://crates.io/crates/slice-deque),
108//! which is now provided as an optional feature `slice-deque` exposed via the
109//! `new_ringbuf()` and `with_capacity_ringbuf()` constructors added to the buffered types here.
110//! When a buffered type is constructed using one of these functions, `.make_room()` is turned into
111//! a no-op as consuming bytes from the head of the buffer simultaneously makes room at the tail.
112//! However, this has some caveats:
113//!
114//! * It is only available on target platforms with virtual memory support, namely fully fledged
115//! OSes such as Windows and Unix-derivative platforms like Linux, OS X, BSD variants, etc.
116//!
117//! * The default capacity varies based on platform, and custom capacities are rounded up to a
118//! multiple of their minimum size, typically the page size of the platform.
119//! Windows' minimum size is comparably quite large (**64 KiB**) due to some legacy reasons,
120//! so this may be less optimal than the default capacity for a normal buffer (8 KiB) for some
121//! use-cases.
122//!
123//! * Due to the nature of the virtual-memory trick, the virtual address space the buffer
124//! allocates will be double its capacity. This means that your program will *appear* to use more
125//! memory than it would if it was using a normal buffer of the same capacity. The physical memory
126//! usage will be the same in both cases, but if address space is at a premium in your application
127//! (32-bit targets) then this may be a concern.
128//!
129//! [ringbuf-wikipedia]: https://en.wikipedia.org/wiki/Circular_buffer#Optimization
130#![warn(missing_docs)]
131
132// std::io's tests require exact allocation which slice_deque cannot provide
133mod buffer;
134#[cfg(all(test, feature = "slice-deque"))]
135mod ringbuf_tests;
136#[cfg(test)]
137mod std_tests;
138
139pub mod policy;
140
141use std::any::Any;
142use std::cell::RefCell;
143use std::io::prelude::*;
144use std::io::SeekFrom;
145use std::mem::ManuallyDrop;
146use std::mem::MaybeUninit;
147use std::{cmp, error, fmt, io, ptr};
148
149use self::policy::{FlushOnNewline, ReaderPolicy, StdPolicy, WriterPolicy};
150use crate::buffer::BufImpl;
151
152const DEFAULT_BUF_SIZE: usize = 8 * 1024;
153
154/// A drop-in replacement for `std::io::BufReader` with more functionality.
155///
156/// Original method names/signatures and implemented traits are left untouched,
157/// making replacement as simple as swapping the import of the type.
158///
159/// By default this type implements the behavior of its `std` counterpart: it only reads into
160/// the buffer when it is empty.
161///
162/// To change this type's behavior, change the policy with [`.set_policy()`] using a type
163/// from the [`policy` module] or your own implementation of [`ReaderPolicy`].
164///
165/// Policies that perform alternating reads and consumes without completely emptying the buffer
166/// may benefit from using a ringbuffer via the [`new_ringbuf()`] and [`with_capacity_ringbuf()`]
167/// constructors. Ringbuffers are only available on supported platforms with the
168/// `slice-deque` feature and have some other caveats; see [the crate root docs][ringbufs-root]
169/// for more details.
170///
171/// [`.set_policy()`]: BufReader::set_policy
172/// [`policy` module]: policy
173/// [`ReaderPolicy`]: policy::ReaderPolicy
174/// [`new_ringbuf()`]: BufReader::new_ringbuf
175/// [`with_capacity_ringbuf()`]: BufReader::with_capacity_ringbuf
176/// [ringbufs-root]: index.html#ringbuffers--slice-deque-feature
177pub struct BufReader<R, P = StdPolicy> {
178 // First field for null pointer optimization.
179 buf: Buffer,
180 inner: R,
181 policy: P,
182}
183
184impl<R> BufReader<R, StdPolicy> {
185 /// Create a new `BufReader` wrapping `inner`, utilizing a buffer of
186 /// default capacity and the default [`ReaderPolicy`](policy::ReaderPolicy).
187 pub fn new(inner: R) -> Self {
188 Self::with_capacity(DEFAULT_BUF_SIZE, inner)
189 }
190
191 /// Create a new `BufReader` wrapping `inner`, utilizing a buffer with a capacity
192 /// of *at least* `cap` bytes and the default [`ReaderPolicy`](policy::ReaderPolicy).
193 ///
194 /// The actual capacity of the buffer may vary based on implementation details of the global
195 /// allocator.
196 pub fn with_capacity(cap: usize, inner: R) -> Self {
197 Self::with_buffer(Buffer::with_capacity(cap), inner)
198 }
199
200 /// Create a new `BufReader` wrapping `inner`, utilizing a ringbuffer with the default capacity
201 /// and `ReaderPolicy`.
202 ///
203 /// A ringbuffer never has to move data to make room; consuming bytes from the head
204 /// simultaneously makes room at the tail. This is useful in conjunction with a policy like
205 /// [`MinBuffered`](policy::MinBuffered) to ensure there is always room to read more data
206 /// if necessary, without expensive copying operations.
207 ///
208 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
209 /// enabled. The default capacity will differ between Windows and Unix-derivative targets.
210 /// See [`Buffer::new_ringbuf()`](struct.Buffer.html#method.new_ringbuf)
211 /// or [the crate root docs](index.html#ringbuffers--slice-deque-feature) for more info.
212 #[cfg(feature = "slice-deque")]
213 pub fn new_ringbuf(inner: R) -> Self {
214 Self::with_capacity_ringbuf(DEFAULT_BUF_SIZE, inner)
215 }
216
217 /// Create a new `BufReader` wrapping `inner`, utilizing a ringbuffer with *at least* the given
218 /// capacity and the default `ReaderPolicy`.
219 ///
220 /// A ringbuffer never has to move data to make room; consuming bytes from the head
221 /// simultaneously makes room at the tail. This is useful in conjunction with a policy like
222 /// [`MinBuffered`](policy::MinBuffered) to ensure there is always room to read more data
223 /// if necessary, without expensive copying operations.
224 ///
225 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
226 /// enabled. The capacity will be rounded up to the minimum size for the target platform.
227 /// See [`Buffer::with_capacity_ringbuf()`](struct.Buffer.html#method.with_capacity_ringbuf)
228 /// or [the crate root docs](index.html#ringbuffers--slice-deque-feature) for more info.
229 #[cfg(feature = "slice-deque")]
230 pub fn with_capacity_ringbuf(cap: usize, inner: R) -> Self {
231 Self::with_buffer(Buffer::with_capacity_ringbuf(cap), inner)
232 }
233
234 /// Wrap `inner` with an existing `Buffer` instance and the default `ReaderPolicy`.
235 ///
236 /// ### Note
237 /// Does **not** clear the buffer first! If there is data already in the buffer
238 /// then it will be returned in `read()` and `fill_buf()` ahead of any data from `inner`.
239 pub fn with_buffer(buf: Buffer, inner: R) -> Self {
240 BufReader {
241 buf,
242 inner,
243 policy: StdPolicy,
244 }
245 }
246}
247
248impl<R, P> BufReader<R, P> {
249 /// Apply a new `ReaderPolicy` to this `BufReader`, returning the transformed type.
250 pub fn set_policy<P_: ReaderPolicy>(self, policy: P_) -> BufReader<R, P_> {
251 BufReader {
252 inner: self.inner,
253 buf: self.buf,
254 policy,
255 }
256 }
257
258 /// Mutate the current [`ReaderPolicy`](policy::ReaderPolicy) in-place.
259 ///
260 /// If you want to change the type, use `.set_policy()`.
261 pub fn policy_mut(&mut self) -> &mut P {
262 &mut self.policy
263 }
264
265 /// Inspect the current `ReaderPolicy`.
266 pub fn policy(&self) -> &P {
267 &self.policy
268 }
269
270 /// Move data to the start of the buffer, making room at the end for more
271 /// reading.
272 ///
273 /// This is a no-op with the `*_ringbuf()` constructors (requires `slice-deque` feature).
274 pub fn make_room(&mut self) {
275 self.buf.make_room();
276 }
277
278 /// Ensure room in the buffer for *at least* `additional` bytes. May not be
279 /// quite exact due to implementation details of the buffer's allocator.
280 pub fn reserve(&mut self, additional: usize) {
281 self.buf.reserve(additional);
282 }
283
284 // RFC: pub fn shrink(&mut self, new_len: usize) ?
285
286 /// Get the section of the buffer containing valid data; may be empty.
287 ///
288 /// Call `.consume()` to remove bytes from the beginning of this section.
289 #[inline]
290 pub fn buffer(&self) -> &[u8] {
291 self.buf.buf()
292 }
293
294 /// Get the current number of bytes available in the buffer.
295 pub fn buf_len(&self) -> usize {
296 self.buf.len()
297 }
298
299 /// Get the total buffer capacity.
300 pub fn capacity(&self) -> usize {
301 self.buf.capacity()
302 }
303
304 /// Get an immutable reference to the underlying reader.
305 pub fn get_ref(&self) -> &R {
306 &self.inner
307 }
308
309 /// Get a mutable reference to the underlying reader.
310 ///
311 /// ## Note
312 /// Reading directly from the underlying reader is not recommended, as some
313 /// data has likely already been moved into the buffer.
314 pub fn get_mut(&mut self) -> &mut R {
315 &mut self.inner
316 }
317
318 /// Consume `self` and return the inner reader only.
319 pub fn into_inner(self) -> R {
320 self.inner
321 }
322
323 /// Consume `self` and return both the underlying reader and the buffer.
324 ///
325 /// See also: `BufReader::unbuffer()`
326 pub fn into_inner_with_buffer(self) -> (R, Buffer) {
327 (self.inner, self.buf)
328 }
329
330 /// Consume `self` and return an adapter which implements `Read` and will
331 /// empty the buffer before reading directly from the underlying reader.
332 pub fn unbuffer(self) -> Unbuffer<R> {
333 Unbuffer {
334 inner: self.inner,
335 buf: Some(self.buf),
336 }
337 }
338}
339
340impl<R, P: ReaderPolicy> BufReader<R, P> {
341 #[inline]
342 fn should_read(&mut self) -> bool {
343 self.policy.before_read(&mut self.buf).0
344 }
345}
346
347impl<R: Read, P> BufReader<R, P> {
348 /// Unconditionally perform a read into the buffer.
349 ///
350 /// Does not invoke `ReaderPolicy` methods.
351 ///
352 /// If the read was successful, returns the number of bytes read.
353 pub fn read_into_buf(&mut self) -> io::Result<usize> {
354 self.buf.read_from(&mut self.inner)
355 }
356
357 /// Box the inner reader without losing data.
358 pub fn boxed<'a>(self) -> BufReader<Box<dyn Read + 'a>, P>
359 where
360 R: 'a,
361 {
362 let inner: Box<dyn Read + 'a> = Box::new(self.inner);
363
364 BufReader {
365 inner,
366 buf: self.buf,
367 policy: self.policy,
368 }
369 }
370}
371
372impl<R: Read, P: ReaderPolicy> Read for BufReader<R, P> {
373 fn read(&mut self, out: &mut [u8]) -> io::Result<usize> {
374 // If we don't have any buffered data and we're doing a read matching
375 // or exceeding the internal buffer's capacity, bypass the buffer.
376 if self.buf.is_empty() && out.len() >= self.buf.capacity() {
377 return self.inner.read(out);
378 }
379
380 let nread = self.fill_buf()?.read(out)?;
381 self.consume(nread);
382 Ok(nread)
383 }
384}
385
386impl<R: Read, P: ReaderPolicy> BufRead for BufReader<R, P> {
387 fn fill_buf(&mut self) -> io::Result<&[u8]> {
388 // If we've reached the end of our internal buffer then we need to fetch
389 // some more data from the underlying reader.
390 // This execution order is important; the policy may want to resize the buffer or move data
391 // before reading into it.
392 while self.should_read() && self.buf.usable_space() > 0 {
393 if self.read_into_buf()? == 0 {
394 break;
395 };
396 }
397
398 Ok(self.buffer())
399 }
400
401 fn consume(&mut self, mut amt: usize) {
402 amt = cmp::min(amt, self.buf_len());
403 self.buf.consume(amt);
404 self.policy.after_consume(&mut self.buf, amt);
405 }
406}
407
408impl<R: fmt::Debug, P: fmt::Debug> fmt::Debug for BufReader<R, P> {
409 fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
410 fmt.debug_struct("buffer_redux::BufReader")
411 .field("reader", &self.inner)
412 .field("buf_len", &self.buf_len())
413 .field("capacity", &self.capacity())
414 .field("policy", &self.policy)
415 .finish()
416 }
417}
418
419impl<R: Seek, P: ReaderPolicy> Seek for BufReader<R, P> {
420 /// Seek to an ofPet, in bytes, in the underlying reader.
421 ///
422 /// The position used for seeking with `SeekFrom::Current(_)` is the
423 /// position the underlying reader would be at if the `BufReader` had no
424 /// internal buffer.
425 ///
426 /// Seeking always discards the internal buffer, even if the seek position
427 /// would otherwise fall within it. This guarantees that calling
428 /// `.unwrap()` immediately after a seek yields the underlying reader at
429 /// the same position.
430 ///
431 /// See `std::io::Seek` for more details.
432 ///
433 /// Note: In the edge case where you're seeking with `SeekFrom::Current(n)`
434 /// where `n` minus the internal buffer length underflows an `i64`, two
435 /// seeks will be performed instead of one. If the second seek returns
436 /// `Err`, the underlying reader will be left at the same position it would
437 /// have if you seeked to `SeekFrom::Current(0)`.
438 fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
439 let result: u64;
440 if let SeekFrom::Current(n) = pos {
441 let remainder = self.buf_len() as i64;
442 // it should be safe to assume that remainder fits within an i64 as the alternative
443 // means we managed to allocate 8 ebibytes and that's absurd.
444 // But it's not out of the realm of possibility for some weird underlying reader to
445 // support seeking by i64::min_value() so we need to handle underflow when subtracting
446 // remainder.
447 if let Some(offset) = n.checked_sub(remainder) {
448 result = self.inner.seek(SeekFrom::Current(offset))?;
449 } else {
450 // seek backwards by our remainder, and then by the offset
451 self.inner.seek(SeekFrom::Current(-remainder))?;
452 self.buf.clear(); // empty the buffer
453 result = self.inner.seek(SeekFrom::Current(n))?;
454 }
455 } else {
456 // Seeking with Start/End doesn't care about our buffer length.
457 result = self.inner.seek(pos)?;
458 }
459 self.buf.clear();
460 Ok(result)
461 }
462}
463
464/// A drop-in replacement for `std::io::BufWriter` with more functionality.
465///
466/// Original method names/signatures and implemented traits are left untouched,
467/// making replacement as simple as swapping the import of the type.
468///
469/// By default this type implements the behavior of its `std` counterpart: it only flushes
470/// the buffer if an incoming write is larger than the remaining space.
471///
472/// To change this type's behavior, change the policy with [`.set_policy()`] using a type
473/// from the [`policy` module] or your own implentation of [`WriterPolicy`].
474///
475/// Policies that perform alternating writes and flushes without completely emptying the buffer
476/// may benefit from using a ringbuffer via the [`new_ringbuf()`] and [`with_capacity_ringbuf()`]
477/// constructors. Ringbuffers are only available on supported platforms with the
478/// `slice-deque` feature and have some caveats; see [the docs at the crate root][ringbufs-root]
479/// for more details.
480///
481/// [`.set_policy()`]: BufWriter::set_policy
482/// [`policy` module]: policy
483/// [`WriterPolicy`]: policy::WriterPolicy
484/// [`new_ringbuf()`]: BufWriter::new_ringbuf
485/// [`with_capacity_ringbuf()`]: BufWriter::with_capacity_ringbuf
486/// [ringbufs-root]: index.html#ringbuffers--slice-deque-feature
487pub struct BufWriter<W: Write, P = StdPolicy> {
488 buf: Buffer,
489 inner: W,
490 policy: P,
491 panicked: bool,
492}
493
494impl<W: Write> BufWriter<W> {
495 /// Create a new `BufWriter` wrapping `inner` with the default buffer capacity and
496 /// [`WriterPolicy`](policy::WriterPolicy).
497 pub fn new(inner: W) -> Self {
498 Self::with_buffer(Buffer::new(), inner)
499 }
500
501 /// Create a new `BufWriter` wrapping `inner`, utilizing a buffer with a capacity
502 /// of *at least* `cap` bytes and the default [`WriterPolicy`](policy::WriterPolicy).
503 ///
504 /// The actual capacity of the buffer may vary based on implementation details of the global
505 /// allocator.
506 pub fn with_capacity(cap: usize, inner: W) -> Self {
507 Self::with_buffer(Buffer::with_capacity(cap), inner)
508 }
509
510 /// Create a new `BufWriter` wrapping `inner`, utilizing a ringbuffer with the default
511 /// capacity and [`WriterPolicy`](policy::WriterPolicy).
512 ///
513 /// A ringbuffer never has to move data to make room; consuming bytes from the head
514 /// simultaneously makes room at the tail. This is useful in conjunction with a policy like
515 /// [`FlushExact`](policy::FlushExact) to ensure there is always room to write more data if
516 /// necessary, without expensive copying operations.
517 ///
518 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
519 /// enabled. The default capacity will differ between Windows and Unix-derivative targets.
520 /// See [`Buffer::new_ringbuf()`](Buffer::new_ringbuf)
521 /// or [the crate root docs](index.html#ringbuffers--slice-deque-feature) for more info.
522 #[cfg(feature = "slice-deque")]
523 pub fn new_ringbuf(inner: W) -> Self {
524 Self::with_buffer(Buffer::new_ringbuf(), inner)
525 }
526
527 /// Create a new `BufWriter` wrapping `inner`, utilizing a ringbuffer with *at least* `cap`
528 /// capacity and the default [`WriterPolicy`](policy::WriterPolicy).
529 ///
530 /// A ringbuffer never has to move data to make room; consuming bytes from the head
531 /// simultaneously makes room at the tail. This is useful in conjunction with a policy like
532 /// [`FlushExact`](policy::FlushExact) to ensure there is always room to write more data if
533 /// necessary, without expensive copying operations.
534 ///
535 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
536 /// enabled. The capacity will be rounded up to the minimum size for the target platform.
537 /// See [`Buffer::with_capacity_ringbuf()`](Buffer::with_capacity_ringbuf)
538 /// or [the crate root docs](index.html#ringbuffers--slice-deque-feature) for more info.
539 #[cfg(feature = "slice-deque")]
540 pub fn with_capacity_ringbuf(cap: usize, inner: W) -> Self {
541 Self::with_buffer(Buffer::with_capacity_ringbuf(cap), inner)
542 }
543
544 /// Create a new `BufWriter` wrapping `inner`, utilizing the existing [`Buffer`](Buffer)
545 /// instance and the default [`WriterPolicy`](policy::WriterPolicy).
546 ///
547 /// ### Note
548 /// Does **not** clear the buffer first! If there is data already in the buffer
549 /// it will be written out on the next flush!
550 pub fn with_buffer(buf: Buffer, inner: W) -> BufWriter<W> {
551 BufWriter {
552 buf,
553 inner,
554 policy: StdPolicy,
555 panicked: false,
556 }
557 }
558}
559
560impl<W: Write, P> BufWriter<W, P> {
561 /// Set a new [`WriterPolicy`](policy::WriterPolicy), returning the transformed type.
562 pub fn set_policy<P_: WriterPolicy>(self, policy: P_) -> BufWriter<W, P_> {
563 let panicked = self.panicked;
564 let (inner, buf) = self.into_inner_();
565
566 BufWriter {
567 inner,
568 buf,
569 policy,
570 panicked,
571 }
572 }
573
574 /// Mutate the current [`WriterPolicy`](policy::WriterPolicy).
575 pub fn policy_mut(&mut self) -> &mut P {
576 &mut self.policy
577 }
578
579 /// Inspect the current `WriterPolicy`.
580 pub fn policy(&self) -> &P {
581 &self.policy
582 }
583
584 /// Get a reference to the inner writer.
585 pub fn get_ref(&self) -> &W {
586 &self.inner
587 }
588
589 /// Get a mutable reference to the inner writer.
590 ///
591 /// ### Note
592 /// If the buffer has not been flushed, writing directly to the inner type will cause
593 /// data inconsistency.
594 pub fn get_mut(&mut self) -> &mut W {
595 &mut self.inner
596 }
597
598 /// Get the capacty of the inner buffer.
599 pub fn capacity(&self) -> usize {
600 self.buf.capacity()
601 }
602
603 /// Get the number of bytes currently in the buffer.
604 pub fn buf_len(&self) -> usize {
605 self.buf.len()
606 }
607
608 /// Reserve space in the buffer for at least `additional` bytes. May not be
609 /// quite exact due to implementation details of the buffer's allocator.
610 pub fn reserve(&mut self, additional: usize) {
611 self.buf.reserve(additional);
612 }
613
614 /// Move data to the start of the buffer, making room at the end for more
615 /// writing.
616 ///
617 /// This is a no-op with the `*_ringbuf()` constructors (requires `slice-deque` feature).
618 pub fn make_room(&mut self) {
619 self.buf.make_room();
620 }
621
622 /// Consume `self` and return both the underlying writer and the buffer
623 pub fn into_inner_with_buffer(self) -> (W, Buffer) {
624 self.into_inner_()
625 }
626
627 // copy the fields out and forget `self` to avoid dropping twice
628 fn into_inner_(self) -> (W, Buffer) {
629 let s = ManuallyDrop::new(self);
630 unsafe {
631 // safe because we immediately forget `self`
632 let inner = ptr::read(&s.inner);
633 let buf = ptr::read(&s.buf);
634 (inner, buf)
635 }
636 }
637
638 fn flush_buf(&mut self, amt: usize) -> io::Result<()> {
639 if amt == 0 || amt > self.buf.len() {
640 return Ok(());
641 }
642
643 self.panicked = true;
644 let ret = self.buf.write_max(amt, &mut self.inner);
645 self.panicked = false;
646 ret
647 }
648}
649
650impl<W: Write, P: WriterPolicy> BufWriter<W, P> {
651 /// Flush the buffer and unwrap, returning the inner writer on success,
652 /// or a type wrapping `self` plus the error otherwise.
653 pub fn into_inner(mut self) -> Result<W, IntoInnerError<Self>> {
654 match self.flush() {
655 Err(e) => Err(IntoInnerError(self, e)),
656 Ok(()) => Ok(self.into_inner_().0),
657 }
658 }
659
660 /// Flush the buffer and unwrap, returning the inner writer and
661 /// any error encountered during flushing.
662 pub fn into_inner_with_err(mut self) -> (W, Option<io::Error>) {
663 let err = self.flush().err();
664 (self.into_inner_().0, err)
665 }
666}
667
668impl<W: Write, P: WriterPolicy> Write for BufWriter<W, P> {
669 fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
670 let flush_amt = self.policy.before_write(&mut self.buf, buf.len()).0;
671 self.flush_buf(flush_amt)?;
672
673 let written = if self.buf.is_empty() && buf.len() >= self.buf.capacity() {
674 self.panicked = true;
675 let result = self.inner.write(buf);
676 self.panicked = false;
677 result?
678 } else {
679 self.buf.copy_from_slice(buf)
680 };
681
682 let flush_amt = self.policy.after_write(&self.buf).0;
683
684 let _ = self.flush_buf(flush_amt);
685
686 Ok(written)
687 }
688
689 fn flush(&mut self) -> io::Result<()> {
690 let flush_amt = self.buf.len();
691 self.flush_buf(flush_amt)?;
692 self.inner.flush()
693 }
694}
695
696impl<W: Write + Seek, P: WriterPolicy> Seek for BufWriter<W, P> {
697 /// Seek to the ofPet, in bytes, in the underlying writer.
698 ///
699 /// Seeking always writes out the internal buffer before seeking.
700 fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
701 self.flush().and_then(|_| self.get_mut().seek(pos))
702 }
703}
704
705impl<W: Write + fmt::Debug, P: fmt::Debug> fmt::Debug for BufWriter<W, P> {
706 fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
707 f.debug_struct("buffer_redux::BufWriter")
708 .field("writer", &self.inner)
709 .field("capacity", &self.capacity())
710 .field("policy", &self.policy)
711 .finish()
712 }
713}
714
715/// Attempt to flush the buffer to the underlying writer.
716///
717/// If an error occurs, the thread-local handler is invoked, if one was previously
718/// set by [`set_drop_err_handler`](set_drop_err_handler) for this thread.
719impl<W: Write, P> Drop for BufWriter<W, P> {
720 fn drop(&mut self) {
721 if !self.panicked {
722 // instead of ignoring a failed flush, call the handler
723 let buf_len = self.buf.len();
724 if let Err(err) = self.flush_buf(buf_len) {
725 DROP_ERR_HANDLER.with(|deh| (*deh.borrow())(&mut self.inner, &mut self.buf, err));
726 }
727 }
728 }
729}
730
731/// A drop-in replacement for `std::io::LineWriter` with more functionality.
732///
733/// This is, in fact, only a thin wrapper around
734/// [`BufWriter`](BufWriter)`<W, `[`policy::FlushOnNewline`](policy::FlushOnNewline)`>`, which
735/// demonstrates the power of custom [`WriterPolicy`](policy::WriterPolicy) implementations.
736pub struct LineWriter<W: Write>(BufWriter<W, FlushOnNewline>);
737
738impl<W: Write> LineWriter<W> {
739 /// Wrap `inner` with the default buffer capacity.
740 pub fn new(inner: W) -> Self {
741 Self::with_buffer(Buffer::new(), inner)
742 }
743
744 /// Wrap `inner` with the given buffer capacity.
745 pub fn with_capacity(cap: usize, inner: W) -> Self {
746 Self::with_buffer(Buffer::with_capacity(cap), inner)
747 }
748
749 /// Wrap `inner` with the default buffer capacity using a ringbuffer.
750 #[cfg(feature = "slice-deque")]
751 pub fn new_ringbuf(inner: W) -> Self {
752 Self::with_buffer(Buffer::new_ringbuf(), inner)
753 }
754
755 /// Wrap `inner` with the given buffer capacity using a ringbuffer.
756 #[cfg(feature = "slice-deque")]
757 pub fn with_capacity_ringbuf(cap: usize, inner: W) -> Self {
758 Self::with_buffer(Buffer::with_capacity_ringbuf(cap), inner)
759 }
760
761 /// Wrap `inner` with an existing `Buffer` instance.
762 ///
763 /// ### Note
764 /// Does **not** clear the buffer first! If there is data already in the buffer
765 /// it will be written out on the next flush!
766 pub fn with_buffer(buf: Buffer, inner: W) -> LineWriter<W> {
767 LineWriter(BufWriter::with_buffer(buf, inner).set_policy(FlushOnNewline))
768 }
769
770 /// Get a reference to the inner writer.
771 pub fn get_ref(&self) -> &W {
772 self.0.get_ref()
773 }
774
775 /// Get a mutable reference to the inner writer.
776 ///
777 /// ### Note
778 /// If the buffer has not been flushed, writing directly to the inner type will cause
779 /// data inconsistency.
780 pub fn get_mut(&mut self) -> &mut W {
781 self.0.get_mut()
782 }
783
784 /// Get the capacity of the inner buffer.
785 pub fn capacity(&self) -> usize {
786 self.0.capacity()
787 }
788
789 /// Get the number of bytes currently in the buffer.
790 pub fn buf_len(&self) -> usize {
791 self.0.buf_len()
792 }
793
794 /// Ensure enough space in the buffer for *at least* `additional` bytes. May not be
795 /// quite exact due to implementation details of the buffer's allocator.
796 pub fn reserve(&mut self, additional: usize) {
797 self.0.reserve(additional);
798 }
799
800 /// Flush the buffer and unwrap, returning the inner writer on success,
801 /// or a type wrapping `self` plus the error otherwise.
802 pub fn into_inner(self) -> Result<W, IntoInnerError<Self>> {
803 self.0
804 .into_inner()
805 .map_err(|IntoInnerError(inner, e)| IntoInnerError(LineWriter(inner), e))
806 }
807
808 /// Flush the buffer and unwrap, returning the inner writer and
809 /// any error encountered during flushing.
810 pub fn into_inner_with_err(self) -> (W, Option<io::Error>) {
811 self.0.into_inner_with_err()
812 }
813
814 /// Consume `self` and return both the underlying writer and the buffer.
815 pub fn into_inner_with_buf(self) -> (W, Buffer) {
816 self.0.into_inner_with_buffer()
817 }
818}
819
820impl<W: Write> Write for LineWriter<W> {
821 fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
822 self.0.write(buf)
823 }
824
825 fn flush(&mut self) -> io::Result<()> {
826 self.0.flush()
827 }
828}
829
830impl<W: Write + fmt::Debug> fmt::Debug for LineWriter<W> {
831 fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
832 f.debug_struct("buffer_redux::LineWriter")
833 .field("writer", self.get_ref())
834 .field("capacity", &self.capacity())
835 .finish()
836 }
837}
838
839/// The error type for `BufWriter::into_inner()`,
840/// contains the `BufWriter` as well as the error that occurred.
841#[derive(Debug)]
842pub struct IntoInnerError<W>(pub W, pub io::Error);
843
844impl<W> IntoInnerError<W> {
845 /// Get the error
846 pub fn error(&self) -> &io::Error {
847 &self.1
848 }
849
850 /// Take the writer.
851 pub fn into_inner(self) -> W {
852 self.0
853 }
854}
855
856impl<W> From<IntoInnerError<W>> for io::Error {
857 fn from(val: IntoInnerError<W>) -> Self {
858 val.1
859 }
860}
861
862impl<W: Any + Send + fmt::Debug> error::Error for IntoInnerError<W> {
863 fn cause(&self) -> Option<&dyn error::Error> {
864 Some(&self.1)
865 }
866}
867
868impl<W> fmt::Display for IntoInnerError<W> {
869 fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
870 self.error().fmt(f)
871 }
872}
873
874/// A deque-like datastructure for managing bytes.
875///
876/// Supports interacting via I/O traits like `Read` and `Write`, and direct access.
877pub struct Buffer {
878 buf: BufImpl,
879 zeroed: usize,
880}
881
882impl Default for Buffer {
883 fn default() -> Self {
884 Self::with_capacity(DEFAULT_BUF_SIZE)
885 }
886}
887
888impl Buffer {
889 /// Create a new buffer with a default capacity.
890 pub fn new() -> Self {
891 Self::default()
892 }
893
894 /// Create a new buffer with *at least* the given capacity.
895 ///
896 /// If the global allocator returns extra capacity, `Buffer` will use all of it.
897 pub fn with_capacity(cap: usize) -> Self {
898 Buffer {
899 buf: BufImpl::with_capacity(cap),
900 zeroed: 0,
901 }
902 }
903
904 /// Allocate a buffer with a default capacity that never needs to move data to make room
905 /// (consuming from the head simultaneously makes more room at the tail).
906 ///
907 /// The default capacity varies based on the target platform:
908 ///
909 /// * Unix-derivative platforms; Linux, OS X, BSDs, etc: **8KiB** (the default buffer size for
910 /// `std::io` buffered types)
911 /// * Windows: **64KiB** because of legacy reasons, of course (see below)
912 ///
913 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
914 /// enabled. The current platforms that are supported/tested are listed
915 /// [in the README for the `slice-deque` crate][slice-deque].
916 ///
917 /// [slice-deque]: https://github.com/gnzlbg/slice_deque#platform-support
918 #[cfg(feature = "slice-deque")]
919 pub fn new_ringbuf() -> Self {
920 Self::with_capacity_ringbuf(DEFAULT_BUF_SIZE)
921 }
922
923 /// Allocate a buffer with *at least* the given capacity that never needs to move data to
924 /// make room (consuming from the head simultaneously makes more room at the tail).
925 ///
926 /// The capacity will be rounded up to the minimum size for the current target:
927 ///
928 /// * Unix-derivative platforms; Linux, OS X, BSDs, etc: the next multiple of the page size
929 /// (typically 4KiB but can vary based on system configuration)
930 /// * Windows: the next muliple of **64KiB**; see [this Microsoft dev blog post][Win-why-64k]
931 /// for why it's 64KiB and not the page size (TL;DR: Alpha AXP needs it and it's applied on
932 /// all targets for consistency/portability)
933 ///
934 /// [Win-why-64k]: https://blogs.msdn.microsoft.com/oldnewthing/20031008-00/?p=42223
935 ///
936 /// Only available on platforms with virtual memory support and with the `slice-deque` feature
937 /// enabled. The current platforms that are supported/tested are listed
938 /// [in the README for the `slice-deque` crate][slice-deque].
939 ///
940 /// [slice-deque]: https://github.com/gnzlbg/slice_deque#platform-support
941 #[cfg(feature = "slice-deque")]
942 pub fn with_capacity_ringbuf(cap: usize) -> Self {
943 Buffer {
944 buf: BufImpl::with_capacity_ringbuf(cap),
945 zeroed: 0,
946 }
947 }
948
949 /// Return `true` if this is a ringbuffer.
950 pub fn is_ringbuf(&self) -> bool {
951 self.buf.is_ringbuf()
952 }
953
954 /// Return the number of bytes currently in this buffer.
955 ///
956 /// Equivalent to `self.buf().len()`.
957 pub fn len(&self) -> usize {
958 self.buf.len()
959 }
960
961 /// Return the number of bytes that can be read into this buffer before it needs
962 /// to grow or the data in the buffer needs to be moved.
963 ///
964 /// This may not constitute all free space in the buffer if bytes have been consumed
965 /// from the head. Use `free_space()` to determine the total free space in the buffer.
966 pub fn usable_space(&self) -> usize {
967 self.buf.usable_space()
968 }
969
970 /// Returns the total amount of free space in the buffer, including bytes
971 /// already consumed from the head.
972 ///
973 /// This will be greater than or equal to `usable_space()`. On supported platforms
974 /// with the `slice-deque` feature enabled, it should be equal.
975 pub fn free_space(&self) -> usize {
976 self.capacity() - self.len()
977 }
978
979 /// Return the total capacity of this buffer.
980 pub fn capacity(&self) -> usize {
981 self.buf.capacity()
982 }
983
984 /// Returns `true` if there are no bytes in the buffer, false otherwise.
985 pub fn is_empty(&self) -> bool {
986 self.len() == 0
987 }
988
989 /// Move bytes down in the buffer to maximize usable space.
990 ///
991 /// This is a no-op on supported platforms with the `slice-deque` feature enabled.
992 pub fn make_room(&mut self) {
993 self.buf.make_room();
994 }
995
996 /// Ensure space for at least `additional` more bytes in the buffer.
997 ///
998 /// This is a no-op if `usable_space() >= additional`. Note that this will reallocate
999 /// even if there is enough free space at the head of the buffer for `additional` bytes,
1000 /// because that free space is not at the tail where it can be read into.
1001 /// If you prefer copying data down in the buffer before attempting to reallocate you may wish
1002 /// to call `.make_room()` first.
1003 ///
1004 /// ### Panics
1005 /// If `self.capacity() + additional` overflows.
1006 pub fn reserve(&mut self, additional: usize) {
1007 // Returns `true` if we reallocated out-of-place and thus need to re-zero.
1008 if self.buf.reserve(additional) {
1009 self.zeroed = 0;
1010 }
1011 }
1012
1013 /// Get an immutable slice of the available bytes in this buffer.
1014 ///
1015 /// Call `.consume()` to remove bytes from the beginning of this slice.
1016 #[inline]
1017 pub fn buf(&self) -> &[u8] {
1018 self.buf.buf()
1019 }
1020
1021 /// Get a mutable slice representing the available bytes in this buffer.
1022 ///
1023 /// Call `.consume()` to remove bytes from the beginning of this slice.
1024 pub fn buf_mut(&mut self) -> &mut [u8] {
1025 self.buf.buf_mut()
1026 }
1027
1028 /// Read from `rdr`, returning the number of bytes read or any errors.
1029 ///
1030 /// If there is no more room at the head of the buffer, this will return `Ok(0)`.
1031 ///
1032 /// ### Panics
1033 /// If the returned count from `rdr.read()` overflows the tail cursor of this buffer.
1034 pub fn read_from<R: Read + ?Sized>(&mut self, rdr: &mut R) -> io::Result<usize> {
1035 if self.usable_space() == 0 {
1036 return Ok(0);
1037 }
1038
1039 let cap = self.capacity();
1040 if self.zeroed < cap {
1041 unsafe {
1042 let buf = self.buf.write_buf();
1043 // TODO: use MaybeUninit::fill once stabilized
1044 for el in buf {
1045 el.write(0);
1046 }
1047 }
1048
1049 self.zeroed = cap;
1050 }
1051
1052 let read = {
1053 let buf = unsafe { self.buf.write_buf() };
1054 // SAFETY: everything upto `cap` was zeroed above
1055 // TODO: use MaybeUninit::slice_assume_init_mut once stabilized
1056 let buf = unsafe { &mut *(buf as *mut [MaybeUninit<u8>] as *mut [u8]) };
1057 // TODO: use BorrowedCursor once stabilized
1058 rdr.read(buf)?
1059 };
1060
1061 unsafe {
1062 self.buf.bytes_written(read);
1063 }
1064
1065 Ok(read)
1066 }
1067
1068 /// Copy from `src` to the tail of this buffer. Returns the number of bytes copied.
1069 ///
1070 /// This will **not** grow the buffer if `src` is larger than `self.usable_space()`; instead,
1071 /// it will fill the usable space and return the number of bytes copied. If there is no usable
1072 /// space, this returns 0.
1073 pub fn copy_from_slice(&mut self, src: &[u8]) -> usize {
1074 let len = unsafe {
1075 let buf = self.buf.write_buf();
1076 let len = cmp::min(buf.len(), src.len());
1077
1078 // TODO: use MaybeUninit::copy_from_slice once stabilized
1079
1080 // SAFETY: &[T] and &[MaybeUninit<T>] have the same layout
1081 let uninit_src: &[MaybeUninit<u8>] = std::mem::transmute(src);
1082 buf[..len].copy_from_slice(&uninit_src[..len]);
1083
1084 len
1085 };
1086
1087 unsafe {
1088 self.buf.bytes_written(len);
1089 }
1090
1091 len
1092 }
1093
1094 /// Write bytes from this buffer to `wrt`. Returns the number of bytes written or any errors.
1095 ///
1096 /// If the buffer is empty, returns `Ok(0)`.
1097 ///
1098 /// ### Panics
1099 /// If the count returned by `wrt.write()` would cause the head cursor to overflow or pass
1100 /// the tail cursor if added to it.
1101 pub fn write_to<W: Write + ?Sized>(&mut self, wrt: &mut W) -> io::Result<usize> {
1102 if self.is_empty() {
1103 return Ok(0);
1104 }
1105
1106 let written = wrt.write(self.buf())?;
1107 self.consume(written);
1108 Ok(written)
1109 }
1110
1111 /// Write, at most, the given number of bytes from this buffer to `wrt`, continuing
1112 /// to write and ignoring interrupts until the number is reached or the buffer is empty.
1113 ///
1114 /// ### Panics
1115 /// If the count returned by `wrt.write()` would cause the head cursor to overflow or pass
1116 /// the tail cursor if added to it.
1117 pub fn write_max<W: Write + ?Sized>(&mut self, mut max: usize, wrt: &mut W) -> io::Result<()> {
1118 while !self.is_empty() && max > 0 {
1119 let len = cmp::min(self.len(), max);
1120 let n = match wrt.write(&self.buf()[..len]) {
1121 Ok(0) => {
1122 return Err(io::Error::new(
1123 io::ErrorKind::WriteZero,
1124 "Buffer::write_all() got zero-sized write",
1125 ))
1126 }
1127 Ok(n) => n,
1128 Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
1129 Err(e) => return Err(e),
1130 };
1131
1132 self.consume(n);
1133 max = max.saturating_sub(n);
1134 }
1135
1136 Ok(())
1137 }
1138
1139 /// Write all bytes in this buffer to `wrt`, ignoring interrupts. Continues writing until
1140 /// the buffer is empty or an error is returned.
1141 ///
1142 /// ### Panics
1143 /// If `self.write_to(wrt)` panics.
1144 pub fn write_all<W: Write + ?Sized>(&mut self, wrt: &mut W) -> io::Result<()> {
1145 while !self.is_empty() {
1146 match self.write_to(wrt) {
1147 Ok(0) => {
1148 return Err(io::Error::new(
1149 io::ErrorKind::WriteZero,
1150 "Buffer::write_all() got zero-sized write",
1151 ))
1152 }
1153 Ok(_) => (),
1154 Err(ref e) if e.kind() == io::ErrorKind::Interrupted => (),
1155 Err(e) => return Err(e),
1156 }
1157 }
1158
1159 Ok(())
1160 }
1161
1162 /// Copy bytes to `out` from this buffer, returning the number of bytes written.
1163 pub fn copy_to_slice(&mut self, out: &mut [u8]) -> usize {
1164 let len = {
1165 let buf = self.buf();
1166
1167 let len = cmp::min(buf.len(), out.len());
1168 out[..len].copy_from_slice(&buf[..len]);
1169 len
1170 };
1171
1172 self.consume(len);
1173
1174 len
1175 }
1176
1177 /// Push `bytes` to the end of the buffer, growing it if necessary.
1178 ///
1179 /// If you prefer moving bytes down in the buffer to reallocating, you may wish to call
1180 /// `.make_room()` first.
1181 pub fn push_bytes(&mut self, bytes: &[u8]) {
1182 let s_len = bytes.len();
1183
1184 if self.usable_space() < s_len {
1185 self.reserve(s_len * 2);
1186 }
1187
1188 unsafe {
1189 // TODO: use MaybeUninit::copy_from_slice once stabilized
1190
1191 // SAFETY: &[T] and &[MaybeUninit<T>] have the same layout
1192 let uninit_src: &[MaybeUninit<u8>] = std::mem::transmute(bytes);
1193 let buf = self.buf.write_buf();
1194 buf[..s_len].copy_from_slice(uninit_src);
1195
1196 self.buf.bytes_written(s_len);
1197 }
1198 }
1199
1200 /// Consume `amt` bytes from the head of this buffer.
1201 pub fn consume(&mut self, amt: usize) {
1202 self.buf.consume(amt);
1203 }
1204
1205 /// Empty this buffer by consuming all bytes.
1206 pub fn clear(&mut self) {
1207 let buf_len = self.len();
1208 self.consume(buf_len);
1209 }
1210}
1211
1212impl fmt::Debug for Buffer {
1213 fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
1214 f.debug_struct("buffer_redux::Buffer")
1215 .field("capacity", &self.capacity())
1216 .field("len", &self.len())
1217 .finish()
1218 }
1219}
1220
1221/// A `Read` adapter for a consumed `BufReader` which will empty bytes from the buffer before
1222/// reading from `R` directly. Frees the buffer when it has been emptied.
1223pub struct Unbuffer<R> {
1224 inner: R,
1225 buf: Option<Buffer>,
1226}
1227
1228impl<R> Unbuffer<R> {
1229 /// Returns `true` if the buffer still has some bytes left, `false` otherwise.
1230 pub fn is_buf_empty(&self) -> bool {
1231 self.buf.is_none()
1232 }
1233
1234 /// Returns the number of bytes remaining in the buffer.
1235 pub fn buf_len(&self) -> usize {
1236 self.buf.as_ref().map(Buffer::len).unwrap_or(0)
1237 }
1238
1239 /// Get a slice over the available bytes in the buffer.
1240 #[inline]
1241 pub fn buf(&self) -> &[u8] {
1242 self.buf.as_ref().map_or(&[], Buffer::buf)
1243 }
1244
1245 /// Return the underlying reader, releasing the buffer.
1246 pub fn into_inner(self) -> R {
1247 self.inner
1248 }
1249}
1250
1251impl<R: Read> Read for Unbuffer<R> {
1252 fn read(&mut self, out: &mut [u8]) -> io::Result<usize> {
1253 if let Some(ref mut buf) = self.buf.as_mut() {
1254 let read = buf.copy_to_slice(out);
1255
1256 if !out.is_empty() && read != 0 {
1257 return Ok(read);
1258 }
1259 }
1260
1261 self.buf = None;
1262
1263 self.inner.read(out)
1264 }
1265}
1266
1267impl<R: fmt::Debug> fmt::Debug for Unbuffer<R> {
1268 fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
1269 fmt.debug_struct("buffer_redux::Unbuffer")
1270 .field("reader", &self.inner)
1271 .field("buffer", &self.buf)
1272 .finish()
1273 }
1274}
1275
1276/// Copy data between a `BufRead` and a `Write` without an intermediate buffer.
1277///
1278/// Retries on interrupts. Returns the total bytes copied or the first error;
1279/// even if an error is returned some bytes may still have been copied.
1280pub fn copy_buf<B: BufRead, W: Write>(b: &mut B, w: &mut W) -> io::Result<u64> {
1281 let mut total_copied = 0;
1282
1283 loop {
1284 let copied = match b.fill_buf().and_then(|buf| w.write(buf)) {
1285 Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
1286 Err(e) => return Err(e),
1287 Ok(buf) => buf,
1288 };
1289
1290 if copied == 0 {
1291 break;
1292 }
1293
1294 b.consume(copied);
1295
1296 total_copied += copied as u64;
1297 }
1298
1299 Ok(total_copied)
1300}
1301
1302type DropHandler = Box<dyn Fn(&mut dyn Write, &mut Buffer, io::Error)>;
1303thread_local!(
1304 static DROP_ERR_HANDLER: RefCell<DropHandler>
1305 = RefCell::new(Box::new(|_, _, _| ()))
1306);
1307
1308/// Set a thread-local handler for errors thrown in `BufWriter`'s `Drop` impl.
1309///
1310/// The `Write` impl, buffer (at the time of the erroring write) and IO error are provided.
1311///
1312/// Replaces the previous handler. By default this is a no-op.
1313///
1314/// ### Panics
1315/// If called from within a handler previously provided to this function.
1316pub fn set_drop_err_handler<F>(handler: F)
1317where
1318 F: 'static + Fn(&mut dyn Write, &mut Buffer, io::Error),
1319{
1320 DROP_ERR_HANDLER.with(|deh| *deh.borrow_mut() = Box::new(handler))
1321}