Struct infinitree::object::BufferedSink

pub struct BufferedSink<Writer = AEADWriter, Buffer = BlockBuffer> { /* private fields */ }
Buffered object writer that supports std::io::Write.

To limit storage waste and keep performance predictable, this writer cuts a new chunk roughly every 500 kB of the input stream. Take this into account when building indexes around the stream: every ChunkPointer is 88 bytes in size, and each one occupies memory and storage.

Note that you currently can't std::io::Seek in this stream when reading it.
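The index overhead implied by this chunking can be estimated up front. A rough sketch (the 500 kB chunk size and 88-byte ChunkPointer figures come from the paragraph above; actual chunk boundaries are content-dependent, so treat this as an upper-bound estimate):

```rust
// Estimate how many ChunkPointers a stream of `stream_len` bytes will
// produce, and how much index space they occupy, assuming ~500 kB chunks
// and 88 bytes per ChunkPointer (figures from the documentation above).
const CHUNK_SIZE: usize = 500 * 1024;
const POINTER_SIZE: usize = 88;

fn index_overhead(stream_len: usize) -> (usize, usize) {
    // Round up: a partial final chunk still needs its own pointer.
    let chunks = stream_len.div_ceil(CHUNK_SIZE);
    (chunks, chunks * POINTER_SIZE)
}

fn main() {
    // A 1 GiB stream needs 2098 pointers, about 184 kB of index data.
    let (chunks, bytes) = index_overhead(1 << 30);
    println!("{chunks} chunks, {bytes} bytes of ChunkPointers");
}
```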
Examples
use std::io::Write;
use infinitree::{*, crypto::UsernamePassword, fields::Serialized, backends::test::InMemoryBackend, object::{Stream, BufferedSink}};
let mut tree = Infinitree::<infinitree::fields::VersionedMap<String, Stream>>::empty(
InMemoryBackend::shared(),
UsernamePassword::with_credentials("username".to_string(), "password".to_string()).unwrap()
).unwrap();
let mut sink = BufferedSink::new(tree.storage_writer().unwrap());
sink.write_all(b"it's going in the sink").unwrap();
tree.index().insert("message_1".to_string(), sink.finish().unwrap());
tree.commit(None).unwrap();
Implementations
impl<W> BufferedSink<W> where W: Writer

pub fn new(writer: W) -> BufferedSink<W>

Create a new BufferedSink with the underlying Writer instance.
pub fn with_chunk_size(writer: W, chunk_size: usize) -> Self

Create a new BufferedSink with a custom chunk size.

The default chunk size is 500 * 1024 bytes, which has empirically proven a good trade-off across a range of stream sizes, minimizing storage overhead.
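To illustrate what fixed-size chunking of a Write stream looks like, here is a minimal, hypothetical sketch: ToySink is not the crate's implementation (the real BufferedSink also encrypts and stores each chunk through its Writer), it only mimics the cut-every-chunk_size-bytes behavior:

```rust
use std::io::{Result, Write};

// A simplified stand-in for BufferedSink: buffers incoming writes and
// emits a "chunk" whenever `chunk_size` bytes accumulate.
struct ToySink {
    buf: Vec<u8>,
    chunk_size: usize,
    chunks: Vec<Vec<u8>>,
}

impl ToySink {
    fn with_chunk_size(chunk_size: usize) -> Self {
        Self { buf: Vec::new(), chunk_size, chunks: Vec::new() }
    }

    // Flush any remaining partial chunk, analogous to finishing the sink.
    fn finish(mut self) -> Vec<Vec<u8>> {
        if !self.buf.is_empty() {
            self.chunks.push(std::mem::take(&mut self.buf));
        }
        self.chunks
    }
}

impl Write for ToySink {
    fn write(&mut self, data: &[u8]) -> Result<usize> {
        self.buf.extend_from_slice(data);
        // Cut as many full chunks as the buffer now holds.
        while self.buf.len() >= self.chunk_size {
            let rest = self.buf.split_off(self.chunk_size);
            self.chunks.push(std::mem::replace(&mut self.buf, rest));
        }
        Ok(data.len())
    }

    fn flush(&mut self) -> Result<()> {
        Ok(())
    }
}

fn main() {
    let mut sink = ToySink::with_chunk_size(4);
    sink.write_all(b"abcdefghij").unwrap(); // 10 bytes, 4-byte chunks
    let chunks = sink.finish();
    // Two full chunks plus a 2-byte tail.
    assert_eq!(chunks, vec![b"abcd".to_vec(), b"efgh".to_vec(), b"ij".to_vec()]);
}
```

A smaller chunk size means more, smaller chunks and therefore more ChunkPointers in the index; a larger one reduces index overhead at the cost of coarser deduplication granularity.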
impl<W, Buffer> BufferedSink<W, Buffer> where W: Writer, Buffer: AsMut<[u8]>

pub fn with_buffer(writer: W, buffer: Buffer) -> Result<Self>

Create a new BufferedSink with the underlying Writer and buffer.
pub fn set_chunk_size(self, size: usize) -> Result<Self>

pub fn chunk_size(&self) -> usize
Return the current effective maximum chunk size.
Trait Implementations

impl<W, Buffer> Write for BufferedSink<W, Buffer> where W: Writer, Buffer: AsMut<[u8]>

fn write(&mut self, buf: &[u8]) -> Result<usize>

fn flush(&mut self) -> Result<()>

fn is_write_vectored(&self) -> bool

fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>

fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error>
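Because BufferedSink implements std::io::Write, all the usual Write helpers apply to it. A standalone sketch using a Vec<u8> in place of the sink (any Write implementor accepts the same calls):

```rust
use std::io::{IoSlice, Write};

fn main() {
    // Stand-in for a BufferedSink: any `Write` impl behaves the same way.
    let mut out: Vec<u8> = Vec::new();

    // write_all retries short writes until the whole buffer is consumed.
    out.write_all(b"hello ").unwrap();

    // Vectored writes gather several buffers in a single call.
    let bufs = [IoSlice::new(b"wor"), IoSlice::new(b"ld")];
    out.write_vectored(&bufs).unwrap();

    assert_eq!(out, b"hello world");
}
```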
Auto Trait Implementations
impl<Writer, Buffer> RefUnwindSafe for BufferedSink<Writer, Buffer> where Buffer: RefUnwindSafe, Writer: RefUnwindSafe
impl<Writer, Buffer> Send for BufferedSink<Writer, Buffer> where Buffer: Send, Writer: Send
impl<Writer, Buffer> Sync for BufferedSink<Writer, Buffer> where Buffer: Sync, Writer: Sync
impl<Writer, Buffer> Unpin for BufferedSink<Writer, Buffer> where Buffer: Unpin, Writer: Unpin
impl<Writer, Buffer> UnwindSafe for BufferedSink<Writer, Buffer> where Buffer: UnwindSafe, Writer: UnwindSafe
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> FieldWriter for T where T: Write + Send

fn write_next(&mut self, obj: impl Serialize + Send)

Write the next obj into the index.