// automatically generated by rust-bindgen, then edited by hand
// # Streaming compression
//
// A `ZBUFFCompressionContext` object is required to track streaming operations.
// Use `ZBUFF_createCCtx()` and `ZBUFF_freeCCtx()` to create/release resources.
// `ZBUFFCompressionContext` objects can be reused multiple times.
//
// Start by initializing a `ZBUFFCompressionContext`.
// Use `ZBUFF_compressInit()` to start a new compression operation.
// Use `ZBUFF_compressInitDictionary()` for a compression that requires a dictionary.
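//
// A minimal setup sketch, assuming the usual bindgen signatures for the extern
// functions declared later in this file (ZBUFF_createCCtx() returning a
// ZBUFFCompressionContext, ZBUFF_compressInit(cctx, level) returning a size_t):
//
//   unsafe {
//       let cctx = ZBUFF_createCCtx();
//       assert!(!cctx.is_null());
//       let code = ZBUFF_compressInit(cctx, 3);   // compression level 3
//       assert_eq!(ZBUFF_isError(code), 0);
//       // ... stream data through ZBUFF_compressContinue() ...
//       ZBUFF_freeCCtx(cctx);
//   }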
//
// Use `ZBUFF_compressContinue()` repeatedly to consume the input stream.
// *srcSizePtr and *dstCapacityPtr can be any size.
// The function will report how many bytes were read or written by updating *srcSizePtr and *dstCapacityPtr.
// Note that it may not consume the entire input, in which case it's up to the caller to present the remaining data again.
// The content of @dst will be overwritten (up to *dstCapacityPtr) at each call, so save its content if it matters or change @dst.
// @return : a hint for the preferred number of bytes to use as input for the next function call (it's just a hint, to improve latency)
// or an error code, which can be tested using ZBUFF_isError().
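//
// A minimal sketch of that consumption loop, assuming the usual bindgen signature
// ZBUFF_compressContinue(cctx, dst, dstCapacityPtr, src, srcSizePtr) -> size_t with
// both sizes passed as *mut size_t, plus a caller-provided `input` slice and a
// hypothetical `write_out()` sink:
//
//   let mut dst = vec![0u8; out_capacity];        // out_capacity chosen by the caller
//   let mut offset = 0;                           // how much of `input` was consumed so far
//   while offset < input.len() {
//       let mut src_size = (input.len() - offset) as size_t;
//       let mut dst_capacity = dst.len() as size_t;
//       let hint = unsafe {
//           ZBUFF_compressContinue(cctx,
//                                  dst.as_mut_ptr() as *mut c_void, &mut dst_capacity,
//                                  input[offset..].as_ptr() as *const c_void, &mut src_size)
//       };
//       assert_eq!(unsafe { ZBUFF_isError(hint) }, 0);
//       write_out(&dst[..dst_capacity as usize]); // bytes actually produced this call
//       offset += src_size as usize;              // not all input may have been consumed
//   }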
//
// At any moment, it's possible to flush whatever data remains within the buffer, using ZBUFF_compressFlush().
// The number of bytes written into @dst will be reported in *dstCapacityPtr.
// Note that the function cannot output more than *dstCapacityPtr bytes;
// therefore, some content might still be left in the internal buffer if *dstCapacityPtr is too small.
// @return : number of bytes still present in the internal buffer (0 if it's empty)
// or an error code, which can be tested using ZBUFF_isError().
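//
// A minimal flush sketch, reusing the assumed `cctx`, `dst`, and `write_out()` from
// the sketch above, and looping until the internal buffer reports empty:
//
//   loop {
//       let mut dst_capacity = dst.len() as size_t;
//       let remaining = unsafe {
//           ZBUFF_compressFlush(cctx, dst.as_mut_ptr() as *mut c_void, &mut dst_capacity)
//       };
//       assert_eq!(unsafe { ZBUFF_isError(remaining) }, 0);
//       write_out(&dst[..dst_capacity as usize]);
//       if remaining == 0 { break; }              // internal buffer fully flushed
//   }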
//
// ZBUFF_compressEnd() instructs the context to finish a frame.
// It will perform a flush and write the frame epilogue.
// The epilogue is required for decoders to consider a frame completed.
// Similar to ZBUFF_compressFlush(), it may not be able to output the entire internal buffer content if *dstCapacityPtr is too small.
// In that case, call ZBUFF_compressFlush() again to complete the flush.
// @return : number of bytes still present in the internal buffer (0 if it's empty)
// or an error code, which can be tested using ZBUFF_isError().
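//
// A minimal end-of-frame sketch: write the epilogue, then keep flushing while the
// internal buffer still holds data (same assumed bindings, `cctx`, `dst`, and
// `write_out()` as above):
//
//   let mut dst_capacity = dst.len() as size_t;
//   let mut remaining = unsafe {
//       ZBUFF_compressEnd(cctx, dst.as_mut_ptr() as *mut c_void, &mut dst_capacity)
//   };
//   assert_eq!(unsafe { ZBUFF_isError(remaining) }, 0);
//   write_out(&dst[..dst_capacity as usize]);
//   while remaining > 0 {
//       dst_capacity = dst.len() as size_t;
//       remaining = unsafe {
//           ZBUFF_compressFlush(cctx, dst.as_mut_ptr() as *mut c_void, &mut dst_capacity)
//       };
//       assert_eq!(unsafe { ZBUFF_isError(remaining) }, 0);
//       write_out(&dst[..dst_capacity as usize]);
//   }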
//
// Hint : recommended buffer sizes (not compulsory) : ZBUFF_recommendedCInSize() / ZBUFF_recommendedCOutSize()
// input : ZBUFF_recommendedCInSize() == 128 KB; the block size is the internal unit, and using this value improves latency (buffering is skipped).
// output : ZBUFF_recommendedCOutSize() == ZSTD_compressBound(128 KB) + 3 + 3; ensures it's always possible to write/flush/end a full block, skipping some buffering.
// Using both ensures that input will be entirely consumed and output will always contain the result, reducing intermediate buffering.
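//
// A sketch of sizing the two buffers with the recommended values, assuming both
// helper functions are bound in the extern block below and return size_t:
//
//   let in_buf = vec![0u8; unsafe { ZBUFF_recommendedCInSize() } as usize];
//   let mut dst = vec![0u8; unsafe { ZBUFF_recommendedCOutSize() } as usize];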
//
//
// # Streaming decompression
//
// A `ZBUFFDecompressionContext` object is required to track streaming operations.
// Use ZBUFF_createDCtx() and ZBUFF_freeDCtx() to create/release resources.
// Use ZBUFF_decompressInit() to start a new decompression operation,
// or ZBUFF_decompressInitDictionary() if decompression requires a dictionary.
// Note that `ZBUFFDecompressionContext` objects can be reused multiple times.
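//
// A minimal decompression setup sketch (same assumptions about the extern
// signatures declared later in this file):
//
//   unsafe {
//       let dctx = ZBUFF_createDCtx();
//       assert!(!dctx.is_null());
//       let code = ZBUFF_decompressInit(dctx);
//       assert_eq!(ZBUFF_isError(code), 0);
//       // ... feed data through ZBUFF_decompressContinue() ...
//       ZBUFF_freeDCtx(dctx);
//   }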
//
// Use ZBUFF_decompressContinue() repeatedly to consume your input.
// *srcSizePtr and *dstCapacityPtr can be any size.
// The function will report how many bytes were read or written by modifying *srcSizePtr and *dstCapacityPtr.
// Note that it may not consume the entire input, in which case it's up to the caller to present the remaining input again.
// The content of @dst will be overwritten (up to *dstCapacityPtr) at each function call, so save its content if it matters or change @dst.
// @return : a hint for the preferred number of bytes to use as input for the next function call (it's only a hint, to help latency)
// or 0 when a frame is completely decoded
// or an error code, which can be tested using ZBUFF_isError().
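//
// A minimal sketch of the decompression loop, assuming the usual bindgen signature
// ZBUFF_decompressContinue(dctx, dst, dstCapacityPtr, src, srcSizePtr) -> size_t,
// a caller-provided compressed `input` slice, and a hypothetical `write_out()` sink:
//
//   let mut dst = vec![0u8; out_capacity];
//   let mut offset = 0;
//   loop {
//       let mut src_size = (input.len() - offset) as size_t;
//       let mut dst_capacity = dst.len() as size_t;
//       let hint = unsafe {
//           ZBUFF_decompressContinue(dctx,
//                                    dst.as_mut_ptr() as *mut c_void, &mut dst_capacity,
//                                    input[offset..].as_ptr() as *const c_void, &mut src_size)
//       };
//       assert_eq!(unsafe { ZBUFF_isError(hint) }, 0);
//       write_out(&dst[..dst_capacity as usize]);
//       offset += src_size as usize;
//       if hint == 0 { break; }                   // frame completely decoded
//       // (a real loop would also refill `input` here when it runs out)
//   }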
//
// Hint : recommended buffer sizes (not compulsory) : ZBUFF_recommendedDInSize() / ZBUFF_recommendedDOutSize()
// output : ZBUFF_recommendedDOutSize() == 128 KB; the block size is the internal unit, which ensures it's always possible to write a full block when decoded.
// input : ZBUFF_recommendedDInSize() == 128 KB + 3; just follow the indications from ZBUFF_decompressContinue() to minimize latency. It should always be <= 128 KB + 3.
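//
// A sketch of sizing the decompression buffers with the recommended values,
// assuming both helper functions are bound in the extern block below:
//
//   let in_buf = vec![0u8; unsafe { ZBUFF_recommendedDInSize() } as usize];
//   let mut dst = vec![0u8; unsafe { ZBUFF_recommendedDOutSize() } as usize];
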
use std::io;
use std::ffi::CStr;
use libc::{c_void, size_t};
pub type ZBUFFCompressionContext = *mut c_void;
pub type ZBUFFDecompressionContext = *mut c_void;
pub type ZBUFFErrorCode = size_t;
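
// The `ZBUFFErrorCode` values returned by the bindings can be checked with
// ZBUFF_isError(). A minimal sketch (the helper name `check` is illustrative, and
// it assumes ZBUFF_isError() and ZBUFF_getErrorName() are bound in the extern block
// below) of turning a result code into a `Result`:
//
//   fn check(code: size_t) -> Result<usize, io::Error> {
//       unsafe {
//           if ZBUFF_isError(code) == 0 {
//               Ok(code as usize)
//           } else {
//               let msg = CStr::from_ptr(ZBUFF_getErrorName(code));
//               Err(io::Error::new(io::ErrorKind::Other,
//                                  msg.to_string_lossy().into_owned()))
//           }
//       }
//   }
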
/// Parse the result code
///
/// Returns the number of bytes written if the code represents success,
/// or the error message otherwise.
extern "C"