Function bao::simple::decode

pub fn decode(encoded: &[u8], hash: &Digest) -> Result<Vec<u8>>

Recursively verify the encoded tree and return the content.

Throughout all this slicing and verifying, we never check whether a slice has more bytes than we need. That means that after we decode the last chunk, we'll ignore any trailing garbage that might be appended to the encoding, just like a streaming decoder would. As a result, THERE ARE MANY VALID ENCODINGS FOR A GIVEN INPUT, differing only in their trailing garbage. Callers who assume that different encoded bytes imply different (or invalid) input bytes could get tripped up by this.

It's tempting to solve this problem on our end, with a rule like "decoders must read to EOF and check for trailing garbage." But I think it's better to make no promises than to make a promise we can't keep. Testing this rule across all future implementations would be very difficult. For example, an implementation might check for trailing garbage at the end of any block that it reads, and thus appear to pass most tests, but forget the case where the end of the valid encoding lands precisely on a read boundary.