merkle_tree_bulletin_board/lib.rs
1//! # Merkle tree based bulletin board
2//!
3//! This is a library for a public bulletin board, allowing one to publish a series of
4//! messages, and occasional root hashes. It can then provide a proof that each element
5//! published before the root hash is referenced by the root hash. This is done via
6//! Merkle Trees.
7//!
8//! It is a method of being open, specifically of allowing external people to verify
9//! the bulletin board. After entries are added, the board publishes a public root (256 bit hash).
10//! Different people can confirm that they are told the same hash to check that they
11//! are looking at the same bulletin board and are not getting shown different data.
12//! Anyone can find a proof that their data of interest is in the board; also anyone
13//! can retrieve the entire contents of the board and do whatever is wanted with it.
14//!
15//! As an example application, imagine a public election (this is of no use for an
16//! election with private votes, which is of course often very important). Everyone
17//! submits their vote to a central "of course we are totally trustworthy" authority (CA). The CA then publishes a root
18//! hash, which everyone can telephone their friends to check is the same. Also,
19//! everyone can easily check that *their* vote is recorded correctly (in time
20//! logarithmic in the number of entries, that is, quickly). Also, anyone can
21//! check the total list of votes (in time proportional to the number of votes)
22//! and see that the announced tally is correct. This means that even people who
23//! do not trust the CA can still trust the *result*, because it is verifiable.
24//!
25//! The basic idea is that the bulletin board keeps track of a collection of items
26//! that it commits to. These items are built up into a tree, where each node is labeled
27//! by a SHA256 hash. The root is periodically published publicly. Anyone can then check
28//! that the root hash proves any particular committed element is referenced to by
29//! asking for the section of the tree containing the path from said element to the
30//! root. Each committed element is a leaf node whose label is the hash of the element and
31//! a timestamp. Each non-root, non-leaf node has two children; it is labeled by the
32//! hash of its children. The root is a hash of its children and a timestamp. The path
33//! is a proof of inclusion as it would require inverting SHA256 to make a fraudulent path,
34//! and this is currently considered computationally infeasible.
35//!
36//! The system allows censorship of individual leaves. This is obviously generally undesirable and
37//! to some extent undermines some of the point of a committed bulletin board. However,
38//! most if not all countries have censorship laws that require operators of bulletin boards
39//! to do censorship, regardless of whether or not you agree with such laws. The system is
40//! designed to have the following properties in the presence of censorship:
41//! * Censorship does not invalidate any published root.
42//! * Censorship, even post a published root, does not invalidate or even affect any proof chain
43//! other than the particular leaf being censored.
44//! * Even the leaf being censored can still be verified should you happen to know
45//! what had been present before the censorship.
46//! * It is impossible to hide the fact that a censorship post published root has occurred.
47//! * If you do not use the censorship feature, then the overhead of having it present is negligible.
48//!
49//! Censorship is accomplished by simply not providing the censored text; the leaf and its
50//! associated hash value (and timestamp) are still present. The hash value cannot be modified
51//! post published root, as that would invalidate the parent branch. The timestamp cannot
52//! however be verified unless you happen to know the uncensored text.
53//!
54
55pub mod hash;
56pub mod hash_history;
57pub mod growing_forest;
58pub mod backend_memory;
59pub mod backend_flatfile;
60pub mod backend_journal;
61pub mod deduce_journal;
62pub mod verifier;
63
64use crate::growing_forest::GrowingForest;
65use crate::hash::{FromHashValueError, HashValue};
66use crate::hash_history::{HashInfo, FullProof, HashSource, BranchHashHistory, RootHashHistory, LeafHashHistory, timestamp_now, HashInfoWithHash, Timestamp};
67use std::time::Duration;
68use std::collections::HashSet;
69use std::iter::FromIterator;
70use std::num::ParseIntError;
71use serde::{Serialize,Deserialize};
72
73/// This is the main API to the bulletin board library. This represents an entire bulletin board.
74/// You provide a backend of type [BulletinBoardBackend] (typically an indexed database),
75/// and it provides a suitable API.
76///
77/// Actually, you are likely to wrap your provided backend
78/// inside a [backend_journal::BackendJournal] to provide efficient bulk verification support,
79/// unless you want efficient censorship support. Alternatively
80/// you may use the computationally expensive [deduce_journal::deduce_journal] to compute
81/// the journal needed for bulk verification when needed. In production, you would probably
82/// want to make a separate backend with a separate database connection so [deduce_journal::deduce_journal]
83/// can run in parallel.
84///
85/// There are two simple provided backends for testing and prototyping,
86/// [backend_memory::BackendMemory] and [backend_flatfile::BackendFlatfile].
87/// In production you will probably want to use some database; this is somewhat database
88/// dependent, and so a sample mysql database backend is given
89/// in <https://github.com/RightToAskOrg/bulletin-board>.
90///
91/// There is a demo website project (bulletin-board-demo) that exposes the below API
92/// in the git repository at
93/// <https://github.com/RightToAskOrg/bulletin-board>
94/// Each API call is exposed as a REST call with relative URL
95/// the function name and the the hash query, if any, as a GET argument something like `get_hash_info?hash=a425...56`.
96/// All results are returned as JSON encodings of the actual results. The leaf is submitted as a POST with body encoded JSON object containing a single field name `data`,
97/// and censoring is similarly a POST with body encoded JSON object with a single field name `leaf_to_censor`.
98///
99///
100/// The Merkle trees are grown as described in [GrowingForest]. Each published root consists of a hash of a small O(log leafs) number
101/// of leaf or branch nodes, and the prior published root. Each branch node in it is a perfectly balanced binary tree.
102/// Verification steps are described in [backend_journal::BackendJournal]
103///
104/// # Example
105///
106/// In the following example, four elements are inserted, "a", "b", "c" and "d" into a previously empty bulletin board.
107/// A publication occurs after "c" and another publication after "d".
108///
109/// When "a" is inserted, it is a leaf forming a single tree of depth 0.
110/// When "b" is inserted after it, it is merged with "a" to make a tree of depth 1 with "a" on the left and "b" on the right.
111///
112/// When c is inserted, it forms a new single tree of depth 0. This does not merge with "a" or "b" as they are already taken,
113/// and it does not merge with the combined tree ab as that has a different depth and that would lead to an unbalanced tree.
114/// So there are now two pending trees, one containing ab and one containing c.
115///
116/// The first publication contains these two trees ab and c.
117///
118/// When d is inserted, it forms a tree with c, and the new tree cd merges with ab to make a new depth 2 tree abcd.
119/// This single tree is in the second publication.
120/// ```
121/// use merkle_tree_bulletin_board::backend_memory::BackendMemory;
122/// use merkle_tree_bulletin_board::BulletinBoard;
123/// use merkle_tree_bulletin_board::hash::HashValue;
124/// use merkle_tree_bulletin_board::hash_history::{HashSource, LeafHashHistory, HashInfo, BranchHashHistory, RootHashHistory};
125///
126/// let backend = BackendMemory::default();
127/// let mut board = BulletinBoard::new(backend).unwrap();
128/// // utility function to check that something is indeed a leaf with the expected data.
129/// fn assert_is_leaf(source:HashSource,expected_data:&str) {
130/// match source {
131/// HashSource::Leaf(LeafHashHistory{data:Some(d),timestamp:_}) => assert_eq!(d,expected_data),
132/// _ => panic!("Not a leaf"),
133/// }
134/// }
135/// assert_eq!(board.get_all_published_roots().unwrap(),vec![]);
136/// assert_eq!(board.get_most_recent_published_root().unwrap(),None);
137/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![]);
138///
139/// let hash_a : HashValue = board.submit_leaf("a").unwrap();
140/// // we have inserted A, which is a single tree but nothing is published.
141/// assert_eq!(board.get_hash_info(hash_a).unwrap().parent,None);
142/// assert_is_leaf(board.get_hash_info(hash_a).unwrap().source,"a");
143/// assert_eq!(board.get_all_published_roots().unwrap(),vec![]);
144/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![hash_a]);
145///
146/// let hash_b : HashValue = board.submit_leaf("b").unwrap();
147/// // we have inserted 'b', which will be merged into a tree with 'a' on the left and 'b' right.
148/// let branch_ab : HashValue = board.get_hash_info(hash_a).unwrap().parent.unwrap();
149/// assert_eq!(board.get_hash_info(hash_b).unwrap().parent,Some(branch_ab));
150/// assert_is_leaf(board.get_hash_info(hash_b).unwrap().source,"b");
151/// assert_eq!(board.get_all_published_roots().unwrap(),vec![]);
152/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![branch_ab]);
153/// assert_eq!(board.get_hash_info(branch_ab).unwrap(), HashInfo{
154/// source: HashSource::Branch(BranchHashHistory{left:hash_a,right:hash_b}) ,parent: None});
155///
156/// let hash_c : HashValue = board.submit_leaf("c").unwrap();
157/// // we have now inserted 'c', which will not be merged with branch_ab
158/// // as they are different depths and that would lead to an unbalanced tree.
159/// assert_eq!(board.get_hash_info(hash_c).unwrap().parent,None);
160/// assert_is_leaf(board.get_hash_info(hash_c).unwrap().source,"c");
161/// assert_eq!(board.get_all_published_roots().unwrap(),vec![]);
162/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![branch_ab,hash_c]);
163///
164/// // now publish! This will publish branch_ab and hash_c.
165/// let published1 : HashValue = board.order_new_published_root().unwrap();
166/// match board.get_hash_info(published1).unwrap().source {
167/// HashSource::Root(RootHashHistory{timestamp:_,elements:e,prior:None}) =>
168/// assert_eq!(e,vec![branch_ab,hash_c]),
169/// _ => panic!("Should be a root"),
170/// }
171/// assert_eq!(board.get_all_published_roots().unwrap(),vec![published1]);
172/// assert_eq!(board.get_most_recent_published_root().unwrap(),Some(published1));
173/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![]);
174/// // branch_ab,hash_c are still parentless and can be merged with, but are no longer unpublished.
175///
176/// // add another element 'd', which will merge with 'c', making branch_cd,
177/// // which will then merge with ab making a single tree abcd.
178/// let hash_d : HashValue = board.submit_leaf("d").unwrap();
179/// let branch_cd : HashValue = board.get_hash_info(hash_c).unwrap().parent.unwrap();
180/// assert_eq!(board.get_hash_info(hash_d).unwrap().parent,Some(branch_cd));
181/// assert_is_leaf(board.get_hash_info(hash_d).unwrap().source,"d");
182/// let branch_abcd : HashValue = board.get_hash_info(branch_ab).unwrap().parent.unwrap();
183/// assert_eq!(board.get_hash_info(branch_cd).unwrap(),HashInfo{
184/// source: HashSource::Branch(BranchHashHistory{left:hash_c,right:hash_d}) ,
185/// parent: Some(branch_abcd)});
186/// assert_eq!(board.get_hash_info(branch_abcd).unwrap(),HashInfo{
187/// source: HashSource::Branch(BranchHashHistory{left:branch_ab,right:branch_cd}) ,
188/// parent: None});
189/// assert_eq!(board.get_all_published_roots().unwrap(),vec![published1]);
190/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![branch_abcd]);
191///
192/// // do another publication, which now only has to contain abcd which includes everything,
193/// // including things from before the last publication.
194/// let published2 = board.order_new_published_root().unwrap();
195/// match board.get_hash_info(published2).unwrap().source {
196/// HashSource::Root(RootHashHistory{timestamp:_,elements:e,prior:Some(prior)}) => {
197/// assert_eq!(e,vec![branch_abcd]);
198/// assert_eq!(prior,published1);
199/// }
200/// _ => panic!("Should be a root"),
201/// }
202/// assert_eq!(board.get_all_published_roots().unwrap(),vec![published1,published2]);
203/// assert_eq!(board.get_most_recent_published_root().unwrap(),Some(published2));
204/// assert_eq!(board.get_parentless_unpublished_hash_values().unwrap(),vec![]);
205/// // branch_abcd is still parentless and can be merged with, but is no longer unpublished.
206/// ```
207///
208pub struct BulletinBoard<B:BulletinBoardBackend> {
209 pub backend : B,
210 /// None if there is an error, otherwise the currently growing forest.
211 current_forest: Option<GrowingForest>,
212}
213
214/// Possible things that could go wrong during a Bulletin Board operation.
215#[derive(Debug,Clone,Serialize,Deserialize,Eq,PartialEq,thiserror::Error)]
216pub enum BulletinBoardError {
217 #[error("Identical data has already been submitted very recently to the bulletin board")]
218 IdenticalDataAlreadySubmitted,
219 #[error("Could not initialize bulletin board from the database")]
220 CouldNotInitializeFromDatabase,
221 #[error("The published root has no information in the backend")]
222 PublishedRootHasNoInfo,
223 #[error("A new published root was ordered immediately after already publishing one, with no new data")]
224 PublishingNewRootInstantlyAfterLastRoot, // if ordering a new root to be published with same timestamp and no new data.
225 #[error("The provided hash value does not correspond to a node")]
226 NoSuchHash,
227 #[error("The proof chain is missing node {0}")]
228 ProofChainCorruptMissingPublishedNode(HashValue),
229 #[error("The published root {0} is not actually a root node")]
230 PublishedRootIsNotARoot(HashValue),
231 #[error("Multiple hash clashes indicates that the program is buggy or you are the unluckiest person in all the universes everywhere. I think it is the former")]
232 MultipleHashClashes,
233 #[error("Can only censor leaves")]
234 CanOnlyCensorLeaves,
235 #[error("The bulletin board backend is inconsistent or corrupt : {0}")]
236 BackendInconsistentError(String),
237 #[error("The bulletin board backend has an IO Error : {0}")]
238 BackendIOError(String),
239 #[error("The bulletin board backend had an error parsing a value : {0}")]
240 BackendParsingError(String),
241 #[error("system time clock is not available")]
242 ClockError,
243}
244
245
246impl From<std::io::Error> for BulletinBoardError {
247 fn from(value: std::io::Error) -> Self {
248 BulletinBoardError::BackendIOError(value.to_string())
249 }
250}
251impl From<FromHashValueError> for BulletinBoardError {
252 fn from(value: FromHashValueError) -> Self {
253 BulletinBoardError::BackendParsingError(value.to_string())
254 }
255}
256
257impl From<ParseIntError> for BulletinBoardError {
258 fn from(value: ParseIntError) -> Self {
259 BulletinBoardError::BackendParsingError(value.to_string())
260 }
261}
262
263/// Adding one element to a set to be committed may result in a variety of elements being produced.
264/// A database may have the ability to do transactions, in which case this can be made safer by committing
265/// all the modifications needed by a single API call so that the database doesn't have dangling elements.
266#[derive(Default)]
267pub struct DatabaseTransaction {
268 pub pending : Vec<(HashValue,HashSource)>,
269}
270
271impl DatabaseTransaction {
272 /// Add a new leaf hash to the database.
273 pub fn add_leaf_hash(&mut self,new_hash:HashValue,history:LeafHashHistory) { self.pending.push((new_hash,HashSource::Leaf(history))) }
274 /// Add a new branch hash to the database. (not leaf, not root).
275 pub fn add_branch_hash(&mut self,new_hash:HashValue,history:BranchHashHistory) { self.pending.push((new_hash,HashSource::Branch(history))) }
276 /// Add a new root hash to the database.
277 pub fn add_root_hash(&mut self, new_hash:HashValue, history: RootHashHistory) { self.pending.push((new_hash,HashSource::Root(history))) }
278
279 fn get_hash_info(&self,query:HashValue) -> Option<HashSource> { self.pending.iter().find(|(hash,_)| *hash == query).map(|(_,source)|source.clone()) }
280 /// check for a hash collision by looking up both this and the database backend.
281 fn get_hash_info_completely(&self,backend:&impl BulletinBoardBackend,query:HashValue) -> Result<Option<HashSource>,BulletinBoardError> {
282 if let Some(info) = backend.get_hash_info(query)? { Ok(Some(info.source)) }
283 else { Ok(self.get_hash_info(query)) }
284 }
285
286 /// make a transaction containing a single entry.
287 pub fn singleton(hash:HashValue,source:HashSource) -> DatabaseTransaction {
288 DatabaseTransaction{ pending:vec![(hash,source)]}
289 }
290}
291
292/// The data from the bulletin board needs to be stored somewhere.
293/// Typically this will be a database, but for generality anything implementing
294/// this trait can be used.
295pub trait BulletinBoardBackend {
296 /// Get all published roots, for all time.
297 fn get_all_published_roots(&self) -> Result<Vec<HashValue>,BulletinBoardError>;
298 /// Get the most recently published root, should it exist.
299 fn get_most_recent_published_root(&self) -> Result<Option<HashValue>,BulletinBoardError>;
300 /// Get all leaves and branches without a parent branch. Published nodes do not count as a parent.
301 /// This is used to recompute the starting state.
302 fn get_all_leaves_and_branches_without_a_parent(&self) -> Result<Vec<HashValue>,BulletinBoardError>;
303 /// given a hash, get information about what it represents, if anything.
304 fn get_hash_info(&self, query:HashValue) -> Result<Option<HashInfo>,BulletinBoardError>;
305
306 /// Store a transaction in the database.
307 fn publish(&mut self,transaction:&DatabaseTransaction) -> Result<(),BulletinBoardError>;
308
309 /// Remove the text associated with a leaf.
310 fn censor_leaf(&mut self,leaf_to_censor:HashValue) -> Result<(),BulletinBoardError>;
311
312 /// Get the depth of a subtree rooted at a given leaf or branch node) by following elements down the left side of each branch.
313 /// A leaf node has depth 0.
314 /// A branch node has depth 1 or more.
315 ///
316 /// The default implementation is to repeatedly call get_hash_info depth times; this is usually adequate as this is only used during startup.
317 fn left_depth(&self,hash:HashValue) -> Result<usize,BulletinBoardError> {
318 let mut res = 0;
319 let mut hash = hash;
320 loop {
321 match self.get_hash_info(hash)? {
322 Some(HashInfo{source:HashSource::Branch(history),..}) => {
323 res+=1;
324 hash = history.left;
325 }
326 _ => break
327 }
328 }
329 Ok(res)
330 }
331
332
333 /// Deduce the current forest structure.
334 /// * First find leaf or branch elements that do not have a parent. These are the trees that are in the forest
335 /// * Find the depth of each of these elements.
336 /// * Sort, highest first.
337 ///
338 /// The default implementation is usually adequate as it is only used during startup.
339 fn compute_current_forest(&self) -> Result<GrowingForest,BulletinBoardError> {
340 GrowingForest::new(&self.get_all_leaves_and_branches_without_a_parent()?,|h|self.left_depth(h))
341 }
342
343}
344
345fn bb_timestamp_now() -> Result<Timestamp, BulletinBoardError> {
346 timestamp_now().map_err(|_|BulletinBoardError::ClockError)
347}
348
349impl <B:BulletinBoardBackend> BulletinBoard<B> {
350
351 /// called when the current_forest field is corrupt. Make it valid, if possible.
352 fn reload_current_forest(&mut self) -> Result<(),BulletinBoardError> {
353 match self.backend.compute_current_forest() {
354 Ok(f) => {
355 self.current_forest = Some(f);
356 Ok(())
357 }
358 Err(e) => {
359 self.current_forest = None;
360 Err(e)
361 }
362 }
363 }
364
365
366 /// Helper used in submit_leaf to wrap errors so that it is easy to reload the current forest if a recoverable error (e.g. resubmitted data) occurs during this step.
367 fn submit_leaf_work(&mut self,data:String) -> Result<HashValue,BulletinBoardError> {
368 let history = LeafHashHistory{ timestamp: bb_timestamp_now()?, data: Some(data) };
369 let new_hash = history.compute_hash().unwrap();
370 match self.backend.get_hash_info(new_hash)? {
371 Some(HashInfo{source:HashSource::Leaf(other_history), .. }) if other_history==history => {
372 Err(BulletinBoardError::IdenticalDataAlreadySubmitted)
373 }
374 Some(hash_collision) => { // The below case is absurdly unlikely to happen.
375 eprintln!("Time to enter the lottery! Actually you have probably won without entering. You have just done a submission and found a hash collision between {:?} and {:?}",&hash_collision,&history);
376 std::thread::sleep(Duration::from_secs(1)); // work around - wait a second and retry, with a new timestamp.
377 self.submit_leaf_work(history.data.unwrap())
378 }
379 _ => { // no hash collision, all is good. Should go here 99.99999999999999999999999999999..% of the time.
380 let mut transaction = DatabaseTransaction::default();
381 transaction.add_leaf_hash(new_hash,history);
382 self.current_forest.as_mut().ok_or_else(||BulletinBoardError::CouldNotInitializeFromDatabase)?.add_leaf(new_hash, &self.backend, &mut transaction)?;
383 self.backend.publish(&transaction)?;
384 Ok(new_hash)
385 }
386 }
387 }
388
389
390 /// Submit some data to be included in the bulletin board, and get back a HashValue that the
391 /// board commits to having in the history.
392 /// Note that if the same data is submitted twice in the same second it will return an error (as this probably is)
393 ///
394 /// # Example
395 ///
396 /// ```
397 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
398 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
399 /// board.submit_leaf("A").unwrap();
400 /// // the board now has one leaf!
401 ///```
402 pub fn submit_leaf(&mut self,data:&str) -> Result<HashValue,BulletinBoardError> {
403 let res = self.submit_leaf_work(data.to_string());
404 if res.is_err() { self.reload_current_forest()? }
405 res
406 }
407
408 /// Create a new bulletin board from a backend.
409 pub fn new(backend:B) -> Result<Self,BulletinBoardError> {
410 let mut res = BulletinBoard { backend, current_forest : None };
411 res.reload_current_forest()?;
412 Ok(res)
413 }
414
415 /// Get a valid forest reference, or an error.
416 fn forest_or_err(&self) -> Result<&GrowingForest,BulletinBoardError> {
417 self.current_forest.as_ref().ok_or_else(||BulletinBoardError::CouldNotInitializeFromDatabase)
418 }
419
420 /// Get the current published head that everyone knows. Everyone who is paying attention, that is. And who can remember 256 bits of gibberish.
421 ///
422 /// # Example
423 ///
424 /// ```
425 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
426 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
427 /// assert_eq!(None,board.get_most_recent_published_root().unwrap());
428 /// board.submit_leaf("A").unwrap();
429 /// let hash1 = board.order_new_published_root().unwrap();
430 /// assert_eq!(Some(hash1),board.get_most_recent_published_root().unwrap());
431 /// board.submit_leaf("B").unwrap();
432 /// assert_eq!(Some(hash1),board.get_most_recent_published_root().unwrap());
433 /// let hash2 = board.order_new_published_root().unwrap();
434 /// assert_eq!(Some(hash2),board.get_most_recent_published_root().unwrap());
435 ///```
436 pub fn get_most_recent_published_root(&self) -> Result<Option<HashValue>,BulletinBoardError> {
437 self.backend.get_most_recent_published_root()
438 }
439
440 /// Get a list of all published roots, ordered oldest to newest.
441 ///
442 /// # Example
443 ///
444 /// ```
445 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
446 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
447 /// assert!(board.get_all_published_roots().unwrap().is_empty());
448 /// board.submit_leaf("A").unwrap();
449 /// let hash1 = board.order_new_published_root().unwrap();
450 /// assert_eq!(vec![hash1],board.get_all_published_roots().unwrap());
451 /// board.submit_leaf("B").unwrap();
452 /// assert_eq!(vec![hash1],board.get_all_published_roots().unwrap());
453 /// let hash2 = board.order_new_published_root().unwrap();
454 /// assert_eq!(vec![hash1,hash2],board.get_all_published_roots().unwrap());
455 ///```
456 pub fn get_all_published_roots(&self) -> Result<Vec<HashValue>,BulletinBoardError> {
457 self.backend.get_all_published_roots()
458 }
459
460 /// Get the currently committed to, but not yet published, hash values.
461 /// Equivalently, get all branches and leaves that do not have parents, and which are not included in the last published root.
462 ///
463 /// # Example
464 ///
465 /// ```
466 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
467 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
468 /// assert!(board.get_parentless_unpublished_hash_values().unwrap().is_empty());
469 /// let hash = board.submit_leaf("A").unwrap();
470 /// assert_eq!(vec![hash],board.get_parentless_unpublished_hash_values().unwrap());
471 /// board.order_new_published_root().unwrap();
472 /// assert!(board.get_parentless_unpublished_hash_values().unwrap().is_empty());
473 /// board.submit_leaf("B").unwrap();
474 /// assert_eq!(1,board.get_parentless_unpublished_hash_values().unwrap().len());
475 /// // will be the tree formed from "A" and "B", not "B" itself.
476 ///```
477 pub fn get_parentless_unpublished_hash_values(&self) -> Result<Vec<HashValue>,BulletinBoardError> {
478 let mut currently_used : Vec<HashValue> = self.forest_or_err()?.get_subtrees();
479 if let Some(published_root) = self.backend.get_most_recent_published_root()? {
480 if let Some(HashInfo{source:HashSource::Root(history),..}) = self.backend.get_hash_info(published_root)? {
481 currently_used.retain(|h|!history.elements.contains(h)) // remove already published elements.
482 } else { return Err(BulletinBoardError::PublishedRootHasNoInfo) }
483 }
484 Ok(currently_used)
485 }
486
487 /// Request a new published root. This will contain a reference to each tree in
488 /// the current forest. That is, each leaf or branch node that doesn't have a parent.
489 /// This will return an error if called twice in rapid succession (same timestamp) with nothing added in the meantime, as it would otherwise produce the same hash, and is almost certainly not what was intended anyway.
490 pub fn order_new_published_root(&mut self) -> Result<HashValue,BulletinBoardError> {
491 let history = RootHashHistory { timestamp: bb_timestamp_now()?, elements: self.forest_or_err()?.get_subtrees(), prior : self.get_most_recent_published_root()? };
492 let new_hash = history.compute_hash();
493 match self.backend.get_hash_info(new_hash)? {
494 Some(HashInfo{source:HashSource::Root(other_history), .. }) if other_history==history => {
495 Err(BulletinBoardError::PublishingNewRootInstantlyAfterLastRoot)
496 }
497 Some(hash_collision) => {
498 println!("Time to enter the lottery! Actually you have probably won without entering. You have just done a submission and found a hash collision between {:?} and {:?}",&hash_collision,&history);
499 std::thread::sleep(Duration::from_secs(1)); // work around - wait a second and retry, with a new timestamp.
500 self.order_new_published_root()
501 }
502 _ => { // no hash collision, all is good. Should go here 99.99999999999999999999999999999..% of the time.
503 let mut transaction = DatabaseTransaction::default();
504 transaction.add_root_hash(new_hash,history);
505 self.backend.publish(&transaction)?;
506 Ok(new_hash)
507 }
508 }
509 }
510
511 /// Get information about a HashValue, assuming it exists.
512 /// This includes its parent branch, if any, and how it is created.
513 ///
514 /// # Example
515 ///
516 /// ```
517 /// use merkle_tree_bulletin_board::hash_history::{HashSource, LeafHashHistory};
518 ///
519 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
520 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
521 /// let hash = board.submit_leaf("A").unwrap();
522 /// let info = board.get_hash_info(hash).unwrap();
523 /// assert_eq!(info.parent,None);
524 /// match info.source {
525 /// HashSource::Leaf(LeafHashHistory{data:Some(d),timestamp:_}) => assert_eq!(d,"A"),
526 /// _ => panic!("Not a leaf"),
527 /// }
528 /// ```
529 pub fn get_hash_info(&self, query:HashValue) -> Result<HashInfo,BulletinBoardError> {
530 self.backend.get_hash_info(query)?.ok_or_else(||BulletinBoardError::NoSuchHash)
531 }
532
533
534 /// Convenience method to get a whole proof chain at once, that is, the chain
535 /// from the provided hashvalue back to the most recent published root.
536 ///
537 /// This could easily be done via multiple calls
538 /// to the other APIs, and indeed that is how this is implemented.
539 ///
540 /// See [verifier::verify_proof] for how to verify the proof.
541 ///
542 /// # Example
543 ///
544 /// ```
545 /// use merkle_tree_bulletin_board::hash_history::{HashSource, BranchHashHistory};
546 /// use merkle_tree_bulletin_board::verifier::verify_proof;
547 ///
548 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
549 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
550 /// let hash_a = board.submit_leaf("a").unwrap();
551 /// let hash_b = board.submit_leaf("b").unwrap(); // made a branch out of a and b
552 /// let branch = board.get_parentless_unpublished_hash_values().unwrap()[0];
553 /// let root = board.order_new_published_root().unwrap();
554 /// let proof = board.get_proof_chain(hash_a).unwrap(); // get the inclusion proof for "a".
555 /// assert_eq!(proof.published_root.clone().unwrap().hash,root); // the proof is for this root
556 /// match &proof.published_root.as_ref().unwrap().source {
557 /// HashSource::Root(history) => assert_eq!(history.elements,vec![branch]),
558 /// _ => panic!("Not root")
559 /// }
560 /// assert_eq!(proof.chain.len(),2);
561 /// assert_eq!(proof.chain[0].hash,hash_a); // the leaf we asked for
562 /// assert_eq!(proof.chain[1].hash,branch); // the parent of the leaf we asked for
563 /// // the chain does not continue up as chain[1].hash is in the published root.
564 /// assert_eq!(proof.chain[1].source,
565 /// HashSource::Branch(BranchHashHistory{left: hash_a,right: hash_b}));
566 /// assert_eq!(verify_proof("a",root,&proof),None); // A thorough check.
567 /// ```
568 pub fn get_proof_chain(&self,query:HashValue) -> Result<FullProof,BulletinBoardError> {
569 let mut chain = vec![];
570 let mut node = query;
571 let mut published_root : Option<HashInfoWithHash> = {
572 if let Ok(Some(published_root_hash)) = self.get_most_recent_published_root() {
573 if let Ok(node_info) = self.get_hash_info(published_root_hash) {
574 Some(node_info.add_hash(published_root_hash))
575 } else { return Err(BulletinBoardError::ProofChainCorruptMissingPublishedNode(published_root_hash)); } // There is a break in the logic!!!
576 } else {None }
577 };
578 let published: HashSet<HashValue> = {
579 if let Some(info) = &published_root {
580 if let HashSource::Root(history) = &info.source {
581 HashSet::from_iter(history.elements.iter().cloned())
582 } else { return Err(BulletinBoardError::PublishedRootIsNotARoot(node)); } // There is a break in the logic!!!
583 } else { HashSet::default() }
584 };
585 loop {
586 if let Ok(node_info) = self.get_hash_info(node) {
587 chain.push(node_info.add_hash(node));
588 if published.contains(&node) { break; }
589 match node_info.parent {
590 Some(parent) => node=parent,
591 None => {
592 published_root=None; // got to the end of the line without finding something in the published root.
593 break
594 },
595 }
596 } else {
597 return Err(if query==node { BulletinBoardError::NoSuchHash } else { BulletinBoardError::ProofChainCorruptMissingPublishedNode(node)});
598 } // There is a break in the logic!!!
599 }
600 Ok(FullProof{ chain, published_root })
601 }
602
603 /// Censor a leaf!
604 ///
605 /// The system allows censorship of individual leaves. This is obviously generally undesirable and
606 /// to some extent undermines some of the point of a committed bulletin board. However,
607 /// most if not all countries have censorship laws that require operators of bulletin boards
608 /// to do censorship, regardless of whether or not you agree with such laws. The system is
609 /// designed to have the following properties in the presence of censorship:
610 /// * Censorship does not invalidate any published root.
611 /// * Censorship, even post a published root, does not invalidate or even affect any proof chain
612 /// other than the particular leaf being censored.
613 /// * Even the leaf being censored can still be verified should you happen to know
614 /// what had been present before the censorship.
615 /// * It is impossible to hide the fact that a censorship post published root has occurred.
616 /// * If you do not use the censorship feature, then the overhead of having it present is negligible.
617 ///
618 /// Censorship is accomplished by simply not providing the censored text; the leaf and its
619 /// associated hash value (and timestamp) are still present. The hash value cannot be modified
620 /// post published root, as that would invalidate the parent branch. The timestamp cannot
621 /// however be verified unless you happen to know the uncensored text.
622 ///
623 /// # Example
624 ///
625 /// ```
626 /// use merkle_tree_bulletin_board::hash_history::{HashSource, LeafHashHistory};
627 ///
628 /// let mut board = merkle_tree_bulletin_board::BulletinBoard::new(
629 /// merkle_tree_bulletin_board::backend_memory::BackendMemory::default()).unwrap();
630 /// let hash = board.submit_leaf("A").unwrap();
631 /// // get uncensored leaf
632 /// let info = board.get_hash_info(hash).unwrap();
633 /// assert_eq!(info.parent,None);
634 /// match info.source {
635 /// HashSource::Leaf(LeafHashHistory{data:Some(d),timestamp:_}) => assert_eq!(d,"A"),
636 /// _ => panic!("Not an uncensored leaf"),
637 /// }
638 ///
639 /// board.censor_leaf(hash).unwrap();
640 /// // get censored leaf. Identical except data is missing.
641 /// let info = board.get_hash_info(hash).unwrap();
642 /// assert_eq!(info.parent,None);
643 /// match info.source {
644 /// HashSource::Leaf(LeafHashHistory{data:None,timestamp:_}) => {}
645 /// _ => panic!("Not a censored leaf"),
646 /// }
647 /// ```
648 pub fn censor_leaf(&mut self,leaf_to_censor:HashValue) -> Result<(),BulletinBoardError> {
649 self.backend.censor_leaf(leaf_to_censor)
650 }
651}