use std::collections::hash_map::DefaultHasher;
use std::fs::File;
use std::hash::{Hash, Hasher};
use std::io::{BufReader, Read, Write};
use std::path::Path;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

/// Reads and deserializes the whole database file, returning an empty
/// vector if the file does not exist yet.
fn read_from_file<A: serde::de::DeserializeOwned>(database_name: &str) -> Vec<A> {
    let fullpath = format!("./SixthDatabase/{}.6db", database_name);
    let _backup_path = format!("./SixthDatabase/{}.6db.bak", database_name); // currently unused
    let path = Path::new(&fullpath);
    if !path.exists() {
        return vec![];
    }
    let file = File::open(path).unwrap();
    let mut bytes: Vec<u8> = vec![];
    BufReader::new(file)
        .read_to_end(&mut bytes)
        .expect("Could not read file");
    bincode::deserialize(&bytes).expect("Failed to deserialize")
}

/// Serializes `data` with bincode and writes it to
/// `./SixthDatabase/<name>.6db`, creating the directory if needed.
fn write_to_file<T>(data: &T, database_name: &str)
where
    T: serde::ser::Serialize,
{
    let fullpath = format!("./SixthDatabase/{}.6db", database_name);
    let path = Path::new(&fullpath);
    // Ensure the database directory exists before creating the file
    // (`create_dir_all` is a no-op when it already exists).
    std::fs::create_dir_all("./SixthDatabase")
        .expect("Could not create SixthDatabase directory");
    let mut f = File::create(path).expect("Error opening file");
    let serialized = bincode::serialize(data).expect("serialization failed");
    f.write_all(&serialized)
        .expect("Could not write serialized data to file");
}

/// Spawns the background saving thread: every 15 seconds it hashes the
/// in-memory data and writes it to disk only when the hash has changed.
fn make_thread<A: 'static>(instance: &Arc<Mutex<Database<A>>>)
where
    A: Send + serde::de::DeserializeOwned + serde::ser::Serialize + Hash,
{
    let reference = Arc::clone(instance);
    instance.lock().unwrap().inner.thread = Some(thread::spawn(move || {
        loop {
            let mut lock1 = reference
                .lock()
                .expect("Failed to obtain 6db lock in saving thread");
            let current_hash = hashme(&lock1.inner.data);
            if current_hash != lock1.inner.old_hash {
                lock1.inner.old_hash = current_hash;
                write_to_file(&lock1.inner.data, lock1.inner.database_name);
            }
            if lock1.inner.shutdown {
                break;
            }
            // Release the lock before sleeping; otherwise the mutex would be
            // held for the whole interval, blocking every other user.
            drop(lock1);
            thread::sleep(Duration::from_secs(15));
        }
    }));
}

/// Hashes any `Hash` value with the standard library's `DefaultHasher`.
fn hashme<T>(obj: &T) -> u64
where
    T: Hash,
{
    let mut hasher = DefaultHasher::new();
    obj.hash(&mut hasher);
    hasher.finish()
}

pub struct SixthDatabaseInner<A> {
    pub database_name: &'static str,
    pub data: Vec<A>,
    old_hash: u64,
    thread: Option<thread::JoinHandle<()>>,
    shutdown: bool,
}

pub struct Database<A> {
    pub inner: SixthDatabaseInner<A>,
}

impl<A: 'static> Database<A>
where
    A: Send + serde::de::DeserializeOwned + serde::ser::Serialize + Hash,
{
    /// Loads (or creates) the database and starts the saving thread.
    pub fn new(db_name: &'static str) -> Arc<Mutex<Database<A>>> {
        let from_disk = read_from_file(db_name);
        let hashed = hashme(&from_disk);
        let object = Arc::new(Mutex::new(Database {
            inner: SixthDatabaseInner {
                database_name: db_name,
                data: from_disk,
                old_hash: hashed,
                thread: None,
                shutdown: false,
            },
        }));
        make_thread(&object);
        object
    }

    /// Signals the saving thread to stop and waits for it to finish.
    /// This takes the shared handle rather than `self`: the saving thread
    /// still holds a clone of the `Arc`, so the `Database` cannot be moved
    /// out of the mutex at cleanup time.
    pub fn shutdown(instance: &Arc<Mutex<Database<A>>>) {
        let handle = {
            let mut lock = instance
                .lock()
                .expect("Failed to obtain 6db lock at cleanup");
            lock.inner.shutdown = true;
            lock.inner.thread.take()
        };
        // The saving thread may be mid-sleep, so this can block for up to
        // one full 15-second interval before it observes the flag.
        if let Some(handle) = handle {
            handle.join().expect("could not join thread at cleanup");
        }
    }
}
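The saving thread's change detection (hash the data, compare against `old_hash`, write only on a difference) can be sketched standalone. `hashme` is copied from this file; the `main` body and the sample data are illustrative only:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Same helper as in the database code: within one process run, equal
// values always produce equal hashes.
fn hashme<T: Hash>(obj: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    obj.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let mut data = vec![1u32, 2, 3];
    let mut old_hash = hashme(&data);

    // Unchanged data: the saver loop would skip the disk write.
    assert_eq!(hashme(&data), old_hash);

    // Mutated data: the hash differs, so a write would be triggered
    // and the new hash recorded.
    data.push(4);
    let current = hashme(&data);
    assert_ne!(current, old_hash);
    old_hash = current;
    assert_eq!(hashme(&data), old_hash);
}
```

Note that `DefaultHasher` is not specified to be stable across Rust versions, so it is only suitable for in-process change detection like this, not for hashes persisted to disk.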