Struct git_odb::store::Handle

pub struct Handle<S>
where
    S: Deref<Target = Store> + Clone,
{
    pub refresh: RefreshMode,
    pub max_recursion_depth: usize,
    pub ignore_replacements: bool,
    /* private fields */
}

This effectively acts like a handle, but exists to be usable from the actual crate::Handle implementation, which adds caches on top. Each handle is cheap to clone and contains thread-local state for shared packs.
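For illustration, here is a minimal sketch of obtaining and configuring such a handle. It assumes an already-opened Arc<git_odb::Store> (opening the store itself is out of scope here); to_handle_arc() is the conversion used in the crate's own examples further below, and the chosen field values are arbitrary.

use std::sync::Arc;

fn configure_handle(store: Arc<git_odb::Store>) -> git_odb::store::Handle<Arc<git_odb::Store>> {
    // Handles are cheap to create and clone; each carries its own thread-local pack state.
    let mut handle = store.to_handle_arc();
    // The public fields shown in the declaration above can be adjusted directly.
    handle.ignore_replacements = true;
    handle.max_recursion_depth = 32; // arbitrary illustrative value
    handle
}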

Fields§

§refresh: RefreshMode

Defines what happens when there are no more indices to load.

§max_recursion_depth: usize

The maximum recursion depth for resolving ref-delta base objects, that is, objects referring to other objects within a pack. Recursive loops are possible only in purposefully crafted packs. This value doesn’t have to be large: in typical scenarios these kinds of objects are rare, and chains are supposedly even rarer.

§ignore_replacements: bool

If true, replacements will not be performed even if they are available.

Implementations§

Return the exact number of packed objects after loading all currently available indices as last seen on disk.

Given a prefix candidate with an object id and an initial hex_len, check if it only matches a single object within the entire object database and increment its hex_len by one until it is unambiguous. Return Ok(None) if no object with that prefix exists.

Find the only object matching prefix and return it as Ok(Some(Ok(<ObjectId>))), or return Ok(Some(Err(()))) if multiple different objects with the same prefix were found.

Return Ok(None) if no object matched the prefix.

Pass candidates to additionally obtain the set of all object ids matching prefix; the return value is then the same as if candidates had remained None.

Performance Note
  • Unless the handle’s refresh mode is set to Never, each lookup will trigger a refresh of the object database’s files on disk if the prefix doesn’t lead to ambiguous results.
  • Since all objects need to be examined to ensure unambiguous return values, all indices will be loaded after calling this method.
  • If candidates is Some(…), the traversal will continue to obtain all candidates, which takes more time as there is no early abort.
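For illustration, a minimal sketch of interpreting the nested return value; the handle and prefix are assumed to exist already, and error handling is simplified, so treat this as a usage sketch rather than a definitive pattern.

fn describe_prefix<S>(
    handle: &git_odb::store::Handle<S>,
    prefix: git_hash::Prefix,
) -> Result<(), Box<dyn std::error::Error>>
where
    S: std::ops::Deref<Target = git_odb::Store> + Clone,
{
    match handle.lookup_prefix(prefix, None)? {
        Some(Ok(id)) => println!("unambiguous match: {id}"),
        Some(Err(())) => println!("prefix is ambiguous"),
        None => println!("no object matches this prefix"),
    }
    Ok(())
}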
Examples found in repository
src/store_impls/dynamic/prefix.rs (line 111)
    pub fn disambiguate_prefix(
        &self,
        mut candidate: disambiguate::Candidate,
    ) -> Result<Option<git_hash::Prefix>, disambiguate::Error> {
        let max_hex_len = candidate.id().kind().len_in_hex();
        if candidate.hex_len() == max_hex_len {
            return Ok(self.contains(candidate.id()).then(|| candidate.to_prefix()));
        }

        while candidate.hex_len() != max_hex_len {
            let res = self.lookup_prefix(candidate.to_prefix(), None)?;
            match res {
                Some(Ok(_id)) => return Ok(Some(candidate.to_prefix())),
                Some(Err(())) => {
                    candidate.inc_hex_len();
                    continue;
                }
                None => return Ok(None),
            }
        }
        Ok(Some(candidate.to_prefix()))
    }

Return an iterator over all objects, possibly including duplicates: first the ones in all packs of all linked databases (via alternates), followed by all loose objects.
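For illustration, a minimal sketch of draining that iterator; it assumes each item is a Result wrapping an object id, which is an assumption about the item type rather than something stated on this page.

fn count_all_objects<S>(handle: &git_odb::store::Handle<S>) -> Result<usize, Box<dyn std::error::Error>>
where
    S: std::ops::Deref<Target = git_odb::Store> + Clone,
{
    let mut count = 0;
    for id in handle.iter()? {
        let _id = id?; // individual items may still fail, e.g. on unreadable loose objects
        count += 1;
    }
    Ok(count) // includes duplicates across packs and alternate databases
}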

Call once if pack ids are stored and later used for lookup, meaning they should always remain mapped and not be unloaded even if they disappear from disk. This must be called if there is a chance that git maintenance is happening while a pack is being created.
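For illustration, a minimal sketch of the intended call order; it assumes the method takes the handle mutably, and the method name is taken from the assertion in the location_by_oid example further below.

fn pin_packs<S>(handle: &mut git_odb::store::Handle<S>)
where
    S: std::ops::Deref<Target = git_odb::Store> + Clone,
{
    // Call once, before any pack ids obtained from lookups are stored for later use;
    // afterwards, packs stay mapped even if maintenance removes them from disk.
    handle.prevent_pack_unload();
}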

Return a shared reference to the contained store.

Examples found in repository
src/store_impls/dynamic/iter.rs (line 225)
    pub fn iter(&self) -> Result<AllObjects, dynamic::load_index::Error> {
        AllObjects::new(self.store_ref())
    }
More examples
src/store_impls/dynamic/handle.rs (line 362)
    pub fn into_arc(self) -> std::io::Result<super::Handle<Arc<super::Store>>> {
        let store = Arc::new(super::Store::try_from(self.store_ref())?);
        let mut cache = store.to_handle_arc();
        cache.refresh = self.refresh;
        cache.max_recursion_depth = self.max_recursion_depth;
        Ok(cache)
    }
src/store_impls/dynamic/find.rs (line 363)
    fn location_by_oid(
        &self,
        id: impl AsRef<git_hash::oid>,
        buf: &mut Vec<u8>,
    ) -> Option<git_pack::data::entry::Location> {
        assert!(
            matches!(self.token.as_ref(), Some(handle::Mode::KeepDeletedPacksAvailable)),
            "BUG: handle must be configured to `prevent_pack_unload()` before using this method"
        );

        assert!(self.store_ref().replacements.is_empty() || self.ignore_replacements, "Everything related to packing must not use replacements. These are not used here, but it should be turned off for good measure.");

        let id = id.as_ref();
        let mut snapshot = self.snapshot.borrow_mut();
        'outer: loop {
            {
                let marker = snapshot.marker;
                for (idx, index) in snapshot.indices.iter_mut().enumerate() {
                    if let Some(handle::index_lookup::Outcome {
                        object_index: handle::IndexForObjectInPack { pack_id, pack_offset },
                        index_file: _,
                        pack: possibly_pack,
                    }) = index.lookup(id)
                    {
                        let pack = match possibly_pack {
                            Some(pack) => pack,
                            None => match self.store.load_pack(pack_id, marker).ok()? {
                                Some(pack) => {
                                    *possibly_pack = Some(pack);
                                    possibly_pack.as_deref().expect("just put it in")
                                }
                                None => {
                                    // The pack wasn't available anymore so we are supposed to try another round with a fresh index
                                    match self.store.load_one_index(self.refresh, snapshot.marker).ok()? {
                                        Some(new_snapshot) => {
                                            *snapshot = new_snapshot;
                                            self.clear_cache();
                                            continue 'outer;
                                        }
                                        None => {
                                            // Nothing new in the index. It's somewhat unexpected to have no pack but also
                                            // no new index yet. We set the new index before removing any slots, so
                                            // this should be observable.
                                            return None;
                                        }
                                    }
                                }
                            },
                        };
                        let entry = pack.entry(pack_offset);

                        buf.resize(entry.decompressed_size.try_into().expect("representable size"), 0);
                        assert_eq!(pack.id, pack_id.to_intrinsic_pack_id(), "both ids must always match");

                        let res = pack.decompress_entry(&entry, buf).ok().map(|entry_size_past_header| {
                            git_pack::data::entry::Location {
                                pack_id: pack.id,
                                pack_offset,
                                entry_size: entry.header_size() + entry_size_past_header,
                            }
                        });

                        if idx != 0 {
                            snapshot.indices.swap(0, idx);
                        }
                        return res;
                    }
                }
            }

            match self.store.load_one_index(self.refresh, snapshot.marker).ok()? {
                Some(new_snapshot) => {
                    *snapshot = new_snapshot;
                    self.clear_cache();
                }
                None => return None,
            }
        }
    }

Return an owned store with shared ownership.

Set the handle to never cause ODB refreshes if an object could not be found.

Refreshing is the default, as typically all objects referenced in a git repository are contained in the local clone. More recently, however, this isn’t always the case due to sparse checkouts and other ways of keeping only a limited set of objects available locally.
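For illustration, a minimal sketch that disables refreshes by writing the public refresh field directly; it assumes RefreshMode::Never lives next to this type in git_odb::store, as the field declaration suggests.

fn disable_refresh<S>(handle: &mut git_odb::store::Handle<S>)
where
    S: std::ops::Deref<Target = git_odb::Store> + Clone,
{
    // Lookups that miss will no longer trigger a re-scan of the object database on disk.
    handle.refresh = git_odb::store::RefreshMode::Never;
}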

Return the current refresh mode.

Convert a ref-counted store into one that is ref-counted and thread-safe by creating a new Store.

Examples found in repository
src/cache.rs (line 24)
    pub fn into_arc(self) -> std::io::Result<Cache<crate::store::Handle<Arc<crate::Store>>>> {
        let inner = self.inner.into_arc()?;
        Ok(Cache {
            inner,
            new_pack_cache: self.new_pack_cache,
            new_object_cache: self.new_object_cache,
            pack_cache: self.pack_cache,
            object_cache: self.object_cache,
        })
    }

Convert a ref-counted store into one that is ref-counted and thread-safe by creating a new Store.

Trait Implementations§

Returns a copy of the value. Read more
Performs copy-assignment from source. Read more
Executes the destructor for this type. Read more
The error returned by try_find()
Returns true if the object exists in the database.
Like Find::try_find(), but with support for controlling the pack cache. A pack_cache can be used to speed up subsequent lookups, set it to crate::cache::Never if the workload isn’t suitable for caching. Read more
Find the pack’s location where an object with id can be found in the database, or None if there is no pack holding the object. Read more
Obtain a vector of all offsets, in index order, along with their object id.
Return the find::Entry for location if it is backed by a pack. Read more
Find an object matching id in the database while placing its raw, decoded data into buffer. A pack_cache can be used to speed up subsequent lookups, set it to crate::cache::Never if the workload isn’t suitable for caching. Read more
The error returned by try_find()
Returns true if the object exists in the database.
Find an object matching id in the database while placing its raw, undecoded data into buffer. Read more
The error returned by try_header().
Try to read the header of the object associated with id or return None if it could not be found.
The error type used for all trait methods. Read more
As write, but takes an input stream. This is commonly used for writing blobs directly without reading them to memory first.
Write objects using the intrinsic kind of hash into the database, returning id to reference it in subsequent reads.
As write, but takes an object kind along with its encoded bytes.

Auto Trait Implementations§

Blanket Implementations§

Gets the TypeId of self. Read more
Immutably borrows from an owned value. Read more
Mutably borrows from an owned value. Read more
Like try_find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired object type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired object type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired object type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired object type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired iterator type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired iterator type.
Like find(…), but flattens the Result<Option<_>> into a single Result making a non-existing object an error while returning the desired iterator type.

Returns the argument unchanged.

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

The resulting type after obtaining ownership.
Creates owned data from borrowed data, usually by cloning. Read more
Uses borrowed data to replace owned data, usually by cloning. Read more
The type returned in the event of a conversion error.
Performs the conversion.
The type returned in the event of a conversion error.
Performs the conversion.