- we can use packfiles to make changing many small files much cheaper, but because a dumb-HTTP clone scans EVERY packfile's .idx file to locate objects, packfiles will make https clones slower. pulls via s3 will also be slower unless we do something smart, like keeping an index mapping object hash -> which packfile contains it. especially needed for the web frontend, which does lots of tiny fetches of individual objects by hash.
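a minimal sketch of the kind of index i mean: invert the per-pack hash lists once, so a fetch-by-hash is a single lookup instead of a scan over every .idx. (the pack names and hashes here are made up; in reality the hash lists would come from parsing the .idx files.)

```python
def build_object_index(packfiles: dict[str, list[str]]) -> dict[str, str]:
    """Reverse-map each object hash to the packfile that contains it,
    so fetching one object by hash is a dict lookup, not an idx scan."""
    index: dict[str, str] = {}
    for pack_name, hashes in packfiles.items():
        for obj_hash in hashes:
            index[obj_hash] = pack_name  # later packs win on duplicates
    return index

# hypothetical pack contents
packs = {
    "pack-1.pack": ["aaa111", "bbb222"],
    "pack-2.pack": ["ccc333"],
}
index = build_object_index(packs)
print(index["ccc333"])  # -> pack-2.pack: which packfile to fetch
```

the index itself could be a single object the frontend fetches once, rather than N idx files.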
- i think we can build an index of small files via a transaction-log type thing: use s3's 'create if not already exists' (conditional PUT), a series of increasing-integer tlog files (the web frontend can't list files, so they have to be predictably named), and some sort of immutable tree of hashes something something??
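a sketch of the tlog append loop, with s3's create-if-not-exists stood in by an in-memory fake (in real s3 this would be a PutObject with `IfNoneMatch="*"`, which fails with 412 if the key exists; all names here are hypothetical):

```python
class FakeS3:
    """Stand-in for S3's conditional create (PutObject + IfNoneMatch="*")."""
    def __init__(self):
        self.objects: dict[str, bytes] = {}

    def put_if_absent(self, key: str, body: bytes) -> bool:
        # real S3 would return a 412 PreconditionFailed if the key exists
        if key in self.objects:
            return False
        self.objects[key] = body
        return True

def append_tlog(s3: FakeS3, entry: bytes, start_seq: int = 0) -> int:
    """Append an entry at the next free sequence number. Names are
    predictable (tlog/0, tlog/1, ...) so the frontend can fetch them
    without being able to list the bucket."""
    seq = start_seq
    while not s3.put_if_absent(f"tlog/{seq}", entry):
        seq += 1  # someone else won this slot; retry at the next one
    return seq

s3 = FakeS3()
append_tlog(s3, b"entry-a")  # -> 0
append_tlog(s3, b"entry-b")  # -> 1 (first attempt at tlog/0 loses)
```

the conditional create is what makes concurrent appends safe: two writers racing for the same sequence number can't both succeed, so the loser just moves to the next slot.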
- probably eventually we should also put the canonical state of refs in the transaction log, and have the ref files for dumb clones be just a 'best effort' mirror of it. currently it's possible for two people to push to the same branch at the same time and have one clobber the other's changes, while both of their git clients report a successful push.
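one way refs-in-the-tlog could fix the clobbering: record each ref update as a compare-and-swap entry (ref, old sha, new sha) and reject any entry whose old sha no longer matches when the log is replayed, so the loser of a concurrent push gets an error instead of silently overwriting. a toy sketch, all names invented:

```python
def apply_ref_updates(updates):
    """Replay (ref, old_sha, new_sha) tlog entries in order. An update
    only lands if its old_sha matches the ref's current value, so the
    second of two racing pushes is rejected rather than clobbering."""
    refs: dict[str, str] = {}
    results = []
    for ref, old_sha, new_sha in updates:
        if refs.get(ref) == old_sha:
            refs[ref] = new_sha
            results.append("ok")
        else:
            results.append("rejected")
    return refs, results

# two pushes race from the same base commit "abc": the second is stale
log = [
    ("refs/heads/main", None, "abc"),
    ("refs/heads/main", "abc", "def"),  # push 1
    ("refs/heads/main", "abc", "fff"),  # push 2, old sha no longer matches
]
refs, results = apply_ref_updates(log)
print(results)  # -> ['ok', 'ok', 'rejected']
```

combined with the conditional-create appends above this gives a total order on ref updates, which is exactly what plain file writes to s3 don't provide.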