Originally posted by Weasel
Other than that, in my book, a revision history is incremental and local: changing a file's state affects only a small subsection of the directory tree. I'd assume the working set for tracking commits is likewise small and local, which should give good performance on systems with paged memory. Requiring the whole repository state and history to sit in fast RAM would imply that access patterns are scattered across the entire state and that every commit touches all of it, which makes no sense. For example, adding a 100-byte file would mean reading 200 gigabytes from disk and writing 200 gigabytes back? I highly doubt it.
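To make the locality point concrete, here's a small sketch using Git as the example VCS (an assumption on my part, since the tool under discussion may differ): committing a small new file only creates a handful of tiny objects on disk, regardless of how large the rest of the repository is.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email test@example.com   # throwaway identity for the demo repo
git config user.name tester
head -c 100 /dev/zero > small.bin        # a 100-byte file
git add small.bin
git commit -qm "add 100-byte file"
# Only a handful of small loose objects were written:
# one blob (the file), one tree (the directory), one commit.
git count-objects
du -sh .git/objects
```

The point is that the on-disk work scales with the size of the change, not the size of the repository, so there's no reason the in-memory working set should scale with repository size either.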
It would be nice if someone more knowledgeable could explain why this tool seems so crappy.