There are ways of mitigating the poor cache behavior of a linked list. You can have a linked list whose nodes hold fixed-size blocks of memory (ideally aligned to a cache-line boundary and no larger than a cache line), each of which contains several elements. You still need sequential access up to a point, but deletion and insertion remain very fast, and list growth costs nothing until a block reaches the cache-line size. The implementation can get a bit complicated, though. I've seen this on an older device, where it was used to save memory (not necessarily to be faster). I should benchmark this idea someday to see how it pans out.
Another Potential CPU Optimization For Mesa: Quadratic Probing
-
Originally posted by indepe View Post: Yes, but you can't use arrays/vectors containing structs or objects within the array, if you want to pass around pointers to these structs and objects, that need to remain valid while there are insertions and deletions.
Originally posted by indepe View Post: (Or if you want two arrays over the same set of structs, and copying them is not an option.)
Originally posted by indepe View Post: In my experience, that's the typical use case for linked lists.
Last edited by pal666; 11 February 2017, 01:21 PM.
Comment
-
Originally posted by pal666 View Post: You can solve this the same way as any other computer science problem, and it can still be faster than a list.
Originally posted by pal666 View Post: In this case, with intrusive containers you need additional members, so you have to know in advance which lists you will insert objects of your type into. Good luck putting one object in N lists. Again, this is easily solvable with a vector of pointers.
Originally posted by pal666 View Post: In my experience, the typical use case for lists is "everything is a list", because they are taught early and are relatively easy to implement in C, where you don't have standard containers.
I was close to using the "better string library" (which claims to be faster than std::string, and has C++ wrappers), but then decided to do my own little thing, customized for my specific requirements and usage patterns.
Comment
-
Originally posted by atomsymbol: Just a note about benchmarking linked lists:
The performance of (modified or new) linked lists after application warmup is lower than their initial performance. Initially an application is more likely to allocate memory sequentially which means that a linked list can be nicely laid out in memory. After the application reaches (global or local) peak memory consumption the addresses returned by the memory allocator are more-or-less randomized which means that newly allocated list elements will be randomly distributed as well. An uncached random memory access is much slower than sequential/linear access.
Using vectors of structs or objects is simply not an alternative if you need stable pointers, so there isn't much point in making that comparison. The danger is that saying "vectors of structs are faster" creates the impression that the same applies to vectors of pointers.
- Likes 1
Comment
-
Originally posted by indepe View Post: If speed of sequential access decides (iteration and/or modification), linked lists are faster than a vector of pointers, since the vector is an additional object you need to access and manage.
Comment
-
Comment