I really didn't like the fact that the tests were purely CPU-intensive either.
Are the reviews really headed towards Tom's Hardware-grade quality?
How about moving files? Downloading files? Reading/writing tasks - which are the whole appeal of these drives? (See the sketch below for the kind of timed test I mean.)
Can you move 3 files around different partitions and send files over the network AND to a USB drive at a good speed?
How long does it take to start up to the boot screen (bootchart?), and how about starting/stopping common programs/services?
How about starting many different programs at the same time - how does this affect the times?
What about watching a movie, listening to ogg files, ripping files - low-CPU but operation-heavy, file-based workloads, which is CERTAINLY real world for a LOT of people.
... How about battery life with these tests too?
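To make it concrete, here's a minimal sketch of the kind of timed file-copy test I'm suggesting (Python; the paths and size are made up, and a serious benchmark would repeat runs and drop caches between them):

```python
import os
import shutil
import time

SRC = "/tmp/testfile.bin"                   # hypothetical source path
DST = "/mnt/other_partition/testfile.bin"   # hypothetical destination on another partition
SIZE_MIB = 256

# Create an incompressible test file.
with open(SRC, "wb") as f:
    f.write(os.urandom(SIZE_MIB * 1024 * 1024))

start = time.time()
shutil.copy(SRC, DST)
# Force the copy out of the filesystem cache so we time the disk, not RAM.
with open(DST, "rb+") as f:
    os.fsync(f.fileno())
elapsed = time.time() - start
print(f"copied {SIZE_MIB} MiB in {elapsed:.2f}s ({SIZE_MIB / elapsed:.1f} MiB/s)")
```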
On the issue of cache - I'm thinking he means the FILESYSTEM cache, where data is written to RAM first and only flushed to the disc later (via fsync and the like). How much that matters depends on the workload, the amount of RAM, etc.
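To illustrate (a rough sketch, arbitrary size): a buffered write returns at RAM speed, and only fsync actually waits for the drive.

```python
import os
import time

data = os.urandom(128 * 1024 * 1024)  # 128 MiB of incompressible data

with open("/tmp/cache_demo.bin", "wb") as f:
    t0 = time.time()
    f.write(data)            # lands in the kernel page cache (RAM)
    t1 = time.time()
    f.flush()                # push Python's own buffer down to the kernel
    os.fsync(f.fileno())     # block until the kernel writes it to disk
    t2 = time.time()

print(f"write() returned in {t1 - t0:.3f}s (mostly RAM speed)")
print(f"fsync() took {t2 - t1:.3f}s (actual disk speed)")
```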
A time-line graph (I'm just thinking performance plotted against time - I don't know the proper name) would probably do a better job of showing speeds, but I'd guess the performance stats were averages and didn't have any hooks into the kernel.
... Couldn't some stats be gathered from /proc/ at intervals? (for those in the know)
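They can - a rough sketch that samples /proc/diskstats once a second ("sda" is a placeholder for whatever the drive under test is called; /proc/diskstats counts 512-byte sectors):

```python
import time

DEVICE = "sda"        # placeholder: the block device under test
INTERVAL = 1.0        # seconds between samples
SECTOR_BYTES = 512    # /proc/diskstats counts 512-byte sectors

def sectors(device):
    """Return (sectors_read, sectors_written) for a block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[5]), int(fields[9])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

# One throughput sample per interval; plot the output to get the time-line graph.
prev = sectors(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = sectors(DEVICE)
    read_mb = (cur[0] - prev[0]) * SECTOR_BYTES / 1e6
    write_mb = (cur[1] - prev[1]) * SECTOR_BYTES / 1e6
    print(f"read {read_mb:6.1f} MB/s   write {write_mb:6.1f} MB/s")
    prev = cur
```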
Overall, my criticism of this article (and only this one, as Phoronix has written a few very nice and interesting articles - and this one annoys me enough to join and post about it) is that it takes two drives and wants to test any performance difference between them (real world or not)... BUT the tests that are performed are totally inadequate for that goal, and actually test the wrong area of the laptop altogether (the CPU).
... You may as well have just left it running in the corner of the room, or given it to Joe from the coffee shop on the corner and asked "do YOU think it's faster?".
Please, Phoronix, get people who THINK about the tests they perform and HOW to go about them (perhaps ask in the forums for ideas?) rather than people who just run whatever tests are to hand and decide they're relevant.