Edited by tkaiser at Tue Apr 7, 2015 03:46
Suman replied at Tue Apr 7, 2015 02:15
Perhaps 'dd' is giving a fair approximation of what performance to expect from the SATA disk.
That might be the case in your specific situation (where you tested with a 1 GB file size and also used the appropriate dd flags to measure real disk I/O and not just partially cached/buffered writes).
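To illustrate what 'appropriate flags' means, a minimal sketch (the test.img file name is just an example; conv=fdatasync and oflag=direct are GNU dd options):

# without further flags dd reports buffered throughput (RAM + disk)
dd if=/dev/zero of=/sata/evo840/test.img bs=1M count=1024
# flush the data to disk before dd calculates the transfer rate
dd if=/dev/zero of=/sata/evo840/test.img bs=1M count=1024 conv=fdatasync
# or bypass the page cache entirely
dd if=/dev/zero of=/sata/evo840/test.img bs=1M count=1024 oflag=direct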
When people use dd they normally omit any further flags (you call this 'asynchronous writes', I would call it 'buffers+disk'), which leads to results that depend on the kernel (and on the filesystem implementation when not testing on raw devices, since buffer/cache strategies might change). The results then depend on the caching strategy, on the available RAM (which might differ between test runs when other tasks allocate more or less memory) and on the test file size:
root@bananapi:/sata# dd if=/dev/zero of=/sata/evo840/2048M bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB) copied, 49,9121 s, 43,0 MB/s
root@bananapi:/sata# dd if=/dev/zero of=/sata/evo840/1024M bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB) copied, 22,7586 s, 47,2 MB/s
root@bananapi:/sata# dd if=/dev/zero of=/sata/evo840/512M bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 10,3897 s, 51,7 MB/s
root@bananapi:/sata# dd if=/dev/zero of=/sata/evo840/256M bs=1M count=256
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 3,91015 s, 68,7 MB/s

Misleading/unreliable stuff like the latter test (a 256 MB test file vs. 1 GB RAM), when run against a raw device, will then be published somewhere on the net as the 'raw SATA write speed of this device'. And that's the problem with dd: you can't trust the results you find unless you know exactly how the tester called it. Tools like bonnie++ keep that in mind, use an appropriate test file size and show a warning if you try to test the wrong things.
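A quick way to see how much of such a 'too good' number is just RAM is to time the flush that happens after dd already returned (a sketch reusing the 256 MB test from above; add the sync time to dd's runtime before trusting the MB/s figure):

dd if=/dev/zero of=/sata/evo840/256M bs=1M count=256
# dd already printed its rate, but buffers are still being written:
time sync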
And with dd you will also easily end up in rewrite situations where you actually wanted to test write performance (normally that doesn't matter much, but it becomes important if you use modern filesystems like btrfs/ZFS together with block sizes smaller than the block size of the fs in question -- please compare with the results at the end of this post). Dedicated benchmark tools prevent you from making this mistake since they remove their test files after successful completion and do separate write/rewrite tests on their own.
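A sketch of how this happens unnoticed (file name reused from the tests above): the second invocation overwrites the existing file and therefore measures rewrites, not writes:

# first run: the file gets created --> write test
dd if=/dev/zero of=/sata/evo840/1024M bs=1M count=1024
# second run: the file already exists --> rewrite test
dd if=/dev/zero of=/sata/evo840/1024M bs=1M count=1024
# remove the test file before measuring writes again
rm /sata/evo840/1024M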
In other words: it's very easy to call dd/hdparm the wrong way, so you can't rely on results someone else published. Real benchmark tools like iozone/bonnie++ take care of that -- they even report CPU usage while running the test (very important on slow platforms like SBCs -- I find it useful to run an 'iostat 5' in a different terminal in parallel to get a real clue what's going on). And in my opinion it's more convenient to let bonnie++ create a CSV file covering the necessary number of test runs, where you can look up strange issues later (since not only performance figures but also the boundary conditions are recorded) and you get way more information than just sequential transfer speeds:
bonnie++ -d /sata/evo840 -m"EVO 840 btrfs" -x 10 -q 2>/tmp/bonnie-stderr >/tmp/bonnie-results.csv
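For comparison, an iozone call I'd use for the same purpose (a sketch: -i 0/1/2 select the write/rewrite, read/reread and random tests, -e includes flush times in the results, -I uses direct I/O so the small 100M test size isn't distorted by caching; adjust sizes and record lengths to your setup):

iozone -e -I -a -s 100M -r 4k -r 1024k -i 0 -i 1 -i 2
# and in a second terminal, to see what the disk is really doing:
iostat 5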