
Support status of SATA port multipliers connected to A20

tkaiser  
Edited by tkaiser at Fri Jan 23, 2015 02:41

I did some testing with kernel 3.19.0-rc5 and a cheap JMB321 PM. With this kernel version it's no longer necessary to patch the ahci_sunxi code for use with or without a PM; the driver simply detects which mode to operate in. And a JMB321 seems to further decrease SATA read throughput (from 200 to 135 MB/s).

EDIT: Both assumptions were wrong, see next post.

Still, it seems to make only a small difference in real-world NAS scenarios as long as just one disk behind the PM is accessed, since in that case the bottleneck is the Banana's NIC implementation: http://forum.lemaker.org/thread-11857-3-1-3.html

Tests with more than one disk connected to the PM are scheduled for next weekend (both round-robin and concurrent use).

tkaiser  
Update: I was completely wrong regarding PMP support status in 3.18/3.19. It's still necessary to patch drivers/ata/ahci_sunxi.c (or load it with different parameters, but I don't know how yet) and unset AHCI_HFLAG_NO_PMP therein. What led me to my false assumption was that my JMB321 handed the disk attached to its port 1 to the BPi. But in this mode it does not work as a port multiplier and just passes SATA through ("controller can't do PMP, turning off CAP_PMP" in dmesg output).
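
By the way, a quick way to see which mode the driver ended up in is to grep the kernel log for exactly these messages (a trivial sketch):
  # with AHCI_HFLAG_NO_PMP still set, attaching a PM produces:
  #   "controller can't do PMP, turning off CAP_PMP"
  dmesg | grep -iE 'pmp|port multiplier'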

And I also found out where the decrease in read performance originates: the JMB321 has a negotiation problem with my Samsung EVO 840 SSD. The link always negotiates at SATA 1.0 speed ("SATA link up 1.5 Gbps"):
  root@bananapi:~# dmesg | grep -i sata
  [    1.264561] ahci-sunxi 1c18000.sata: SSS flag set, parallel bus scan disabled
  [    1.264617] ahci-sunxi 1c18000.sata: AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl platform mode
  [    1.264644] ahci-sunxi 1c18000.sata: flags: ncq sntf stag pm led clo only pmp pio slum part ccc
  [    1.265848] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 30
  [    1.814499] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    2.374793] ata1.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
  [    3.144791] ata1.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    4.214742] ata1.02: SATA link down (SStatus 0 SControl 0)
  [    5.284744] ata1.03: SATA link down (SStatus 0 SControl 0)
  [    6.354741] ata1.04: SATA link down (SStatus 0 SControl 0)
I also did some tests with a Seagate Barracuda that negotiated SATA 2.0 (3.0 Gbps) with the JMB321 and got the following throughput values using "iozone -a -g 2000m -s 2000m -i 0 -i 1 -r${recordsize}k" (columns: file size and record size in kB, then sequential write, rewrite, read and reread in kB/s):
  2048000       4   41995   41862   164656   164953
  2048000      32   41063   41698   161294   160812
  2048000     512   41245   41344   150711   149516
  2048000   16384   41458   41268   152228   151285
These numbers are nearly the same with or without the JMB321 in between. Next tests will involve concurrent access to different disks connected to the PM.

tkaiser  
Edited by tkaiser at Sat Jan 24, 2015 05:59

Next step: I connected 4 different disks to the JMB321 PM:

  • Samsung EVO 840 128 GB
  • 3 TB Seagate Barracuda
  • another identical 3 TB Seagate Barracuda
  • 4 TB WD Green WD40EZRX


For whatever reason the PM enumerates the disks differently and swaps the first two:
  /dev/sda:
          Model Number:       ST3000DM001-9YN166
          Transport:          Serial, SATA Rev 3.0

  /dev/sdb:
          Model Number:       Samsung SSD 840 EVO 120GB
          Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

  /dev/sdc:
          Model Number:       ST3000DM001-9YN166
          Transport:          Serial, SATA Rev 3.0

  /dev/sdd:
          Model Number:       WDC WD40EZRX-00SPEB0
          Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
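
For reference, this kind of listing comes from 'hdparm -I'; querying all four disks in one go could look roughly like this (a sketch):
  for d in /dev/sd[a-d] ; do
      echo "${d}:"
      hdparm -I $d | grep -E 'Model Number|Transport'
  done
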
Funnily enough the PM only connects the Seagates at SATA II speed; both the SSD and the WD Green end up with just 1.5 Gbps (dmesg output):
  [    1.824879] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    2.175188] ata1.00: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
  [    2.735185] ata1.01: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
  [    3.925184] ata1.02: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    4.275182] ata1.03: SATA link up 1.5 Gbps (SStatus 113 SControl 320)
  [    5.345137] ata1.04: SATA link down (SStatus 0 SControl 0)
On all 4 disks I created one primary partition with ext4:
  /dev/sda1 /mnt/sda ext4 rw,relatime,data=ordered 0 0
  /dev/sdb1 /mnt/sdb ext4 rw,relatime,stripe=384,data=ordered 0 0
  /dev/sdc1 /mnt/sdc ext4 rw,relatime,data=ordered 0 0
  /dev/sdd1 /mnt/sdd ext4 rw,relatime,data=ordered 0 0
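
Creating and mounting those filesystems boils down to something like this (a sketch; it assumes the single primary partition on each disk already exists, e.g. created with fdisk):
  for d in sda sdb sdc sdd ; do
      mkfs.ext4 /dev/${d}1
      mkdir -p /mnt/${d}
      mount /dev/${d}1 /mnt/${d}
  done
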
Then I tried to start three iozone runs on the 3 HDDs (always using "iozone -a -g 2000m -s 2000m -i 0 -i 1 -r32k"). I started on the 2nd Barracuda (sdc), waited a minute, started a second iozone run on the 1st Barracuda (sda) and after a while a third one on the WD Green (sdd); a sketch of this staggered start follows after the results. The results after many, many minutes of waiting and the PM going berserk:

sdc:
  Error reading block 8734 b5d00000
  read: Input/output error
sda:
  Error in file: Found '0' Expecting '6d6d6d6d6d6d6d6d' addr b5c06000
  Error in file: Position 1299210240
  Record # 39648 Record size 32 kb
  where b5c06000 loop 24576
sdd:
  2048000      32   34818   37729    11385    51757
  iozone test complete.
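
The staggered start described above can be scripted roughly like this (a sketch; mount points as above, log file names made up for illustration):
  ( cd /mnt/sdc && iozone -a -g 2000m -s 2000m -i 0 -i 1 -r32k ) > iozone-sdc.log &
  sleep 60
  ( cd /mnt/sda && iozone -a -g 2000m -s 2000m -i 0 -i 1 -r32k ) > iozone-sda.log &
  sleep 60
  ( cd /mnt/sdd && iozone -a -g 2000m -s 2000m -i 0 -i 1 -r32k ) > iozone-sdd.log &
  wait   # wait for all three runs to finish
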
I repeated this twice. Always the same: after some time the disks randomly spin up and down and the logs fill up with SATA error messages: http://pastebin.com/maTgAakE

While I made the second test run I called "iostat 3" in another shell session to have a look at how the accesses to the 3 different disks are balanced. It's an insane joke: the disk that most recently started getting heavy accesses always wins and outperforms the others: http://pastebin.com/1mAfJxxF

sdc started with write throughput rates above 35 MB/sec. As soon as I started the second test run on sda, the throughput on sdc dropped below 5 MB/sec and sda got preferred. The same again when I started on sdd: both sdc and sda were throttled to a minimum and sdd got maximum throughput. And then suddenly the whole SATA subsystem stalled and there was no throughput on any device any more. Very reliable, such a setup.

From then on the log fills up with further ata1.* error messages; they stop only after a complete restart of the system. Next step: install the very same 3.19.0-rc5 kernel without the PMP support patch and try the three disks individually (I did this yesterday with the PM in between, but with only the SSD and one Barracuda connected to the PM and no concurrent accesses to the disks).

tkaiser  
Edited by tkaiser at Fri Jan 23, 2015 11:22

After exchanging the kernel (now without the PMP support patch) I connected each of the 3 disks directly and got results as expected: they negotiated at the A20's maximum SATA speed (3 Gbps), and performance and stability were good.

The former sdc (Seagate Barracuda -- the one that always spun up and down randomly after the PM went berserk):
  [    1.276316] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 30
  [    1.924870] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    1.942354] ata1.00: ATA-8: ST3000DM001-9YN166, CC4B, max UDMA/133
  [    1.942376] ata1.00: 5860533168 sectors, multi 0: LBA48
  [    1.959915] ata1.00: configured for UDMA/133
And the iozone results:
  2048000      32   40205   41571   156235   156535
The former sdb (Samsung EVO 840, which negotiated behind the PM at just 1.5 Gbps):
  [    1.276372] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 30
  [    1.624921] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    1.626910] ata1.00: ATA-9: Samsung SSD 840 EVO 120GB, EXT0BB0Q, max UDMA/133
  [    1.626935] ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32)
  [    1.628211] ata1.00: configured for UDMA/133
Iozone results as expected (here the bottleneck is the A20 SoC itself, not the PM or its SATA I negotiation):
  2048000      32   42994   43072   207043   214731
And the former sdd (WD Green, which also negotiated behind the PM at just 1.5 Gbps):
  [    1.276390] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 30
  [    1.624932] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [    1.625493] ata1.00: ATA-9: WDC WD40EZRX-00SPEB0, 80.00A80, max UDMA/133
  [    1.625517] ata1.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32)
  [    1.626069] ata1.00: configured for UDMA/133
And iozone results:
  2048000      32   41555   42869   146936   147736
Conclusion: Cheap and crappy PMs and/or driver support aren't ready for prime time yet. Apart from the negotiation problems (just 1.5 Gbps) the stability is horrible when accessing two or more disks in parallel.

tkaiser  
Last attempt: a RAID-1 consisting of two 300 GB partitions on the Seagate Barracudas. My setup: on port 1 of the PM sits a Samsung 840 EVO (containing the rootfs), and on ports 4 and 5 the two Seagates:
  /dev/sda1 / ext4 rw,noatime,nodiratime,errors=remount-ro,commit=600,stripe=384,data=ordered 0 0
  /dev/sdb1 /mnt btrfs rw,relatime,space_cache 0 0
Since we're dealing with totally unreliable hardware ('tablet grade' SBC and a crappy cheap port multiplier) I used btrfs to create the RAID-1 (since I like to know when data gets corrupted). I used the usual iozone approach to test (2 GB file size and 4 different record sizes), but the PM failed after just 4 GB written: http://pastebin.com/qJPjz1bX

tkaiser  
Edited by tkaiser at Sun Jan 25, 2015 08:46

I exchanged the SATA cables, used a different PSU for the port multiplier, recreated the btrfs RAID-1 and tried it again with the Samsung EVO 840 (sda, containing the rootfs) and the 2 Barracudas as sdb and sdc connected to the JMB321:
  mkfs.btrfs /dev/sdb1
  mount /dev/sdb1 /mnt/sdb
  btrfs device add /dev/sdc1 /mnt/sdb
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/sdb
Same procedure as last time. I used the usual iozone stress test approach with 2 GB file size and different record sizes:
  cd /mnt/sdb && for i in 1 2 4 16 32 64 512 1024 16384 ; do iozone -a -g 2000m -s 2000m -i 0 -i 1 -r${i}k ; done
The first 2 runs succeeded with really bad write performance:
  2048000       1   18609    8377   115707   117259
  2048000       2   19507    8922   130634   132209
And then, after a while, the system became unusable and completely stalled. Here are the contents of /var/log/syslog, containing the usual SATA related error messages: http://pastebin.com/RqDFwngH

Btrfs detected checksum errors from time to time (please keep this in mind when playing with crappy PMs: Use a filesystem that is able to detect data corruption!):
  Jan 25 15:03:11 localhost kernel: [  191.678247] BTRFS: device fsid 3817230c-e376-4c40-b999-f682612d7042 devid 2 transid 66 /dev/sdc1
  Jan 25 15:03:29 localhost kernel: [  209.657593] BTRFS info (device sdc1): disk space caching is enabled
  Jan 25 15:10:20 localhost kernel: [  621.086986] BTRFS info (device sdc1): csum failed ino 257 off 96804864 csum 1680228055 expected csum 4264040775
  Jan 25 15:10:20 localhost kernel: [  621.097569] BTRFS: read error corrected: ino 257 off 96804864 (dev /dev/sdc1 sector 4472264)
  Jan 25 15:10:21 localhost kernel: [  621.670180] BTRFS info (device sdc1): csum failed ino 257 off 104812544 csum 2668347206 expected csum 4264040775
  Jan 25 15:10:21 localhost kernel: [  621.685841] BTRFS: read error corrected: ino 257 off 104812544 (dev /dev/sdc1 sector 4487904)
  Jan 25 15:12:56 localhost kernel: [  777.011325] BTRFS info (device sdc1): csum failed ino 257 off 1056989184 csum 1338888974 expected csum 4264040775
  Jan 25 15:12:56 localhost kernel: [  777.027542] BTRFS: read error corrected: ino 257 off 1056989184 (dev /dev/sdc1 sector 163752)
  Jan 25 15:12:56 localhost kernel: [  777.029785] BTRFS info (device sdc1): csum failed ino 257 off 1057013760 csum 3292844905 expected csum 4264040775
  Jan 25 15:12:56 localhost kernel: [  777.030834] BTRFS: read error corrected: ino 257 off 1057013760 (dev /dev/sdc1 sector 163800)
  Jan 25 15:13:01 localhost kernel: [  781.885251] BTRFS info (device sdc1): csum failed ino 257 off 1104207872 csum 784105141 expected csum 4264040775
  Jan 25 15:13:01 localhost kernel: [  781.896608] BTRFS: read error corrected: ino 257 off 1104207872 (dev /dev/sdc1 sector 255976)
  Jan 25 15:13:01 localhost kernel: [  781.899471] BTRFS info (device sdc1): csum failed ino 257 off 1104240640 csum 4268529224 expected csum 4264040775
  Jan 25 15:13:01 localhost kernel: [  781.900184] BTRFS: read error corrected: ino 257 off 1104240640 (dev /dev/sdc1 sector 256040)
  Jan 25 15:13:20 localhost kernel: [  800.649729] BTRFS info (device sdc1): csum failed ino 257 off 1214877696 csum 2481319610 expected csum 4264040775
  Jan 25 15:13:20 localhost kernel: [  800.662999] BTRFS: read error corrected: ino 257 off 1214877696 (dev /dev/sdc1 sector 472128)
  Jan 25 15:13:46 localhost kernel: [  827.355750] BTRFS info (device sdc1): csum failed ino 257 off 1389998080 csum 1380584253 expected csum 4264040775
  Jan 25 15:13:46 localhost kernel: [  827.367952] BTRFS: read error corrected: ino 257 off 1389998080 (dev /dev/sdc1 sector 814176)
  Jan 25 15:15:01 localhost kernel: [  901.675566] BTRFS info (device sdc1): csum failed ino 257 off 1915703296 csum 2852643740 expected csum 4264040775
  Jan 25 15:15:01 localhost kernel: [  901.692484] BTRFS: read error corrected: ino 257 off 1915703296 (dev /dev/sdc1 sector 1840952)
  Jan 25 15:15:14 localhost kernel: [  914.799180] BTRFS info (device sdc1): csum failed ino 257 off 2028199936 csum 2299470082 expected csum 4264040775
  Jan 25 15:15:14 localhost kernel: [  914.810925] BTRFS: read error corrected: ino 257 off 2028199936 (dev /dev/sdc1 sector 8407544)
  Jan 25 15:15:39 localhost kernel: [  939.543478] BTRFS info (device sdc1): csum failed ino 257 off 1109454848 csum 3324333784 expected csum 4264040775
  Jan 25 15:15:39 localhost kernel: [  939.560568] BTRFS info (device sdc1): csum failed ino 257 off 1109454848 csum 3324333784 expected csum 4264040775
  Jan 25 15:15:39 localhost kernel: [  939.570122] BTRFS: read error corrected: ino 257 off 1109454848 (dev /dev/sdc1 sector 4499960)
  Jan 25 15:15:42 localhost kernel: [  943.177962] BTRFS info (device sdc1): csum failed ino 257 off 1503965184 csum 3373640680 expected csum 4264040775
  Jan 25 15:15:42 localhost kernel: [  943.187910] BTRFS info (device sdc1): csum failed ino 257 off 1503965184 csum 3373640680 expected csum 4264040775
  Jan 25 15:15:42 localhost kernel: [  943.214882] BTRFS: read error corrected: ino 257 off 1503965184 (dev /dev/sdc1 sector 5201768)
And even the disks themselves reported CRC checksum errors 'on the wire' (or, let's better say, as a result of an unreliable port multiplier in between).

Prior to this round of tests today (output of 'smartctl -a' for the disk devices in question):
  sdb: 199 UDMA_CRC_Error_Count    0x003e   197   197   000    Old_age   Always       -       121
  sdc: 199 UDMA_CRC_Error_Count    0x003e   197   197   000    Old_age   Always       -       49
After these tests:
  sdb: 199 UDMA_CRC_Error_Count    0x003e   200   197   000    Old_age   Always       -       196
  sdc: 199 UDMA_CRC_Error_Count    0x003e   200   197   000    Old_age   Always       -       81
The checksum error counters of both disks were 0 before I started with this port multiplier crap.
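
If you want to run similar integrity checks yourself, something along these lines does the job (a sketch; mount point and device names as above, and it assumes a reasonably recent btrfs-progs):
  btrfs scrub start -B /mnt/sdb            # re-read all data/metadata and verify btrfs checksums
  btrfs device stats /mnt/sdb              # per-device error counters kept by btrfs
  smartctl -a /dev/sdb | grep -i udma_crc  # CRC errors the drive itself saw on the SATA link
  smartctl -a /dev/sdc | grep -i udma_crc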

tkaiser  
Edited by tkaiser at Mon Jan 26, 2015 05:52

Maybe I found the culprit for the instability that occurs when the PM is constantly under high load. I realized that the problem always occurred after periods of intensive usage and that the JMB321 chip then felt quite hot... so I started another test run with 5 minute breaks between the iozone runs.

I created another partition on each of the Barracudas and formatted them with btrfs. Then I let iozone run on one disk, paused 5 minutes, ran the test on the other and so on... No more errors.
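
The run/pause cycle is trivial to script, roughly like this (a sketch; mount points of the two new btrfs partitions assumed):
  for dir in /mnt/sdb /mnt/sdc ; do
      ( cd $dir && iozone -a -g 2000m -s 2000m -i 0 -i 1 -r32k )
      sleep 300   # 5 minute cool-down for the JMB321 between runs
  done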

Then I recreated the 300 GB RAID-1 and started the same series of tests just on this partition. In the meantime over 30 GB of data has been transferred in both directions. Iozone performance is as expected: write performance slightly below 50% of the A20's SATA write performance, since data/metadata has to be written to both disks in parallel, and read performance a few percent below the A20's/the disk's maximum read throughput, since the btrfs RAID-1 implementation reads the data from one disk and metadata/checksums from both disks to verify the integrity of the data:
  2048000       1   17801    8536   128659   125789
  2048000       2   19290    9154   129231   137163
  2048000       4   21652   21380   138928   145163
  2048000      16   21787   21642   141487   146816
  2048000      32   21454   21318   135694   141927
  2048000      64   21652   21250   131143   138699
  2048000     512   21411   21157   134030   137705
  2048000    1024   21493   21336   134541   139305
  2048000   16384   21480   21283   134293   140841
Not a single error any more with 5 min. breaks in between tests.

So apparently it's a problem with overheating. I just ordered a 10x10 mm heatsink and will give it a try when it arrives in a few days... In my opinion it's very important that the PM is able to operate under high load constantly for many hours. Otherwise it's an unreliable SPoF (single point of failure).

tkaiser  
Edited by tkaiser at Fri Jan 30, 2015 04:31
tkaiser wrote at Sun Jan 25, 2015 16:14: "So apparently it's a problem with overheating"


The small heatsink arrived and solved the problem. I ran 5 consecutive iozone tests with 9 different record sizes each, so both Banana Pi and port multiplier had to run a few hours under high I/O load. The iozone results were as expected (2 GB file size) and not a single error occurred:
        1   18760    8806   123930   125524
        2   19906    9396   139209   139525
        4   22216   22057   144058   145506
       16   22200   21882   147356   147803
       32   22213   22080   140911   142260
       64   22235   21823   136795   138263
      512   21956   21854   138544   138748
     1024   22043   21872   137275   138777
    16384   22098   21860   135574   140129
        1   19006    8789   125118   127847
        2   19776    9357   138444   139493
        4   22053   21984   144714   143938
       16   22320   22184   142285   145430
       32   22247   22012   143167   144463
       64   21950   22057   137653   135107
      512   22035   21746   139166   139438
     1024   21959   21796   138788   140067
    16384   21993   21658   141606   138884
        1   18229    8881   122741   124899
        2   19304    9492   140099   142533
        4   22203   21915   143053   145593
       16   22193   22104   147605   146384
       32   21995   21883   142260   140352
       64   22185   22067   137822   139392
      512   21995   21935   138362   139779
     1024   22048   21736   137690   140606
    16384   21940   21742   138868   140107
        1   19243    8774   126076   127981
        2   19740    9411   132734   135338
        4   22013   22069   138423   145548
       16   22284   21799   145247   147440
       32   22170   21927   142153   139395
       64   22118   21977   135127   138410
      512   21918   20684   139620   140308
     1024   21961   21848   138623   137310
    16384   21949   21790   140256   141914
        1   19359    8887   120247   121923
        2   19657    9449   136945   138366
        4   21993   21750   143437   144857
       16   22254   22122   143458   145860
       32   22190   22009   140787   142892
       64   22100   21942   136869   138942
      512   22043   21826   140544   137947
     1024   22023   21830   138464   140320
    16384   22020   21600   138275   141106
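
For completeness, these five consecutive runs were just the usual iozone loop wrapped once more (a sketch; the btrfs RAID-1 mount point from the posts above assumed):
  cd /mnt/sdb
  for run in 1 2 3 4 5 ; do
      for rs in 1 2 4 16 32 64 512 1024 16384 ; do
          iozone -a -g 2000m -s 2000m -i 0 -i 1 -r${rs}k
      done
  done
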
(I put the "iostat 5" output of the last test runs here to have a look at how both disks were accessed in btrfs' RAID-1 mode: http://pastebin.com/NFMKiwWH)

JMB321 port multiplier without heatsink:

[photo]

Cooling solution applied (using a 10x10x12.5 mm heatsink and Kerafol KP92):

[photo]

Acrylic shield with Banana Pi ready for the PM (the white stuff is an IKEA FIXA):

[photo]

Side view (the heatsinks for Banana Pi's DRAM and SoC on top, the PM's heatsink on the lower side of the acrylic shield):

[photo]

The IKEA FIXA provides some pressure to fix the heatsink on the JMB321:

[photo]

Banana Pi, PM and Samsung EVO 840 stacked together:

[photo]

View from top:

[photo]

Conclusion: A passive heatsink and some convection are necessary to ensure reliable operation of a JMB321 PM.

tkaiser  
Small update regarding port multiplier support in mainline (3.19): I figured out how to dynamically switch between PM and non-PM mode. The first step is to build ahci_sunxi as a module, so your kernel config should read:
  CONFIG_AHCI_SUNXI=m
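
Whether the running kernel already has the driver as a module can be checked with something like this (a sketch; it assumes a config file under /boot or /proc/config.gz is available):
  grep AHCI_SUNXI /boot/config-$(uname -r) 2>/dev/null
  zcat /proc/config.gz 2>/dev/null | grep AHCI_SUNXI
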
And if you want the A20 to operate in PMP mode you simply create /etc/modprobe.d/ahci-sunxi.conf containing one single line:
  options ahci-sunxi enable_pmp=1
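
To apply the change without a reboot, reloading the module should work as well (a sketch; obviously this assumes the rootfs does not live on a SATA disk and nothing else keeps the module busy):
  rmmod ahci_sunxi
  modprobe ahci_sunxi enable_pmp=1
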
The dmesg output then looks as follows (I switched on the JMB321 based PM, with 1 SSD and 1 HDD connected, 49.5 seconds after the BPi started to boot):
  [    4.495031] ahci-sunxi 1c18000.sata: forcing PORTS_IMPL to 0x1
  [    4.499997] ahci-sunxi 1c18000.sata: AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl platform mode
  [    4.504996] ahci-sunxi 1c18000.sata: flags: ncq sntf pm led clo only pmp pio slum part ccc
  [    4.564412] ata1: SATA max UDMA/133 mmio [mem 0x01c18000-0x01c18fff] port 0x100 irq 31
  [    4.914996] ata1: SATA link down (SStatus 0 SControl 300)
  [   49.433568] ata1: exception Emask 0x10 SAct 0x0 SErr 0x4050002 action 0xe frozen
  [   49.438154] ata1: irq_stat 0x00400040, connection status changed
  [   49.442719] ata1: SError: { RecovComm PHYRdyChg CommWake DevExch }
  [   49.447354] ata1: hard resetting link
  [   58.134876] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
  [   58.139781] ata1.15: Port Multiplier 1.2, 0x197b:0x0325 r0, 5 ports, feat 0x5/0xf
  [   58.160201] ata1.00: hard resetting link
  [   58.604900] ata1.00: link resume succeeded after 1 retries
  [   58.725185] ata1.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
  [   58.730022] ata1.01: hard resetting link
  [   59.085176] ata1.01: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
  [   59.089912] ata1.02: hard resetting link
  [   60.154904] ata1.02: failed to resume link (SControl 0)
  [   60.159728] ata1.02: SATA link down (SStatus 0 SControl 0)
  [   60.164348] ata1.03: hard resetting link
  [   61.234942] ata1.03: failed to resume link (SControl 0)
  [   61.239794] ata1.03: SATA link down (SStatus 0 SControl 0)
  [   61.244405] ata1.04: hard resetting link
  [   62.314897] ata1.04: failed to resume link (SControl 0)
  [   62.319723] ata1.04: SATA link down (SStatus 0 SControl 0)
  [   62.326304] ata1.00: ATA-9: Samsung SSD 840 EVO 120GB, EXT0BB0Q, max UDMA/133
  [   62.330980] ata1.00: 234441648 sectors, multi 16: LBA48 NCQ (depth 31/32)
  [   62.336099] ata1.00: configured for UDMA/133
  [   62.341166] ata1.01: ATA-8: ST3000DM001-9YN166, CC4B, max UDMA/133
  [   62.345721] ata1.01: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32)
  [   62.350804] ata1.01: configured for UDMA/133
  [   62.355612] ata1: EH complete

tkaiser  
Edited by tkaiser at Mon Mar 2, 2015 11:49

In the meantime I tested a RAID worst case scenario and the JMB321 failed as expected. I posted the experiences (also with btrfs and other RAID setups including USB) in another thread on request: http://forum.lemaker.org/forum.php?mod=redirect&goto=findpost&ptid=11857&pid=71617
