11-18-2018 08:36 AM
Hello Forum members!
We have some older IBM disk shelves (24 bays, SAS 6Gb/sec) connected (multipathed) to a Dell server with a SAS 6Gb/sec HBA.
The server is an older Dell R810 (a power monster) with 4x older L7555-series CPUs (64 cores) and 64GB of RAM, running FreeNAS 11.2 (still RC). Connectivity is 4x bonded Ethernet to a decent but older Juniper EX4200 switch, though this hardly matters since the upstream internet connectivity is less than a gigabit and would be traversing NAT anyway.
The storage pool configuration (SSDs):
To maximize available space, we chose raidz2 and plan to run regular scrubs on the array. We presently have only one pool, consisting of 12 disks:
scan: resilvered 16.0M in 0 days 00:00:01 with 0 errors on Sun Nov 18 08:48:03 2018

NAME                                            STATE     READ WRITE CKSUM
shelf_1_ssd                                     ONLINE       0     0     0
  raidz2-0                                      ONLINE       0     0     0
    gptid/6b2fc271-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/726d7488-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/7ba6fb1b-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/8593cc2b-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/903739ba-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/9aa9b1ef-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/a6111a5e-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/b3ec8b57-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/c3887ae0-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/d3c396f4-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/e38b32d5-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
    gptid/f118f9e2-4a50-11e8-a291-14feb5caaa4f  ONLINE       0     0     0
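For anyone wondering about the space math: a 12-disk raidz2 gives you the capacity of 10 disks before ZFS metadata and overhead. A quick sketch (the per-disk size below is just a placeholder assumption, since the post doesn't state the SSD capacity):

```shell
# raidz2 usable space = (disks - 2 parity) * per-disk capacity.
# per_disk_gb is a hypothetical value; substitute your actual SSD size.
disks=12
parity=2
per_disk_gb=960

usable_gb=$(( (disks - parity) * per_disk_gb ))
echo "~${usable_gb} GB usable before ZFS metadata/overhead"
```

With 960GB disks that works out to roughly 9600 GB of raw usable space.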
What my family members and I are hoping to achieve is a small amount of general-purpose file storage (maybe a TB at most); the rest will be devoted to Plex, archiving each of our own DVDs/Blu-rays into a Plex instance for each of us. This should easily provide the disk throughput for 8-10 streams at 30Mbps each (half the SSD bandwidth), and the 64 cores should provide enough transcoding ability for our needs.
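As a sanity check on that throughput claim, the aggregate streaming load is tiny next to even a single SAS 6Gb/sec lane:

```shell
# 10 simultaneous streams at 30 Mbps each is only 300 Mbps aggregate,
# far below a single SAS 6Gb/sec link, let alone 12 SSDs in parallel.
streams=10
mbps_each=30
total=$(( streams * mbps_each ))
echo "Aggregate streaming load: ${total} Mbps"
```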
Q: My understanding of the active garbage collection feature is that the drive needs to be powered on but not currently in use (I'm not sure how the drive knows this, though I'm curious, since I assume the PHY on the drive would have already negotiated a link speed by then). If that is correct, would I need to boot the server and pause it at POST for some amount of time?
Q: Is there a single SMART attribute that would tell me what wear level the SSDs are presently at? Here are the ones seen by smartctl on the host system:
11-18-2018 09:19 AM
Attribute 202 is the percentage worn. If you use a SMART tool that supports the drive, you will see useful descriptions for all of them.
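On a live system you would pull that attribute with something like `smartctl -A /dev/da0 | awk '$1 == 202'` (the device name is an assumption). The sketch below runs the same filter on a hypothetical sample line, since the attribute's label and raw-value meaning vary by SSD vendor:

```shell
# Filter attribute 202 out of `smartctl -A`-style output.
# The sample line and its attribute name are hypothetical; check your
# vendor's documentation for what the normalized/raw values mean.
sample="202 Percentage_Lifetime_Used 0x0030 099 099 001 Old_age Offline - 1"
echo "$sample" | awk '$1 == 202 {print $2, "normalized:", $4, "raw:", $NF}'
```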
The drive being "not in use" means not busy, as opposed to not having a connection. Since it's a NAS, it'll presumably be on all the time and see periods of no use overnight. The drives should be set not to power down so they can take advantage of this.
It also sounds like your usage will be primarily read-only? Both wear, and the erase/rewrite slowdown that garbage collection counters, are caused by writing.