Suspected failing MX100 drives, just looking for confirmation

Bit Baby

Suspected failing MX100 drives, just looking for confirmation


I have a pair of MX100 drives that I suspect are failing. I've already removed them from service, but I just want to confirm my suspicions. The drives were in a mirror, but they were performing sequential reads at between 8 and 13 MB/s. The system has also hard-locked a few times over the last month, prompting the replacement.

 

I've included a screenshot from Storage Executive for each drive. It still reports good health for both drives, but I'm concerned about the Raw Read Error Rate, Average Block-Erase Count, Reported Uncorrectable Errors, and Ultra-DMA CRC Error Counts.

 

Drive 1: [Storage Executive screenshot]

Drive 2: [Storage Executive screenshot]

3 Replies
JEDEC Jedi

Re: Suspected failing MX100 drives, just looking for confirmation

Wow - they've seen some use!  Both are almost 3 times over the NAND's rated life.
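
(For the curious, the lifetime-used figure is roughly the Average Block-Erase Count divided by the NAND's rated program/erase cycles. A back-of-the-envelope sketch - both numbers below are placeholder assumptions, not values taken from the screenshots:)

```python
# Rough wear arithmetic: average block-erase count vs. rated P/E cycles.
# Both values are assumptions/placeholders - consumer MLC NAND of this era is
# commonly quoted around 3,000 P/E cycles, and the erase count should be read
# off Storage Executive for your own drive.
RATED_PE_CYCLES = 3000          # assumed rating for this NAND class
avg_block_erase_count = 9000    # hypothetical value

life_used = avg_block_erase_count / RATED_PE_CYCLES
print(f"Roughly {life_used:.1f}x the rated NAND life consumed")  # -> 3.0x
```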

 

But yeah, both drives have a scattering of errors. Attributes 1, 5, 187, 196, and 197 all indicate past errors. Coupled with the drives' wear level and the actual issues you're experiencing in use, it's probably time to replace them, considering that the safety of your data is presumably important to you if you're mirroring.
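
If it helps, you can also pull those same attributes from the command line with smartmontools rather than Storage Executive. A minimal sketch, assuming smartctl is installed on Linux; the device paths are just placeholders for your two drives:

```python
#!/usr/bin/env python3
# Minimal sketch: dump the SMART attributes discussed in this thread using
# smartctl (smartmontools). Device paths are placeholders - adjust as needed.
import re
import subprocess

# Attribute IDs mentioned above (standard ATA SMART numbering):
# 1 Raw Read Error Rate, 5 Reallocated Sectors, 187 Reported Uncorrectable,
# 196 Reallocation Events, 197 Current Pending Sectors, 199 UDMA CRC Errors.
WATCH = {1, 5, 187, 196, 197, 199}

def read_attributes(device):
    """Return {attribute_id: raw_value} for the attributes we care about."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        # Attribute rows start with the numeric ID; the raw value is the last column.
        m = re.match(r"\s*(\d+)\s+\S+.*\s(\d+)\s*$", line)
        if m and int(m.group(1)) in WATCH:
            attrs[int(m.group(1))] = int(m.group(2))
    return attrs

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):   # placeholder device paths
        print(dev, read_attributes(dev))
```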

 

______________________________________

Did a user help you? Say thanks by giving Kudos!
How do I know what memory to buy?
Still need help? Contact Crucial Customer Service
Remember to regularly backup your important data!

Bit Baby

Re: Suspected failing MX100 drives, just looking for confirmation

Thanks for the confirmation. Someone prior to me just threw them in a DB server that likes to hammer disks. So to confirm, attribute 1 is actually an error indicator? I've seen mention that no one really uses that, and that it occasionally resets based on some internal status. Or is it just ignored when it's in the low thousands, not the millions that both of these drives are reporting?

JEDEC Jedi

Re: Suspected failing MX100 drives, just looking for confirmation

Most SMART attributes are cumulative stats. They'll show the drive's lifetime record of errors. Attribute 197 is a notable exception - it logs sectors that are pending to be mapped out but haven't been yet.

Anyway, when errors occur, drives should be able to map the affected sectors out and replace them. So if those stats are all stable (i.e. not increasing) and you're not experiencing any problems in use, it's a bit of a non-issue. But if the numbers are increasing, then ongoing errors are occurring. Coupled with the issues you are experiencing in use, and the extreme wear the drives have taken (they're consumer-level drives and not suitable for use in a DB server), it's probably time to get rid of them.
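
That "stable vs. increasing" check is easy to script if you want to watch it over a few days. A rough sketch building on the earlier snippet - the state-file path and the imported module name are just assumptions:

```python
#!/usr/bin/env python3
# Rough sketch of the "are the counts still climbing?" check: snapshot the raw
# values now, compare with the previous snapshot, and flag anything that grew.
# The state-file path and the imported helper module are assumptions.
import json
import pathlib

from smart_read import read_attributes  # hypothetical module holding the earlier helper

STATE = pathlib.Path("/var/tmp/smart_baseline.json")  # arbitrary location

def check(device):
    current = read_attributes(device)
    history = json.loads(STATE.read_text()) if STATE.exists() else {}
    baseline = history.get(device, {})
    for attr_id, raw in current.items():
        old = baseline.get(str(attr_id))
        if old is not None and raw > old:
            print(f"{device}: attribute {attr_id} raw value grew {old} -> {raw}")
    history[device] = {str(k): v for k, v in current.items()}
    STATE.write_text(json.dumps(history, indent=2))

if __name__ == "__main__":
    for dev in ("/dev/sda", "/dev/sdb"):   # placeholder device paths
        check(dev)
```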

 

There have been firmware bugs on one of the drives where it showed incorrect stats on one of the error attributes, and it was reset during a firmware upgrade, but I don't recall which model or firmware - and it was a much lower value than that.

 

One of the stats you mentioned that's probably not a drive error is 199 (Ultra-DMA CRC Error Count) - it typically indicates a communication error with the host computer (a cabling issue or similar).
