01-09-2012 04:41 AM
Does anyone happen to know if the affected M4s are from a specific production batch, or are they just random units from different batches?
I assumed it was all drives? It's just that, so far, it's only hitting people who have run the drive 24/7 since release. They're the only ones who could have racked up enough power-on hours to trigger it.
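To put rough numbers on it (just my own arithmetic, taking the roughly 5,200 power-on hours people in this thread are reporting as the trigger point):

    # Back-of-the-envelope check, assuming the ~5,200 power-on-hour
    # figure reported in this thread really is the trigger point.
    threshold_hours = 5200
    print(threshold_hours / 24)  # ~216.7 days of continuous uptime

The M4 started shipping around spring 2011, so a unit run 24/7 from day one would cross that mark in late 2011, which lines up with these reports all appearing now.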
01-09-2012 04:55 AM
I am still running on my SSD. I have a timer set to 55 minutes, so I get 5 minutes to finish what I'm doing, then I do a hard restart and keep going. It's really annoying, though.
Hehe, I was doing the same. But my work takes so much time that I got angry every time the timer started blinking... An hour isn't much, so I decided to clone the drive. For me, at this point, stability beats speed.
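For anyone still riding the timer instead of cloning, here is the same trick as a throwaway script (a rough sketch only; run it again after each restart, and the 60-minute figure is just what people here are seeing):

    import time

    # Nag five minutes before the hour mark so there's time to save
    # everything and restart before the drive locks up again.
    time.sleep(55 * 60)
    print("\a5 minutes left: save your work and restart now!")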
01-09-2012 06:39 AM
"RAID is not backup. don't be fooled into thinking it is, every day someone learn it the hard way.
if a file gets corrupted it gets corrupted across both drives. if a file gets deleted it gets deleted across both drives. and so on."
RAID is a backup against hardware failure, and for that it works perfectly. It won't help, as you described, with user-induced corruption or accidentally deleted files. But if a drive starts having issues, as the M4s are now starting to do, you don't have to reinstall with a RAID 1 setup: you can pull the second drive out of the RAID 1 array and use it as the primary until you're able to repair or replace the damaged drive, then re-add the replacement to the array and let the mirror rebuild. I work in an area with dozens of tools that use RAID 1 for redundancy, on top of global backups. We run the drives into the dirt and replace them often. RAID 1 is the perfect tool for entry-level backups.
01-09-2012 09:31 AM - edited 01-09-2012 09:31 AM
A BSOD can cause file corruption if it happens during a write to disk. It can corrupt your own data or OS files, resulting in a failure to boot. In that case, the corruption is mirrored across every drive in your RAID array.
As you said, RAID can get your system back to working order in no time after a hardware failure: one of the drives dies, you swap it, and you're good to go. But it's not a backup tool. Backup is a whole different thing and has to protect against all kinds of errors: hardware failure, user error, software errors that corrupt data, and so on.
There are many articles that go into more detail on why RAID is not backup; Google will show dozens of good results. It's a common misconception that can cause huge damage if you don't have another way of backing up your data.
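Even something dead simple counts as an alternative. A minimal sketch, assuming Python and a second physical disk (the paths here are made up; point SRC at your data and DEST_ROOT at the other drive):

    import datetime
    import shutil
    from pathlib import Path

    SRC = Path("C:/Users/me/Documents")   # whatever you can't afford to lose
    DEST_ROOT = Path("E:/backups")        # a *separate* physical disk

    # Each run lands in its own dated folder, so a file deleted or
    # corrupted today is still intact in yesterday's snapshot: exactly
    # the failure mode RAID 1 mirrors instead of protecting against.
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = DEST_ROOT / stamp
    shutil.copytree(SRC, dest)
    print("backed up to", dest)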
01-09-2012 09:08 PM - edited 01-09-2012 09:53 PM
.... I was just typing a good reply and, guess what, it slowed, hung, and then BSOD'd. Keeping this short now: yes, I have the same errors and behaviour, starting just in the last 24 hours; my M4 128GB has 5,525 power-on hours on the clock. Reading the various threads now (it's 4 AM here in the UK), it's clear this is getting worse for a lot of us. How long before a BSOD kills my OS? I've only just reinstalled everything. I've tried moving cables and SATA ports, but it just happens again, and I've noticed tonight that it seems to be triggered sooner when watching a full-screen movie.
Crucial, I hope your guys are sleeping next to their computers while testing the firmware update, because the 16th CANNOT arrive quickly enough!
01-09-2012 09:43 PM - edited 01-09-2012 09:54 PM
Possibly, yes; it's a computer. Everything was good and now it's not. Over the next few days I'll try to break it all down and build it up again slowly. I've never had errors and BSODs like this before, and they match what a lot of other people here have reported over the last few weeks. I'll know for sure when the firmware arrives.
EDIT: I was looking at the wrong drive's SMART data, silly me. Mine is 5,525 hours. Edited my post above.
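In case it saves anyone else from staring at the wrong drive, this is roughly how the count can be pulled programmatically (a sketch only; it assumes smartmontools is installed, and the device name differs by platform):

    import re
    import subprocess

    # Shell out to smartctl (smartmontools) and grab SMART attribute 9,
    # Power_On_Hours; on most drives the raw value is the last column.
    def power_on_hours(device="/dev/sda"):
        out = subprocess.check_output(["smartctl", "-A", device]).decode()
        for line in out.splitlines():
            if re.match(r"\s*9\s+Power_On_Hours", line):
                return int(line.split()[-1])
        return None

    print(power_on_hours())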
01-10-2012 07:30 AM
I am having this exact issue on a 256GB drive. The drive is in my work desktop, which has been running continuously since May/June; that's roughly seven months of 24/7 uptime, which puts it right around the hour counts others here are reporting. The problem started about a week and a half ago: my computer (a Supermicro workstation) will BSOD at almost exactly 60 minutes of uptime. Although I can use other computers at work, the firmware fix is needed ASAP.
01-10-2012 10:59 PM
I have an M4 128GB drive with approximately 5,200 hours of on-time and the same problem as everyone else.
At first I thought it could be a mobo issue, because my front USB ports and my burner stopped working. So I cloned the M4 SSD to a regular HDD, and now my front ports work again, as does my burner.
Previously I was getting read speeds of 470MB/s using HD Tune Pro; once I started getting the BSODs, the read speeds went down to 230MB/s. I called tech support to inquire about an RMA. He said the fix is coming out on the 16th, but he did suggest I update the firmware to 0009. So after I got off the phone I did that, and the speeds got back up to about 330MB/s, which still seems pretty low considering the drive isn't that old and most of my stuff gets installed on my 2TB drive.
Is that much speed degradation normal? I mean, a month or so before the BSODs it was 470MB/s, then 230MB/s, now 330MB/s. This is my first SSD and I'm not exactly sure what normal degradation looks like. Any ideas?
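In case it helps to compare numbers, here's a crude way to sanity-check sequential reads without HD Tune (a sketch only: the OS cache will flatter the result on repeat runs, so use a large file you haven't touched since boot, and note the path below is made up):

    import os
    import time

    def read_speed_mb_s(path, block=16 * 1024 * 1024):
        # Stream the file once and time it; crude, but enough to spot
        # a drive running at half its usual speed.
        size = os.path.getsize(path)
        start = time.time()
        with open(path, "rb") as f:
            while f.read(block):
                pass
        return size / (time.time() - start) / 1e6

    print(read_speed_mb_s("D:/big/file.iso"))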
01-10-2012 11:14 PM
If you are having issues with your USB and neighboring SATA ports (different SATA controller?), I would be more concerned with your mobo than your SSD at the moment. I've never heard of speeds dropping to this extent. What type of system are you using the SSD in?