05-04-2013 01:00 AM
05-04-2013 02:13 AM - edited 05-05-2013 06:52 PM
What do you want to know?
I'm using Debian Linux with a 2.5-inch 64GB M4 with 040H firmware as my boot/root drive, but I do have /home on another disk. Some thoughts...
Alignment - most modern Linux distributions align partitions to a 1 MiB offset; check with parted or gparted.
TRIM - use the ext4 filesystem with the discard option in /etc/fstab.
Wear reduction - mount file systems with noatime or relatime, and mount /tmp as tmpfs if Ubuntu doesn't do this by default.
In /etc/fstab something like:
proc /proc proc defaults 0 0
tmpfs /tmp tmpfs nodev,nosuid 0 0
/dev/sda3 / ext4 defaults,relatime,discard,errors=remount-ro 0 1
/dev/sda2 /boot ext4 defaults,relatime,discard 0 2
/dev/sda4 /var ext4 defaults,relatime,discard 0 2
/dev/sda5 /home ext4 defaults,relatime,discard 0 2
/dev/sda6 none swap sw 0 0
/dev/sda1 /boot/efi vfat defaults 0 0
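To confirm the alignment and TRIM points above, a couple of commands help. This is a sketch assuming the drive is /dev/sda; adjust to suit your system:

```shell
# Report whether partition 1 starts on an optimal (MiB) boundary:
sudo parted /dev/sda align-check optimal 1

# Or check the start sectors yourself - with 512-byte sectors, a start
# divisible by 2048 means the partition begins on a 1 MiB boundary
# (2048 * 512 = 1048576 bytes = 1 MiB):
sudo fdisk -l /dev/sda

# Confirm the drive advertises TRIM support before using the discard option:
sudo hdparm -I /dev/sda | grep -i TRIM
```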
05-04-2013 02:19 AM - edited 05-04-2013 02:20 AM
Note: in the post above I'm using the /dev/sdaX device format to illustrate the possible partition table, but these days you would use UUIDs. I'm using GPT partitioning too, as you'd find in most modern machines.
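To find the UUIDs, blkid does the job. The UUID shown here is just a placeholder to illustrate the format:

```shell
# Print the UUID and filesystem type for each partition:
sudo blkid
# e.g. /dev/sda3: UUID="3f9c0001-aaaa-bbbb-cccc-000000000000" TYPE="ext4"

# Then the root entry from the fstab above becomes:
# UUID=3f9c0001-aaaa-bbbb-cccc-000000000000 / ext4 defaults,relatime,discard,errors=remount-ro 0 1
```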
05-04-2013 09:20 AM
05-04-2013 10:01 AM
05-04-2013 07:34 PM
I'm not sure about BTRFS - it appears to have some nice ideas, but what is the reason for using it in your case? Do you just want to learn more about it, or do you want performance, reliability, do you need some of the new features?
Performance doesn't seem to be any better at this stage - in some recent testing it was not as good as ext4.
I quite liked the comment in this thread:
"The "common wisdom" of filesystem developers is that it takes some 5 years of beating to consider a filesystem stable enough for non-experimental use. BTRFS hasn't accumulated 5 years yet, so it is considered strictly for experimental use right now. If the data on the machine aren't critical, and a rigorous backup scheme is in place, go wild. Be prepared to report strange things happening."
05-05-2013 07:03 PM - edited 05-05-2013 07:15 PM
Some additional thoughts on SSDs with Linux...
Smartmontools - get smartmontools (if not installed by default) and update the database.
In most cases this is a single command run as root. If you skip this step, smartmontools will not have the latest drive data.
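The updater shipped with smartmontools is update-smart-drivedb (the exact path can vary by distribution, and some distributions package it separately):

```shell
# Refresh the drive database smartmontools uses to decode vendor attributes:
sudo update-smart-drivedb

# Confirm smartctl is installed and note the version:
smartctl --version
```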
With smartmontools you can get the drive health info using the smartctl command:
# smartctl -A /dev/sda
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-0.bpo.3-amd64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail Always       -       0
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail Always       -       0
  9 Power_On_Hours          0x0032   100   100   001    Old_age  Always       -       4649
 12 Power_Cycle_Count       0x0032   100   100   001    Old_age  Always       -       450
170 Grown_Failing_Block_Ct  0x0033   100   100   010    Pre-fail Always       -       0
171 Program_Fail_Count      0x0032   100   100   001    Old_age  Always       -       0
172 Erase_Fail_Count        0x0032   100   100   001    Old_age  Always       -       0
173 Wear_Leveling_Count     0x0033   100   100   010    Pre-fail Always       -       2
174 Unexpect_Power_Loss_Ct  0x0032   100   100   001    Old_age  Always       -       2
181 Non4k_Aligned_Access    0x0022   100   100   001    Old_age  Always       -       1 0 1
183 SATA_Iface_Downshift    0x0032   100   100   001    Old_age  Always       -       0
184 End-to-End_Error        0x0033   100   100   050    Pre-fail Always       -       0
187 Reported_Uncorrect      0x0032   100   100   001    Old_age  Always       -       0
188 Command_Timeout         0x0032   100   100   001    Old_age  Always       -       0
189 Factory_Bad_Block_Ct    0x000e   100   100   001    Old_age  Always       -       48
194 Temperature_Celsius     0x0022   100   100   000    Old_age  Always       -       0
195 Hardware_ECC_Recovered  0x003a   100   100   001    Old_age  Always       -       0
196 Reallocated_Event_Count 0x0032   100   100   001    Old_age  Always       -       0
197 Current_Pending_Sector  0x0032   100   100   001    Old_age  Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   001    Old_age  Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   001    Old_age  Always       -       0
202 Perc_Rated_Life_Used    0x0018   100   100   001    Old_age  Offline      -       0
206 Write_Error_Rate        0x000e   100   100   001    Old_age  Always       -       0
If you dual boot with Windows, you can get the same data with CrystalDiskInfo or something similar, but it's often easier to copy text than to upload a screenshot.
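If you only want to watch the wear-related numbers over time, grep the attribute names from the output above (attribute names vary by vendor and firmware, so adjust the pattern to match your drive):

```shell
# Pull just the wear and reallocation attributes from the SMART data:
sudo smartctl -A /dev/sda | grep -E 'Wear_Leveling|Rated_Life|Reallocated'
```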
SWAP - some people will tell you not to use swap with an SSD. The reason given is to reduce wear on the SSD's NAND cells. This shouldn't be a problem, because (a) if you can afford an SSD, you can afford a reasonable amount of RAM, so your system should not be swapping (paging) all the time, and (b) modern SSDs have decent wear leveling, so this should not be the problem it might have been on SSDs from five years ago.
If in doubt, read this:
I'm using a swap partition in my system, but with SSDs there is probably no advantage to a swap partition over a swap file. With spinning disks people preferred a swap partition, partly because it kept the swap in one area of the disk, which should minimise seek time. Seek time and fragmentation are not something you should worry about with SSDs, so use a swap file if you want.
Using a swap file might be a good idea in your case, as you have a smaller number of partitions available with an msdos partition table.
Plus you can forget the old advice about needing more swap than RAM - I'm using about 2GB of swap on a system with 16GB of RAM. I could probably get away with 800MB if I was short of disk space.
One exception might be a laptop set up to hibernate, which writes the contents of RAM to swap when it suspends - in that case the swap area generally needs to be about as large as RAM.
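If you do go the swap file route, it only takes a few commands. The 2 GiB size and the /swapfile path here are just examples:

```shell
# Allocate a 2 GiB file (dd works too: dd if=/dev/zero of=/swapfile bs=1M count=2048):
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile      # swap should not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent with an /etc/fstab line:
# /swapfile none swap sw 0 0
```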
Log files - some people will tell you to mount /var/log as a ramdisk (tmpfs) to reduce wear, but I don't recommend this. Your log files might be needed if you are having problems, and you don't want to lose them after a reboot. Perhaps on some very busy servers with some older SSDs this might have been needed, but you can see from my data above that there is no excessive wear.
Plus /var/tmp - some people will tell you to mount this as tmpfs but once again there is no need. In most cases, very little data gets written to /var/tmp so there is almost no impact on NAND wear.
05-15-2013 02:55 PM
05-16-2013 02:30 AM
Thanks for the information you posted on this thread - it's been great and will help future users who have similar questions about SSDs and Linux.
kevpan815, let us know how it goes! The info Alex has provided has been great, so it would be good to hear how it all works out.