Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not get consistent IO latency from SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
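The workload itself is straightforward to reproduce. Below is a minimal sketch that drives both steps with fio from a short Python script; the device path, runtime, and logging options are illustrative assumptions rather than our exact test configuration.

```python
# Minimal sketch of the consistency test using fio (illustrative only; the
# device path, runtime, and log prefix are placeholders).
import subprocess

DEVICE = "/dev/sdX"   # target SSD -- all data on it will be destroyed

def run(args):
    subprocess.run(["fio"] + args, check=True)

# Step 1: fill every user-accessible LBA with sequential data.
run([
    "--name=precondition", f"--filename={DEVICE}",
    "--rw=write", "--bs=128k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
])

# Step 2: 4KB random writes at QD32 with incompressible (random) buffers,
# logging average IOPS once per second for roughly 2000 seconds.
run([
    "--name=consistency", f"--filename={DEVICE}",
    "--rw=randwrite", "--bs=4k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32",
    "--refill_buffers", "--randrepeat=0", "--norandommap",
    "--time_based", "--runtime=2000",
    "--write_iops_log=consistency", "--log_avg_msec=1000",
])

# For the added over-provisioning runs, the tested LBA range can be
# restricted (e.g. with --size=75%) so part of the drive is never written.
```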
Each of the three graphs has its own purpose. The first covers the whole duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but use different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
For a more detailed description of the test and an explanation of why performance consistency matters, read our original Intel SSD DC S3700 article.
[Performance consistency graphs; data selections: Default, 25% Over-Provisioning]
The IO consistency is good, but obviously not as good as the Extreme Pro's due to lower over-provisioning (7% vs 12%). The architecture is still the same, though: performance first drops to around 10K IOPS, followed by a burst of higher throughput. At steady state the X300s averages about 5K IOPS, which is actually similar to the Crucial MX100, but with added over-provisioning the X300s gets close to the Extreme Pro's level.
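For reference, over-provisioning figures like these fall out of the gap between binary NAND capacity and decimal user capacity. The quick calculation below uses illustrative capacity points (256GiB of raw NAND exposed as 256GB or 240GB of user space) and expresses the spare area as a share of raw NAND; the exact NAND configurations are assumptions, not confirmed specifications for any particular drive.

```python
# Illustrative over-provisioning arithmetic (capacity points are assumptions).
def op_percent(raw_gib, user_gb):
    raw = raw_gib * 2**30      # physical NAND, binary gigabytes
    user = user_gb * 10**9     # user-addressable space, decimal gigabytes
    return (raw - user) / raw * 100

print(round(op_percent(256, 256), 1))        # 6.9  -> the ~7% class
print(round(op_percent(256, 240), 1))        # 12.7 -> the ~12% class
print(round(op_percent(256, 256 * 0.75), 1)) # 30.2 -> only 75% of LBAs ever written
```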
34 Comments
vaayu64 - Thursday, August 21, 2014 - link
Thanks for the review, as always =). If you have the opportunity to meet with SanDisk, can you please ask if there will be an mSATA version of their Extreme or Ultra SSD?
vaayu64 - Thursday, August 21, 2014 - link
Another question, does this X300 provide power loss protection?
Regards
hojnikb - Thursday, August 21, 2014 - link
Looking at the PCB it appears not.
Samus - Thursday, August 21, 2014 - link
That's too bad since it clearly has a piece of (undisclosed capacity) memory on the PCB. Looks to be a 128MB DDR2 chip. I wonder if any user data is stored in there or if it truly caches only the indirection table?
Kristian Vättö - Thursday, August 21, 2014 - link
The X300s does not have capacitors to provide power-loss protection as that is generally an enterprise-only feature. SanDisk does have a good white paper about their power-loss protection techniques, though.
http://www.sandisk.com/assets/docs/unexpected_powe...
Samus - Friday, August 22, 2014 - link
Enterprise-only feature? Many mainstream drives have capacitors dating back to the Intel SSD 320 (X25-M v3). Some of the cheapest SSDs on the market have capacitors (Crucial MX100), so it's inconceivable to leave them out in 2014.
Kristian Vättö - Friday, August 22, 2014 - link
"Many mainstream drives have capacitors dating back to the Intel SSD 320 (X25-M v3)"There is only a handful of client-grade drives that provide power loss protection in the form of capacitors (Crucial M500, M550 & MX100, Intel SSD 730 & SSD 320 are the only ones I can remember).
The SSD 320 was never strictly a client drive as Intel also targeted it towards the entry-level enterprise market, hence the power loss protection. The SSD 730, on the other hand, is derived from the DC S3500/S3700, so it is basically a client tuned enterprise drive.
The power loss protection in the MX100 and other Crucial's client drives is not as perfect as their marketing makes you think. Crucial only guarantees that the capacitors provide enough power to save the NAND mapping table, which means user data is vulnerable to data loss. That is why the M500DC uses different capacitors because the ones in the client drives do not provide enough power to save all writes in progress.
SanDisk's approach is to use nCache (i.e. an SLC portion) to flush the NAND mapping table from the DRAM more often. The lower write latency of SLC ensures that in case of a power loss the data loss is minimal, but it is true that some data may be lost. Crucial/Micron operates all NAND as MLC, which is why they need the capacitors to make sure that the NAND mapping table is safe.
hojnikb - Friday, August 22, 2014 - link
On the subject of mapping tables: how do controllers like SandForce (and some Marvell implementations) work without DRAM? Do they dedicate a portion of flash for that, and how do they keep track of that portion's activity (e.g. block wear)?
Also, since some of the manufacturers use pseudo-SLC (i.e. MLC/TLC acting as SLC), how is the endurance of those cells affected? Can an SLC portion last longer than normal MLC/TLC?
Kristian Vättö - Friday, August 22, 2014 - link
The controller designs that don't utilize DRAM use the internal SRAM cache in the controller to cache the NAND mapping table. It just requires a different mapping table design since SRAM caches are much smaller than DRAM. Ultimately the mapping table is still stored in NAND, though.
Pseudo-SLC can definitely last longer than MLC/TLC. With only one bit per cell, there is much more voltage headroom as there are only two voltage states.
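To make the SRAM-caching idea a bit more concrete, here is a toy sketch of a paged logical-to-physical map in which only a few segments are resident in a small cache (standing in for the controller's SRAM) at any time. The segment size, cache size, and eviction policy are invented for illustration; this is not how SandForce or Marvell actually implement it.

```python
from collections import OrderedDict

ENTRIES_PER_SEGMENT = 1024   # one map segment covers 1024 4KB pages (assumed)
CACHED_SEGMENTS = 8          # a small SRAM holds only a few segments at once

class PagedL2PMap:
    """Toy logical-to-physical map: segments are demand-loaded from NAND
    into a small LRU cache that stands in for the controller's SRAM."""

    def __init__(self):
        self.cache = OrderedDict()  # segment_id -> list of physical page addresses

    def _load_segment(self, seg_id):
        if seg_id in self.cache:
            self.cache.move_to_end(seg_id)            # LRU touch
        else:
            if len(self.cache) >= CACHED_SEGMENTS:
                victim, data = self.cache.popitem(last=False)
                self._write_segment_to_nand(victim, data)  # evict to NAND
            self.cache[seg_id] = self._read_segment_from_nand(seg_id)
        return self.cache[seg_id]

    def lookup(self, lba):
        seg_id, offset = divmod(lba, ENTRIES_PER_SEGMENT)
        return self._load_segment(seg_id)[offset]

    def update(self, lba, physical_page):
        seg_id, offset = divmod(lba, ENTRIES_PER_SEGMENT)
        self._load_segment(seg_id)[offset] = physical_page

    # In a real controller these would be flash reads/writes; here they are stubs.
    def _read_segment_from_nand(self, seg_id):
        return [None] * ENTRIES_PER_SEGMENT

    def _write_segment_to_nand(self, seg_id, data):
        pass
```

The point, as above, is that the full table always lives in NAND; the cache only ever holds the segments covering recently touched LBAs.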
hojnikb - Friday, August 22, 2014 - link
So really, MLC/TLC and SLC dies do not differ much internally. I'm guessing that real SLC just uses less on-die error correction than MLC, but the cells shouldn't be that different at all. The same, I suppose, goes for TLC as well.
If this is the case, it brings up an interesting question: if one were to buy an MLC drive and wanted SLC-grade endurance, one could (if access to the firmware was available) tweak the firmware so that the whole drive would act as pSLC, obviously at the cost of capacity. Something like nCache 2.0, but expanded to the whole drive.
I believe some cheap flash drive controllers offered something like that using their MPtools. I remember messing around with a cheap TLC-based flash drive; once done, I ended up with 1/3 of the capacity, but write speeds increased dramatically.