Computex 2014: Update on SandForce SF3700 & A Live Demo
by Kristian Vättö on June 4, 2014 12:44 AM EST

One of the key things I have been looking forward to at Computex is hearing more about the third-generation SandForce controller, the SF3700. I just stopped by LSI's suite and finally have some new information to share. First off, LSI said that the controller will be shipping to OEMs in H2'14, but ultimately product releases will depend on the OEMs' schedules. I believe we might see some products shipping in Q4'14, but I'm afraid that most OEMs won't have their drives ready until early 2015 (CES?).
Firmware development is still ongoing, and LSI told me that they have only just started optimizing the write performance. In other words, the firmware is not even ready for final validation, which means the OEMs can't fully validate their products yet. That's why I doubt we'll see any retail products shipping this year, and none of the manufacturers I've talked to so far have given me any timing for their SF3700 products.
Performance-wise, LSI is focusing on mixed read/write performance. While the PCIe drives shipping today (like the Samsung XP941) provide great read and write performance, they aren't optimized for workloads that consist of both reads and writes. In other words, the drives are more optimized for benchmarks, as those usually test one or the other, whereas real-world workloads will always have both. This is an area we definitely need to investigate more -- we've been doing this for the enterprise for a while, and we will likely bring it to the client side soon as well.
LSI showed me a live demo of the SF3700 running a 128KB 80/20 read/write workload. This time there wasn't any secretive or fishy stuff going on like at CES when Kingston had a live demo of the drive -- we were allowed to see Iometer in action along with all the preparations (well, there weren't any to be honest; they just clicked start). Based on the graph in the previous picture, the SF3700 is clearly the highest-performing PCIe SSD when it comes to 80/20 read/write, as the Plextor M6e and Samsung XP941 are only hitting ~250MB/s. Obviously this doesn't show the big picture and there are a lot of other variables when it comes to testing, but the performance is certainly looking promising.
Furthermore, the SF3700 should bring much improved performance with incompressible data. I wasn't able to get any details other than that LSI is implementing an option to disable compression, so the drives will perform the same regardless of the data type. While I think compression definitely has its advantages (higher performance, more over-provisioning...), I can see this being a big deal for some OEM customers who need their components to perform consistently in all workloads.
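As a rough host-side illustration of why data type matters so much to a compressing controller (this sketch uses Python's zlib purely for demonstration -- SandForce's actual DuraWrite pipeline is proprietary and works at the controller level): repetitive, text-like data shrinks dramatically, while random bytes, which are statistically similar to encrypted or already-compressed media, do not shrink at all.

```python
import os
import zlib

# Highly repetitive "text-like" payload: compresses very well.
compressible = b"The quick brown fox jumps over the lazy dog. " * 1000

# Random bytes stand in for encrypted or already-compressed data:
# high entropy, so a lossless compressor cannot shrink them.
incompressible = os.urandom(len(compressible))

for label, data in [("compressible", compressible),
                    ("incompressible", incompressible)]:
    packed = zlib.compress(data, level=6)
    ratio = len(packed) / len(data)
    print(f"{label}: {len(data)} -> {len(packed)} bytes (ratio {ratio:.2f})")
```

A controller that relies on compression sees something like the first case on OS and application data, and something like the second on encrypted or media-heavy workloads, which is exactly why performance diverges by data type.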
Lastly, last week's acquisition by Seagate. As the news is still so recent, there aren't many details to share. LSI did say that the acquisition won't change the state of the SF3700 at all, and it will be licensed to OEMs like all previous SandForce controllers, but the future after that is still up in the air.
9 Comments
blanarahul - Wednesday, June 4, 2014 - link
Nice. But I wonder: will Seagate fight with SandForce customers and release their own SF3700 SSD?
hpglow - Wednesday, June 4, 2014 - link
Seagate will without any doubt have a SF3700 drive. They didn't acquire SF to watch from the sidelines. There is a good chance that Seagate already had a SandForce drive in the works.
jjj - Wednesday, June 4, 2014 - link
Hope they figure it out sooner rather than later, since it seems that they messed it up and it got a "little bit" delayed.
isa - Wednesday, June 4, 2014 - link
Nice update. Just curious: how big is the firmware team working on the 3700 series within LSI?
zanon - Wednesday, June 4, 2014 - link
>While I think compression definitely has its advantage
The problem though is that compression is generally best (and increasingly) handled in the filesystem itself. It's not just more modern filesystems like ZFS either; even ancient ones like HFS+ have had transparent compression hacked into them. More recent improvements to algorithms (like lz4) have made it more effective and much faster. And perhaps most importantly, anything encrypted is going to be incompressible, and particularly in a notebook setting FDE should be the norm, not the exception. Even on many desktops I suspect there's a fair amount of FDE.
Controller-level compression probably will have some continuing use cases, particularly in servers, but it's rapidly become more and more of a niche case, a nice bonus on top of otherwise great performance but absolutely not something to be depended on to be competitive in general.
Kristian Vättö - Wednesday, June 4, 2014 - link
The problem with filesystem compression is that it's still done by the CPU, which is not optimised for compression, so the power draw usually ends up being higher and it might bottleneck some CPU-intensive tasks. Another problem is that none of the mainstream filesystems support it well -- while ZFS is great, it's not something that the average user (or even an enthusiast) would use, since it's not natively supported by the major OSes. It's much easier for an end-user to just have a drive that does it for them.

As for encryption, you are right that software encryption is fully incompressible, but that is why TCG Opal 2.0 is such a big thing (and the SF-2000 series as well as the SF3700 support it). With that, the controller itself will be doing the encryption, so there is no need for another software layer that consumes the CPU and hurts SSD performance in the first place.
zanon - Wednesday, June 4, 2014 - link
>The problem with filesystem compression is that it's still done by the CPU, which is not optimised for compression, so the power draw usually ends up being higher and it might bottleneck some CPU-intensive tasks.
At least for user workloads, it'll take more than just assertions to back this up. In general, modern systems have an abundance of CPU available; it's the cheapest and least used resource. Something like lz4 scales across any number of cores and can easily hit 400 MB/s or higher per core for compression and 1.8GB/s for decompression. While there may be some loads where that would be a limiting factor as opposed to other parts of the system, it seems pretty niche.
>Another problem is that none of the mainstream filesystems support it
This is just wrong. NTFS and HFS+ both have compression (and have for a while). Linux also has plenty of on-the-fly compression options, and of course FreeBSD now has full ZFS integration by default. Apple has been using it by default in many areas since at least 10.6; that was one of the major ways they got install sizes down. In Windows, NTFS compression is right there in the GUI: "Advanced Attributes > Compress".
>As for encryption, you are right that software encryption is fully incompressible but that is why TCG Opal 2.0 is such a big thing (and the SF-2000 series as well as the SF3700 support it). With that the controller itself will be doing the encryption, so there is no need for another software layer that consumes the CPU and hurts SSD performance in the first place.
Yeah, there is. Software encryption is far more flexible (will SSDs support smart cards, network authentication, etc.?), portable, and fully auditable rather than being a black box. And with AES-NI and equivalents, CPU benchmarks might see a minuscule decline, but it's again almost never a remotely limiting factor (users with those extreme niche cases can decide for themselves). In contrast, as Anand's own tests show, SandForce sequential write performance takes a big hit.
The security advantages are well worth using FDE ubiquitously right now in any mobile situation, and it may have use cases even for enthusiast desktops. I stand by compression being a decent bonus to offer on top of competitive universal performance, but only that. If every other player in the game can offer zero compromises, particularly as solid state storage costs continue to decline, thus expanding usage to all media including pictures/songs/video, SandForce would be at a massive disadvantage. I think they recognize, though, that controller-level compression is now merely a niche bonus, not something to build their offering around. They need to be as good everywhere else too, with that being icing on the cake.
Kristian Vättö - Wednesday, June 4, 2014 - link
What you are missing is that the CPU consumes a ton of power. It doesn't matter how scalable the algorithms are, because the CPU is just not efficient at compression compared to a chip that is specifically optimised for it. Sure, that's not a problem for desktops, but bear in mind that the biggest market for SSDs is portable devices, where battery life is a major concern.

The same applies to encryption. TCG Opal 2.0 is software encryption in the sense that it requires encryption software, but the encryption itself is hardware-accelerated. In other words, you can get the same features as with standard software encryption (including features like network authentication), but there is no CPU overhead because the drive will be doing the encryption. Take a look at our Crucial M500 testing with Microsoft eDrive, which is TCG Opal 2.0 compliant.
Furthermore, SandForce is paying a lot of attention to making performance better regardless of data type. They do think they are better even with 100% incompressible data, which is pretty rare (mostly already-compressed media files), but of course we'll have to wait and see.
hpglow - Wednesday, June 4, 2014 - link
I think the compression idea is clever. If the SandForce team evolves it properly, it could continue to work. Very little of my data is compressible -- maybe 64 to 128GB. So as long as they fix the incompressible-data issue on the next controller, it should be decent. There are quite a few users still using HDD boot disks who would never notice the inconsistent performance. Only 1 of my 4 PCs has an SSD boot disk, and it is painful using a PC with a mechanical disk now.