The TeamGroup L5 LITE 3D (480GB) SATA SSD Review: Entry-Level Price With Mainstream Performance
by Billy Tallis on September 20, 2019 9:00 AM EST - Posted in
- SSDs
- SATA
- Silicon Motion
- SM2258
- TeamGroup
AnandTech Storage Bench - The Destroyer
The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test. These AnandTech Storage Bench (ATSB) tests do not involve running the actual applications that generated the workloads, so the scores are relatively insensitive to changes in CPU performance and RAM from our new testbed, but the jump to a newer version of Windows and the newer storage drivers can have an impact.
We quantify performance on this test by reporting the drive's average data throughput, the average latency of the I/O operations, and the total energy used by the drive over the course of the test.
[Interactive charts: Average Data Rate; Average Latency, Average Read Latency, Average Write Latency; 99th Percentile Latency, 99th Percentile Read Latency, 99th Percentile Write Latency; Energy Usage]
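The summary metrics reported above can be derived from a per-I/O trace in a straightforward way. The sketch below is purely illustrative; the record format and field names are assumptions for the example, not AnandTech's actual benchmark tooling:

```python
# Illustrative sketch: computing ATSB-style summary metrics from a
# per-I/O trace. The record format ('bytes', 'latency_s', 'is_read',
# 'energy_j') is an assumption for illustration.

def summarize(ios):
    """ios: list of dicts with 'bytes', 'latency_s', 'is_read', 'energy_j'."""
    total_bytes = sum(io["bytes"] for io in ios)
    total_time = sum(io["latency_s"] for io in ios)  # simplification: serial I/O
    latencies = sorted(io["latency_s"] for io in ios)
    reads = [io["latency_s"] for io in ios if io["is_read"]]
    writes = [io["latency_s"] for io in ios if not io["is_read"]]
    # Nearest-rank style 99th percentile over the sorted latencies.
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    return {
        "avg_data_rate_MBps": total_bytes / total_time / 1e6,
        "avg_latency_ms": 1000 * sum(latencies) / len(latencies),
        "avg_read_latency_ms": 1000 * sum(reads) / len(reads) if reads else 0.0,
        "avg_write_latency_ms": 1000 * sum(writes) / len(writes) if writes else 0.0,
        "p99_latency_ms": 1000 * p99,
        "energy_joules": sum(io["energy_j"] for io in ios),
    }
```

The averages smooth over the whole run, which is why the 99th percentile figures are reported separately: they expose the latency spikes that an average hides.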
Any expectations that the TeamGroup L5 LITE 3D would perform like an entry-level SSD are shattered by the results from The Destroyer. The L5 LITE 3D has about the best overall data rate that can be expected from a TLC SATA drive. The latency scores are generally competitive with other mainstream TLC SATA drives and unmistakably better than the DRAMless Mushkin Source. Even the energy efficiency is good, though not quite able to match the Samsung 860 PRO.
AnandTech Storage Bench - Heavy
Our Heavy storage benchmark is proportionally more write-heavy than The Destroyer, but much shorter overall. The total writes in the Heavy test aren't enough to fill the drive, so performance never drops down to steady state. This test is far more representative of a power user's day to day usage, and is heavily influenced by the drive's peak performance. The Heavy workload test details can be found here. This test is run twice, once on a freshly erased drive and once after filling the drive with sequential writes.
[Interactive charts: Average Data Rate; Average Latency, Average Read Latency, Average Write Latency; 99th Percentile Latency, 99th Percentile Read Latency, 99th Percentile Write Latency; Energy Usage]
On the Heavy test, the L5 LITE 3D starts to show a few weaknesses, particularly in its full-drive performance: latency clearly spikes and overall throughput drops more than it does for most mainstream TLC drives. Even so, the penalty is far smaller than the one suffered by the DRAMless competitor. The energy efficiency doesn't stand out either way.
AnandTech Storage Bench - Light
Our Light storage test has relatively more sequential accesses and lower queue depths than The Destroyer or the Heavy test, and it's by far the shortest test overall. It's based largely on applications that aren't highly dependent on storage performance, so this is a test more of application launch times and file load times. This test can be seen as the sum of all the little delays in daily usage, but with the idle times trimmed to 25ms it takes less than half an hour to run. Details of the Light test can be found here. As with the ATSB Heavy test, this test is run with the drive both freshly erased and empty, and after filling the drive with sequential writes.
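The idle-time trimming mentioned above (capping idle gaps at 25 ms so the replay finishes quickly while still leaving the drive brief windows for background work) can be sketched as a single pass over the trace timestamps. This is an illustrative sketch under assumed trace semantics, not the actual replay tool:

```python
# Illustrative sketch of capping inter-I/O idle gaps when replaying a
# trace: any gap longer than IDLE_CAP_S is shortened to IDLE_CAP_S, so
# short idle periods (which allow background garbage collection) are
# preserved without the full wall-clock duration of the original trace.

IDLE_CAP_S = 0.025  # 25 ms cap, as described in the test methodology

def trim_idle(issue_times):
    """Rewrite a sorted list of I/O issue timestamps (seconds) so that
    no gap between consecutive I/Os exceeds IDLE_CAP_S."""
    trimmed = []
    t = 0.0
    prev = None
    for ts in issue_times:
        gap = 0.0 if prev is None else min(ts - prev, IDLE_CAP_S)
        t += gap
        trimmed.append(t)
        prev = ts
    return trimmed
```

For example, a one-second pause between two I/Os would be replayed as a 25 ms pause, while any gap already under 25 ms is kept unchanged.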
[Interactive charts: Average Data Rate; Average Latency, Average Read Latency, Average Write Latency; 99th Percentile Latency, 99th Percentile Read Latency, 99th Percentile Write Latency; Energy Usage]
The Team L5 LITE 3D has basically the same overall performance on the Light test as drives like the Crucial MX500. A handful of the latency scores are a bit on the high side, but don't really stand out—the Seagate BarraCuda that uses the old Phison S10 controller with current 3D TLC has more trouble on the latency front, and of course the DRAMless Mushkin Source has by far the worst full-drive behavior. There is a bit of room for improvement on the L5 LITE 3D's energy efficiency, since both the Mushkin Source and Crucial MX500 are clearly better for the empty-drive test runs. The Team drive's efficiency isn't anything to complain about, though.
42 Comments
flyingpants265 - Friday, September 20, 2019 - link
Why promote this drive without mentioning anything about the failure rates? Some Team Group SSDs have 27% 1-star reviews on Newegg. That's MUCH higher than other manufacturers. That's not worth saving $5 at all... Is Anandtech really that tone-deaf now?
-I would not recommend this drive to others -- 5 months, dead.
-Not safe for keep your data. Highly recommend not to store any important data on it
-DO NOT BUY THIS SSD! Total lack of support for defective products! Took days to reply after TWO requests for support, and then I am expected to pay to ship their defective product back when it never worked!?
-Failed and lost all data after just 6 months.
...
Ryan Smith - Friday, September 20, 2019 - link
"Is Anandtech really that tone-deaf now?"

Definitely not. However there's not much we can say on the subject with any degree of authority. Obviously our test drive hasn't failed, and the drive has survived The Destroyer (which tends to kill obviously faulty drives very early). But that's the limit to what we have data for.
Otherwise, customer reviews are a bit tricky. They're a biased sample, as very happy and very unhappy people tend to self-report the most. Which doesn't mean what you state is untrue, but it's not something we can corroborate.
* We've killed a number of SSDs over the years. I don't immediately recall any of them being Team Group
eastcoast_pete - Friday, September 20, 2019 - link
Ryan, I appreciate your response. Question: which SSDs have given up the ghost when challenged by "The Destroyer"? Any chance you can name names? Might be interesting for some of us, even in historic context. Thanks!

keyserr - Friday, September 20, 2019 - link
Yes, anecdotes are interesting. In an ideal world we would have 1000 drives of each model put through their paces. We don't. It's a lesser-known brand. It wouldn't make too much sense if they made bad drives in the long term.
Billy Tallis - Friday, September 20, 2019 - link
I don't usually keep track of which test a drive was running when it failed. The Destroyer is by far the longest test in our suite so it catches the blame for a lot of the failures, but sometimes a drive checks out when it's secure erased or even when it's hot-swapped.

Which brands have experienced an SSD failure during testing is more determined by how many of their drives I test than by their failure rate. All the major brands have contributed to my SSD graveyard at some point: Crucial, Samsung, Intel, Toshiba, SanDisk.
eastcoast_pete - Friday, September 20, 2019 - link
Billy, I appreciate the reply, but would really like to encourage you and your fellow reviewers to "name names". An SSD going kaplonk when stressed is exactly the kind of information that I really want to know. I know that such an occurrence might not be typical for that model, but if the review unit provided by a manufacturer gives out during testing, it doesn't bode well for regular buyers like me.

Death666Angel - Friday, September 20, 2019 - link
You can read every article; I remember a lot of them discussing the death of a sample (Samsung comes to mind). But it really isn't indicative of anything: sample size is crap, early production samples (hardware), early production samples (software). Most SSDs come with 3 years of warranty. Just buy from a reputable retailer, pick a brand that actually honors its warranty, and make sure to back up your data. Then you're fine. If you don't follow those rules, even the very limited data Billy could give you won't help you out in any way.

eastcoast_pete - Friday, September 20, 2019 - link
To add: I don't just mean the manufacturers' names, but especially the exact model name, revision and capacity tested. Clearly, a major manufacturer like Samsung or Crucial has a higher likelihood of the occasional bad apple, just due to the sheer number of drives they make. But even the best big player produces the occasional stinker, and I'd like to know which one it is, so I can avoid it.

Kristian Vättö - Saturday, September 21, 2019 - link
One test sample isn't sufficient to conclude that a certain model is doomed.

bananaforscale - Saturday, September 21, 2019 - link
This. One data point isn't a trend. Hell, several data points aren't a trend if they aren't representative of the whole *and you don't know if they are*.