We bought a 2TB SanDisk SSD back in 2024 for around $95 from Best Buy. Today a 1TB SanDisk SSD is $166 (the cheapest I found; it goes for $199 at Walmart). The market is forcing people to become renters rather than buyers, and there is no force countering that. It's a market failure that people will study 10-15 years from now.
EDIT: The same 2TB SSD is now $329.99 at Best Buy.
It's not a market failure, it's just supply and demand. There are many computer components competing for the same resources (fabs, wafers). Demand for GPUs, RAM etc. has increased a lot due to AI, but supply is still the same due to new fabs being huge investments that take years to build. Of course the price goes up.
The demand side is the world's "investors" buying up a product without producing profit. This results in locking out companies operating without such "investors" from accessing the product and using it to produce profit. This is a failure.
You were not a renter in 2021, when NVMe drives cost the same per TB as they do now; stuff is just becoming more expensive due to market shortages.
The last half year ate up three to four years of earlier price declines, that's about it.
As long as this plateaus here, as prices have for the last four months, it's just the new equilibrium, from which things have a chance to get better again. I wouldn't be all doom and gloom about personal computing yet.
Ugh. I picked up a 16TB hard disk last year for about $250. I went looking for another one last month and the same disk is about $500. Painful.
Meanwhile I'm still dreaming of an affordable consumer 32TB, or even 16TB, portable SSD. Innovation and the consumer market are going backwards.
Funny thing is that one of the best you can get is the Crucial (Micron) 8TB one, but even that one keeps getting more expensive. I have a feeling it will be gone completely soon.
Enterprise NVMe on the high end is now starting to ship batches at $1000/TB with existing stock around $500/TB. No consumer is going to pay that.
But if you're buying a $500k GPU server, putting 100TB of NVMe in there for $50-100k is justifiable.
There was once a 2.5" SSD Mushkin Source 16TB SATA drive. At its cheapest it was ~1700 USD (or 1500 EUR). That was mid 2023 (like 3 years ago!).
Nowadays it feels like that time and price range is decades away in the future. I was hoping I could store more data on modern tech like SSDs in the future, not less.
The prices aren't going down for large consumer drives because the market is so small, and because the AI DC market is swallowing up everything. There's little demand from your average consumer to have 30TB of storage, let alone specifically SSDs. The average user doesn't have that much data, and if they do a HDD is fine for any practical purpose.
Despite the recent AI bubble you can still buy HDDs in the tens of TBs for a few hundred EUR/USD and you still don't see them in every computer. How high could the 30TB SSD demand be to justify the kind of volumes that drive price down?
In the DC it's the opposite: large and efficient drives are a must to support all those fancy workloads while driving down space, power, and cooling needs.
A few months ago I finished building a new media server based on UnRaid. I populated it with WD 26TB drives. At the time they were about $400 each (steep, but a decent capacity-per-dollar buy). Now they are nearly $1,000 on Amazon, roughly 2.5x what I paid. I just hope I don't have a drive failure.
Regarding the new Micron SSD: I wonder how they keep it cool? I don't see coolant ports on it, so they must strap a heatsink on.
The product brief says maximum 30W and it looks like the whole enclosure is a heatsink, even has ribs on the back. The expected operating temp is 50C but it's probably rated to operate at higher than that.
P.S. I had to shuck 20TB WD drives that cost 350EUR on sale (now at 400EUR). 26TB drives are now ~700EUR. These external drives were the cheap option. Standalone drives usually cost more.
> The average user doesn't have that much data
The average user consumes that much quite regularly. They've been taught to stream it off of someone else's computer, mostly so that the next time they stream it they can be compelled to pay for it again. It's fun going back to dumb terminals.
I look forward to having my favourite hyperscaler grant me 1,000 "premium" IOPS per VM on this monster.
IOPS? This thing has lower IOPS than an old SATA SSD (~40k, it's QLC). I think it is meant for sequential operations only.
Note how that is still well in excess of what e.g. AWS EBS GP3 volumes offer (or at least used to, though even now their "80K IOPS" is measured with 64 KiB random transfers, whereas Micron measured that 42K IOPS with 4 KiB random transfers), which is what the person above is gesturing towards.
The same EBS GP3 used to be specified with 16K max IOPS at 16 KiB random transfers until pretty recently.
What’s the intended block size of these things? I thought 4KB was normal, but that doesn’t make sense at 40K IOPS, and doesn’t align with the benchmarks I’ve seen.
Also: price is expected to be $80k. I suppose density is the selling point here, not speed.
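For a rough sense of scale (taking the ~40K figure at face value as 4 KiB random I/O, which is my assumption):

    40,000 IOPS × 4 KiB ≈ 160 MiB/s of random transfers
    vs. ~13 GB/s sequential → random is only ~1-2% of sequential throughput

which fits the reading that density, not random performance, is the point.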
I checked the specs here: https://www.micron.com/content/dam/micron/global/public/prod...
The interface looks equivalent to 4x PCIe 5.0.
That is pretty awful write performance. Does anyone know more about this? I assume all of these hyperdense SSDs suffer from the same drawback. Also, I heard that the E3.L interface can support up to 16 lanes, but there are no practical commercial products at this point.
A more convenient (and dare I say, faster) tape drive replacement for backups? They do make a good point: it would take 10x 24TB drives working in the worst RAID configuration to even come close to these speeds.
65 hours to restore a full backup
Yes, but with all that data, how much heavier does it get?
2.231705*10^-13 gram
:)
A single speck of dust could throw off that measurement (~ 1.6 x 10^-7 grams)
Extremely dense QLC chips. Still, it's 2700-3000 MB/s, i.e. ~3 GB/s.
What should worry you way more is the DWPD, which is abysmal... at first glance. But if you punch it into the calculator, it would still take ages to wear the drive out.
https://wintelguy.com/dwpd-tbw-gbday-calc.pl
DWPD was the boogeyman 10 years ago. Everybody worried about it.
Now, nobody cares. I have over 500 NVMe drives in our deployment, and the drive deaths are not due to wear.
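A rough version of that endurance math as a Python sketch (capacity and 1 DWPD from the product brief; the 5-year rating period is my assumption):

    capacity_tb = 245.76     # drive capacity, from the product brief
    dwpd = 1.0               # rated drive writes per day
    warranty_years = 5       # assumed rating period (my assumption)

    tbw = capacity_tb * dwpd * 365 * warranty_years
    print(f"Rated endurance: ~{tbw:,.0f} TB written (~{tbw / 1000:.0f} PB)")

    # At ~3 GB/s sustained sequential writes you physically cannot exceed
    # about one full drive write per day anyway:
    max_tb_per_day = 3e9 * 86400 / 1e12
    print(f"Max writable per day at 3 GB/s: ~{max_tb_per_day:.0f} TB")

That works out to roughly 450 PB of rated writes, while the 3 GB/s interface caps you at about 259 TB per day, hence "ages to wear it out".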
The U.2 form factor is slightly larger than a 2.5" drive. I can imagine the entire space inside taken up by flash chips. I can't imagine what cooling scheme they employ for the chips in the middle.
The U.2 form factor is a 2.5" drive, not larger than it.
"U.2" does not change anything in the mechanical characteristics of a 2.5" drive, it just replaces the SATA or SAS electrical interface with a NVMe electrical interface.
You can mount a U.2 drive in any location intended for 2.5" drives, as long as its height can fit there.
However, 2.5" drives come in various heights. Many laptops and mini-PCs that accept 2.5" drives accept only some of the smaller heights and they do not accept the greater heights, like 15 mm, which are typical for enterprise SSDs and HDDs, regardless whether they have a NVMe, i.e. U.2, or a SAS interface or a SATA interface.
This new high-capacity U.2 SSD has the standard 15 mm height of the 2.5" form factor.
Apparently TDP is 30 watts¹, according to the product brief. I would imagine it's a single PCB with flash chips on both sides then thermally bonded to the aluminum chassis. That should keep all chips at approximately the same temperature. On its own it could be easily air cooled, but with 24 in a 2U chassis you'll be having some decently hefty forced air over the drives.
1. For comparison, an HDD usually comes in around ~10 watts
It's not just a single PCB, but a sandwich of several.
The 4th Earl of Sandwich disagrees.
Given the cost of 24 of them, you can probably buy solid silver heatsinks watercooled with tears of sysadmins.
The tears of sysadmins are fairly cheap though.
I was going to say blood of virgins, but tears are probably better heat conductors.
Hey! You leave me out of your twisted fantasy!
I just want....I just want hard drive prices to come back down. *sniffle*
The transfer rates limit how much each chip can be active at any given time, so a heat-aware writing allocator can pick the least active blocks for the next writes and distribute the heat accordingly. Even if it’s not heat-aware, the tendency will be that the writes will be distributed over as many chips as there are, and so will be the heat generated.
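A toy sketch of that idea in Python (names and structure are hypothetical, not anything a real controller exposes): keep a per-die activity score and route each write to the least-busy die.

    import heapq

    class HeatAwareAllocator:
        """Toy write placement: route each write to the die with the lowest
        accumulated activity score, a crude stand-in for temperature."""

        def __init__(self, num_dies):
            # min-heap of (activity_score, die_id)
            self.dies = [(0.0, d) for d in range(num_dies)]
            heapq.heapify(self.dies)

        def pick_die(self, write_bytes):
            score, die = heapq.heappop(self.dies)
            # the die just written to counts as "hotter" for subsequent picks
            heapq.heappush(self.dies, (score + write_bytes, die))
            return die

    alloc = HeatAwareAllocator(num_dies=32)
    print([alloc.pick_die(4096) for _ in range(5)])   # -> [0, 1, 2, 3, 4]

Even without the scoring, plain striping across dies already spreads the heat, which is the point above; the score just biases new writes away from recently busy (hot) dies.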
Now, I would LOVE to see this much SLC flash on a direct to bus attachment setting.
Over the past few years the main improvement in SSD capacity has been due to them stacking an ever-increasing number of NAND layers in a single chip, with state-of-the-art SSDs already having over 300 layers.
No need to worry about cooling when each layer in the sandwich is only a fraction of a micrometer thick!
The U.2 form factor indeed evolved from chassis designs that were originally 2.5" drives. It's now becoming somewhat obsolete, with new designs using things like E1.S, E1.L (exactly the correct height to slot into a 1U server; it's like a slightly wider M.2, but meant to be insertable and removable), E3.S and E3.L.
Note that the 245TB model is an E3.L; the half-capacity version comes in the smaller E3.S form factor.
https://americas.kioxia.com/en-ca/business/ssd/solution/edsf...
https://www.exxactcorp.com/blog/storage/edsff-e1s-e1l-e3s-e3...
https://www.simms.co.uk/tech-talk/e1s-e1l-the-new-server-for...
Access Denied
You don't have permission to access
"http://investors.micron.com/news-releases/news-release-detai..." on this server.
High security on this press release.
Your IP address might be on a blocklist.
even my AWS IP is let in without trouble
works for me. akamai doesn't like you
No problems here ...
The press release is missing the key specification — how many Libraries of Congress fit on this thing?
Usual estimate is 10TB of compressed text-only. So 24x LoC would fit on the drive.
“For AI workloads: The 245TB Micron 6600 ION provided up to 84 times better energy efficiency”
How big of a deal is this part in relation to the initial upfront costs? I’m not privy to the cost of power for SSD
A big consideration for efficiency and TCO calculations is the number of servers required to house the drives. NVMe drives tend not to be in external JBOF enclosures.
Fewer servers means fewer cpus, less RAM, fewer fans, and maybe fewer switches.
It means you don't have as much to cool.
Getting rid of 30 watts of heat is trivial compared to, say, 300 (I don't quite know how to read that ratio, since a 2.5kW SSD seems a little high to me).
Given that 2.91TB SSDs are a common enterprise size, perhaps they're saying the 1x245TB SSD uses 1/84 the power of 84 2.91TB SSDs ;p
With modern CPUs hitting 400W, it's already a problem to fill a rack top to bottom with servers like you could before: too much heat to dissipate and transfer, too much power to provide in the first place.
Just imagine something like a 2-socket EPYC 9565 in at least a 2U machine: with 10 servers x 2U x 2 CPUs you would have 8kW in the processors alone, and you haven't even filled half of the 42U rack.
https://www.amd.com/en/products/processors/server/epyc/9005-...
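Spelling that out: 10 servers × 2 sockets × ~400 W ≈ 8 kW of CPU power, in only 10 × 2U = 20U of a 42U rack.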
(Im)patiently waiting for this AI-generated memory crisis to pass (or the bubble to pop) so SSD prices can crash back down again. Been dreaming of replacing my RAID6 HDD setup with a RAID1 of SSDs and a hot spare.
What is it with all pictures of new devices needing to come with this black background?
Dark mode.
How much is it?
They haven't released details but I was able to find a Solidigm D5-P5336 122.88TB drive for around 40,000 USD, as a guideline. So ... more than that.
Okay, so that 122TB drive costs about $330/TB.
I haven't bought a hard drive or an SSD in at least a decade (I get stuff for free, basically) but…that seems a bit high, right?
Seems like well-rated consumer-level SSDs cost around $250 for 1TB right now.
What accounts for the premium price/TB of these extremely high capacity enterprise-targeted drives?
> What accounts for the premium price/TB of these extremely high capacity enterprise-targeted drives?
The extremely high capacity and the enterprise targeting.
> What accounts for the premium price/TB of these extremely high capacity enterprise-targeted drives?
Spare capacity, mostly. That’s why they have higher endurance. If you want to double the endurance of a given drive, tell the controller to allocate twice as many spare blocks and report less capacity than you would otherwise.
In this case, you are also paying a premium for the PCIe attachment instead of SAS, and a lot for price elasticity. You see, with drives like these you slash space and energy consumption in relation to HDDs by a large number, and that allows you to pay a premium for the device, because, at the end of its lifetime, it’ll have more than covered the cost difference in saved space and energy.
> What accounts for the premium price/TB of these extremely high capacity enterprise-targeted drives?
The word "enterprise".
I fondly remember when I could buy a well-rated consumer-level SSD for a lot less per TB...
I paid $300 each for my last two SSDs, 4 TB Samsung 990 Pros.
They’re currently selling for $942.72 on Amazon.
Density, power efficiency, write endurance, sustained write speeds under continuous load, power-loss protection.
And out of band management, hot plug capable form factors, and a bunch of other things described in the OCP NVMe SSD spec.
https://www.opencompute.org/documents/datacenter-nvme-ssd-sp...
I was quoted $18K for a 3.7 TB Dell NVMe disk the other day. I'm gonna guess these drives are literally a quarter million each
> I was quoted $18K for a 3.7 TB Dell NVMe disk
surely you don't actually think that's realistic pricing?
You're getting ripped off. NVMe SSDs are expensive, but not THAT expensive. A 4TB drive should be around $1k even with some "enterprise" markup.
$200/TB is reasonable. $300 if it is VERY fast. That is just robbery.
Apparently $80k, not that terrible in comparison
4-5x what it would have been if not for the demand from AI. According to my rough calculation, 4-8TB SSD drives were going to reach price parity with HDDs this year.
Likely $90k USD MSRP with a wholesale price around half that.
Dell is getting first dibs.
If you have to ask...
I don't think he wants to buy one
‘Contact us’
QLC NAND
The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 SDWPD", which is almost meaningless since the drive takes roughly that long to write at full speed.
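The arithmetic behind that figure: 245,760 GB ÷ 3 GB/s = 81,920 s ≈ 22.76 hours, i.e. roughly 22h45m for one full drive write, so 1 DWPD is about all the interface physically allows anyway.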
At scale, 1.9 times more energy is required for an HDD deployment
...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.
On the other hand, 60TB of SLC for the same price would probably be a great deal.
Perhaps their usual buyers just care less about retention?
Those drives aren't going to be used for cold storage, and it is basically a guarantee that there will be checksums and some form of redundancy. Who cares whether the data is retained for 10 or for 15 years after writing when you can do a low-priority background scrub of the entire drive once a month, and when there are already mechanisms in place to account for full-drive failure?
QLC retention is reported to be around 1 year in an unpowered state. I would assume that the drive does background refresh, though. No idea what effect that has on total drive lifetime. It is still mean that if you use it for cold storage it has to stay powered.
Why is it mean? Why would you want to use a technology that is unsuitable for cold storage for cold storage? You won't even get the power / IOPS benefit if all it does is an infrequent replication of data and is then switched off.
What kind of usage do you envision for 245TB drive with read speed of 3GB/sec?
I believe it has read speeds of 13GB/s, not 3 (unless you are referring to an equivalent array of 10 HDD). It will almost certainly be used to store training datasets and model weights. Which I assume are good use cases for fast sequential reads.
You can trivially modulate flash endurance by tweaking the reported space - the less space you report, the more spares you have.
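A back-of-the-envelope Python sketch of that trade-off (raw capacity, P/E rating, rating period, and write-amplification numbers are all hypothetical, just to show the shape of the relationship):

    raw_tb    = 256.0   # hypothetical raw NAND on the board
    pe_cycles = 3000    # hypothetical QLC program/erase rating
    years     = 5       # hypothetical rating period

    def rough_dwpd(exported_tb, waf):
        """Drive writes per day the NAND could sustain over `years`, given
        the exported capacity and an assumed write amplification factor."""
        total_host_writes_tb = raw_tb * pe_cycles / waf
        return total_host_writes_tb / (exported_tb * 365 * years)

    # Exporting less capacity leaves more spare area, which (all else equal)
    # lowers write amplification and shrinks the denominator -> higher DWPD.
    print(round(rough_dwpd(exported_tb=245.76, waf=4.0), 2))  # ~0.43
    print(round(rough_dwpd(exported_tb=200.0,  waf=2.0), 2))  # ~1.05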
Can someone who knows explain what the benefit is of having all that data on one SSD instead of splitting it up across hundreds of individual drives? Is the single-SSD benefit more performance, or does it really turn out to be cheaper than hundreds of individual drives?
It's about density in a datacenter. With this you have 1PB in 4 drives, fitting in 1U of rack space, which is just incredible. Also, these drives don't use regular SATA or SAS, they use PCIe, so they are also quite fast in comparison. Density has a power-efficiency aspect as well, both in having fewer drives and in requiring fewer servers to put the drives into.
A 42U rack filled with 1U servers with 8 drives each will hold 84PB of data. It feels like only a few months ago a rack with 1PB of storage was awesome. Not anymore.
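The arithmetic: 42 × 1U servers × 8 drives × ~0.25 PB each ≈ 84 PB per rack (about 82.6 PB using the exact 245.76 TB capacity).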
For when you need to store a copy of the internet, and have been granted immunity for your copy of Anna's Archive.
Power consumption is the single biggest data center cost. This thing takes only 30W. An average 4 TB SSD pulls 6W, so that's a 12x improvement.
Furthermore, 15-60x density improvement reduces server and infrastructure costs because it requires vastly less of everything per EiB.
Cooling and power are the limiting factors of density.
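The 12x works out per terabyte: 6 W ÷ 4 TB = 1.5 W/TB for the average SSD, versus 30 W ÷ 245.76 TB ≈ 0.12 W/TB for this drive, i.e. roughly 12x less power per unit of capacity.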
You’re actually right, it’s just that datacenters like density and will gladly split your data onto hundreds of these little amazing magical bits of technology rather than hundreds of less magical ones in the same physical volume.
Higher density, less power. Those are the bottlenecks in current and new data centers that are built out.
So it's not exactly about cost savings, but having the option to do more, faster.
Also, you get much higher bandwidth density out of this vs HDDs, which is great for AI training.
They’ll still have hundreds of individual drives. Of these drives.
And thanks to the density, they won’t need as many racks as they used to.
Probably for a similar reason why I would rather buy a single 4TB SSD than forty 100GB SSDs.
DENSITY. Hyperscalers want to store as much data per rack and per data center as possible. They will eventually have hundreds of thousands of these drives.
Want, but then you need two for redundancy... then a spare for recovery... why not three in RAID or ZFS... imagine the resilver time on this. It's hit the limit of data surety, surely.
The word AI can be safely deleted wherever it occurs in this press release.
Very cool bit of tech.
Would like to see what the internals of this look like, how many flash packages and PCBs are in that tiny chassis?
https://web.archive.org/web/20260505162256/https://investors...
Rather silly of them to hide investor relations material behind an anonymity-hostile CDN.
PDF for those who want it. https://web.archive.org/web/20260506084407if_/https://invest...
God damn. I know somebody that became a multi-millionaire from web hosting in the 2000s and his entire data center back then could have been replaced with just one of these SSDs.
Cost? Durability? IOPS? Do we know?
Data centers are winning.