I have 384TB of ECC DDR4 across two blades with 4 CPUs for a combined core count of 96.
-
@adrianww @SecurityWriter You mean just before? When it bursts it'll be worthless due to liquidation of AI companies flooding the market.
-
@SecurityWriter I've noticed the price of storage going up ever so slightly
@dps910 https://wccftech.com/western-digital-has-no-more-hdd-capacity-left-out/
Expect more increases soon...
-
@agowa338 @bob_zim @SecurityWriter which is pretty unlikely for a SAN - if he said 48 TB or something it would be possible, but unless you have very, very, very specialized boards I don't think you get up to 96 TB per socket on DDR4 in any case I know about
@agowa338 @bob_zim @SecurityWriter that being said, things like solid state SANs do have some highly specialized hardware setups, so we might be totally off
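A rough sanity check on that per-socket figure (a sketch; the channel count and module sizes below are assumed typical DDR4 server values, not anything stated in the thread):

    # Rough plausibility check for "384 TB of DDR4 across 4 CPUs".
    # Assumed values: a big DDR4 server socket has 8 memory channels
    # with 2 DIMMs per channel, and the largest DDR4 modules in
    # common use are 128 GB (256 GB 3DS LRDIMMs exist but are rare).

    claimed_total_tb = 384
    sockets = 4

    dimms_per_socket = 8 * 2      # channels * DIMMs per channel (assumed)
    max_dimm_gb = 256             # generous upper bound per module

    max_per_socket_tb = dimms_per_socket * max_dimm_gb / 1024
    claimed_per_socket_tb = claimed_total_tb / sockets

    print(f"claimed per socket: {claimed_per_socket_tb:.0f} TB")  # 96 TB
    print(f"plausible ceiling:  {max_per_socket_tb:.0f} TB")      # 4 TB
    print(f"claim is ~{claimed_per_socket_tb / max_per_socket_tb:.0f}x over the ceiling")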
-
@agowa338 @bob_zim @SecurityWriter that being said, things like solid state SANs do have some highly specialized hardware setups, so we might be totally off
@cursedsql @bob_zim @SecurityWriter
Hence why I asked

-
@adrianww @SecurityWriter You mean just before? When it bursts it'll be worthless due to liquidation of AI companies flooding the market.
@dalias @adrianww @SecurityWriter
The moment the AI bubble bursts, I will buy me some second-hand Nvidia GPUs so I can try out Vulkan raytracing
-
You could offer the box and RAM to the AI bandits and ask in exchange for a cease and desist of operations... doing humanity a favour sounds like a good thing?
-
@cursedsql @bob_zim @SecurityWriter
Hence why I asked

@agowa338 @cursedsql @bob_zim @SecurityWriter
I also would lean towards it being GB, although 384 GB does seem quite modest for what I assume is quite a high performance SAN, given it's all solid state.
I once worked on a mid range combined NAS/SAN head that topped out at 1TB for the high-end model. That wasn't just connected to the CPUs; it was also in caches and buffers for other chips in the data path.
That was a few years ago, and I can imagine a high end system might have a lot more, but 384TB does sound excessive, especially if there's only 192 SSDs hanging off it. It might be possible to load the entire array into RAM in that case.
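A quick back-of-the-envelope on the "load the entire array into RAM" idea (a sketch; the 2 TB per-drive size is an assumption, not something the original post states):

    # Could 384 TB of RAM actually back the whole array?
    # Assumption (not stated in the original post): 192 SSDs at 2 TB each.

    ram_tb = 384
    ssd_count = 192
    ssd_size_tb = 2          # assumed per-drive capacity

    raw_array_tb = ssd_count * ssd_size_tb
    print(f"raw array capacity: {raw_array_tb} TB")            # 384 TB
    print(f"RAM / raw capacity: {ram_tb / raw_array_tb:.0%}")  # 100%
    # Usable capacity after RAID/erasure coding would be lower still,
    # so the claimed RAM would cover the entire array with room to spare.
-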
I have 384TB of ECC DDR4 across two blades with 4 CPUs for a combined core count of 96.
It powers a fully populated 192 disk solid state SAN.
I was told it was old and in need of replacing, but apparently now it’s worth more than the GDP of the UK.
Can’t afford to run it (or hear my thoughts when in the vicinity)… but I can sit atop it like a fucking dragon.
And I will.
@SecurityWriter I wonder if the hardware decommissioning plan of the company I left last year (they were bought and being shut down) is still to physically destroy any physical storage components.
It wouldn't surprise me if some of those ended up, or will end up, on the second-hand market.
-
@agowa338 @cursedsql @bob_zim @SecurityWriter
I also would lean towards it being GB, although 384 GB does seem quite modest for what I assume is quite a high performance SAN, given it's all solid state.
I once worked on a mid range combined NAS/SAN head that topped out at 1TB for the high-end model. That wasn't just connected to the CPUs; it was also in caches and buffers for other chips in the data path.
That was a few years ago, and I can imagine a high end system might have a lot more, but 384TB does sound excessive, especially if there's only 192 SSDs hanging off it. It might be possible to load the entire array into RAM in that case.
@GerardThornley @agowa338 @bob_zim @SecurityWriter yes, that's why I figured it was still credible, because anyone who has a 384 TB solid state SAN might be rich enough to back it entirely in RAM
-
@GerardThornley @agowa338 @bob_zim @SecurityWriter yes, that's why I figured it was still credible, because anyone who has a 384 TB solid state SAN might be rich enough to back it entirely in RAM
@GerardThornley @agowa338 @bob_zim @SecurityWriter also if they were 8 TB instead of 2 TB it would just be like a huge working set
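The same arithmetic for the two drive sizes mentioned above, as a sketch (192 drives and 384 TB of RAM assumed throughout):

    # How much of the array 384 TB of RAM could hold at each drive size.
    ram_tb = 384
    drives = 192

    for drive_tb in (2, 8):        # sizes mentioned in the thread
        array_tb = drives * drive_tb
        print(f"{drive_tb} TB drives -> {array_tb} TB array, "
              f"RAM covers {ram_tb / array_tb:.0%}")
    # 2 TB drives -> 384 TB array, RAM covers 100%
    # 8 TB drives -> 1536 TB array, RAM covers 25% (a very large working set)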
-
I have 384TB of ECC DDR4 across two blades with 4 CPUs for a combined core count of 96.
It powers a fully populated 192 disk solid state SAN.
I was told it was old and in need of replacing, but apparently now it’s worth more than the GDP of the UK.
Can’t afford to run it (or hear my thoughts when in the vicinity)… but I can sit atop it like a fucking dragon.
And I will.
@SecurityWriter@infosec.exchange I'm imagining the dragon hoard as a pile of equipment that refuses to be thrown out. Who am I kidding, that was my office before we started having kids.
-
@GerardThornley @agowa338 @bob_zim @SecurityWriter also if they were 8 TB instead of 2 TB it would just be like a huge working set
@cursedsql @agowa338 @bob_zim @SecurityWriter I don't know what's typical for these things with solid state, but with spinning rust (and a few years ago) large arrays typically didn't use drives much bigger than about 600GB. The preference would be for more drives, rather than larger. The reason for that was to do with failure rates, rebuild times and bandwidth.
The maths might have changed with the technology, but I'd suggest that if you're using SSDs then your focus is probably response time and bandwidth rather than storage density, so I'd expect smaller rather than larger drives.
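A toy illustration of the rebuild-time part of that argument (every number below is an illustrative assumption, not a figure from the thread): rebuild time scales roughly with drive capacity divided by the sustained rate the rebuild can run at.

    # Toy model: why many small drives used to beat a few large ones.
    # Rebuild time ~ drive capacity / sustained rebuild rate.
    # Assumed rates, illustrative only: ~150 MB/s for a 10k HDD,
    # ~1 GB/s sustained for an enterprise SSD.

    def rebuild_hours(capacity_gb: float, rate_mb_s: float) -> float:
        return capacity_gb * 1024 / rate_mb_s / 3600

    for label, capacity_gb, rate_mb_s in [
        ("600 GB HDD", 600, 150),
        ("8 TB HDD", 8000, 150),
        ("2 TB SSD", 2000, 1000),
    ]:
        print(f"{label}: ~{rebuild_hours(capacity_gb, rate_mb_s):.1f} h rebuild")
    # 600 GB HDD: ~1.1 h, 8 TB HDD: ~15.2 h, 2 TB SSD: ~0.6 h
-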
Well he only said "DDR4", not that it is used as the system's memory. And PCIe add-on cards for ramdisks exist, sooo
@agowa338 Cards like that exist, but they don’t hold thousands of DIMMs.
-
@cursedsql @agowa338 @bob_zim @SecurityWriter I don't know what's typical for these things with solid state, but with spinning rust (and a few years ago) large arrays typically didn't use drives much bigger than about 600GB. The preference would be for more drives, rather than larger. The reason for that was to do with failure rates, rebuild times and bandwidth.
The maths might have changed with the technology, but I'd suggest that if you're using SSDs then your focus is probably response time and bandwidth rather than storage density, so I'd expect smaller rather than larger drives.
@GerardThornley @cursedsql @bob_zim @SecurityWriter
Or you want to place it in an environment where it has to deal with heavy vibrations. Like on a moving trolley or in a vehicle or ... there are multiple reasons for this. It may even just be because you need high random IO speeds...
And the sizing also depends on what you're using it for. Like e.g. if you get your data into the system in infrequent bursts but at multiple TB/s and you have to cache it until it is synced even to SSDs, well
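A rough sketch of that burst-buffering argument (the rates and durations below are made-up illustrative numbers, not anything from the thread): if data arrives faster than the SSD tier can absorb it, RAM has to hold the difference for the length of the burst.

    # How much RAM a burst buffer needs if ingest outruns the SSD tier.
    # All figures are illustrative assumptions.

    ingest_tb_s = 2.0        # burst arrival rate, TB/s
    ssd_drain_tb_s = 0.5     # rate the SSD tier can absorb, TB/s
    burst_seconds = 60       # length of one burst

    buffer_tb = (ingest_tb_s - ssd_drain_tb_s) * burst_seconds
    print(f"RAM needed to ride out the burst: {buffer_tb:.0f} TB")  # 90 TB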
-
@GerardThornley @cursedsql @bob_zim @SecurityWriter
Or you want to place it in an environment where it has to deal with heavy vibrations. Like on a moving trolley or in a vehicle or ... there are multiple reasons for this. It may even just be because you need high random IO speeds...
And the sizing also depends on what you're using it for. Like e.g. if you get your data into the system in infrequent bursts but at multiple TB/s and you have to cache it until it is synced even to SSDs, well
@GerardThornley @cursedsql @bob_zim @SecurityWriter
(The latter was an example from scientific environments. I think it was CERN but I'm not sure...)
-
@agowa338 Cards like that exist, but they don’t hold thousands of DIMMs.
@bob_zim But PCIe lane splitters and extenders also exist. And I don't know what the highest achievable density of these cards currently is.
So far I have only had one old one in my hands and seen them on slides in class during my job training about 10 years ago (they were mentioned as accelerator cards primarily used for things like MS Dynamics and SAP databases)...
-
@GerardThornley @cursedsql @bob_zim @SecurityWriter
Or you want to place it in an environment where it has to deal with heavy vibrations. Like on a moving trolley or in a vehicle or ... there are multiple reasons for this. It may even just be because you need high random IO speeds...
And the sizing also depends on what you're using it for. Like e.g. if you get your data into the system in infrequent bursts but at multiple TB/s and you have to cache it until it is synced even to SSDs, well
@agowa338 @cursedsql @bob_zim @SecurityWriter Yep, those are also possibilities. I described what I think is most probable given the information available and scenarios I've seen, but yeah, there are reasons it might be a less typical setup, or my knowledge might be out of date.
-
@GerardThornley @cursedsql @bob_zim @SecurityWriter
(The latter was an example from scientific environments. I think it was CERN but I'm not sure...)
@agowa338 @cursedsql @bob_zim @SecurityWriter Yeah, that sounds pretty plausible for things like the LHC experiments.
-
@agowa338 @cursedsql @bob_zim @SecurityWriter Yeah, that sounds pretty plausible for things like the LHC experiments.
@agowa338 @cursedsql @bob_zim @SecurityWriter For the vehicle scenario, I know modern trains have a lot of sensors on them, and I don't know the sampling rate, but they do only have short windows to upload the data while they're in their terminal station. I'm not sure I can believe them needing TBs, though.
On the other hand, an F1 team would probably combine transport (though presumably powered down) with the need for high bandwidth and low latency.
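An order-of-magnitude check on the train scenario (the sensor count, rate, and sample size are pure assumptions, just to see whether multiple TB is believable):

    # Order-of-magnitude estimate of daily sensor data on a train.
    # All parameters are assumptions for illustration.

    sensors = 2000           # assumed sensor count across the trainset
    sample_hz = 100          # assumed sampling rate per sensor
    bytes_per_sample = 8     # assumed size of one reading
    hours_in_service = 18

    bytes_per_day = sensors * sample_hz * bytes_per_sample * hours_in_service * 3600
    print(f"~{bytes_per_day / 1e12:.2f} TB/day")  # ~0.10 TB/day
    # With these assumptions it lands around 100 GB/day, so multi-TB uploads
    # would need video or very high-rate vibration capture on top.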
-
@bob_zim But PCIe lane splitters and extenders also exist. And I don't know what the highest achievable density of these cards currently is.
So far I have only had one old one in my hands and seen them on slides in class during my job training about 10 years ago (they were mentioned as accelerator cards primarily used for things like MS Dynamics and SAP databases)...
@agowa338 It’s more about the physical space for the cards. Most hold eight DIMMs. Holding 3072 would take 384 cards. That’s nearly a full rack just for the RAM cards, not counting the persistent drives. No way would you run that with even two entire blade frames, let alone two blades.
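The arithmetic behind those card counts, as a sketch (the 128 GB module size is an assumption, implied by the 3072-DIMM figure; the 8 DIMMs per card comes from the post above):

    # The card count behind the reply: 384 TB of DDR4 on PCIe RAM-disk cards.
    # Assumption: 128 GB DIMMs (the size that makes the 3072-DIMM figure work),
    # 8 DIMMs per card as stated above.

    total_tb = 384
    dimm_gb = 128            # assumed module size
    dimms_per_card = 8

    dimms = total_tb * 1024 // dimm_gb
    cards = dimms // dimms_per_card
    print(f"DIMMs needed: {dimms}")   # 3072
    print(f"PCIe cards:   {cards}")   # 384
    # However you package 384 full-length PCIe cards, it is rack-scale
    # hardware, far beyond anything two blades could host.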