JEMC said:
Captain_Yuri said:
"375GB SSD that sells for $1520"

Jesus

"Endurance: 12 Petabytes written"

Damnnn

But I will stick with my Samsung SSDs, thanks

But it's not only an SSD. It can also act as system RAM: you connect it to the motherboard, set things up and boom! Now you have 375GB of additional RAM.

...

Yeah, it's great for servers and such, but kind of useless for personal computers.

It's a technology launch price. The article mentions that this is mainly for businesses for now; consumer-oriented ones will come later.

What makes 3D XPoint interesting is that it offers read and write speeds that rival what you’d expect from DRAM memory. But unlike DRAM, you can use 3D XPoint for long-term storage, since it’s non-volatile memory.

The upshot is that not only can you do things like copy files and read data more quickly, but you can also use 3D XPoint as “virtual memory,” to get near-RAM speeds.
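To make that "virtual memory" point concrete, here's a minimal sketch of what direct access could look like from userland, assuming the kernel exposes the module as a persistent-memory block device (the path /dev/pmem0 and DAX support are assumptions, not something the article confirms):

```c
/* Minimal sketch: mapping a persistent-memory device straight into a
 * process's address space, so plain loads/stores hit the 3D XPoint
 * media like ordinary RAM. /dev/pmem0 is a hypothetical device path. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 1UL << 30;              /* map 1 GiB of the device */
    int fd = open("/dev/pmem0", O_RDWR); /* hypothetical pmem device */
    if (fd < 0) { perror("open"); return 1; }

    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* Plain memory access from here on: no read()/write() syscalls.
     * Whether the page cache is bypassed depends on DAX support. */
    strcpy(mem, "hello, persistent world");
    printf("%s\n", mem);

    /* Because the medium is non-volatile, flushing makes the store durable. */
    msync(mem, len, MS_SYNC);
    munmap(mem, len);
    close(fd);
    return 0;
}
```

The point is that the app just dereferences pointers; the "storage" behaves like slow-ish RAM instead of a block device you stream through.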

And as mentioned in the comments, the PCIe bus will be a bottleneck:

My guess is that using PCIe is a transitional stage to fit in with existing architectures. When this technology was announced a while ago, much was made of the fact that new connection and bus architectures would be needed for its full potential to be realised, particularly to take advantage of the potential speed of the new memory system. We can only wait and see how the scene evolves.

Some future insights:

I’d say make a new architecture around it instead.

New CPU: direct access to this memory. Throw out the L2/L3 cache and use its place to add a lot of registers. More registers make it easier to run multiple processes on the same core and make virtualization easier too. Cache, on the other hand, is only needed to prefetch from the relatively slow RAM, so it's not needed anymore. Throw out branch prediction, prefetching and the long pipelines and queues; we only needed them to hide the stalls while waiting for memory, which isn't a problem here. Use a simple RISC architecture instead, with several cores that share the entire memory, plus hardware memory protection. RISC can do more instructions per clock.

We need a new filesystem to go with this. Since the memory is both fast and non-volatile, the filesystem can be less robust; instead of partitions it should work like a RAM-disk and be resizable at any moment, so you can dedicate your RAM to storage or to the CPU depending on what you do. Linux can most likely be modded to work on this (see the sketch below).
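For what it's worth, Linux already has pieces pointing this way. Here's a hedged sketch of the "resizable RAM-disk" idea using a file on a DAX-capable filesystem (the mount point /mnt/pmem and the file name are assumptions; MAP_SYNC needs Linux 4.15+ and a supporting filesystem):

```c
/* Sketch: a file on a DAX-mounted filesystem acting as a resizable
 * storage pool. Grow or shrink it with ftruncate(), map it for direct
 * load/store access. MAP_SYNC asks the kernel to guarantee the mapping
 * stays synchronous with the persistent media. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = 256UL << 20;  /* start with 256 MiB "dedicated to storage" */
    int fd = open("/mnt/pmem/pool", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) != 0) { perror("ftruncate"); return 1; }

    char *pool = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

    pool[0] = 42;  /* durable once the store leaves the CPU caches */

    /* "Resizable at any moment": shrink the pool to give space back to
     * the CPU side, or grow it to dedicate more XPoint to storage. */
    munmap(pool, size);
    if (ftruncate(fd, size / 2) != 0) perror("ftruncate");
    close(fd);
    return 0;
}
```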

This would mean a computer with fewer components, and thus a cheaper one (with bigger volumes, the price of XPoint would drop), more power-efficient while also being faster, more robust, and easily scalable.

This technology will have more impact on our computers, and sooner, than quantum computing. Instead of increasing raw processing power, the last decade was about slowly removing all the bottlenecks from the computer; the last step was SSDs becoming mainstream. Now that all the puzzle pieces are ready, it's time to throw out all the legacy stuff we've inherited from the '70s and make a new type of computer, where all the parts perform at the same speed and the CPU doesn't have to wait for the RAM or the storage. Legacy software can just be recompiled: as long as the kernel handles the new type of RAM well, not much would change from an average app's standpoint. malloc will still give you some memory, and that's it.
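To illustrate that last point with a toy example: apps can't tell DRAM from XPoint behind a pointer, so an allocator carved out of an XPoint-backed mapping looks exactly like malloc to the caller. Everything here is hypothetical; pmem_base and pmem_size stand in for however the mapping from the sketches above was obtained:

```c
/* Toy bump allocator over a persistent mapping. A real kernel/libc
 * would do this under the hood; this just shows the caller-facing
 * contract is unchanged. */
#include <stddef.h>

static char  *pmem_base;   /* start of the XPoint-backed mapping (assumed) */
static size_t pmem_size;   /* its length (assumed) */
static size_t pmem_used;

/* Same shape as malloc: recompiling against this is all an app needs,
 * because the returned pointer behaves like any other heap pointer. */
void *xpoint_malloc(size_t n)
{
    size_t aligned = (n + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    if (pmem_used + aligned > pmem_size)
        return NULL;                          /* pool exhausted */
    void *p = pmem_base + pmem_used;
    pmem_used += aligned;
    return p;
}
```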


