Post by IanD
Post by Stephen Hoffman
Some of the recent NVMe-based non-volatile PCIe byte-addressable memory
is now (limited) shipping...
Specs and life expectancy, etc. for the Intel Optane SSD DC P4800X
product; the first product based on 3D XPoint...
http://www.intel.com/content/www/us/en/solid-state-drives/optane-ssd-dc-p4800x-brief.html
https://arstechnica.com/information-technology/2017/03/intels-first-optane-ssd-375gb-that-you-can-also-use-as-ram/
No NVMe drivers available yet for OpenVMS, though — if there's enough
interest — I'm sure that'll be resolved...
--
Pure Personal Opinion | HoffmanLabs LLC
Interesting
So now we have 3 tiers of local memory
So we can crunch immediate calculations in DRAM and then have other
cores in our multicore VMS systems pre-processing the next level in
Optane and then have the slower stuff (if you can call it slow!) on NAND
Processor internal cache, shared cache, local DRAM, remote DRAM (RDMA
or otherwise), local NV, remote NV, SSD, local disk, remote disk, local
tape, remote tape, etc. Systems have been complex for quite a few
years. Using the marshalling routines on another platform has been
really nice — gets the data loaded from storage into memory and into
the data structures necessary, and back out again, and the routines
take care of dealing with all of the code slogging and the glue code;
whether it's a flat file or a database or whatever underneath. (Yes,
there are cases when direct access into the databases is definitely
necessary for reasons of performance or scale, but these routines get
rid of a whole lot of the cases where that level of control isn't
necessary. And much like how we once, thankfully only rarely, had to
deal directly with disk geometries when we really needed storage
performance, almost no one needs to look at geometries anymore.)
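A marshalling layer of that sort can be sketched in a few lines. This is a toy illustration only; JSON and the Record type are my assumptions, not any particular package:

```python
# Toy marshalling layer: in-memory data structures <-> flat storage
# form. JSON and the Record type here are illustrative assumptions,
# not any specific marshalling package.
import json
from dataclasses import dataclass, asdict

@dataclass
class Record:
    name: str
    size: int

def marshal(records):
    # in-memory structures -> storable text
    return json.dumps([asdict(r) for r in records])

def unmarshal(blob):
    # storable text -> in-memory structures
    return [Record(**d) for d in json.loads(blob)]

recs = [Record("alpha", 1), Record("beta", 2)]
assert unmarshal(marshal(recs)) == recs  # round-trips cleanly
```

The point is that the caller never touches the glue code, whether a flat file or a database sits underneath.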
Post by IanD
Then there will be algorithms to be developed on calculating the break
even point on whether to shuffle the data up the speed tier towards the
processors
That stuff is already in use in various environments; Apple's Fusion
Drive does it for storage, and OpenVMS, an aeon or two ago, had a
layered product package called HSM that performed that for
then-current storage. Traces of that HSM package are still visible in
the DIRECTORY /FULL command output, too.
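That break-even calculation reduces to comparing the one-time cost of copying a block up a tier against the projected access-time savings. A sketch, with all latencies and bandwidths as rough illustrative assumptions rather than measured numbers:

```python
# Hypothetical break-even test for promoting a block up a memory tier:
# promote when the projected access-time savings exceed the one-time
# cost of copying the block upward. All figures are illustrative.

def should_promote(accesses_expected, latency_slow_ns, latency_fast_ns,
                   block_bytes, copy_bandwidth_bytes_per_s):
    # total latency saved over the block's expected accesses
    savings_ns = accesses_expected * (latency_slow_ns - latency_fast_ns)
    # one-time cost of moving the block to the faster tier
    copy_cost_ns = block_bytes / copy_bandwidth_bytes_per_s * 1e9
    return savings_ns > copy_cost_ns

# A 1 GiB block with NAND-ish vs Optane-ish latencies (rough guesses):
# hot data (10,000 expected accesses) earns its copy; cold data doesn't.
assert should_promote(10_000, 100_000, 10_000, 1 << 30, 2e9)
assert not should_promote(10, 100_000, 10_000, 1 << 30, 2e9)
```

A real tiering engine would also fold in eviction cost and decaying access counters, but the comparison is the same shape.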
Post by IanD
Wow, things are complex at the high end of town
I'd be surprised if a couple of the vendors weren't really interested
in pushing this memory into smartphones, as there's been rather more
innovation happening in that range in recent years — and a whole lot of
the total production for flash memory goes into those devices, too.
Post by IanD
I'd be interested in this going forward when a consumer version pops out
I'd expect to see it on smartphones and tablets first, then maybe on
high-end x86 boxes.
Post by IanD
I'm planning on taking up what I started some time ago but got
sidetracked: large data set number crunching (on a home scale!). Having
375 GB of main Optane memory will be a hell of a lot cheaper than
trying to populate a system with that much DRAM, or even a fraction of
that 375 GB.
https://aws.amazon.com/about-aws/whats-new/2016/05/now-available-x1-instances-the-largest-amazon-ec2-memory-optimized-instance-with-2-tb-of-memory/
https://www.ovh.com/us/dedicated-servers/infra/ (half-terabyte is
~US$751 per month)
Some related reading on testing with large memory configurations:
https://software.intel.com/en-us/blogs/2016/09/02/simulating-six-terabytes-of-serious-ram
Post by IanD
With technology such as this, one has to start thinking about maximum
memory support of your OS's, ...
What are VMS's memory limitations? Will VMS-x86 change the picture any?
OpenVMS has 50-bit physical addressing support on Itanium. Intel will
be (is?) implementing 57-bit linear addressing for x86-64 via 5-level
paging; that extends the address space from 256 TB (48-bit) to 128 PB
(57-bit), and how much of that gets populated depends particularly on
how massive the customer purchasing budget might be. Older x86-64
implements 48-bit linear addressing.
https://software.intel.com/sites/default/files/managed/2b/80/5-level_paging_white_paper.pdf
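The arithmetic behind those figures checks out directly (2^48 = 256 TB, 2^57 = 128 PB, and 2^50 = 1 PB for the Itanium case):

```python
# Address-space sizes implied by the bit widths mentioned above.
def addr_space(bits):
    return 1 << bits  # 2**bits bytes of addressable space

TB = 1 << 40
PB = 1 << 50

assert addr_space(48) == 256 * TB  # classic x86-64, 4-level paging
assert addr_space(57) == 128 * PB  # x86-64 with 5-level paging
assert addr_space(50) == 1 * PB    # OpenVMS 50-bit physical on Itanium
```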
As for what is planned for OpenVMS x86-64 support, we shall learn in
the fullness of time.
As for Microsoft Windows...
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366778(v=vs.85).aspx
Beyond addressing, there's optimizing the file system for SSDs
(something that's become common in the last few years), and how to
manage NV memory without exposing it to rogue writes will continue to
be interesting; the file system and the page tables are likely going
to get a whole lot friendlier, in some implementations.
As for what Apple is rolling out now as a replacement for the
HFS-heritage file system, which is nearly as old as the ODS-heritage
file system on OpenVMS...
https://developer.apple.com/library/prerelease/content/documentation/FileManagement/Conceptual/APFS_Guide/Introduction/Introduction.html
...
Post by IanD
This is over the PCI-e bus for now
Thunderbolt allows extending PCIe, too. Same with various USB-C
implementations, and more servers will certainly be picking up support
for that connection.
Having a PCIe or other expansion box hanging off of a
Thunderbolt-capable laptop looks weird, but it does work. It's also
routine for PCIe buses and boxes to be hanging off of servers these
days, it's just less obvious when it's all mounted in a big cabinet —
much like Unibus boxes from an earlier era, and like the various
AlphaServers that offered PCI-X buses and boxes, for instance.
Post by IanD
I guess Intel couldn't wait for PCI-e 4 to come out even though it
doubles the throughput, then again, this Optane is x4 isn't it, I think
that's 4 lanes x ~1 GB/s per lane under PCI-e 3.0
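Roughly right: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so an x4 link moves just under 4 GB/s in each direction, and PCIe 4.0 doubles the per-lane rate:

```python
# Per-direction throughput of a PCIe 3.0 link.
def pcie3_gbytes_per_s(lanes):
    gt_per_s = 8.0        # PCIe 3.0 transfer rate per lane
    encoding = 128 / 130  # 128b/130b line-encoding overhead
    return lanes * gt_per_s * encoding / 8  # bits -> bytes

x4 = pcie3_gbytes_per_s(4)
assert 3.9 < x4 < 4.0     # ~3.94 GB/s for an x4 link

pcie4_x4 = 2 * x4         # PCIe 4.0 doubles the rate to 16 GT/s
assert 7.8 < pcie4_x4 < 8.0
```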
Over FC? As in have some mass storage device acting like a large memory
pool, much like how a SAN works but this would be for virtual memory?
Byte-addressable storage. Expect to see it running via FC, but ponder
whether that or performing RDMA to remote servers will be more
effective for the application designs — beyond being able to store data
directly into some non-volatile part of the address space of a remote
server, ponder what some HBVS memory replication package might provide
when using host-local non-volatile storage.
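What byte-addressable non-volatile storage looks like to an application can be sketched with mmap. Here an ordinary file stands in for the NV device, which is an assumption; real persistent-memory setups map DAX-capable storage and use cache-flush primitives rather than msync alone:

```python
# Sketch of byte-addressable persistence via mmap. An ordinary file
# stands in for an NV-DIMM / DAX mapping here (an assumption); real
# persistent-memory code maps DAX storage and flushes CPU caches.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "nvram.bin")
with open(path, "wb") as f:
    f.truncate(4096)                  # reserve one page of "NV memory"

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"             # store directly into the mapping
        m.flush()                     # push the bytes to stable media

with open(path, "rb") as f:
    assert f.read(5) == b"hello"      # data survives beyond the mapping
```

The attraction is exactly that there's no read()/write() path in the middle; a store instruction is the I/O.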
Post by IanD
The latency would be crippling for high performance BUT I imagine you
could then have systems configured for things like mass sorting
operations where virtual memory would be this Optane storage - that's
gotta be better than having to provision one large single box somewhere
to do all the grunt work, one that you have to shift the data onto to
make use of the RAM
Stuff out on the far end of a traditional I/O bus — HDD or SSD — is
going to be around for decades, but — like HDDs are becoming the modern
equivalent of a tape pool — it'll trend toward infrequent and archival
storage.
Post by IanD
I would imagine the likes of AWS and Azure would be very interested in
this Optane. They could then offer larger-scale memory instances
without having to physically provision specific boxes (if it becomes
available via FC); the memory speed would be slower, but you'd just
offer it at a cheaper rate
I'd wager a fair chunk of the remote access will be via 10 GbE, 40 GbE
and faster, and InfiniBand at the high end; see NVMf (NVMe over
Fabrics) for details. FC has always been expensive (profitable for the
vendors, too) and it'll be around for a while.
--
Pure Personal Opinion | HoffmanLabs LLC