Discussion:
VSI OpenVMS V9.1 Field Test beginning.
John H. Reinhardt
2021-06-30 18:54:01 UTC
Permalink
If you are "a customer or partner of VSI with a valid support contract" then you can register for the V9.1 Field test version by going to this VSI Page:
https://vmssoftware.com/about/openvmsv9-1

"We are excited to announce the availability of VSI OpenVMS V9.1 for x86-64. This is the next step in the journey to migrate OpenVMS to the x86-64 platform. The V9.1 release allows you to use the operating system with some of your favorite hypervisors and includes a host of newly migrated applications."

Listed Hypervisors:
    Oracle VirtualBox
    Redhat KVM
    VMware
        ESXi Enterprise
        Fusion (mac)
        Workstation Pro
        Workstation Player

No mention of Hobbyist inclusion though.
Customers and ISVs only, for now at least.
--
John H. Reinhardt
John H. Reinhardt
2021-06-30 19:18:35 UTC
Permalink
Post by John H. Reinhardt
https://vmssoftware.com/about/openvmsv9-1
"We are excited to announce the availability of VSI OpenVMS V9.1 for x86-64. This is the next step in the journey to migrate OpenVMS to the x86-64 platform. The V9.1 release allows you to use the operating system with some of your favorite hypervisors and includes a host of newly migrated applications."
    Oracle VirtualBox
    Redhat KVM
    VMware
        ESXi Enterprise
        Fusion (mac)
        Workstation Pro
        Workstation Player
No mention of Hobbyist inclusion though.
Customers and ISV only for now at least.
Missed this:

V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>

List of Open Source apps included: <https://vmssoftware.com/docs/x86-open-source-list.pdf>

List of Layered Products included: <https://vmssoftware.com/docs/x86-LP-list.pdf>
--
John H. Reinhardt
Richard Maher
2021-07-01 05:56:05 UTC
Permalink
Post by John H. Reinhardt
Post by John H. Reinhardt
If you are "a customer or partner of VSI with a valid support
contract" then you can register for the V9.1 Field test version by
going to this VSI Page: https://vmssoftware.com/about/openvmsv9-1
"We are excited to announce the availability of VSI OpenVMS V9.1
for x86-64. This is the next step in the journey to migrate OpenVMS
to the x86-64 platform. The V9.1 release allows you to use the
operating system with some of your favorite hypervisors and
includes a host of newly migrated applications."
Listed Hypervisors: Oracle VirtualBox Redhat KVM VMware ESXi
Enterprise Fusion (mac) Workstation Pro Workstation Player
No mention of Hobbyist inclusion though. Customers and ISV only for
now at least.
<https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
<https://vmssoftware.com/docs/x86-open-source-list.pdf>
<https://vmssoftware.com/docs/x86-LP-list.pdf>
Well done!
John Dallman
2021-07-01 08:27:00 UTC
Permalink
Post by John H. Reinhardt
<https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
I don't know very much about VMS at the system management level. Section
1.24 of this document says that OpenVMS x86 does not support swap files,
but does say that page files should be used.

What's the difference between swapping and paging, on VMS?

John
Jan-Erik Söderholm
2021-07-01 09:30:44 UTC
Permalink
Post by John Dallman
Post by John H. Reinhardt
<https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
I don't know very much about VMS at the system management level. Section
1.24 of this document says that OpenVMS x86 does not support swap files,
but does say that page files should be used.
What's the difference between swapping and paging, on VMS?
John
"Swapping" is for writing out whole processes to the swap-file.
"Paging" is for writing out individual memory pages to the page-file.
Arne Vajhøj
2021-07-01 14:22:20 UTC
Permalink
Craig A. Berry
2021-07-01 16:15:21 UTC
Permalink
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
Post by John Dallman
Post by John H. Reinhardt
<https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
I don't know very much about VMS at the system management level. Section
1.24 of this document says that OpenVMS x86 does not support swap files,
but does say that page files should be used.
What's the difference between swapping and paging, on VMS?
"Swapping" is for writing out whole processes to the swap-file.
"Paging" is for writing out individual memory pages to the page-file.
And I don't think it is a big deal.
To me swapping was something that in rare cases could be useful
on VAX back in the 1980's. Since Alpha swapping has been an
indication that the system was totally fucked up.
Nobody will miss the possibility of being in COMO state.
No one said there is no swapping, just that there is no swap file. I
thought it had always been possible for the pagefile to get used for
swapping if necessary, though I can't find any docs to that effect at
the moment. So it's possible there is no swapping, but it's also
possible it's just done differently.

The swapper handles some special start-up tasks such as initializing
non-paged pool. The new in-memory start-up image likely changed how
memory management works in the early phases of start-up, and may have
provided opportunities for simplifying or eliminating the swapper.
Jim
2021-07-01 18:18:51 UTC
Permalink
Post by Craig A. Berry
thought it had always been possible for the pagefile to get used for
swapping if necessary, though I can't find any docs to that effect at
the moment.
True for past VMS versions...

https://wiki.vmssoftware.com/Page_File
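
If you want to script a quick check of what is installed, the F$GETSYI
lexical has item codes for this on current Alpha/Itanium releases (whether
the swap file items still return anything useful on x86 V9.1 is exactly the
open question):

$ pgfl_free = F$GETSYI("PAGEFILE_FREE")
$ swap_free = F$GETSYI("SWAPFILE_FREE")
$ WRITE SYS$OUTPUT "Pagefile free pages: ''pgfl_free', swapfile free pages: ''swap_free'"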
Zane H. Healy
2021-07-01 23:04:24 UTC
Permalink
Post by John H. Reinhardt
V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
On pg. 21 I noticed the following, "VSI OpenVMS x86-64 V9.1 can be
clustered with any OpenVMS system running Version 7.3 or above." Does this
mean that OpenVMS x86-64 can be clustered with a VAX?

Now we just need it to be available to Hobbyists. I'd like to start doing
some interoperability testing.

Zane
Phillip Helbig (undress to reply)
2021-07-02 04:50:00 UTC
Permalink
Post by Zane H. Healy
Post by John H. Reinhardt
V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
On pg. 21 I noticed the following, "VSI OpenVMS x86-64 V9.1 can be
clustered with any OpenVMS system running Version 7.3 or above." Does this
mean that OpenVMS x86-64 can be clustered with a VAX?
Or non-VSI VMS?

What about the first regular release? Will that require VSI VMS on
Alpha to cluster?
Phillip Helbig (undress to reply)
2021-07-02 10:44:32 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Zane H. Healy
Post by John H. Reinhardt
V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
On pg. 21 I noticed the following, "VSI OpenVMS x86-64 V9.1 can be
clustered with any OpenVMS system running Version 7.3 or above." Does this
mean that OpenVMS x86-64 can be clustered with a VAX?
Or non-VSI VMS?
What about the first regular release? Will that require VSI VMS on
Alpha to cluster?
Glad I haven't upgraded yet.

The message has always been that a) clustering with Alpha (but not VAX,
though it might "just work") would be supported with x86, at least for
migration, but that that would require a version of VMS from VSI on
Alpha. Has that changed, are the rules different for the field test, or
is the above information wrong?
Jan-Erik Söderholm
2021-07-02 10:51:56 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Zane H. Healy
Post by John H. Reinhardt
V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
On pg. 21 I noticed the following, "VSI OpenVMS x86-64 V9.1 can be
clustered with any OpenVMS system running Version 7.3 or above." Does this
mean that OpenVMS x86-64 can be clustered with a VAX?
Or non-VSI VMS?
What about the first regular release? Will that require VSI VMS on
Alpha to cluster?
Glad I haven't upgraded yet.
The message has always been that a) clustering with Alpha (but not VAX,
though it might "just work") would be supported with x86, at least for
migration, but that that would require a version of VMS from VSI on
Alpha. Has that changed, are the rules different for the field test, or
is the above information wrong?
What is it with you guys? :-)

The quoted text says "VSI OpenVMS x86-64 *V9.1* can be clustered..."
It says nothing about V9.2 or later. I guess you need to wait for the
formal release of V9.2...

And it has clearly been said before that a VSI version *is* needed and
that VAX will *not* be supported by the production rel (V9.2 and later).
I have not seen anything changing that so far.

It could even be that the above text is slightly wrong. But does it matter?
How many actually need a cluster with VAX and x86-64?
Phillip Helbig (undress to reply)
2021-07-02 11:18:04 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Zane H. Healy
Post by John H. Reinhardt
V9.1 Field Test Release Notes: <https://vmssoftware.com/docs/VSI-X86V91-RN.pdf>
On pg. 21 I noticed the following, "VSI OpenVMS x86-64 V9.1 can be
clustered with any OpenVMS system running Version 7.3 or above." Does this
mean that OpenVMS x86-64 can be clustered with a VAX?
Or non-VSI VMS?
What about the first regular release? Will that require VSI VMS on
Alpha to cluster?
Glad I haven't upgraded yet.
The message has always been that a) clustering with Alpha (but not VAX,
though it might "just work") would be supported with x86, at least for
migration, but that that would require a version of VMS from VSI on
Alpha. Has that changed, are the rules different for the field test, or
is the above information wrong?
What os it with you guys? :-)
We want to understand the world. :-)
Post by Jan-Erik Söderholm
The quoted text says "VSI OpenVMS x86-64 *V9.1* can be clustered..."
It says nothing about V9.2 or later. I guess you need to wait for the
formal release of V9.2...
Right. But the purpose of a field test is to test a system. If such
clustering is allowed with 9.1 but not with 9.2, then that means that it
will be removed (or at least de-supported) with 9.2. Why make such a
change after the test? Yes, clustering with older versions makes sense
for a test, but also for migration. Assuming that the information is
correct, why not support clustering for migration in 9.2? I can
understand that VSI don't want to support it long-term, but since
apparently it is possible, not supporting it in 9.2 means that people
(even those who did the field test with 9.1, perhaps) will have to move
to VSI VMS on Alpha just for migration.
Post by Jan-Erik Söderholm
And it has clearly been said before that a VSI version *is* needed and
that VAX will *not* be supported by the production rel (V9.2 and later).
I have not seen anything changing that so far.
Right, hence the questions.
Post by Jan-Erik Söderholm
It can even by that the above text is slightly wrong. But does it matter?
How many are in need for a cluster with VAX and x86-64?
With VAX, probably few if any. With Alpha? Probably at least a few.
Zane H. Healy
2021-07-02 15:14:16 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Right. But the purpose of a field test is to test a system. If such
clustering is allowed with 9.1 but not with 9.2, then that means that it
will be removed (or at least de-supported) with 9.2. Why make such a
change after the test? Yes, clustering with older versions makes sense
It would be less than impressive if something works in 9.1FT but not in 9.2.
Post by Phillip Helbig (undress to reply)
Post by Jan-Erik Söderholm
And it has clearly been said before that a VSI version *is* needed and
that VAX will *not* be supported by the production rel (V9.2 and later).
I have not seen anything changing that so far.
Right, hence the questions.
Exactly; the fact that the release notes say any version of 7.3 or later is
a pretty broad statement.
Post by Phillip Helbig (undress to reply)
Post by Jan-Erik Söderholm
It can even by that the above text is slightly wrong. But does it matter?
How many are in need for a cluster with VAX and x86-64?
With VAX, probably few if any. With Alpha? Probably at least a few.
One very obvious use case, and one that I've been involved with is recovery
of old obscure media that requires a Q-Bus system (or potentially Unibus).
For the project I helped with, it was interesting having to figure out how
to read DEC media that even a couple of ex-DEC FEs had never seen. The only
way to read it was a Q-Bus system, and drives were nearly impossible to source.
Having such a system in a cluster enables easily moving the recovered data.

There are places that still keep VAXen around for applications as well. Not
everything that was written on VAX was ported to Alpha, and even less was
ported to Itanium. How many places that have remained on VMS have kept one
or two systems around running on real, or more likely emulated VAX or Alpha
hardware?

Zane
Phillip Helbig (undress to reply)
2021-07-02 16:54:38 UTC
Permalink
Post by Zane H. Healy
One very obvious use case, and one that I've been involved with is recovery
of old obscure media that requires a Q-Bus system (or potentially Unibus).
For the project I helped with, it was interesting needing to figure out how
to read DEC media that even a couple ex-DEC FE's had never seen. The only
way to read it was a Q-Bus system, and nearly impossible to source drives.
Having such a system in a cluster enables easily moving the recovered data.
Another approach: get a VAX with QBUS and SCSI and move the data to an
SBB disk. Then mount that disk on a newer system. I'm sure that it
will work on Alpha. Presumably Itanium can connect SCSI disks. I don't
know about x86.
John H. Reinhardt
2021-07-02 17:29:31 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Zane H. Healy
One very obvious use case, and one that I've been involved with is recovery
of old obscure media that requires a Q-Bus system (or potentially Unibus).
For the project I helped with, it was interesting needing to figure out how
to read DEC media that even a couple ex-DEC FE's had never seen. The only
way to read it was a Q-Bus system, and nearly impossible to source drives.
Having such a system in a cluster enables easily moving the recovered data.
Another approach: get a VAX with QBUS and SCSI and move the data to an
SBB disk. Then mount that disk on a newer system. I'm sure that it
will work on Alpha. Presumably Itanium can connect SCSI disks.
Yes
Post by Phillip Helbig (undress to reply)
I don't know about x86.
Not yet, but eventually.

From Release Notes:

1.32 Supported Disk Types
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk types will be added in future releases of VSI OpenVMS x86-64.
--
John H. Reinhardt
Phillip Helbig (undress to reply)
2021-07-02 17:37:39 UTC
Permalink
Post by John H. Reinhardt
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.

I had always assumed that I would have a mixed cluster and add some x86
disks to shadow sets, then remove the SCSI members and the nodes
hosting them one by one until everything is new. If x86 will support
SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the Top-Gun
Blue BA356 boxes into x86?
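
(For concreteness, the kind of member swap I have in mind, as a rough DCL
sketch with made-up device names, assuming the new member is visible to the
node doing the mount:)

$ MOUNT/SYSTEM DSA10: /SHADOW=($2$DKA300:) DATA_DISK   ! add the new member; a shadow copy starts
$ SHOW SHADOW DSA10:                                   ! wait for the copy to complete
$ DISMOUNT $1$DKB100:                                  ! then drop the old SCSI member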
John H. Reinhardt
2021-07-03 02:45:48 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by John H. Reinhardt
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
I had always assumed that I would have a mixed cluster and add some x86
disks to shadow sets, then removed the SCSI members and the nodes
hosting them one by one until everything is new. If x86 will support
SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the Top-Gun
Blue BA356 boxes into x86?
From the OpenVMS x86 Release notes:

2. Hardware Support
Direct support for x86-64 hardware systems (models to be specified) will be added in later releases.


Not initially. The current release of FT9.1 only runs on virtual hosts. While you could probably get a SCSI card into whatever machine you use as the virtualization host, you'd need some sort of pass-through connection to get those SCSI disks to the OpenVMS virtual machine. VMware ESXi Enterprise might do that. I'm pretty sure Oracle VirtualBox can't. I don't know about KVM.

So you need to wait for a subsequent release of OpenVMS V9.x that supports physical hardware. Then you'd need a SCSI card supported by both the physical hardware and OpenVMS.

It might be a while, and then it may depend on what hardware you can get.

The field test V9.1 does support MSCP-served disks, however, so *if* you can cluster with an Alpha, it could serve the disks such that the x86 OpenVMS can access them.
--
John H. Reinhardt
Phillip Helbig (undress to reply)
2021-07-03 07:07:17 UTC
Permalink
Post by John H. Reinhardt
Post by Phillip Helbig (undress to reply)
Post by John H. Reinhardt
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
I had always assumed that I would have a mixed cluster and add some x86
disks to shadow sets, then removed the SCSI members and the nodes
hosting them one by one until everything is new. If x86 will support
SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the Top-Gun
Blue BA356 boxes into x86?
2. Hardware Support
Direct support for x86-64 hardware systems (models to be specified) will be added in later releases.
Not initially. The current release of FT9.1 only runs on virtual
hosts. While you could probably get a SCSI card to go into whatever
machine you use as a virtual host, you'd need some sort of pass thru
connection to get those SCSI disks to the OpenVMS
I plan to wait for bare metal in any case.
Post by John H. Reinhardt
The field test V9.1 does support MSCP served disks, however so *iof*
you can cluster with an Alpha, then it could serve the disks such that
the x86 OpenVMS can access them.
Presumably MSCP-served disks will always be supported. That's what I
was thinking of originally: use MSCP to serve all disks to all nodes,
then make shadow sets of SCSI members on Alpha and whatever is available
on x86.
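
(The serving side of that is just a couple of SYSGEN parameters on each node
with local disks, e.g. in MODPARAMS.DAT; values sketched from the
Alpha/Itanium documentation, so check the current manuals for the exact bit
meanings:

MSCP_LOAD = 1        ! load the MSCP disk server
MSCP_SERVE_ALL = 2   ! serve locally attached disks to the cluster

followed by AUTOGEN and a reboot.)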
k***@gmail.com
2021-07-03 14:02:28 UTC
Permalink
-----Original Message-----
undress to reply via Info-vax
Sent: July-03-21 4:07 AM
Subject: Re: [Info-vax] VSI OpenVMS V9.1 Field Test beginning.
Post by John H. Reinhardt
Post by John H. Reinhardt
Post by John H. Reinhardt
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
I had always assumed that I would have a mixed cluster and add some
x86 disks to shadow sets, then removed the SCSI members and the
nodes hosting them one by one until everything is new. If x86 will
support SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the
Top-Gun Blue BA356 boxes into x86?
2. Hardware Support
Direct support for x86-64 hardware systems (models to be specified)
will
be added in later releases.
Post by John H. Reinhardt
Not initially. The current release of FT9.1 only runs on virtual
hosts. While you could probably get a SCSI card to go into whatever
machine you use as a virtual host, you'd need some sort of pass thru
connection to get those SCSI disks to the OpenVMS
I plan to wait for bare metal in any case.
Post by John H. Reinhardt
The field test V9.1 does support MSCP served disks, however so *iof*
you can cluster with an Alpha, then it could serve the disks such that
the x86 OpenVMS can access them.
Presumably MSCP-served disks will always be supported. That's what I was
thinking of originally: use MSCP to serve all disks to all nodes, then make
shadow sets of SCSI members on Alpha and whatever is available on x86.
Interesting how history always repeats itself - especially in the IT world.

For those not familiar with an emerging, very hot, software-defined VM hosting
technology called HCI (Hyperconverged Infrastructure) from companies like
Nutanix, HPE, Dell etc.: one of the key ways HCI drastically reduces overall
costs is to eliminate expensive and complex fibre-based SAN switches, SAN
controllers etc. and instead use cheap local drives and "serve" this local
storage in a distributed manner to other commodity x86-64 server nodes in
the HCI cluster. Integrated overall cluster management solutions are also a key
component of HCI.

While they support VMware as well, Nutanix's core product also provides its
own hypervisor to host VMs without the very high VMware licensing costs
that are now becoming a big concern for medium-to-large IT shops.

While HCI solutions also support SAN infrastructure, the biggest cost
saving usually touted is to use cheap local drives and then serve this
local-drive storage to other server nodes in the cluster. HCI also uses
host-based RAID strategies (replication factors determine the RAID level) to
mitigate local drive failures.

Does this not sound like MSCP and HBVS?

Reference:

<https://www.nutanix.com/hpe>
<https://www.hpe.com/ca/en/integrated-systems/hyper-converged.html>
<https://www.itcentralstation.com/questions/what-is-the-biggest-difference-between-nutanix-and-vmware-vsan>

The more things change, the more they stay the same.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Zane H. Healy
2021-07-06 22:08:33 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by John H. Reinhardt
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
I had always assumed that I would have a mixed cluster and add some x86
disks to shadow sets, then removed the SCSI members and the nodes
hosting them one by one until everything is new. If x86 will support
SCSI, could I plug my Top-Gun Blue 40 MB/s SCSI disks in the Top-Gun
Blue BA356 boxes into x86?
I think a better question might be: why would you want to plug in seriously
old 40 MB/s SCSI disks? I'd much rather have my data on modern SATA or,
better yet, SAS disks. Though the idea of booting VMS off an NVMe SSD is
attractive!

Zane
chris
2021-07-05 16:48:38 UTC
Permalink
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
I find that odd. SATA drives are consumer quality, 5400 or 7200 rpm, so slow
access compared to U320 SCSI or SAS at 10k or 15k...

Chris
Simon Clubley
2021-07-05 18:03:51 UTC
Permalink
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
I wonder how long until someone asks if they can run it on an IDE
drive... :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Chris Townley
2021-07-05 19:48:54 UTC
Permalink
Post by Simon Clubley
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
I wonder how long until someone asks if they can run it on an IDE
drive... :-)
Simon.
Or when they want to run it on an NVMe SSD?
--
Chris
Dave Froble
2021-07-06 01:04:21 UTC
Permalink
Post by Simon Clubley
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
I wonder how long until someone asks if they can run it on an IDE
drive... :-)
Simon.
Like the Alpha DS10L ???
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
dthi...@gmail.com
2021-07-06 02:54:20 UTC
Permalink
Not DQDRIVER. For some reason, virtual SATA drives use PKDRIVER on V9.0-H and appear as DKxn:.
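If anyone wants to confirm that on their own installation, SDA will show which
driver a given unit is bound to (the device name here is just an example):

$ ANALYZE/SYSTEM
SDA> SHOW DEVICE DKA0
SDA> EXIT

The display includes the port and class driver names.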
chris
2021-07-06 10:22:44 UTC
Permalink
Post by Simon Clubley
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
I wonder how long until someone asks if they can run it on an IDE
drive... :-)
Simon.
Well, I guess they could always fall back to ST506, as in uVax
II and others. There's also ESDI, a slightly faster version
of the same thing...
Jan-Erik Söderholm
2021-07-06 13:24:22 UTC
Permalink
Post by chris
Post by Simon Clubley
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
I wonder how long until someone asks if they can run it on an IDE
drive... :-)
Simon.
Well, I guess they could always fall back to ST506, as in uVax
II and others. There's also ESDI, a slightly faster version
of the same thing...
How much does the actual type of the emulated disk matter?
The physical disk will always be something modern anyway,
probably an SSD in most cases. Is there any payback from having an
SSD emulation in VMS that ends up as a container file on an SSD?
David Jones
2021-07-06 14:57:10 UTC
Permalink
Post by Jan-Erik Söderholm
How much does the actual type of the emulated disk matter?
The physical disk will always be something modern anyway.
Probably SSD in most cases. Is there any patback from having a
SSD emulation in VMS that ends up as a container file on an SSD)?
The command set matters for how well you handle hardware errors and can
optimize performance (e.g. tagged command queueing).
John H. Reinhardt
2021-07-06 02:58:38 UTC
Permalink
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
Chris
You're thinking physical drives. V9.1 is virtual-only at the moment. Most hypervisors provide virtual SCSI and SATA interfaces. The Enterprise versions of ESXi might have virtual SAS interfaces, but I don't have access to them to know. VMware Fusion (Mac) and Oracle VirtualBox don't. Why it doesn't support virtual SCSI, I don't know. You would think that would already be just about built in, given all the years of SCSI support in OpenVMS. Maybe we are just being too literal. I would imagine the SAS interface will come when some future version 9.1-x supports physical hardware.
--
John H. Reinhardt
Zane H. Healy
2021-07-06 22:11:47 UTC
Permalink
Post by John H. Reinhardt
You're thinking physical drives. V9.1 is virtual only at the moment.
Most Hypervisors provide virtual SCSI and SATA interfaces. The Enterprise
versions of ESXi might have virtual SAS interfaces but I don't have access
to them to know. VMware Fusion (Mac) and Oracle Virtualbox don't. Why it
doesn't support virtual SCSI, I don't know. You would think that would
already be just about built in given all the years of SCSI support in
OpenVMS. Maybe we are just being to literal. I would imagine the SAS
interface will come when some future version 9.1-x supports physical
hardware.
Since the HPE DL380 gen 9's and 10's are a platform they're planning to
support, I can't imagine them not supporting SAS. How many DL380's use
SATA? I ask that knowing how many hundred my team has been retiring the
past couple years.

Zane
Jan-Erik Söderholm
2021-07-06 23:44:26 UTC
Permalink
Post by Zane H. Healy
Post by John H. Reinhardt
You're thinking physical drives. V9.1 is virtual only at the moment.
Most Hypervisors provide virtual SCSI and SATA interfaces. The Enterprise
versions of ESXi might have virtual SAS interfaces but I don't have access
to them to know. VMware Fusion (Mac) and Oracle Virtualbox don't. Why it
doesn't support virtual SCSI, I don't know. You would think that would
already be just about built in given all the years of SCSI support in
OpenVMS. Maybe we are just being to literal. I would imagine the SAS
interface will come when some future version 9.1-x supports physical
hardware.
Since the HPE DL380 gen 9's and 10's are a platform they're planning to
support, I can't imagine them not supporting SAS.
You mean for "bare metal" use? How many customers are looking at bare
metal use? According to the talk we had with a VSI representative
last week, there are almost no customers that expect to *not* run
VMS x86-64 in a virtualized environment of some kind.
Post by Zane H. Healy
How many DL380's use SATA?
Does it matter if you run virtualized?
Post by Zane H. Healy
I ask that knowing how many hundred my team has been retiring the
past couple years.
Zane
Dave Froble
2021-07-06 23:48:04 UTC
Permalink
Post by Zane H. Healy
Post by John H. Reinhardt
You're thinking physical drives. V9.1 is virtual only at the moment.
Most Hypervisors provide virtual SCSI and SATA interfaces. The Enterprise
versions of ESXi might have virtual SAS interfaces but I don't have access
to them to know. VMware Fusion (Mac) and Oracle Virtualbox don't. Why it
doesn't support virtual SCSI, I don't know. You would think that would
already be just about built in given all the years of SCSI support in
OpenVMS. Maybe we are just being to literal. I would imagine the SAS
interface will come when some future version 9.1-x supports physical
hardware.
Since the HPE DL380 gen 9's and 10's are a platform they're planning to
support, I can't imagine them not supporting SAS. How many DL380's use
SATA? I ask that knowing how many hundred my team has been retiring the
past couple years.
Zane
What are you retiring? DL380 systems? If so, are they available?
Might be good systems for running VMS.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Zane H. Healy
2021-07-14 01:10:32 UTC
Permalink
Post by Dave Froble
What are you retiring? DL380 systems? If so, are they available?
Might be good systems for running VMS.
Everything I'm retiring has been snapped up by other teams at work. I
*WISH* I could have grabbed a couple for my home lab with the intention of
running OpenVMS under VMware on them. I'm currently looking at what it will
take to replace a couple HP SFF desktops that are part of my VMware cluster.
The third system in the cluster is an HP DL380 G7 I purchased off eBay. A
decent G9 system still costs some real money. A decent G7 is almost
nothing.

Zane

chris
2021-07-08 13:49:30 UTC
Permalink
Post by Zane H. Healy
Post by John H. Reinhardt
You're thinking physical drives. V9.1 is virtual only at the moment.
Most Hypervisors provide virtual SCSI and SATA interfaces. The Enterprise
versions of ESXi might have virtual SAS interfaces but I don't have access
to them to know. VMware Fusion (Mac) and Oracle Virtualbox don't. Why it
doesn't support virtual SCSI, I don't know. You would think that would
already be just about built in given all the years of SCSI support in
OpenVMS. Maybe we are just being to literal. I would imagine the SAS
interface will come when some future version 9.1-x supports physical
hardware.
Since the HPE DL380 gen 9's and 10's are a platform they're planning to
support, I can't imagine them not supporting SAS. How many DL380's use
SATA? I ask that knowing how many hundred my team has been retiring the
past couple years.
Zane
Still running the G8 series here, but the G7 and perhaps earlier were all
SATA or SAS compatible, depending on the controller. SATA was always
the low-cost option though, with higher rotational latency and a limited
command set. I would only consider SATA if the drives were in a redundant
RAID set with spares, or mirrored.

Virtualisation is OK, but there must be real disks somewhere at some
point, and SAS is the preferred option for professional work...
Stephen Hoffman
2021-07-07 01:24:31 UTC
Permalink
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
I can get ~half a gigabyte per second through a single SATA storage
connection, sequential read or sequential write, on widely-available
SATA (SATA 3.0 / SATA 6Gb/s) storage hardware.

Dragging across your existing HDDs—whether IDE/PATA, SATA, SCSI, USB
2.0, DSA, DSSI, MASSBUS, or otherwise—over to OpenVMS x86-64 isn't a
great idea, either.

And at least for now, the hypervisor can usually "hide" modern HDD and
SSD capacities from that pesky OpenVMS 2 TiB limit, too. Might even be
able to "hide" NAS storage, donno yet.
--
Pure Personal Opinion | HoffmanLabs LLC
chris
2021-07-08 14:00:29 UTC
Permalink
Post by Stephen Hoffman
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
I can get ~half a gigabyte per second through a single SATA storage
connection, sequential read or sequential write, on widely-available
SATA (SATA 3.0 / SATA 6Gp/s) storage hardware.
That's with 7200 rpm drives, probably, whereas many are only 5400, so I
doubt that would be sustainable bandwidth over large files, or where
the drive needs to seek or change track.

At 7200 rpm, track to track and rotational latency could be 8.4
milliseconds worst case + head seek time, which limits sustainable
bandwidth. 10K drives are better, 15K, even more so...
Stephen Hoffman
2021-07-08 18:41:01 UTC
Permalink
Post by chris
Post by Stephen Hoffman
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other
disk types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
I can get ~half a gigabyte per second through a single SATA storage
connection, sequential read or sequential write, on widely-available
SATA (SATA 3.0 / SATA 6Gp/s) storage hardware.
That's with 7200 rpm drives, probably, whereas many are only 5400, so
doubt that would be sustainable bandwidth over large files, or where
the drive needs to seek or change track.
That's sustained sequential reads and sustained sequential writes at
roughly half a gigabyte per second.
Post by chris
At 7200 rpm, track to track and rotational latency could be 8.4
milliseconds worst case + head seek time, which limits sustainable
bandwidth. 10K drives are better, 15K, even more so...
How old is this server gear?

Yes, a high-end SAS 15K HDD peaks around 200 to maybe 225 MBps
sustained sequential, which is marginal for use on SAS/SATA 1 (2003),
and comfortably within SAS/SATA 2.

SSDs are however faster, and can push the limits of what SATA 3 (2008)
offers. The Integrity rx2800 i6 box offers SAS/SATA 3.

Or storage on yet faster buses such as NVMe via PCIe 3.0 (2013) or PCIe
4.0 (2017), for those on more recent server gear. But VSI isn't there
yet, with NVMe.

Specifically for the OpenVMS V9.1 beta, a hypervisor guest on SSD
storage on SATA 3 buses should be more than adequate for most beta
users.

Available x86-64 storage options and configurations are very different
from those of Integrity and AlphaServer systems too, so many of us will be
spending time reading the VSI hardware support documentation.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2021-07-05 19:57:31 UTC
Permalink
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
What about SSDs?
Craig A. Berry
2021-07-05 20:38:31 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by chris
Post by John H. Reinhardt
1.32 SupportedDiskTypes
VSI OpenVMS x86-64 V9.1 only supports SATA disks. Support for other disk
types will be added in future releases of VSI OpenVMS x86-64.
Find that odd. SATA are consumer quality, 5400 or 7200 rpm. so slow
access compared to U320 scsi or sas at 10 or 15k...
What about SSDs?
Since most (all?) disks used with v9.1 will be virtual, what matters
most is finding something that is easy to work with in the various
hypervisors and uses a relatively efficient driver on VMS. I certainly
hope SATA does not imply DQDRIVER, but I don't know.
Simon Clubley
2021-07-02 17:28:30 UTC
Permalink
Post by Zane H. Healy
Post by Phillip Helbig (undress to reply)
Right. But the purpose of a field test is to test a system. If such
clustering is allowed with 9.1 but not with 9.2, then that means that it
will be removed (or at least de-supported) with 9.2. Why make such a
change after the test? Yes, clustering with older versions makes sense
It would less than impressive if something works in 9.1FT, but not in 9.2.
It's not that simple, Zane and Phillip.

The cluster communications protocol is changing to now support secure
communications.

Does 9.1 contain the old cluster protocol or the new secure protocol?

If 9.1 contains the old cluster protocol then it's possible it will
work with 7.3 but will stop working in 9.2 (or whenever the new protocol
is shipped) because there will not be compatibility kits for VAX 7.3.

What I do not know is whether, when the new cluster protocol is shipped,
the old insecure protocol will continue to be available, and hence whether the
old protocol will continue to work with 7.3 clustered with a 9.2 or
later node.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Phillip Helbig (undress to reply)
2021-07-02 17:34:14 UTC
Permalink
Post by Simon Clubley
Post by Zane H. Healy
Post by Phillip Helbig (undress to reply)
Right. But the purpose of a field test is to test a system. If such
clustering is allowed with 9.1 but not with 9.2, then that means that it
will be removed (or at least de-supported) with 9.2. Why make such a
change after the test? Yes, clustering with older versions makes sense
It would less than impressive if something works in 9.1FT, but not in 9.2.
It's not that simple Zane and Phillip.
The cluster communications protocol is changing to now support secure
communications.
Does 9.1 contain the old cluster protocol or the new secure protocol ?
OK, but considering that clustering is something rather essential to
many users, it seems strange that the field test doesn't include the new
protocol.
Arne Vajhøj
2021-07-02 18:55:15 UTC
Permalink
It looks like 9.1 will be more like 9.0
and less like any other prior field test in that there will be frequent
updates during the course of the field test with lots of rough edges
initially.
Yes.

But I can understand VSI wanting to get it further out now.

Given how big the changes are, it is good to have a lot
of testers. And given the size of the VMS community,
0.1% as field testers may not be enough.

And then there is the political aspect of it. For quite some
time when the PHB has asked his VMS guy "When is VMS x86-64 coming?"
the answer has been "It has been released in beta and I have
heard good reports about it." Now there can be a new
answer: "I am running it here as an early access tester
and it looks pretty good." It gives an impression
of progress.

Arne
Zane H. Healy
2021-07-06 22:16:03 UTC
Permalink
Post by Simon Clubley
It's not that simple Zane and Phillip.
The cluster communications protocol is changing to now support secure
communications.
Does 9.1 contain the old cluster protocol or the new secure protocol ?
If 9.1 contains the old cluster protocol then it's possible it will
work with 7.3 but will stop working in 9.2 (or whenever the new protocol
is shipped) because there will not be compatibility kits for VAX 7.3.
What I do not know is, when the new cluster protocol is shipped, if
the old insecure protocol will continue to available and hence if the
old protocol will continue to work with 7.3 clustered with a 9.2 or
later node.
Simon.
Thanks for that explanation! It will be interesting to see what happens.
Based on this, I tend to suspect that they don't have the new cluster
protocol in place for 9.1. It would be really nice if, when they ship 9.2,
you can choose which version of the protocol to use.

Zane
Dave Froble
2021-07-06 23:54:09 UTC
Permalink
Post by Zane H. Healy
Post by Simon Clubley
It's not that simple Zane and Phillip.
The cluster communications protocol is changing to now support secure
communications.
Does 9.1 contain the old cluster protocol or the new secure protocol ?
If 9.1 contains the old cluster protocol then it's possible it will
work with 7.3 but will stop working in 9.2 (or whenever the new protocol
is shipped) because there will not be compatibility kits for VAX 7.3.
What I do not know is, when the new cluster protocol is shipped, if
the old insecure protocol will continue to available and hence if the
old protocol will continue to work with 7.3 clustered with a 9.2 or
later node.
Simon.
Thanks for that explaination! It will be interesting to see what happens.
Based on this, I tend to suspect that they don't have the new cluster
protocol in place for 9.1. It would be really nice if, when they ship 9.2,
you can choose which version of the protocol to use.
Zane
Beware the "wrath of Hoff" descending upon you ...

:-)

Seriously, the issues will be complex enough, without VSI having to
support two protocols. It's past time for SAS to be a bit more secure,
so just go there and get it over with.

After all, who would be affected? Perhaps Phillip, and maybe someone
else. DECnet works fine for transferring data one time. One could even
place everything on LD volumes for a one-time transfer. And of course
there are always image backups.
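
For example, something as simple as (node and device names made up):

$ BACKUP/IMAGE/LOG DKA100: BIGBOX::DKA500:[ARCHIVE]OLDDISK.BCK/SAVE_SET

gets a whole disk image across the wire one time, no cluster required.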
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Stephen Hoffman
2021-07-07 01:13:18 UTC
Permalink
Post by Dave Froble
Beware the "wrath of Hoff" descending upon you ...
:-)
Seriously, the issues will be complex enough, without VSI having to
support two protocols.
Clustering already "runs" up to two protocol versions at once, when
necessary. For lack of better terminology, "old" and "current". This is
necessary for rolling upgrades.
Post by Dave Froble
It's past time for SAS to be a bit more secure, so just go there and
get it over with.
SCS, not SAS.
Post by Dave Froble
After all, who would be affected? Perhaps Phillip, and maybe someone
else. DECnet works fine for transferring data one time. One could
even place everything on LD volumes for one time transfer. And of
course there is always image backups.
If you're still running production OpenVMS VAX, you're not in a rush to
cluster with and upgrade to x86-64. You're just not.

If you're still running OpenVMS VAX, you're probably not replacing
anything—not anything you can avoid replacing—until the production line
and/or the whole facility is replaced and rebuilt.

And in the unlikely event you really do want to use or to add a cluster
to migrate three architectures forward from a production OpenVMS VAX
configuration—and I'm skeptical that there's more than a rounding error
of examples here that want to add multi-architecture clustering with
OpenVMS x86-64 to an OpenVMS VAX production configuration—then the
OpenVMS VAX configuration and the rest of the cluster must all be under
VSI support.

Here are the warranty and migration configurations:
https://vmssoftware.com/products/clusters/
--
Pure Personal Opinion | HoffmanLabs LLC
John Dallman
2021-07-02 15:53:00 UTC
Permalink
Post by Jan-Erik Söderholm
It can even by that the above text is slightly wrong. But does it
matter? How many are in need for a cluster with VAX and x86-64?
"Because it's there."

John