Discussion: replicating a system disk
Phillip Helbig (undress to reply)
2014-12-21 12:32:14 UTC
Yes, there have been discussions about this here before, but I want to
revive them, for two reasons. First, to get some feedback about a new
concept and second with VMS 8.4 in mind.

The general idea has been to copy the system disk and "change the node
name", which is quite complicated but various aspects of it have been
discussed at length. I did this a couple of times after replacing two
VAXes in my cluster with ALPHAs. It worked, but is not very
straightforward. I want to do something similar in the context of an
upgrade, i.e. upgrade a master system disk then propagate the changes to
other system disks.

I have never done this in the context of an upgrade. Back when I had a
mixed cluster, I had to do two upgrades anyway, and adding a third
didn't make that much difference. With a homogeneous cluster and three
system disks now, it means about three times the work, not 50% more.

'Tis the season to be jolly, errr, to upgrade to VMS 8.4. :-) It looks
like I finally have the time to do so. I have been at 7.3-2 for about
10 years. Various distractions kept me from upgrading; then, when I
finally had the time, access to patches for hobbyists was stopped, and I
didn't want to upgrade and then experience a problem I couldn't get
patches for. Around the time the new hobbyist programme was sorted out, it
looked like the writing was on the wall because of lack of Poulson
support. (This didn't affect me directly, but made an upgrade less
urgent. If 8.4 was to be the last anyway, there was no reason to rush
it.) Now, I am looking forward to VMS on x86 and hope that VSI will
support a mixed ALPHA/x86 cluster, at least for migration. Even if they
don't and I have to go to a higher VMS version and/or to Itanium first,
8.4 is better than 7.3-2.

I had been thinking that it is fortunate that I have an 8.3 hobbyist CD,
since I thought I couldn't go directly from 7.3-2 to 8.4. However,
looking at the OS upgrade chart at HP, it looks like I CAN go directly
from 7.3-2 to 8.4.

Is this correct?

Has anyone here done it?

Is there some reason to nevertheless go via 8.3?

Is there some reason to upgrade to 8.3 now, and to 8.4 later?

I am using 4-GB system disks now. Will this still be sufficient for
8.4, including various layered products etc?

I have the 8.3 CD and patches for 7.3-2, 8.3 and 8.4 which were
available on 16-SEP-2010. My source for 8.4 and newer patches will be
via the hobbyist download site. Is anyone running 8.4 with only the OS
and patches available from the hobbyist download site? Is there any
reason not to upgrade to this configuration?

All the system disks (one for each boot server) are two-member shadow
sets, so producing copies by making and breaking shadow sets is
straightforward. Rather than "changing the node name", my new idea is
to first get all system roots onto all disks and in synch. (I already
have separate numbers for the SYS$SPECIFIC root for each bootserver
node, even though there is only one node booting from each disk.) The
idea is then to upgrade one disk and then just use a direct copy for the
other nodes, the idea being that all the node-specific stuff is in
SYS$SPECIFIC. (Alternatively, I could upgrade a master disk and then
copy SYS$COMMON:[*...]*.* to the other disks. However, this would be
more work than the "automatic" shadow copies. It would avoid having all
system roots on all disks, though.)
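
For concreteness, the make-and-break shadow copy amounts to something
like this (the device names and volume label are placeholders for my
actual ones):

$ ! Drop one member out of the two-member shadow set; DSA1: stays up
$ DISMOUNT $1$DKA100:
$ ! The split-off member is now a copy of the system disk; it can be
$ ! mounted privately for inspection or copied onward
$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP/NOWRITE $1$DKA100: ALPHASYS
$ ! When done, dismount it and put it back; a shadow copy operation
$ ! brings it back into synch
$ DISMOUNT $1$DKA100:
$ MOUNT DSA1:/SHADOW=($1$DKA100:) ALPHASYS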

This assumes, of course, that all node-specific stuff is in
SYS$SPECIFIC:[*...]. Is this actually the case? Or is there some
software which doesn't follow this convention? (Note: I have only
DEC/Compaq/HP stuff on the system disk; none of my own stuff nor any
third-party software.)

I have long since moved SYSUAF etc (the stuff in SYLOGICALS.TEMPLATE as
well as ACCOUNTNG and VMS$AUDIT_SERVER; I also define OPC$LOGFILE_NAME
to be somewhere else) off the system disk. What I tested for a while
with ONE of the VAX nodes was defining various TCPIP logicals to move
some stuff which should be common to all nodes (a sketch of the
definitions follows the list). These were:

TCPIP$CONFIGURATION
TCPIP$HOST
TCPIP$PRINTCAP
TCPIP$NETWORK
TCPIP$PROXY
TCPIP$ROUTE
TCPIP$SERVICE
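
A sketch of what those definitions might look like in
SYS$MANAGER:SYLOGICALS.COM; the cluster-common disk and directory here
are only examples:

$ ! Point the TCPIP databases at a location off the system disk
$ DEFINE/SYSTEM/EXECUTIVE_MODE TCPIP$CONFIGURATION -
        CLU$COMMON:[TCPIP]TCPIP$CONFIGURATION.DAT
$ DEFINE/SYSTEM/EXECUTIVE_MODE TCPIP$HOST -
        CLU$COMMON:[TCPIP]TCPIP$HOST.DAT
$ DEFINE/SYSTEM/EXECUTIVE_MODE TCPIP$PROXY -
        CLU$COMMON:[TCPIP]TCPIP$PROXY.DAT
$ ! ...and likewise for TCPIP$PRINTCAP, TCPIP$NETWORK, TCPIP$ROUTE
$ ! and TCPIP$SERVICE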

Presumably it would be better to enable this functionality for all nodes
before doing any upgrading. Since the default location is in
SYS$COMMON and not in SYS$SPECIFIC, but contains information about
individual nodes, presumably there is some node-based stuff in there,
breaking with the tradition that all node-specific stuff should be in
SYS$SPECIFIC. I guess this means I will have to reconfigure TCPIP on
each node after the upgrade. I guess I would have to do this in any
case, but having this stuff off the system disk should make replicating
the system disk simpler, and might mean that I don't have to reconfigure
as much on each node.

Does anyone here actually define the logicals above to point to
somewhere off the system disk?
Stephen Hoffman
2014-12-21 13:38:22 UTC
Yes, there have been discussions about this here before...
First, to get some feedback about a new concept and second with VMS
8.4 in mind.
New concept? This is VMS. There is *nothing* new here. Not until
after V8.4-1H1 ships. And V8.4-1H1 won't apply to your Alpha-based
configuration. Etc.

In no particular order... Read and follow the HP upgrade manual. The
HP upgrade matrix is correct. Backup your disks, via off-line
BACKUP/IMAGE or by quiescing and splitting off a member volume of a
shadowset. Upgrade to V8.4.

In a cluster with multiple system disks, shut down all hosts and
upgrade the individual system disks, as changing the host names and
replicating copies can be more work — the host name tends to get
attached all over the place. Technically, a rolling upgrade can work,
but is more work and more setup and more reading, and the permissible
versions can differ.

If you can't replace that configuration with, for instance, one
Itanium, then work to reduce the morass of system disks and shadowsets
and 4 GB drives here, and to move to a simpler and more consolidated
and preferably "newer" configuration. Whether those 4 GB disks are
appropriate or sufficient or not is an entirely local call, but folks
are commonly junking 36 GB and larger SCSI drives. Even if it'll fit,
I'd replace those existing disks, both because 4 GB is insufficient
for most use and constrained system disks are more than a little extra
effort, and because those 4 GB disks are ancient, and because those
disks are very slow.

Hobbyists that are running V8.4 only have access to the provided
UPDATE kit and provided layered product versions — yes, other folks are
using other versions and other patches. Of the hobbyists that are
running V8.4 on Alpha, the provided UPDATE kit and the resulting
configuration — modulo various security issues, and some bugs that have
been identified and fixed by HP — does work.
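
As a sketch, the off-line image backup, booted from the distribution
CD or from another system disk, is just (device names are examples):

$ MOUNT/OVERRIDE=IDENTIFICATION $1$DKA0:   ! quiesced source disk
$ MOUNT/FOREIGN $1$DKA200:                 ! disk-to-disk target
$ BACKUP/IMAGE/VERIFY $1$DKA0: $1$DKA200:
$ ! ...or write a saveset to some other, Files-11-mounted, disk:
$ BACKUP/IMAGE/VERIFY $1$DKA0: $1$DKA300:[SAVESETS]SYSDISK.BCK/SAVE_SET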
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 13:51:17 UTC
Post by Stephen Hoffman
Yes, there have been discussions about this here before...
First, to get some feedback about a new concept and second with VMS
8.4 in mind.
New concept? This is VMS. There is *nothing* new here. Not until
after V8.4-1H1 ships. And V8.4-1H1 won't apply to your Alpha-based
configuration. Etc.
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Post by Stephen Hoffman
In no particular order... Read and follow the HP upgrade manual.
Presumably, the "straight upgrade" is the only thing mentioned.
Post by Stephen Hoffman
The
HP upgrade matrix is correct. Backup your disks, via off-line
BACKUP/IMAGE or by quiescing and splitting off a member volume of a
shadowset. Upgrade to V8.4. In a cluster with multiple system disks,
shut down all hosts and upgrade the individual system disks, as
changing the host names and replicating copies can be more work
If you can't
replace that configuration with, for instance, one Itanium, then work
to reduce the morass of system disks and shadowsets and 4 GB drives
here, and to move to a simpler and more consolidated and preferably
"newer" configuration.
Since Itaniums are rare on the used/dumpster market, it would be
difficult to get enough systems and peripherals and replacement parts
for everything. With x86 on the horizon, it seems less effort to just
wait until that is there, maybe even buying a new VMS system again.
Hopefully 8.4 (or some later version) will be supported in a mixed
cluster with x86. If not, then I can borrow an Itanium when the time
comes and go from Alpha to Alpha/Itanium to Itanium to Itanium/x86 to
x86.
Post by Stephen Hoffman
Hobbyists that are running V8.4 only have access to the
provided UPDATE kit and provided layered product versions — yes, other
folks are using other versions and other patches. Of the hobbyists
that are running V8.4 on Alpha, the provided UPDATE kit and the
resulting configuration — modulo various security issues, and some bugs
that have been identified and fixed by HP — does work.
Sounds like a reasonable goal to shoot for, with the hope that some
patches will be made available to hobbyists.
Stephen Hoffman
2014-12-21 14:23:04 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Yes, there have been discussions about this here before...
First, to get some feedback about a new concept and second with VMS
8.4 in mind.
New concept? This is VMS. There is *nothing* new here. Not until
after V8.4-1H1 ships. And V8.4-1H1 won't apply to your Alpha-based
configuration. Etc.
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.

Changing the host name is a pain in the ass, in the general case. For
some layered products I've encountered, it's easier to remove the
product and reinstall it, too.
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
In no particular order... Read and follow the HP upgrade manual.
Presumably, the "straight upgrade" is the only thing mentioned.
Your use of "presumably" here implies that you have not read the
documentation. Go do that. Right now.
<http://www.hp.com/go/openvms/doc> has the VMS shelf, and the
installation and upgrade manual is in the VMS shelf, and that has
details on the upgrade including details on performing the cluster
rolling upgrade should you decide to try that path.

Barring some very good reason to the contrary, follow the documented
sequence, too.
Post by Phillip Helbig (undress to reply)
Since Itaniums are rare on the used/dumpster market, it would be
difficult to get enough systems and peripherals and replacement parts
for everything.
That might be specific to Europe, as Itanium boxes are not rare on the
US used market. I've acquired some very nice system configurations for
US$200 to US$700, and the upper-end of that range is *vastly* past what
you'd want or need here.

Replacement parts for everything are generally all in one box these
days, for that whole pile of stuff. Which will reduce much of the
complexity you're dealing with, too.
Post by Phillip Helbig (undress to reply)
With x86 on the horizon, it seems less effort to just wait until that
is there, maybe even buying a new VMS system again.
Whether the VMS x86-64 port might support any box that you have is an
open question, as is whether we'll see a port available prior to 2018.
Post by Phillip Helbig (undress to reply)
Hopefully 8.4 (or some later version) will be supported in a mixed
cluster with x86. If not, then I can borrow an Itanium when the time
comes and go from Alpha to Alpha/Itanium to Itanium to Itanium/x86 to
x86.
The open question being why you seek to run a mixed-architecture
cluster here, and to add more complexity here. One Itanium box — if
you can deal with the server noise, if you happen to get one of the
louder ones — will greatly exceed the performance of your existing
collection of hardware. I'd expect even a 900 MHz McKinley will, for
that matter — the 900 MHz zx2000 box was about as fast as an EV6.
Assuming that there's a hobbyist license as VSI expects, then I'd
expect to see a complete transition over from that existing Alpha
configuration, maybe using translation or emulation as available if
there's some non-portable hunk of Alpha code in use. Remember, your
ancient Alpha boxes will be three or four years more ancient, and
x86-64 will have three or four more years of speed and added cores and
increased integration and higher density, etc.
Post by Phillip Helbig (undress to reply)
...with the hope that some patches will be made available to hobbyists.
I'd expect that no new patches will be available to hobbyists. What
VSI might do here is not yet known.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 14:52:53 UTC
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.

On the other hand, upgrading from hard disk, rather than CD, should be
faster, so it might be quicker and safer to do three regular upgrades.
Post by Stephen Hoffman
Whether the VMS x86-64 port might support any box that you have is an
open question,
Certainly not any I have now, but perhaps one I can obtain when the time
comes.
Post by Stephen Hoffman
The open question being why you seek to run a mixed-architecture
cluster here, and to add more complexity here.
Actually, I wish to avoid it, but it might be the only path from Alpha
to x86. I probably can't connect the SCSI disks I have on ALPHA
directly to an x86 box, so a mixed-architecture cluster migrating the
disks via volume shadowing seems the logical way to go (and would be
less work than copying them offline on some machine which supported both
the disks I have on Alpha and future disks which would work with x86).
Post by Stephen Hoffman
One Itanium box --- if
you can deal with the server noise, if you happen to get one of the
louder ones --- will greatly exceed the performance of your existing
collection of hardware.
While more performance would be nice, I do want to keep power and noise
down. More important is having enough replacements which I can
essentially hot swap. There is also the question whether learning the
ins and outs of the Itanium console is worth the trouble for some
intermediate solution.

Also, I like a cluster with separate system disks because it allows one
to test things on one node and also allows for a node to drop out
(intentionally or not). Yes, a modern smartphone might have more
performance than my VMS cluster, but it's not the right tool for the job
here.
Post by Stephen Hoffman
I'd expect even a 900 MHz McKinley will, for
that matter --- the 900 MHz zx2000 box was about as fast as an EV6.
I have one EV67 and two EV56 in the cluster boot servers, and an EV6 on
a satellite.
Post by Stephen Hoffman
Assuming that there's a hobbyist license as VSI expects, then I'd
expect to see a complete transition over from that existing Alpha
configuration, maybe using translation or emulation as available if
there's some non-portable hunk of Alpha code in use.
I doubt there is any non-portable Alpha code; it's more a question of
the hardware transition.
Stephen Hoffman
2014-12-21 16:13:39 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades.
Yeah, many folks do exactly that. If they really need ten system disks.
Post by Phillip Helbig (undress to reply)
It would be nice if there were an officially documented and supported
replication process.
There isn't. That's something that more experienced system managers
sometimes do, but it's more common not to be operating with the
constraints of your current and very limited hardware. For a
low-end Alpha configuration, I'd expect to find shared SCSI or fibre
channel storage, for instance. The former needs a supported and
TCQ-capable SCSI controller, while used fibre channel storage and used
fibre channel switches are getting pretty cheap. That'll reduce the
numbers of system disks present. With those system disks that are
present, folks use either parallel manual upgrades, or perform a
cluster rolling upgrade; usually after a backup of or after a clone of
the local host's or cluster lobe's system disk is created.

AFAIK, there's also no good documented way to change the host name.
That's part of why I have that information posted, after all.
Post by Phillip Helbig (undress to reply)
On the other hand, upgrading from hard disk, rather than CD, should be
faster, so it might be quicker and safer to do three regular upgrades.
Or a distro booted from (host-based) InfoServer. That's also how you
can easily keep online system disk replication capabilities, and a
library of disk images you might want to clone and boot for testing,
too. But again, I've done it on several occasions, and re-naming a VMS
host is a pain with a non-trivial configuration — the cluster host
address and particularly the cluster host name and network host name
gets embedded all over the place. What happens? Well, the system
startup can wedge, when the old host is detected in the queue database.
There are ways around all of this, but a parallel upgrade is more
maintainable for most folks.
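
The LD side of that is about this much work; file and device names
here are invented, and LDdriver is loaded via
SYS$STARTUP:LD$STARTUP.COM on recent versions if it isn't already:

$ ! Connect a distribution image to a logical disk and mount it
$ LD CONNECT DKA100:[KITS]ALPHA084.ISO LDA1:
$ MOUNT/OVERRIDE=IDENTIFICATION LDA1:
$ ! ...upgrade from LDA1:, or serve it with host-based InfoServer...
$ DISMOUNT LDA1:
$ LD DISCONNECT LDA1:
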
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Whether the VMS x86-64 port might support any box that you have is an
open question,
Certainly not any I have now, but perhaps one I can obtain when the
time comes.
Call back in several years or whenever the port is available, then?
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
The open question being why you seek to run a mixed-architecture
cluster here, and to add more complexity here.
Actually, I wish to avoid it, but it might be the only path from Alpha
to x86. I probably can't connect the SCSI disks I have on ALPHA
directly to an x86 box, so a mixed-architecture cluster migrating the
disks via volume shadowing seems the logical way to go (and would be
less work than copying them offline on some machine which supported
both the disks I have on Alpha and future disks which would work with
x86).
So you're not going to use some of the two or three terabyte SAS or
SATA disks with either HBVS or controller RAID, and avoid the need for
any of the several-years-even-more-ancient and slow SCSI chain?
Six-terabyte SATA drives are now on the market. Seriously, ditch the
old hardware when you upgrade. Use the StorageWorks bricks for
book-ends, computer-themed decorations, a boat anchor or maybe as
weights for a tarp or a pool cover. SCSI was massively expensive.
SCSI will make your configuration slower, too. Where you're headed,
half-terabyte hard disk drives are already dinky and cheap, and
equivalently-sized SSDs are now cheaper than most of the smaller hard
disks. In several more years, this'll only be further along, and
those current SCSI bricks only yet years older and slower. Simplify
your configuration, and consolidate. Maybe even virtualize, where that
fits.
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
One Itanium box --- if you can deal with the server noise, if you
happen to get one of the louder ones --- will greatly exceed the
performance of your existing collection of hardware.
While more performance would be nice, I do want to keep power and noise down.
Acoustical enclosure, server closet, office-friendly box if you can
find one, or one of the quiet Itanium servers. Though unsupported and
now getting somewhat more rare, the zx2000 is very quiet.
Post by Phillip Helbig (undress to reply)
More important is having enough replacements which I can essentially
hot swap. There is also the question whether learning the ins and outs
of the Itanium console is worth the trouble for some intermediate
solution.
Once the boot is set up, there's seldom much need to access the
console. x86-64 will be using some form of EFI or UEFI console, as
well.
Post by Phillip Helbig (undress to reply)
Also, I like a cluster with separate system disks because it allows one
to test things on one node and also allows for a node to drop out
(intentionally or not).
The newer boxes and newer storage tend to be more reliable than what
you're using and what you're accustomed to, too. Having a spare of
the same model can be handy for the folks doing self-maintenance, but
there's not really all that much need to keep the spare powered up,
given the performance of most any Itanium system in comparison to what
you have with that Alpha cluster and particularly given that
configuration is running via (probably) Fast Ethernet and not local
storage and gigabit.
Post by Phillip Helbig (undress to reply)
Yes, a modern smartphone might have more performance than my VMS
cluster, but it's not the right tool for the job here.
Well, if you booted an emulator on that smartphone...
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
I'd expect even a 900 MHz McKinley will, for that matter --- the 900
MHz zx2000 box was about as fast as an EV6.
I have one EV67 and two EV56 in the cluster boot servers, and an EV6 on
a satellite.
A zx2000 box with a 900 MHz McKinley is ~EV6 in terms of CPU
performance. In terms of the other peripheral hardware in the box,
it's faster than what you have. Something like an rx2620 or rx2660
box with even a low-end processor or two will be faster.
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Assuming that there's a hobbyist license as VSI expects, then I'd
expect to see a complete transition over from that existing Alpha
configuration, maybe using translation or emulation as available if
there's some non-portable hunk of Alpha code in use.
I doubt there is any non-portable Alpha code; it's more a question of
the hardware transition.
That probably won't be a concern for a few years or whenever the port
becomes available, and you're probably not interested in being on the
bleeding edge of VMS technology in any case.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 16:42:31 UTC
Post by Stephen Hoffman
Yeah, many folks do exactly that. If they really need ten system disks.
I'm sure many people do. Maybe not all in the same cluster, but still.
Post by Stephen Hoffman
AFAIK, there's also no good documented way to change the host name.
That's part of why I have that information posted, after all.
Right; very much appreciated.
Post by Stephen Hoffman
So you're not going to use some of the two or three terabyte SAS or
SATA disks with either HBVS or controller RAID, and avoid the need for
any of the several-years-even-more-ancient and slow SCSI chain?
Will they work with my current Alpha hardware? Sure, the plan is to
upgrade to better, cheaper hardware with VMS on x86. I just want to
make the transition as quick and painless as possible. If there are
disks I could use on both my current Alpha hardware and on x86 3 or 4
years from now then I would probably replace my SCSI stuff on Alpha now.
Stephen Hoffman
2014-12-21 17:09:33 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
So you're not going to use some of the two or three terabyte SAS or
SATA disks with either HBVS or controller RAID, and avoid the need for
any of the several-years-even-more-ancient and slow SCSI chain?
Will they work with my current Alpha hardware?
The first VMS-supported SAS and SATA storage is on Itanium.

There was a discussion within the week of somebody (Terry?) that's had
some success with an unsupported LSI PCI-X SAS/SATA controller on
Alpha, though.
Post by Phillip Helbig (undress to reply)
Sure, the plan is to upgrade to better, cheaper hardware with VMS on
x86. I just want to
make the transition as quick and painless as possible.
That's usually by imaging the data from the old to the new box
(possibly into an LD volume), and retiring the old hardware.
Post by Phillip Helbig (undress to reply)
If there are disks I could use on both my current Alpha hardware and
on x86 3 or 4 years from now then I would probably replace my SCSI
stuff on Alpha now.
I'd scrounge some of the last of the new-old-stock SCSI disks before
the last of those disappear off the market, and consolidate off those 4
GB disk drives. I was getting 146 GB 10K new-old-stock SCSI drives
for ~US$25 off Amazon, and swapping those into SBB bricks. There's
little point in buying SAS or SATA for an Alpha here in any case, as in
three or four years, what you buy now will be about the same interest
as those 4 GB SCSI disk drives you're staring at, and you'll have to
rig your own connections and mountings.

This storage stuff is also increasingly disposable, and we're presently
seeing a transition from HDD to SSD.

I'd spend my pesos not on preparing these disks for some unknown
hypothetical future x86-64 box that might run VMS, but rather on
consolidating existing onto newer storage, and on upgrading and
particularly consolidating this server warren.

But in general, if you want this to be painless, you don't haul across
old disk storage hardware to newer boxes. That's more work, more
expense, and less performance.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 17:53:47 UTC
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
So you're not going to use some of the two or three terabyte SAS or
SATA disks with either HBVS or controller RAID, and avoid the need for
any of the several-years-even-more-ancient and slow SCSI chain?
Will they work with my current Alpha hardware?
The first VMS-supported SAS and SATA storage is on Itanium.
So I can go to Itanium, get better disks, then go to x86. Or, I can
skip Itanium and get better disks via HBVS in a mixed-architecture
cluster. I would prefer the latter, since it will make it easy to
recompile stuff and make sure it still runs properly.
Post by Stephen Hoffman
There was a discussion within the week of somebody (Terry?) that's had
some success with an unsupported LSI PCI-X SAS/SATA controller on
Alpha, though.
I don't want to mess with unsupported stuff.
Post by Stephen Hoffman
I'd scrounge some of the last of the new-old-stock SCSI disks before
the last of those disappear off the market,
You should see my cellar! I literally have cabinets full of SBB disks.
Enough spares to last me until I die.
Post by Stephen Hoffman
and consolidate off those 4
GB disk drives. I was getting 146 GB 10K new-old-stock SCSI drives
for ~US$25 off Amazon, and swapping those into SBB bricks.
That is one unsupported thing I'll probably try.
Post by Stephen Hoffman
There's
little point in buying SAS or SATA for an Alpha here in any case, as in
three or four years, what you buy now will be about the same interest
as those 4 GB SCSI disk drives you're staring at, and you'll have to
rig your own connections and mountings.
Right.
Post by Stephen Hoffman
This storage stuff is also increasingly disposable, and we're presently
seeing a transition from HDD to SSD.
My wife has a MacBook with SSDs. Nice.
Post by Stephen Hoffman
But in general, if you want this to be painless, you don't haul across
old disk storage hardware to newer boxes. That's more work, more
expense, and less performance.
Another attraction of the mixed-architecture cluster: old disks on old
boxes, new disks on new boxes, copy via HBVS. No downtime.
Kerry Main
2014-12-21 18:27:39 UTC
-----Original Message-----
Phillip Helbig (undress to reply)
Sent: 21-Dec-14 12:54 PM
Subject: Re: [New Info-vax] replicating a system disk
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
So you're not going to use some of the two or three terabyte SAS or
SATA disks with either HBVS or controller RAID, and avoid the need for
any of the several-years-even-more-ancient and slow SCSI chain?
Will they work with my current Alpha hardware?
The first VMS-supported SAS and SATA storage is on Itanium.
So I can go to Itanium, get better disks, then go to x86. Or, I can
skip Itanium and get better disks via HBVS in a mixed-architecture
cluster. I would prefer the latter, since it will make it easy to
recompile stuff and make sure it still runs properly.
[snip]

While the complexities of any HW/OS upgrade definitely need to be
considered, by far the biggest challenge is typically ensuring that all
of the Apps, ISV support, and App tuning (think alignment issues, which
will be present on X64 as well) are correctly migrated.

In many Cust environments today, the App code is not well documented.
Things like App-to-App, App-to-server, App services, firewall, old
compiler and non-std code issues, and external dependencies are often a
gray area & not well understood.

Hence, imho, Custs looking at future migrations should consider the IA64
upgrade as a means to clean-up, upgrade, document and optimize their
code in preparation for a future X64 (or other) migration in 3-5 years' time.

Since VSI has already stated they are making it easier to do future platform
migrations, this will make the transition to X64 much easier than a single
big bang migration from Alpha to X64.

I would also suggest that Cust's not position X64 as the final target after
IA64 because who knows - perhaps in 3 years the Power9 and/or Arm or
other platform might look a whole lot better than they do now.

Google is looking big time at PowerX as the per core speeds look a lot
better than X64. As we have heard in discussions this week, a lot of ISV
licensing is based on numbers of cores, so per core performance will be
important in the future. Ask most Cust's today whether they would
prefer much faster per core performance vs. more, but slower cores &
most would likely choose better per core performance.

A cleaned up migration from Alpha to IA64 now will allow a Cust to
reduce their HW support costs today, move to fully supported HW &
at the same time, give them the flexibility to more easily move to
a future TBD platform.


Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
Phillip Helbig (undress to reply)
2014-12-21 19:22:27 UTC
Post by Kerry Main
While the complexities of any HW/OS upgrade definitely need to be
considered, by far the biggest challenge is typically ensuring that all
of the Apps, ISV support, and App tuning (think alignment issues, which
will be present on X64 as well) are correctly migrated.
In my case, apart from the OS and LPs, it's mostly third-party software
such as ZIP, LYNX, the OSU WWW server, LaTeX etc. Most of this should
be compile and go, especially since most of it will probably have been
tested on the new architecture before I get around to it. My own stuff
is mostly written in Fortran. Since I take care to write standard code,
it should be compile and go. However, it would still be useful to have
both old and new architectures in the same cluster for ease of
comparison (run .EXE and save results; recompile, relink, run .EXE again
and save results; compare results).
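
In DCL terms the comparison I have in mind is no more than this
(program and file names made up):

$ ! On the old architecture: run the existing image, keep the output
$ RUN MODEL.EXE
$ RENAME RESULTS.DAT RESULTS_OLD.DAT
$ ! On the new architecture: rebuild from the same source and rerun
$ FORTRAN MODEL.FOR
$ LINK MODEL
$ RUN MODEL.EXE
$ ! Any discrepancy shows up here
$ DIFFERENCES RESULTS.DAT RESULTS_OLD.DAT
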
Phillip Helbig (undress to reply)
2014-12-22 18:17:35 UTC
Post by Phillip Helbig (undress to reply)
Post by Kerry Main
While the complexities of any HW/OS upgrade definitely need to be
considered, by far the biggest challenge is typically ensuring that all
of the Apps, ISV support, and App tuning (think alignment issues, which
will be present on X64 as well) are correctly migrated.
In my case, apart from the OS and LPs, it's mostly third-party software
such as ZIP, LYNX, the OSU WWW server, LaTeX etc. Most of this should
be compile and go, especially since most of it will probably have been
tested on the new architecture before I get around to it. My own stuff
is mostly written in Fortran. Since I take care to write standard code,
it should be compile and go. However, it would still be useful to have
both old and new architectures in the same cluster for ease of
comparison (run .EXE and save results; recompile, relink, run .EXE again
and save results; compare results).
WARNING !!! total waste of bandwidth to follow ...
You don't need to run your old and new in the same cluster ....
No, but it's easier.
David Froble
2014-12-23 09:52:11 UTC
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Kerry Main
While the complexities of any HW/OS upgrade definitely need to be
considered, by far the biggest challenge is typically ensuring that all
of the Apps, ISV support, and App tuning (think alignment issues, which
will be present on X64 as well) are correctly migrated.
In my case, apart from the OS and LPs, it's mostly third-party software
such as ZIP, LYNX, the OSU WWW server, LaTeX etc. Most of this should
be compile and go, especially since most of it will probably have been
tested on the new architecture before I get around to it. My own stuff
is mostly written in Fortran. Since I take care to write standard code,
it should be compile and go. However, it would still be useful to have
both old and new architectures in the same cluster for ease of
comparison (run .EXE and save results; recompile, relink, run .EXE again
and save results; compare results).
WARNING !!! total waste of bandwidth to follow ...
You don't need to run your old and new in the same cluster ....
No, but it's easier.
Just what, assuming DECnet and FAL, is easier? Be specific.
JF Mezei
2014-12-21 22:09:13 UTC
Post by Kerry Main
Hence, imho, Custs looking at future migrations should consider the IA64
upgrade as a means to clean-up, upgrade, document and optimize their
code in preparation for a future X64 (or other) migration in 3-5 years' time.
While this is definitely the case when migrating from VAX which has
different compilers, 32 bits etc, if one is at Alpha, isn't the cleanup
already done and the move to IA64 providing no advantage when the end
goal is to move to the 8086 ?

Considering that we don't yet know what the 8086 compiler environment
will be for VMS, it is quite possible that moving from Alpha to 8086
will involve the exact same work as going from IA64 to 8086.
Kerry Main
2014-12-22 01:35:37 UTC
-----Original Message-----
Mezei
Sent: 21-Dec-14 5:09 PM
Subject: Re: [New Info-vax] replicating a system disk
Post by Kerry Main
Hence, imho, Custs looking at future migrations should consider the IA64
upgrade as a means to clean-up, upgrade, document and optimize their
code in preparation for a future X64 (or other) migration in 3-5 years'
time.
While this is definitely the case when migrating from VAX which has
different compilers, 32 bits etc, if one is at Alpha, isn't the cleanup
already done and the move to IA64 providing no advantage when the end
goal is to move to the 8086 ?
Considering that we don't yet know what the 8086 compiler environment
will be for VMS, it is quite possible that moving from Alpha to 8686
will involve the exact same work as going from IA64 to 8086.
Each Cust environment is different, but many Alpha Cust environments
do not have well documented Application env. Same goes for ALL other
platforms as well btw. Ask for a detailed listing of all Apps / tools versions,
what servers they run on, what dependencies exist, what acceptance
test plans they have etc. and the look you get will be similar to a deer
in the headlights.

By doing all of this now, they can clean up many loose ends, upgrade
compilers to latest versions and also reduce the support / maint.
costs (new HW has warranty periods as well). HP will likely also provide
substantial discounts on HW and new SW costs.

Course, like what was mentioned earlier, Custs will need to review
ISV considerations as well to determine cost/benefits.

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
Stephen Hoffman
2014-12-21 18:36:40 UTC
Post by Phillip Helbig (undress to reply)
So I can go to Itanium, get better disks, then go to x86. Or, I can
skip Itanium and get better disks via HBVS in a mixed-architecture
cluster. I would prefer the latter, since it will make it easy to
recompile stuff and make sure it still runs properly.
You're really fond of mixed-architecture here. Definitely would not be
my choice, beyond the time required to migrate the cluster over. Ah,
well.
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
There was a discussion within the week of somebody (Terry?) that's had
some success with an unsupported LSI PCI-X SAS/SATA controller on
Alpha, though.
I don't want to mess with unsupported stuff.
Post by Stephen Hoffman
I'd scrounge some of the last of the new-old-stock SCSI disks before
the last of those disappear off the market,
You should see my cellar! I literally have cabinets full of SBB disks.
Enough spares to last me until I die.
Until you find they've all seized.

Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
This storage stuff is also increasingly disposable, and we're presently
seeing a transition from HDD to SSD.
My wife has a MacBook with SSDs. Nice.
See if she'll let you install OS X Server on it.
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
But in general, if you want this to be painless, you don't haul across
old disk storage hardware to newer boxes. That's more work, more
expense, and less performance.
Another attraction of the mixed-architecture cluster: old disks on old
boxes, new disks on new boxes, copy via HBVS. No downtime.
For these cases, downtime is usually from the extra complexity involved
in these configurations, and from the older hardware.

Simpler and more consolidated configurations and newer hardware are a
more common way to try to avoid downtime, at least locally.

But you seem to want that complexity, so...
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 19:25:17 UTC
Post by Stephen Hoffman
You're really fond of mixed-architecture here. Definitely would not be
my choice, beyond the time required to migrate the cluster over. Ah,
well.
Suppose I recompile and relink and get different results from some
Fortran code. Whatever the reason, one needs a side-by-side comparison.
Is the new result right or the old result? Without side-by-side
comparison, finding out would be much more difficult. Sure, if it's
just the OS, or third-party stuff, then I wouldn't fix it myself anyway
but would report a bug, but for my own stuff it would be essential.
Stephen Hoffman
2014-12-21 19:54:20 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
You're really fond of mixed-architecture here. Definitely would not be
my choice, beyond the time required to migrate the cluster over. Ah,
well.
Suppose I recompile and relink and get different results from some
Fortran code. Whatever the reason, one needs a side-by-side comparison.
Is the new result right or the old result? Without side-by-side
comparison, finding out would be much more difficult. Sure, if it's
just the OS, or third-party stuff, then I wouldn't fix it myself anyway
but would report a bug, but for my own stuff it would be essential.
Use the DEC Test Manager (DTM) to run your regression tests and verify
and compare the results, or use whatever else you're using to test and
verify your code. Don't have regression tests, or need to check
something after the migration has completed? Boot a standalone Alpha,
if and when you need
it, and have a look. Or run a test with GFortran over on that Mac, as
an alternative.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-12-22 02:11:39 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
You're really fond of mixed-architecture here. Definitely would not be
my choice, beyond the time required to migrate the cluster over. Ah,
well.
Suppose I recompile and relink and get different results from some
Fortran code. Whatever the reason, one needs a side-by-side comparison.
Is the new result right or the old result? Without side-by-side
comparison, finding out would be much more difficult. Sure, if it's
just the OS, or third-party stuff, then I wouldn't fix it myself anyway
but would report a bug, but for my own stuff it would be essential.
Ever heard of DECnet, and FAL ???
Phillip Helbig (undress to reply)
2014-12-22 18:18:29 UTC
Post by David Froble
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
You're really fond of mixed-architecture here. Definitely would not be
my choice, beyond the time required to migrate the cluster over. Ah,
well.
Suppose I recompile and relink and get different results from some
Fortran code. Whatever the reason, one needs a side-by-side comparison.
Is the new result right or the old result? Without side-by-side
comparison, finding out would be much more difficult. Sure, if it's
just the OS, or third-party stuff, then I wouldn't fix it myself anyway
but would report a bug, but for my own stuff it would be essential.
Ever heard of DECnet, and FAL ???
Sure, but as I've mentioned before, I really don't need DECnet within a
cluster, and can't reach other VMS machines with it, so I've never set
it up. I will, when I have time, but I don't want to set it up just for
this.
JF Mezei
2014-12-23 03:22:07 UTC
Post by Phillip Helbig (undress to reply)
Sure, but as I've mentioned before, I really don't need DECnet within a
cluster, and can't reach other VMS machines with it, so I've never set
it up.
DECnet can be of use locally for network objects that get started under
a specific username (such as the OSU web server). It can be a more
convenient way to start stuff than RUN/DETACHED.
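
For comparison, the RUN/DETACHED form is along these lines (the UIC,
directories and names are examples):

$ ! Start the server under its own account (UIC is an example)
$ RUN SYS$SYSTEM:LOGINOUT.EXE -
        /DETACHED -
        /UIC=[200,201] -
        /PROCESS_NAME="OSU_HTTPD" -
        /INPUT=WWW_ROOT:[000000]START_SERVER.COM -
        /OUTPUT=WWW_ROOT:[000000]SERVER.LOG -
        /ERROR=WWW_ROOT:[000000]SERVER.ERR
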
David Froble
2014-12-22 02:10:29 UTC
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
You should see my cellar! I literally have cabinets full of SBB disks.
Enough spares to last me until I die.
Until you find they've all seized.
AND THEY WILL !!
Paul Sture
2014-12-22 07:18:37 UTC
Post by David Froble
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
You should see my cellar! I literally have cabinets full of SBB disks.
Enough spares to last me until I die.
Until you find they've all seized.
AND THEY WILL !!
There speaks the voice of experience, unless I am very much mistaken.
--
HAL 9000: Dave. Put down those Windows disks. Dave. DAVE!
David Froble
2014-12-22 11:07:35 UTC
Post by Paul Sture
Post by David Froble
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
You should see my cellar! I literally have cabinets full of SBB disks.
Enough spares to last me until I die.
Until you find they've all seized.
AND THEY WILL !!
There speaks the voice of experience, unless I am very much mistaken.
Ayep!

Not all have seized, actually only 2-3. But, while I've never looked
inside a drive, I figure there is some type of long term lube. After
it's used, then cools off, I'm thinking that it might thicken, enough
that it won't spin back up.

Maybe throw it in the oven for a while to thin up the lube? :-)

If you're lucky, the electronics in old stuff may last a long time. Bad
power, power failures, and such are not your friend.

But the physical stuff, it will wear out.
Paul Sture
2014-12-21 16:18:34 UTC
On 2014-12-21, Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.
With a pile of layered software to consider we found that cloning was
viable for 3 clusters, each with one common system disk (with
shadowing FWIW).

We developed our own check list for the items needing a change for the
node names, which was much akin to the checklist you will find in the
VMS FAQ.
Post by Phillip Helbig (undress to reply)
On the other hand, upgrading from hard disk, rather than CD, should be
faster, so it might be quicker and safer to do three regular upgrades.
I used to have a 1GB external SCSI disk which was ideal for this. A
larger one would be better of course, so that layered software could be
included.

<snip>
Post by Phillip Helbig (undress to reply)
Also, I like a cluster with separate system disks because it allows one
to test things on one node and also allows for a node to drop out
(intentionally or not). Yes, a modern smartphone might have more
performance than my VMS cluster, but it's not the right tool for the job
here.
Too much work in a production environment, but of course we had separate
development and test systems for testing.
--
HAL 9000: Dave. Put down those Windows disks. Dave. DAVE!
Stephen Hoffman
2014-12-21 16:51:23 UTC
Post by Paul Sture
On 2014-12-21, Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.
With a pile of layered software to consider we found that cloning was
viable for 3 clusters, each with one common system disk (with
shadowing FWIW).
But then that approach might be over-designing an upgrade that only
happens once or twice per decade, in Phillip's case. V7.3-2 shipped
in 2003. At that same pace, the next VMS upgrade after V8.4 might not
be until ~2021. Or when that old Alpha hardware goes pining for the
fjords, or otherwise gets replaced, of course.
Post by Paul Sture
We developed our own check list for the items needing a change for the
node names, which was much akin to the checklist you will find in the
VMS FAQ.
The list posted on the HL web site
<http://labs.hoffmanlabs.com/node/589> is newer and more inclusive than
what's in the VMS FAQ.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 18:00:29 UTC
Post by Stephen Hoffman
The list posted on the HL web site
<http://labs.hoffmanlabs.com/node/589> is newer and more inclusive than
what's in the VMS FAQ.
I just had another look at that. It seems that if one does some things
which many people do anyway, such as moving files off the system disk
and defining logical names to point to them, not using node names in
device names (HBVS takes care of some of this), and so on, then it is
not nearly as much work.
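
For the cluster-common files that amounts to the familiar
SYLOGICALS-style definitions; a sketch, with the disk and directory as
examples:

$ ! SYS$MANAGER:SYLOGICALS.COM - live copies kept off the system disk
$ DEFINE/SYSTEM/EXEC SYSUAF          CLU$COMMON:[SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST      CLU$COMMON:[SYSEXE]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY        CLU$COMMON:[SYSEXE]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC VMSMAIL_PROFILE CLU$COMMON:[SYSEXE]VMSMAIL_PROFILE.DATA
$ DEFINE/SYSTEM/EXEC QMAN$MASTER     CLU$COMMON:[SYSEXE]
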
Stephen Hoffman
2014-12-21 19:23:54 UTC
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
The list posted on the HL web site
<http://labs.hoffmanlabs.com/node/589> is newer and more inclusive than
what's in the VMS FAQ.
I just had another look at that. It seems that if one does some things
which many people do anyway, such as moving files off the system disk
and defining logical names to point to them, not using node names in
device names (HBVS takes care of some of this), and so on, then it is
not nearly as much work.
Yeah. Wouldn't be my choice, having spent more than a little time
dredging around inside (for instance) Apache, et al.
You might succeed here, or you might be rolling in backups, or back
here looking for help digging out of some bit-avalanche.
If you have a host name that's sufficiently unique, SEARCH/WINDOW=0 the
whole disk, and see where it's gotten written. That's what you're up
against.
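
Something like this, with a made-up node name:

$ ! List only the names of the files that embed the host name;
$ ! /WINDOW=0 suppresses the matching records themselves
$ SEARCH/WINDOW=0/OUTPUT=NODEHITS.LIS -
        SYS$SYSDEVICE:[*...]*.*;* "MYNODE"
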
I'd set up InfoServer and roll each system disk forward separately.
Installs are pretty quick, these days.
But have fun with that, Phillip.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-21 19:31:47 UTC
Post by Stephen Hoffman
I'd set up InfoServer and roll each system disk forward separately.
Installs are pretty quick, these days.
But have fun with that, Phillip.
I haven't ruled out parallel installs. I think I'll try the copy and if
it doesn't work, do a parallel install.
Phillip Helbig (undress to reply)
2014-12-21 19:56:46 UTC
Post by Stephen Hoffman
if you have a host name that's sufficiently unique, SEARCH/WINDOW=0 the
whole disk, and see where it's gotten written. What you're up against.
Surprisingly, not that much. Apart from files which I have since copied
to a location off the system disk, and some of my own procedures where I
have a deliberately hard-coded list of node names or where the node name
appears in comments, only some CLUE$* files. (This excludes TCPIP$*
files, but I have moved the active files off the system disk on one node
and am now testing that.)

The above is for SYS$COMMON:[*...]. SYS$SPECIFIC:[*...]/exc=[SYSCOMMON]
of course contains many more, but these wouldn't matter if I have all
roots on all system disks and boot from the appropriate root.
Paul Sture
2014-12-21 17:59:25 UTC
Post by Stephen Hoffman
Post by Paul Sture
On 2014-12-21, Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.
With a pile of layered software to consider we found that cloning was
viable for 3 clusters, each with one common system disk (with
shadowing FWIW).
But then that approach might be over-designing an upgrade that only
happens once or twice per decade, in Phillip's case. V7.3-2 shipped
in 2003. At that same pace, the next VMS upgrade after V8.4 might not
be until ~2021. Or when that old Alpha hardware goes pining for the
fjords, or otherwise gets replaced, of course.
I forgot to mention that the server room concerned was a pretty hostile
place to work; you were typically standing up using a VT on a crash cart
(and then maybe a DECwindows system elsewhere in the room once you had
network connectivity). Lots of patches involved too in one particular
upgrade. Our decision to do it that way was very much driven by the
working environment; anything we could do from the comfort of our own
desks, not to mention copy and paste using DECterms, and with full
documentation to hand, was done there.
Post by Stephen Hoffman
Post by Paul Sture
We developed our own check list for the items needing a change for the
node names, which was much akin to the checklist you will find in the
VMS FAQ.
The list posted on the HL web site
<http://labs.hoffmanlabs.com/node/589> is newer and more inclusive than
what's in the VMS FAQ.
Noted, thanks.
--
HAL 9000: Dave. Put down those Windows disks. Dave. DAVE!
Phillip Helbig (undress to reply)
2014-12-21 16:56:39 UTC
Post by Paul Sture
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.
With a pile of layered software to consider we found that cloning was
viable for 3 clusters, each with one common system disk (with
shadowing FWIW).
Right. If it were just VMS, OK, a few parallel upgrades. But all the
layered products make for much more work. Much more.

I think I'll clean up my system disks as much as possible then upgrade
one. I'll try to clone it for another node. If it works, fine; if not,
I can revert to backup and upgrade directly.

Hopefully, the fact that the system disks are all in the same cluster
will make it easier than if they were system disks of different
clusters.

Assuming that everything which can be moved off the system disk has been
moved off the system disk and that the system roots (SYS$SPECIFIC) for
all nodes are all on each disk, is there any reason why even an
unmodified copy would not work, booting from the appropriate system
root? An obvious problem would be a node name or address somewhere in
SYS$COMMON.
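
Booting from the appropriate root is the usual console setting; on
Alpha SRM, with the device and root number as examples (the 2 in
BOOT_OSFLAGS selects root SYS2; the 0 is the flags field):

>>> SET BOOTDEF_DEV DKA0
>>> SET BOOT_OSFLAGS 2,0
>>> BOOT
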
Post by Paul Sture
We developed our own check list for the items needing a change for the
node names, which was much akin to the checklist you will find in the
VMS FAQ.
I've done this before, and it worked. However, it seems to me that a
straight copy would be better. It means having all roots on all disks
and keeping them in synch, but that is probably less work than changing
the node name.
JF Mezei
2014-12-21 22:02:41 UTC
Post by Phillip Helbig (undress to reply)
Assuming that everything which can be moved off the system disk has been
moved off the system disk and that the system roots (SYS$SPECIFIC) for
all nodes are all on each disk,
You want the opposite for the upgrade. All system files have to be
repatriated to their default locations. If there is an upgrade to their
format, the upgrade procedure will not catch them if they are located
elsewhere.

Also, remember that depending on the upgrade process, your node may
reboot with STARTUP_P1 = "MIN" and with a startup procedure other than
your normal one to continue the installation, during which none of
your logical name definitions will kick in.
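
So, before starting, something like this for each relocated file (a
sketch; the reverse of the SYLOGICALS definitions):

$ ! Put the live copy back where the upgrade expects it
$ COPY CLU$COMMON:[SYSEXE]SYSUAF.DAT SYS$COMMON:[SYSEXE]
$ ! ...and comment out the matching DEFINE in SYLOGICALS.COM, so the
$ ! installation reboots see the file in its default location
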
Phillip Helbig (undress to reply)
2014-12-21 22:06:32 UTC
Post by JF Mezei
Post by Phillip Helbig (undress to reply)
Assuming that everything which can be moved off the system disk has been
moved off the system disk and that the system roots (SYS$SPECIFIC) for
all nodes are all on each disk,
You want the opposite for the upgrade. All system files have to be
repatriated to their default locations. If there is an upgrade to their
format, the upgrade procedure will not catch them if they are located
elsewhere.
Good point. :-|

I don't expect SYSUAF to change, but the TCPIP stuff might.
Phillip Helbig (undress to reply)
2014-12-21 22:22:01 UTC
Post by Phillip Helbig (undress to reply)
Post by JF Mezei
Post by Phillip Helbig (undress to reply)
Assuming that everything which can be moved off the system disk has been
moved off the system disk and that the system roots (SYS$SPECIFIC) for
all nodes are all on each disk,
You want the opposite for the upgrade. All system files have to be
repatriated to their default locations. If there is an upgrade to their
format, the upgrade procedure will not catch them if they are located
elsewhere.
Good point. :-|
I don't expect SYSUAF to change, but the TCPIP stuff might.
SYS$SPECIFIC is clear: it is for node-specific stuff. SYS$COMMON, by
default, is a mixture of 4 things: 1) stuff the same on all system disks
with this version of VMS in the world, 2) cluster-specific stuff (which
has to be kept in synch if the live files are in SYS$COMMON), 3) stuff
common to all nodes booting from this disk but not to other nodes in the
cluster booting from other system disks (I think this is rare in
practice), and 4) node-specific stuff hidden inside files, like the
TCPIP stuff using the nodename as a key.

One wants 1) to be on the system disk so that one can test a new version
of VMS, new patches, perform a rolling upgrade etc. 3) is rare in
practice. 4) is in my view a bad design. It is reasonably well
documented how to move the stuff in 2) off of the system disk, but as
you say this could cause problems during an upgrade.

It would be nice if SYS$CLUSTER, say, were a third translation of
SYS$SYSROOT, perhaps defined via a parameter in MODPARAMS.DAT (and
defaulting to SYS$COMMON if not defined). If the definition includes a
disk name, then it would be mounted as soon as the system is up far
enough to do so. One could then make this disk (or a copy of it)
available during an upgrade. This would also provide another method of
moving files off the system disk besides individual logical names: just put
them in the proper place in SYS$CLUSTER and rename the original files.
(One might want to keep the possibility of using individual logical names
for added flexibility, and of course backwards compatibility.)

Yes, one could presumably redefine SYS$SYSROOT oneself, but I don't
think that this is a good idea.
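
As a concrete aside, SYS$SYSROOT is already a search list whose
translations can be enumerated programmatically. A minimal C sketch
(an illustration only, not from the posts; error checking omitted)
that prints each translation, which is where a hypothetical
SYS$CLUSTER-style third entry would appear:

#include <descrip.h>
#include <lnmdef.h>
#include <starlet.h>
#include <stdio.h>

int main(void)
{
    $DESCRIPTOR(tabnam, "LNM$FILE_DEV");   /* standard search order */
    $DESCRIPTOR(lognam, "SYS$SYSROOT");
    char buf[256];
    unsigned short buflen;
    int idx, maxidx = 0;

    /* How many translations does the search list have? */
    struct { unsigned short len, code; void *addr, *retlen; }
        maxitem[] = { { sizeof maxidx, LNM$_MAX_INDEX, &maxidx, 0 },
                      { 0, 0, 0, 0 } };
    sys$trnlnm(0, &tabnam, &lognam, 0, maxitem);

    /* Fetch and print each translation by index. */
    for (idx = 0; idx <= maxidx; idx++) {
        struct { unsigned short len, code; void *addr, *retlen; }
            item[] = { { sizeof idx, LNM$_INDEX, &idx, 0 },
                       { sizeof buf - 1, LNM$_STRING, buf, &buflen },
                       { 0, 0, 0, 0 } };
        sys$trnlnm(0, &tabnam, &lognam, 0, item);
        buf[buflen] = '\0';
        printf("SYS$SYSROOT[%d] = %s\n", idx, buf);
    }
    return 1;
}

On a typical system this prints the SYS$SPECIFIC root followed by
SYS$COMMON; the idea above amounts to teaching VMS to append one more
entry.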
JF Mezei
2014-12-21 22:33:22 UTC
Permalink
Post by Phillip Helbig (undress to reply)
SYS$SPECIFIC is clear: it is for node-specific stuff. SYS$COMMON, by
Some apps install stuff all over the place. TCPIP Services prior to the
recent versions was notable for this. Some stuff that belonged in
SYS$SPECIFIC was put in COMMON by default, etc. So you need to be a tad
careful during the upgrade of the TCPIP services.

Moving from ancient TCPIP to the current one also gives you new
functionality, in particular the SMTP server which handles stuff like
RBLs, checking for deliverability before accepting a message (to
eliminate backscatter), etc. They also finally fixed their stupid
mistake of breaking up long header lines (which rendered the email
headers syntactically useless to POP/IMAP servers and clients). That
one had been put in by the SMTP receiver to ensure no line exceeded 255
bytes (I believe it was a DECnet mail restriction).


So the upgrade in functionality itself will get you to spend some time
reading the doc and setting things up to take advantage of new stuff.
Phillip Helbig (undress to reply)
2014-12-22 18:12:18 UTC
Permalink
Post by JF Mezei
Some apps install stuff all over the place. TCPIP Services prior to the
recent versions was notable for this. Some stuff that belonged in
SYS$SPECIFIC was put in COMMON by default, etc. So you need to be a tad
careful during the upgrade of the TCPIP services.
TCPIP is probably more effort than the rest of the upgrade.
Post by JF Mezei
Moving from ancient TCPIP to the current one also gives you new
functionality, in particular the SMTP server which handles stuff like
RBLs, checking for deliverability before accepting a message (to
eliminate backscatter), etc.
Even the ancient one I have has those features; it's one of the reasons
I took VAXen out of the cluster.
David Froble
2014-12-22 11:09:37 UTC
Permalink
Post by JF Mezei
Post by Phillip Helbig (undress to reply)
Assuming that everything which can be moved off the system disk has been
moved off the system disk and that the system roots (SYS$SPECIFIC) for
all nodes are all on each disk,
You want the opposite for the upgrade. All systems files have to be
repatriated to their default locations. If there is an upgrade to their
format, the upgrade procedure will not catch them if they are located
elsewhere.
Also, remember that depending on the upgrade process, your node may
reboot with STARTUP_P1="MIN" and with STARTUP pointed at an
installation procedure rather than your usual startup file, during
which none of your logical name definitions will kick in.
I've always figured that thing about moving stuff off the system disk to
some common disk was just asking for trouble. Like wearing the "please
kick me" sign.
Stephen Hoffman
2014-12-22 13:50:09 UTC
Permalink
Post by David Froble
I've always figured that thing about moving stuff off the system disk
to some common disk was just asking for trouble. Like wearing the
"please kick me" sign.
When dealing with multiple system disks, the easiest (and supported)
course is to move the shared files
<http://labs.hoffmanlabs.com/node/169> to a common location.

This particular mess is at the core of various of my gripes about the
current state of VMS, too. VMS started out with and grew ad-hoc
file-based user authentication. Yes, it does work. Due largely
to engineering and scheduling inertia — other changes were viewed as
more important and/or less disruptive — redesigned, newer, and
distributed implementations such as LDAP, and the integrated
relational database support that VMS lacks, were never in a position
to supplant the existing ad-hoc authentication mechanisms within VMS.

Which means that — beyond the password sharing with LDAP that's still
not even the default behavior of VMS — there's no concept of sharing
authentication data across non-clustered systems, nor any concept of
having machine or machine group records for (for instance) different
quota settings within lobes of a cluster.

Being largely based on RMS indexed files, VMS authentication is also
not transactional, which means that incomplete updates and incomplete
deletions here can leave you with odd states. These failures are rare
as the various authentication files aren't written all that often, but
it's easily possible to get into these states manually. Everybody here
remembers to remove the VMSMAIL_PROFILE record when a user's SYSUAF
entry is removed, for instance? This RMS-based approach also means
that the authentication data is disconnected and scattered over a pile
of files and not in a database. The RMS indexed files also mean that
making changes to record structures is far more difficult than with a
database or with LDAP, too.

In this era, I'd be surprised if anybody actually set out to create
such a hacky multiple-file-based user authentication system as is still
used within VMS. Yet we all deal with it. RMS indexed files were and
are great for many applications — they're a NoSQL database, after all —
but I doubt any serious replacement would use the current VMS design.

Many folks would look to use LDAP and/or database files, either running
LDAP locally or distributed, and a database or two to avoid most of the
<http://labs.hoffmanlabs.com/node/169> "where's my profile?" fun.
(Note too that that article does not include other user profile data
that can exist, such as the data stored in DEC Notes notebooks and in
the notes conference files, or in other layered products data files;
everybody that wanted to or needed to add or extend that profile data
all tended to create their own cache of data, meaning that reliably and
globally removing a user in a complex VMS environment is difficult.
You did remember to delete the user from the moderator list in that
notes conference, so that some random new user with that same username
isn't automatically a conference moderator, right?)

Then there's using GUIDs to represent user identities on data
structures, rather than relying on the username / UIC pairing,
particularly at sites where users do get deleted. Though like actually
fixing timekeeping on VMS, migrating to GUIDs would also break a lot of
stuff. But that's fodder for another time.

But back to David's comment and preference for not relocating files,
"you can't get there from here". You have to coordinate or to share
those cluster common files, just as soon as you have multiple system
disks around. (Yes, it is documented to have multiple parallel SYSUAF
files, though you have to track UIC values across all of them. Having
used multiple files in clusters that mixed really big, fast boxes —
very big and powerful servers — with dinky, slow boxes — underpowered
workstations, underpowered servers — and in other clusters where some
boxes were deliberately configured as big-quota batch engines and
others as sharing-friendly timesharing boxes, I can say it's a hassle.
Though multiple
SYSUAF configurations are documented, the standard VMS tools and the
current VMS implementation just won't help you here.)

Sooooo.... No transactional support for database changes, difficult
changes to the existing RMS-based profiles, no LDAP support for
distributed profiles and single-domain management, no single profile
entry for a user (even with current LDAP enabled!), trivially easy
collisions with remnants of the profiles of deleted users, and
generally configuration and user management that's rather more
difficult and arcane than it really should be.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-22 18:21:54 UTC
Permalink
Post by Stephen Hoffman
Post by David Froble
I've always figured that thing about moving stuff off the system disk
to some common disk was just asking for trouble. Like wearing the
"please kick me" sign.
When dealing with multiple system disks, the easiest (and supported)
course is to move the shared files
<http://labs.hoffmanlabs.com/node/169> to a common location.
Definitely. It works like a charm. It is really only an issue during
upgrades, and then in the unlikely event that one of these files is
affected.
JF Mezei
2014-12-23 02:54:27 UTC
Permalink
Post by Stephen Hoffman
deletions here can leave you with odd states. These failures are rare
as the various authentication files aren't written all that often, but
it's easily possible to get into these states manually. Everybody here
remembers to remove the VMSMAIL_PROFILE record when a user's SYSUAF
entry is removed, for instance?
Is this an RMS problem or an application problem? Unless all is packed
in the same record in the same file/table, you will need to have an
application (such as authorize) manage deleting multiple related records
when you delete a user.


I think this has more to do with Digital not updating Authorize when
they added new stuff such as the VMSMAIL_PROFILE file.

(Or integrating the info from VMSMAIL_PROFILE into SYSUAF).
Stephen Hoffman
2014-12-23 03:47:32 UTC
Permalink
Post by JF Mezei
Is this an RMS problem or an application problem?
Yes. RMS is very limited, and in various dimensions. Yes, there are
application problems. Yes, there are gaps in what the existing VMS-related
tools provide. These areas are connected together, too. As the joke
goes, if the tool you have is a hammer, then everything looks like a
nail. If the tool you have is RMS, then everything looks like a bunch
of comparatively disjoint and application-managed and not-very-flexible
records all stuffed into various files, and not necessarily with online
backups, nor with the capability of all-or-nothing changes to the
constituent files of an application.

Put another way, here are two questions:

...Have you created an application that uses a transaction-oriented,
relational database? Whether SQLite, PostgreSQL, Oracle Rdb, or
otherwise?

...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?

Most everybody around has certainly experienced these tools and
mechanisms as an end-user of computing, but — like DNS services or
certificates or other areas of computing — many folks haven't really
looked into the details and capabilities of these and other parts of
the foundation.

Put another way, my comments assume some familiarity with these and
with some other areas of VMS.

If you have worked with a transaction-oriented database and have some
experience within LDAP, then it would help to elaborate on what it is
that you find confusing or concerning about my comments, or where you
might disagree with my comments of course. If you don't have at least
a little familiarity or a little experience with databases or LDAP or
DNS or certificates for that matter, then I'd suggest spending a little
time learning more about these and other parts of the foundation of
most current computing configurations, and maybe reading some
documentation or writing some simple applications or however you prefer
to learn.
--
Pure Personal Opinion | HoffmanLabs LLC
JF Mezei
2014-12-23 04:02:17 UTC
Permalink
Post by Stephen Hoffman
...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?
Ahh, but one doesn't know HOW LDAP stores its records because the
application has been built to manage the different aspects of a user's
existence.

LDAP may have separate tables internally, but the application knows that
when you delete a user, it needs to process that user's records in a
number of different tables to ensure consistency.

So this goes back to the authorization application being responsible for
deletes, and this is where authorize.exe failed to include the extra
logic to delete associated VMSMAIL_PROFILE record(s), and perhaps also
offer to delete the user's directory and files or archive them.

You could conceivably build an authorization server on VMS that takes
care of all of that and presents an LDAP interface to the user, even
though the actual data remains stored in the original files.
Stephen Hoffman
2014-12-23 04:18:37 UTC
Permalink
Post by JF Mezei
Post by Stephen Hoffman
...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?
Ahh, but one doesn't know HOW LDAP stores its records because the
application has been built to manage the different aspects of a user's
existence.
LDAP may have separate tables internally, but the application knows that
when you delete a user, it needs to process that user's records in a
number of different tables to ensure consistency.
So this goes back to the authorization application being responsible for
deletes, and this is where authorize.exe failed to include the extra
logic to delete associated VMSMAIL_PROFILE record(s), and perhaps also
offer to delete the user's directory and files or archive them.
You could conceivably build an authorization server on VMS that takes
care of all of that and presents an LDAP interface to the user, even
though the actual data remains stored in the original files.
None of which is particularly relevant to my comments.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-12-23 10:33:35 UTC
Permalink
Post by Stephen Hoffman
Post by JF Mezei
Post by Stephen Hoffman
...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?
Ahh, but one doesn't know HOW LDAP stores its records because the
application has been built to manage the different aspects of a user's
existence.
LDAP may have separate tables internally, but the application knows that
when you delete a user, it needs to process that user's records in a
number of different tables to ensure consistency.
So this goes back to the authorization application being responsible for
deletes, and this is where authorize.exe failed to include the extra
logic to delete associated VMSMAIL_PROFILE record(s), and perhaps also
offer to delete the user's directory and files or archive them.
You could conceivably build an authorization server on VMS that takes
care of all of that and presents an LDAP interface to the user, even
though the actual data remains stored in the original files.
None of which is particularly relevant to my comments.
Well, >>IF<< you are claiming that using an RDBMS would make a difference
with regard to current deficiencies, then I'd say that it is relevant.

I'm not saying that using an RDBMS for implementation isn't a good idea.
My DAS database was decent in 1984, but not decent in 2014. I'll make the
same claim for RMS: it is no longer a good choice for new work. At least
much of the time.
David Froble
2014-12-23 10:21:01 UTC
Permalink
Post by Stephen Hoffman
Post by JF Mezei
Is this an RMS problem or an application problem?
Yes. RMS is very limited, and in various dimensions. Yes, there are
application problems. Yes, there are gaps in what the existing VMS-related
tools provide. These areas are connected together, too. As the joke
goes, if the tool you have is a hammer, then everything looks like a
nail. If the tool you have is RMS, then everything looks like a bunch
of comparatively disjoint and application-managed and not-very-flexible
records all stuffed into various files, and not necessarily with online
backups, nor with the capability of all-or-nothing changes to the
constituent files of an application.
...Have you created an application that uses a transaction-oriented,
relational database? Whether SQLite, PostgreSQL, Oracle Rdb, or
otherwise?
Yes, I have. Not on VMS, but for why you're asking, it doesn't matter.
The DB is SQL-2000. Anyone who does anything like this cannot help
but have their nose rubbed into some rather nice capabilities.

DELETE FROM USERS WHERE USERID = 'JF'

Now, what is included is an application called Enterprise Manager. With
that you can do many things, including execute SQL commands, scripts,
and such. But, it IS an application; it is no better or worse than how
well it is implemented. Just being a DB utility has no bearing on how
well it is implemented.

I do not think it is fair to blame RMS for poor application implementation.

I'll also say that having a GUI-based application was very good. Any DB
running on VMS should include such a GUI application, whether running
on weendoze, Mac, or whatever.
Post by Stephen Hoffman
...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?
No, I have not. In fact, I'm not sure I understand what the things
represented by those labels actually do.
Post by Stephen Hoffman
Most everybody around has certainly experienced these tools and
mechanisms as an end-user of computing, but — like DNS services or
certificates or other areas of computing — many folks haven't really
looked into the details and capabilities of these and other parts of the
foundation.
Put another way, my comments assume some familiarity with these and with
some other areas of VMS.
If you have worked with a transaction-oriented database and have some
experience within LDAP, then it would help to elaborate on what it is
that you find confusing or concerning about my comments, or where you
might disagree with my comments of course. If you don't have at least a
little familiarity or a little experience with databases or LDAP or DNS
or certificates for that matter, then I'd suggest spending a little time
learning more about these and other parts of the foundation of most
current computing configurations, and maybe reading some documentation
or writing some simple applications or however you prefer to learn.
Sure, assign the old fart more work ....


Ok, here's my question. Is LDAP (whatever it is) really all that good
of a design? Or perhaps is it just all that's available?

I got no problem with stating the need for some things. But I'd suggest
that if possible, they should be able to be used in a single VMS system
configuration. That would mean implementation on VMS. And if the
implementation would be better in a GUI environment ....

Since VMS really doesn't have a GUI environment, and don't say
otherwise, unless all VMS systems have a good graphics capability, which
they don't, then what you're saying is that you need some other system
to manage the use of VMS. Now, that pill is a little hard to swallow.

Sure, this from the same guy that just advocated a non-VMS application
to manage an RDBMS ....
Scott Dorsey
2014-12-23 15:19:53 UTC
Permalink
Post by David Froble
Ok, here's my question. Is LDAP (whatever it is) really all that good
of a design? Or perhaps is it just all that's available?
It is a lousy design, but it is what everybody uses. Absolutely everybody,
in every large organization. They might use Linux, they might use Windows,
they might use AIX, but they're using LDAP.
Post by David Froble
I got no problem with stating the need for some things. But I'd suggest
that if possible, they should be able to be used in a single VMS system
configuration. That would mean implementation on VMS. And if the
implementation would be better in a GUI environment ....
I don't think it is better in a GUI environment. Certainly the LDAP server
we use on our Sun has no GUI. There are GUI clients, though. But the
server is really just a front end to a database.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Stephen Hoffman
2014-12-23 16:53:40 UTC
Permalink
Post by David Froble
Post by Stephen Hoffman
Post by JF Mezei
Is this an RMS problem or an application problem?
Yes. RMS is very limited, and in various dimensions. Yes, there are
application problems. Yes, there are gaps in what the existing VMS-related
tools provide. These areas are connected together, too. As the joke
goes, if the tool you have is a hammer, then everything looks like a
nail. If the tool you have is RMS, then everything looks like a bunch
of comparatively disjoint and application-managed and not-very-flexible
records all stuffed into various files, and not necessarily with online
backups, nor with the capability of all-or-nothing changes to the
constituent files of an application.
...Have you created an application that uses a transaction-oriented,
relational database? Whether SQLite, PostgreSQL, Oracle Rdb, or
otherwise?
Yes, I have. Not on VMS, but for why you're asking, it doesn't matter.
The DB is SQL-2000. Anyone who does anything like this cannot help
but have their nose rubbed into some rather nice capabilities.
DELETE FROM USERS WHERE USERID = 'JF'
Now, what is included is an application called Enterprise Manager.
With that you can do many things, including execute SQL commands,
scripts, and such. But, it IS an application; it is no better or worse
than how well it is implemented. Just being a DB utility has no
bearing on how well it is implemented.
I do not think it is fair to blame RMS for poor application implementation.
Tried that under transactions, where you can start a transaction, start
deleting the table rows related to the user — all stored in the same
database — and then finish the transaction. Either all the changes
happen, or none of the changes happen. Having done that task with
various databases and having implemented that same sort of resilience
manually, a database is vastly easier to deal with. *This* is what I
am objecting to.
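
For anyone who hasn't watched it work, here is roughly what the
transactional version looks like with SQLite's C API (a sketch of mine,
with hypothetical table names; the same pattern applies to PostgreSQL,
Oracle Rdb, and the rest). Either every per-table delete lands, or none
of them do: no SYSUAF-updated-but-VMSMAIL_PROFILE-forgotten half-states.

#include <sqlite3.h>
#include <stdio.h>

/* Remove every record belonging to one user, all-or-nothing. */
static int delete_user(sqlite3 *db, const char *user)
{
    const char *tables[] = { "users", "mail_profile", "rights", 0 };
    char sql[256];
    char *err = 0;
    int i;

    if (sqlite3_exec(db, "BEGIN", 0, 0, &err) != SQLITE_OK)
        goto fail;
    for (i = 0; tables[i] != 0; i++) {
        /* production code would use sqlite3_bind_text, not sprintf */
        sprintf(sql, "DELETE FROM %s WHERE username = '%s'",
                tables[i], user);
        if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
            sqlite3_exec(db, "ROLLBACK", 0, 0, 0);  /* undo everything */
            goto fail;
        }
    }
    if (sqlite3_exec(db, "COMMIT", 0, 0, &err) != SQLITE_OK)
        goto fail;
    return 1;

fail:
    fprintf(stderr, "delete_user: %s\n", err ? err : "unknown error");
    sqlite3_free(err);
    return 0;
}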

If you don't have access to a transactional, relational database, then
you usually get multiple files, and you get cases where tools can't or
don't or won't deal with data that resides across multiple "cockroach"
files, and you get what VMS has — a small forest of side files that
grow and that accrete and that "cockroach" as the folks working on the
environment and the requirements (reasonably) don't or can't deal with
the limitations of RMS, and work around those by adding additional
files. As an example of this file "cockroaching", SYSUAF and
RIGHTSLIST. These two contain closely-related data. But in two
separate files, because storing multiple different sorts of data in one
file is rather painfully complex with RMS (there's no easy way to
CREATE TABLE here, etc), and also because even minor changes to SYSUAF
could and variously would break stuff.

This home-grown multi-file non-transactional database approach also
usually means that you don't or can't easily get features like online
backups.

This is reinventing the wheel, after all. *This* is what I am
objecting to. This problem has been solved.

Now if somebody had looked at the implementation and the issues and the
trade-offs now — LDAP long post-dates the particular "cockroaching" of
SYSUAF and RIGHTSLIST files — they'd wonder whether migrating all that
data into LDAP storage might make more sense.

But migrating to LDAP locally and/or LDAP on a network will definitely
break stuff that rummages SYSUAF. Now, to another of my long-term
gripes about VMS: application compatibility is valued more highly than
changes that can provide more valuable features and more upgrades.
Yes, there are definitely trade-offs here. But over the long term,
there's not a whole lot of difference between the results of excessive
compatibility and of randomly breaking stuff. As a vendor,
compatibility and fewer new features means a stable and probably
declining support income, but it also means you're spending more time
and effort on older releases, and less time on newer releases and newer
features and the sorts of stuff that make folks interested in your
platform and make folks more interested in upgrading and staying more
current, and that can sell new versions of tools and new and different
products. Again, yes, trade-offs. I'd rather see new and
interesting stuff, and stuff that makes end-users and partners go
"gotta have that" and want to upgrade. That's a much more vibrant
computing environment, after all. Does it mean I'm occasionally fixing
and updating old code? Yes.
Post by David Froble
I'll also say that having a GUI-based application was very good. Any
DB running on VMS should include such a GUI application, whether
running on weendoze, Mac, or whatever.
Preferably a GUI, and some sort of API, and a command interface for
arcane tasks and for scripting.

OS X has a management GUI, as well as commands including networksetup,
systemsetup, serveradmin, etc. Also ldapsearch, ldapadd, ldapdelete,
ldapmodify, etc. But when there's a decent GUI around, few users and
few system managers really want to use the command line or the
low-level tools for infrequent or unfamiliar or error-prone tasks,
either.

Old, but then Apple is heading away from expecting very many folks to
know and to work at this level, too:
<http://manuals.info.apple.com/MANUALS/1000/MA1173/en_US/IntroCommandLine_v10.6.pdf>


Most databases do have configuration tools. Rdb had a pretty good one
too, with InstantSQL. Other packages have command-line or graphical
administration tools, either from the vendor or from third-party
providers, or open source. Tools such as Sequel Pro or phpMyAdmin,
for instance.

For a database that's used for system authentication — whether a
transactional database or file-based local LDAP or network LDAP or
something else — you'd still need to provide reasonable compatibility
with AUTHORIZE, $getuai, $setuai and other existing
non-direct-database-access implementations, akin to what DECnet Phase V
did with NCP. My preference here would be to get as many folks moved
forward as quickly as feasible, and to document the longer-term path
forward (and with the new features and improvements involved), but yes,
some compatibility with existing applications is still necessary. The
key here being the on-going removal of deprecated and
compatibility-targeted features for the deprecated interfaces and
tools, in the same way that you add new features and new tools, as you
move forward. Getting rid of deprecated old code can be as important
as adding code, particularly when you're looking at stability and
maintainability and security. Source code that you don't have can't be
vulnerable to a buffer overrun or to fuzzing, after all. Yes, removing
the deprecated APIs will break systems and applications dependent on
those interfaces.

For remote management, the OpenVMS management station (OMS, "Argus")
was around, but never really seemed to catch on. DEC, Compaq and HP
have certainly had various remote management tools and mechanisms over
the years, with SMH seemingly being the most recent (and limited)
approach available on VMS, and with web-based management — remember my
earlier comments about having web tools and APIs in the base distro? —
being another approach used with some platforms, and profile-based
management also becoming common. Profiles combined with InfoServer for
storage and distribution might be interesting with VMS, too.

Some related humor (sort of): AFAIK, the VMS Obsolete Features Manual
<http://h71000.www7.hp.com/doc/73final/documentation/pdf/ovms_obsolete_feat.pdf>
was itself obsoleted, meaning that there isn't even a good list of
what's officially gone from VMS.
Post by David Froble
Post by Stephen Hoffman
...Have you configured and managed an LDAP-based authentication
configuration before? Either Open Directory, Active Directory, or
otherwise?
No, I have not. In fact, I'm not sure I understand what the things
represented by those labels actually do.
Try "tree-structured form of replicated, network-wide logical names,
with a much cleaner design, and with the ability to back up the current
state, and with APIs available for adding, removing and searching" or
some such? This given the way some VMS folks are fond of storing
configuration data in logical names. In addition to the tree
structures, LDAP also lets you apply defaults, so that you can have a
more flexible form of the tables available with logical names; akin to
the lnm$process_directory and lnm$system_directory stuff. But in
reality, logical names aren't very good at this
configuration-data-storage task.
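
To make the logical-name analogy concrete, here is a minimal lookup
using the OpenLDAP client library (a sketch; the server URL, base DN,
and uid are hypothetical). The base DN plus filter plays roughly the
part that a logical name table plus name does:

#include <ldap.h>
#include <stdio.h>

int main(void)
{
    LDAP *ld = 0;
    LDAPMessage *res, *e;
    int version = LDAP_VERSION3;

    ldap_initialize(&ld, "ldap://ldap.example.com");
    ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

    /* Walk the subtree under the base DN for one user's entry. */
    if (ldap_search_ext_s(ld, "ou=people,dc=example,dc=com",
                          LDAP_SCOPE_SUBTREE, "(uid=jdoe)", 0, 0,
                          0, 0, 0, 0, &res) == LDAP_SUCCESS) {
        for (e = ldap_first_entry(ld, res); e != 0;
             e = ldap_next_entry(ld, e)) {
            char *dn = ldap_get_dn(ld, e);
            printf("found: %s\n", dn);
            ldap_memfree(dn);
        }
        ldap_msgfree(res);
    }
    ldap_unbind_ext_s(ld, 0, 0);
    return 0;
}

Defaults and replication then come for free from the directory server,
which is the part logical names never grew.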
Post by David Froble
Post by Stephen Hoffman
Most everybody around has certainly experienced these tools and
mechanisms as an end-user of computing, but — like DNS services or
certificates or other areas of computing — many folks haven't really
looked into the details and capabilities of these and other parts of
the foundation.
Put another way, my comments assume some familiarity with these and
with some other areas of VMS.
If you have worked with a transaction-oriented database and have some
experience within LDAP, then it would help to elaborate on what it is
that you find confusing or concerning about my comments, or where you
might disagree with my comments of course. If you don't have at least
a little familiarity or a little experience with databases or LDAP or
DNS or certificates for that matter, then I'd suggest spending a little
time learning more about these and other parts of the foundation of
most current computing configurations, and maybe reading some
documentation or writing some simple applications or however you prefer
to learn.
Sure, assign the old fart more work ....
Sometimes those pesky kids out on the lawn do have a good idea or two, eh?
Post by David Froble
Ok, here's my question. Is LDAP (whatever it is) really all that good
of a design? Or perhaps is it just all that's available?
LDAP, Kerberos and DNS do have their warts, but they do work. At
scale. LDAP is how large organizations can reasonably manage those
gazillions of Windows and Linux boxes. Or conversely and if you're in
a more perverse mood, it's how organizations can manage their security
to allow a nefarious entity to spearphish one person in IT, and to then
gain access to the whole organization. As has reportedly happened.

If you look around for distributed authentication packages and
platforms, then LDAP and Kerberos are the norm. Either distributed,
or — as is the case on OS X — both distributed and with an LDAP
configuration stored locally. Or you spend your time writing your own
LDAP and Kerberos analog, and then you've spent all that time and
effort, and you probably can't completely interoperate with existing
LDAP and Kerberos, and you get to enjoy dealing with at least some of
the same sorts of attacks and vulnerabilities and bugs that the
existing LDAP and Kerberos implementations have already weathered or
have overcome.
Post by David Froble
I got no problem with stating the need for some things. But I'd
suggest that if possible, they should be able to be used in a single
VMS system configuration. That would mean implementation on VMS.
That's why I've mentioned local and network LDAP.
Post by David Froble
And if the implementation would be better in a GUI environment ....
Yep. Though the VMS GUI tools are — as you're about to note —
exceedingly weak. Apple's Xcode environment and tools completely and
utterly blow the sneakers off what VMS offers here. And most VMS
folks haven't used the VUIT or ICS BX or similar tools. VMS might more
easily get some traction with the addition of Qt (LGPL, last I looked)
or gtk+ (GPL) support, or something analogous.

But until HP and VSI decide to open-source enough of the VMS source —
even if it's to existing customers only, and/or akin to how Apple and
AdaCore and some other providers comply with GPL2 — forward progress in
various areas of VMS will be more difficult.
Post by David Froble
Since VMS really doesn't have a GUI environment, and don't say
otherwise, unless all VMS systems have a good graphics capability,
which they don't, then what you're saying is that you need some other
system to manage the use of VMS. Now, that pill is a little hard to
swallow.
VSI is probably eventually going to be looking at what to do with
DECwindows and X and the GUI: both updates to X, and then whether
they'll implement something like Wayland and Weston, or something
else. There are command-line LDAP tools. I use those to manage
OS X. Or yes, web-based or remote clients. For many folks integrating
VMS here, they'll likely continue to use whatever tools they're already
using to administer Open Directory or Active Directory.
Post by David Froble
Sure, this from the same guy that just advocated a non-VMS application
to manage an RDBMS ....
As opposed to the VMS guy who spends a whole lot of time working on
Unix and C, and with C on VMS? (Having a more robust C environment
would be useful on VMS — working on a list of the local snags with C99
and C11 to send along to Mr Reagan as part of that — and maybe access
to some newer tools like, maybe, Rust would be nice. The versions of
Fortran or BASIC available for VMS, alas, tend to be non-starters for
the sort of work I usually use C for. If not C on VMS, it'd be Bliss
or Macro32, but then more folks are familiar with C than with either
Bliss or Macro32.)
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2014-12-24 04:19:38 UTC
Permalink
Post by Stephen Hoffman
As opposed to the VMS guy who spends a whole lot of time working on Unix
and C, and with C on VMS? (Having a more robust C environment would be
useful on VMS — working on a list of the local snags with C99 and C11 to
send along to Mr Reagan as part of that — and maybe access to some newer
tools like, maybe, Rust would be nice. The versions of Fortran or BASIC
available for VMS, alas, tend to be non-starters for the sort of work I
usually use C for. If not C on VMS, it'd be Bliss or Macro32, but then
more folks are familiar with C than with either Bliss or Macro32.)
Gee, and just why are some of the legacy (stuff that works) non-starters?

I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.

How naive could I be?

It's crap such as this that makes me think that VMS is good and *ix and
C are bad.

I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.

(Now I'll probably still be upset on Christmas, call me the grinch)
JF Mezei
2014-12-24 06:27:20 UTC
Permalink
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C respects the VMS calling standard. This standard lays out how
arguments are passed to a called routine and how its return code is
returned to the caller; it does not define the syntax/content of those
arguments.
Post by David Froble
I still cannot understand why I could not call the SSL routines from
Basic,
I am pretty sure you could call them. You just need to know how to
properly set up the arguments, just as C programmers need to know how to
set up string descriptors when calling VMS services.
j***@gmail.com
2014-12-24 13:00:30 UTC
Permalink
Post by David Froble
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
It might be interesting to attempt to create a small reproducer of your
problem and share that with others. One possible forum in which to
consider such a posting would be stackoverflow.com. It's a great Q&A
site that's designed to promote thoughtful questions and civil answers.

EJ
Stephen Hoffman
2014-12-24 20:20:09 UTC
Permalink
Post by j***@gmail.com
Post by David Froble
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
It might be interesting to attempt to create a small reproducer of your
problem and share that with others. One possible forum in which to
consider such a posting would be stackoverflow.com. It's a great Q&A
site that's designed to promote thoughtful questions and civil answers.
EJ
Here's some background to the discussion:
<http://labs.hoffmanlabs.com/node/1853>
--
Pure Personal Opinion | HoffmanLabs LLC
Stephen Hoffman
2014-12-24 20:17:53 UTC
Permalink
Post by David Froble
Post by Stephen Hoffman
As opposed to the VMS guy who spends a whole lot of time working on
Unix and C, and with C on VMS? (Having a more robust C environment
would be useful on VMS — working on a list of the local snags with C99
and C11 to send along to Mr Reagan as part of that — and maybe access
to some newer tools like, maybe, Rust would be nice. The versions of
Fortran or BASIC available for VMS, alas, tend to be non-starters for
the sort of work I usually use C for. If not C on VMS, it'd be Bliss
or Macro32, but then more folks are familiar with C than with either
Bliss or Macro32.)
Gee, and just why are some of the legacy (stuff that works) non-starters?
I've worked with some great old power tools. Those old power tools
just work, too. One old half-inch drill I've worked with is probably
fifty years old, with a two-foot-long, 2" galvanized iron pipe as a
side handle, threaded onto the drill. Monstrous torque, too. But I
don't use those old power tools very often. There can be easier (and
lighter!) ways to do the same work.
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C, Bliss and Macro all comply with the VMS calling standard, and can
more easily do certain things; things that BASIC, Fortran and COBOL
just aren't so fond of.
Post by David Froble
How naive could I be?
That's more a question of programmer familiarity and of application
requirements, I'd expect.
Post by David Froble
It's crap such as this that makes me think that VMS is good and *ix and
C are bad.
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas — for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
Post by David Froble
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
You can call the SSL routines from BASIC, if you were inclined to
create ASCIZ strings and pass around %REF-style pointers.
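
Or wrap the pointer fiddling once in a small C jacket that takes an
ordinary VMS string descriptor and builds the ASCIZ copy OpenSSL wants.
A sketch only (the jacket name is made up, though
SSL_CTX_use_certificate_file is a real OpenSSL routine), but this is
the usual trick for making a C-flavored API callable from BASIC and
friends:

#include <descrip.h>
#include <string.h>
#include <openssl/ssl.h>

/* Callable from BASIC with a plain string argument, since passing
   by descriptor is BASIC's default mechanism for strings. */
int ssl_use_cert_file(SSL_CTX *ctx, struct dsc$descriptor_s *path)
{
    char buf[256];
    unsigned short len = path->dsc$w_length;

    if (len >= sizeof buf)        /* truncate rather than overrun */
        len = sizeof buf - 1;
    memcpy(buf, path->dsc$a_pointer, len);
    buf[len] = '\0';              /* the ASCIZ string OpenSSL expects */

    return SSL_CTX_use_certificate_file(ctx, buf, SSL_FILETYPE_PEM);
}

The BASIC side then calls it like any LIB$ routine, passing the context
BY VALUE and the file name as an ordinary string, with no %REF
gymnastics in sight.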

Now as to why VMS lacks a LIB-style interface for network encryption,
that's a question for HP and VSI. That, and there's also the "fun"
that OpenSSL APIs are a moving target.

Here's a nice overview of what Apple is up to with Crypto, written by
one of the third-party developers:
<http://rentzsch.tumblr.com/post/33696323211/wherein-i-write-apples-technote-about-openssl-on>.
Hopefully VSI starts looking at the crypto morass within VMS, with
OpenSSL or LibreSSL or something like Common Crypto, and with replacing
the long deprecated CDSA, and resolving the crypto APIs and certificate
handling within VMS. But that probably won't happen for a year or
three...
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2014-12-24 21:23:29 UTC
Permalink
Post by Stephen Hoffman
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C, Bliss and Macro all comply with the VMS calling standard, and can
more easily do certain things; things that BASIC, Fortran and COBOL
just aren't so fond of.
The Calling Standard actually has few requirements on how arguments are passed
(by immediate value, by reference, by descriptor are all valid mechanisms).

Traditionally the compilers only have one or two schemes to accept arguments and
then have additional directives, etc. to generate other schemes when calling other routines.

I'll wager the best language for different schemes is the one that Hoff keeps forgetting... Pascal... :)
Post by Stephen Hoffman
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas -- for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
The best string languages would be BASIC, COBOL, and Pascal (in that order).
JF Mezei
2014-12-24 21:52:33 UTC
Permalink
Post by John Reagan
The best string languages would be BASIC, COBOL, and Pascal (in that order).
I know COBOL has the ability to pass arguments by descriptor or by
reference, which is neat compared to, say, C.

However, unless COBOL magically acquired the ability to have dynamic
length strings, I suspect it really shouldn't have a place in that list
above.


05 MYVARIABLE PIC X(80). really means that you can have "JF" followed by
78 space characters.
John Reagan
2014-12-25 01:51:05 UTC
Permalink
Post by JF Mezei
However, unless COBOL magically acquired the ability to have dynamic
length strings, I suspect it really shouldn't have a place in that list
above.
Dynamic or variable length? Pascal does CLASS_VS and CLASS_VSA strings
which BASIC doesn't do. BASIC is the only language that does CLASS_D.
Pascal's schema types allow for true run-time maximum sized strings that
can have variable length.

John
David Froble
2014-12-25 05:47:53 UTC
Permalink
Post by John Reagan
Post by JF Mezei
However, unless COBOL magically acquired the ability to have dynamic
length strings, I suspect it really shouldn't have a place in that list
above.
Dynamic or variable length? Pascal does CLASS_VS and CLASS_VSA strings
which BASIC doesn't do. BASIC is the only language that does CLASS_D.
Pascal's schema types allow for true run-time maximum sized strings that
can have variable length.
John
I'm reading this, and trying to understand it. Are you saying the
CLASS_VS and CLASS_VSA strings are dynamic strings with a max length
limit that can be specified?

Regardless, if I'm programming in BASIC, I can use a non-BASIC data
type, as long as I always use appropriate routines with the data, right?
I'm guessing LIB$ or such.
Stephen Hoffman
2014-12-25 15:49:55 UTC
Permalink
Post by David Froble
Regardless, if I'm programming in BASIC, I can use a non-BASIC data
type, as long as I always use appropriate routines with the data,
right? I'm guessing LIB$ or such.
Or C.

*ducks*
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2014-12-25 16:19:32 UTC
Permalink
Post by David Froble
I'm reading this, and trying to understand it. Are you saying the
CLASS_VS and CLASS_VSA strings are dynamic strings with a max length
limit that can be specified?
Not dynamic in the CLASS_D sense where some low-level routine can reallocate
and extend it on-the-fly. CLASS_VS (Pascal's VARYING OF CHAR and PL/I's
VARYING CHAR) has a length word at the front of the string. Pascal lets you pick
the maximum at run-time with its STRING() schema type. However, once you pick
the maximum (64K of course), you are stuck with it (and the memory).
Post by David Froble
Regardless, if I'm programming in BASIC, I can use a non-BASIC data
type, as long as I always use appropriate routines with the data, right?
I'm guessing LIB$ or such.
Yes, feel free to create CLASS_V, CLASS_SD, or any other DTYPE_T descriptors
and pass them to LIB$ and STR$ routines from BASIC.
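
For anyone following along in C rather than BASIC, the descriptor
plumbing under discussion is small enough to show whole. A sketch (an
illustration, not from the posts): a CLASS_S descriptor wrapped around
a fixed buffer and handed to STR$UPCASE, plus the CLASS_VS data layout
mentioned above, a current-length word sitting in front of the text:

#include <descrip.h>
#include <str$routines.h>
#include <lib$routines.h>
#include <string.h>

int main(void)
{
    /* CLASS_S: fixed-length text, with a pointer to the bytes. */
    static char text[] = "merry humbug";
    struct dsc$descriptor_s str = { sizeof text - 1, DSC$K_DTYPE_T,
                                    DSC$K_CLASS_S, text };
    /* CLASS_VS data layout: a current-length word, then the body. */
    struct { unsigned short curlen; char body[80]; } varying;

    str$upcase(&str, &str);     /* upcase in place: "MERRY HUMBUG" */
    lib$put_output(&str);

    varying.curlen = 2;
    memcpy(varying.body, "JF", varying.curlen);
    return 1;
}

BASIC builds exactly the same structures; it just spells them with MAP
and EXTERNAL declarations instead of struct.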
Paul Sture
2014-12-25 05:49:56 UTC
Permalink
Post by JF Mezei
Post by John Reagan
The best string languages would be BASIC, COBOL, and Pascal (in that order).
I know COBOL has the ability to pass arguments by descriptor or by
reference, which is neat compared to, say, C.
However, unless COBOL magically acquired the ability to have dynamic
length strings, I suspect it really shouldn't have a place in that list
above.
05 MYVARIABLE PIC X(80). really means that you can have "JF" followed by
78 space characters.
but you can do things like this when passing strings by descriptor:

move 'JF' to myvariable.
call "humbug" using by descriptor myvariable(1:2)
by reference err-return.
--
Merry Humbug All!
JF Mezei
2014-12-25 06:37:30 UTC
Permalink
Post by Paul Sture
move 'JF' to myvariable.
call "humbug" using by descriptor myvariable(1:2)
by reference err-return.
If you pass myvariable without the (1:2), you expect humbug to
right-fill "myvariable" with blanks to the full length of the
allocation. In other words, "humbug" has to be written knowing that it
will be called by a COBOL program and expect all string arguments to be
of static length and blank-filled.
Stephen Hoffman
2014-12-24 22:13:36 UTC
Permalink
Post by John Reagan
I'll wager the best language for different schemes is the one that Hoff
keeps forgetting... Pascal... :)
Um, no. Haven't forgotten about Pascal. At all. My experience with
Pascal started out with UCSD Pascal on Terak:
<http://www.threedee.com/jcm/terak/>
<http://www.threedee.com/jcm/psystem/>. Then Pascal taught me much
about VMS descriptors, and about using the VMS debugger to rummage the
call frames in order to learn how Pascal was implementing a particular
subroutine call. Pascal is probably the best language to learn about
VMS descriptors too, particularly if you're performing mixed-language
programming. That's where I first met the NCA descriptor, and Pascal
is probably one of the few languages around where the NCA descriptor is
even used.
Post by John Reagan
Post by Stephen Hoffman
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas -- for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
The best string languages would be BASIC, COBOL, and Pascal (in that order).
Those three do well with strings using ASCII, MCS / ISO Latin-1
encoding, and there are others that also do semi-well with similar
sorts of strings. I might also look to use Perl, Python or Lua for
good string handling, but those aren't "DEC classic" languages on VMS.
Unfortunately, comparatively few languages on VMS do very well with
UTF-8 string handling or regular expressions, as compared with some
newer environments. C on VMS can sorta-kinda do UTF-8, but it's ugly.
Python, Perl and Lua have regular expressions.

VMS string descriptors never picked up any sort of character encoding
indication, either. These character encodings have been multiplying in
recent years, too.

The NCS support got deprecated AFAIK, too.

But as for the programming languages, again, use the right tool(s) for
what you're doing. Better still, look to enhance or upgrade your tools
and your familiarity and your skills, because if you're still doing the
same things the same way you did with Pascal on Terak, with C89 on VMS,
or with RMS indexed files, you're likely missing out on many
enhancements and advancements; things that'll often make it easier to
do more, and easier and faster to do what you need.
--
Pure Personal Opinion | HoffmanLabs LLC
John Reagan
2014-12-25 01:54:27 UTC
Permalink
Post by Stephen Hoffman
Pascal is probably the best language to learn about
VMS descriptors too, particularly if you're performing mixed-language
programming. That's where I first met the NCA descriptor, and Pascal
is probably one of the few languages around where the NCA descriptor is
even used.
Pascal can do CLASS_A, CLASS_NCA, CLASS_VS, CLASS_VSA, CLASS_S strings.
Pascal can create scalar descriptors for almost any datatype, not just DTYPE_T.
Post by Stephen Hoffman
Those three do well with strings using ASCII, MCS / ISO Latin-1
encoding, and there are others that also do semi-well with similar
sorts of strings. I might also look to use Perl, Python or Lua for
good string handling, but those aren't "DEC classic" languages on VMS.
Unfortunately, comparatively few languages on VMS do very well with
UTF-8 string handling, or regular expressions; with some newer
environments. C on VMS can sorta-kinda do UTF-8, but it's ugly.
Python, Perl and Lua have regular expressions.
Absolutely, I'll pick Perl over any of our standard languages for string
processing any day.
David Froble
2014-12-25 00:12:25 UTC
Permalink
Post by John Reagan
Post by Stephen Hoffman
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C, Bliss and Macro all comply with the VMS calling standard, and can
more easily do certain things; things that BASIC, Fortran and COBOL
just aren't so fond of.
The Calling Standard actually has few requirements on how arguments are passed
(by immediate value, by reference, by descriptor are all valid mechanisms).
Well, I seem to have attempted to forget the issue. I do that with
things that bother me. But, I >think< that there was / is a Context
pointer to some type of record structure, and inept me could not find
any definition (that I could understand) of the structure, if that is
indeed what it was.

Nor am I thinking it is just me, as Steve was heard to utter some pithy
comments as he was working on the solution for me.

:-) :-)
Post by John Reagan
Traditionally the compilers only have one or two schemes to accept arguments and
then have additional directives, etc. to generate other schemes when calling other routines.
I'll wager the best language for different schemes is the one that Hoff keeps forgetting... Pascal... :)
Post by Stephen Hoffman
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas -- for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
The best string languages would be BASIC, COBOL, and Pascal (in that order).
I understand the concepts, and have no problem working with non-standard
data, including passing it in an argument list. But the SSL stuff drove
me far over the edge.
Bill Gunshannon
2015-01-05 15:30:58 UTC
Permalink
Post by John Reagan
Post by Stephen Hoffman
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C, Bliss and Macro all comply with the VMS calling standard, and can
more easily do certain things; things that BASIC, Fortran and COBOL
just aren't so fond of.
The Calling Standard actually has few requirements on how arguments are passed
(by immediate value, by reference, by descriptor are all valid mechanisms).
Traditionally the compilers only have one or two schemes to accept arguments and
then have additional directives, etc. to generate other schemes when calling other routines.
I'll wager the best language for different schemes is the one that Hoff keeps forgetting... Pascal... :)
While I think Pascal is great and still use it a lot (and love playing
with Pascal compilers), it is yet another example of a language that was
tasked with doing things it was not designed for (kinda like C :-). Maybe
Pascal users should actually be using Modula.
Post by John Reagan
Post by Stephen Hoffman
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas -- for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
The best string languages would be BASIC, COBOL, and Pascal (in that order).
What!! What about SNOBOL? :-)

bill
--
Bill Gunshannon | de-moc-ra-cy (di mok' ra see) n. Three wolves
***@cs.scranton.edu | and a sheep voting on what's for dinner.
University of Scranton |
Scranton, Pennsylvania | #include <std.disclaimer.h>
V***@SendSpamHere.ORG
2015-01-05 17:01:37 UTC
Permalink
Post by Bill Gunshannon
Post by John Reagan
Post by Stephen Hoffman
Post by David Froble
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able to call code from any VMS language.
C, Bliss and Macro all comply with the VMS calling standard, and can
more easily do certain things; things that BASIC, Fortran and COBOL
just aren't so fond of.
The Calling Standard actually has few requirements on how arguments are passed
(by immediate value, by reference, by descriptor are all valid mechanisms).
Traditionally the compilers only have one or two schemes to accept arguments and
then have additional directives, etc. to generate other schemes when calling other routines.
I'll wager the best language for different schemes is the one that Hoff keeps forgetting... Pascal... :)
While I think Pascal is great and still use it a lot (and love playing with
Pascal compilers), it is yet another example of a language that was tasked
with doing things it was not designed for (kinda like C :-). Maybe Pascal
users should actually be using Modula.
Post by John Reagan
Post by Stephen Hoffman
Use the right tools. BASIC deals with strings very well, and C, not so
much. C has advantages in other areas -- for tasks where I would once
have used Macro32 or sometimes Bliss, C usually works.
The best string languages would be BASIC, COBOL, and Pascal (in that order).
What!! What about SNOBOL? :-)
That's only available in certain winter climates. :)
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Rob Brown
2015-01-27 23:44:24 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by Bill Gunshannon
Post by John Reagan
The best string languages would be BASIC, COBOL, and Pascal (in that order).
What!! What about SNOBOL? :-)
That's only available in certain winter climates. :)
True. Where I come from, most of the time it was either too cold or too hot.
When I was in school, we used SPITBOL.

John Reagan
2015-01-05 19:02:26 UTC
Permalink
Post by Bill Gunshannon
While I think Pascal is great and still use it a lot (and love playing
with Pascal compilers), it is yet another example of a language that
was tasked with doing things it was not designed for (kinda like C
:-). Maybe Pascal users should actually be using Modula.
In general, that is probably true. On OpenVMS, the Pascal compiler has a considerable amount of additional features. Some are extensions to deal with things like descriptors, calling system routines, interface/implementation parts, etc., and some come from the 1989 Extended Pascal standard (which used many features from Modula, Eiffel, etc.) like type & variable initializers and run-time sized types. (We didn't do the EP version of interface/implementation since our environment files serve almost the same purpose; we didn't add complex numbers either.)

There are many Pascal users on OpenVMS with HUGE amounts of Pascal code. I was just trying to let them know that I haven't forgotten them (they still send me email).
Jan-Erik Soderholm
2015-01-05 22:37:05 UTC
Permalink
Post by John Reagan
Post by Bill Gunshannon
While I think Pascal is great and still use it a lot (and love playing
with Pascal compilers), it is yet another example of a language that
was tasked with doing things it was not designed for (kinda like C
:-). Maybe Pascal users should actually be using Modula.
In general, that is probably true. On OpenVMS, the Pascal compiler has
a considerable amount of additional features. Some are extensions to deal
with things like descriptors, calling system routines,
interface/implementation parts, etc., and some come from the 1989 Extended
Pascal standard (which used many features from Modula, Eiffel, etc.)
like type & variable initializers and run-time sized types. (We didn't do
the EP version of interface/implementation since our environment files
serve almost the same purpose; we didn't add complex numbers either.)
There are many Pascal users on OpenVMS with HUGE amounts of Pascal code.
I was just trying to let them know that I haven't forgotten them (they
still send me email).
The well-known (?) mail automation tool DELIVER (it comes
from PMDF) is/was written in Pascal.

Ned Freed (Innosoft), Kevin Carosso, Sheldon Smith,
Dick Munroe, Doyle (Munroe Consultants) and
Wayne Sewell (Tachyon Software Consulting)
are names mentioned in the DELIVER.PAS file.
Last update (of my version) was in '94.

I have used that for 25 years or something.
Nice tool...

Jan-Erik.
Bob Gezelter
2015-01-05 15:57:38 UTC
Permalink
Post by David Froble
As opposed to the VMS guy who spends a whole lot of time working on Unix
and C, and with C on VMS? (Having a more robust C environment would be
useful on VMS -- working on a list of the local snags with C99 and C11 to
send along to Mr Reagan as part of that -- and maybe access to some newer
tools like, maybe, Rust would be nice. The versions of Fortran or BASIC
available for VMS, alas, tend to be non-starters for the sort of work I
usually use C for. If not C on VMS, it'd be Bliss or Macro32, but then
more folks are familiar with C than with either Bliss or Macro32.)
Gee, and just why are some of the legacy (stuff that works) non-starters?
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able call code from any VMS language.
How naive could I be?
It's crap such as this that makes me think that VMS is good and *ix and
C are bad.
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
(Now I'll probably still be upset on Christmas, call me the grinch)
David,

Insofar as I know, there is no reason that you cannot call SSL routines from BASIC (I have been invoking C routines from BASIC for decades, literally).

If the data structure definitions provided as C header files are not supplied in forms compatible with BASIC, one may have to translate the data structure/record definitions, but that is not a major job, just tedious.
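As a hypothetical illustration of that translation chore (the names
below are invented, not taken from any actual header), here is a C
structure and a hand-built BASIC equivalent, the latter shown as a
comment:

    /* C header definition, as a library might ship it: */
    typedef struct {
        unsigned short item_length;   /* word */
        unsigned short item_code;     /* word */
        void          *item_buffer;   /* 32-bit pointer on VMS */
    } item_t;

    /* A hand translation for BASIC might look roughly like:
     *
     *     RECORD ITEM_T
     *         WORD ITEM_LENGTH
     *         WORD ITEM_CODE
     *         LONG ITEM_BUFFER
     *     END RECORD
     */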

- Bob Gezelter, http://www.rlgsc.com
Stephen Hoffman
2015-01-05 17:20:22 UTC
Permalink
Post by Bob Gezelter
Post by David Froble
As opposed to the VMS guy who spends a whole lot of time working on Unix
and C, and with C on VMS? (Having a more robust C environment would be
useful on VMS -- working on a list of the local snags with C99 and C11 to
send along to Mr Reagan as part of that -- and maybe access to some newer
tools like, maybe, Rust would be nice. The versions of Fortran or BASIC
available for VMS, alas, tend to be non-starters for the sort of work I
usually use C for. If not C on VMS, it'd be Bliss or Macro32, but then
more folks are familiar with C than with either Bliss or Macro32.)
Gee, and just why are some of the legacy (stuff that works) non-starters?
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able call code from any VMS language.
How naive could I be?
It's crap such as this that makes me think that VMS is good and *ix and
C are bad.
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
Insofar as I know, there is no reason that you cannot call SSL routines
from BASIC (I have been invoking C routines from BASIC for decades,
literally).
If the data structure definitions provided as C header files are not
supplied in forms compatible with BASIC, one may have to translate the
data structure/record definitions, but that is not a major job, just
tedious...
What David is likely grumbling about is the lack of VMS
descriptor-based interfaces for these and other calls.

Making the SSL calls directly from BASIC is certainly entirely
possible, it's just more work than creating a jacket with a
descriptor-friendly interface.

The current OpenSSL interface leaves folks the choice of either a more
complex API scattered around within the BASIC code (and the OpenSSL
interface is pretty ugly to start with), or creating a set of jackets
written in BASIC or in C to isolate the OpenSSL calls. Pushing the C
API incantations, including the ASCIZ support and the pointers, up into
some BASIC-based jacket code means somewhat more work, and it can also
mean having to translate the symbolic constants. Using C-based jackets
<http://labs.hoffmanlabs.com/node/1853> keeps the BASIC code simpler
and much more consistent, and it avoids having to generate BASIC
versions of the C definitions, where those BASIC definitions don't
already exist.
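A minimal sketch of one such C jacket, with an invented routine name;
the point is the descriptor-to-ASCIZ conversion, not the particular
OpenSSL call:

    #include <string.h>
    #include <descrip.h>
    #include <openssl/ssl.h>

    /* Invented jacket: BASIC passes the CA file name by descriptor, as it
       naturally does; the ASCIZ conversion and pointer plumbing stay here. */
    unsigned int ssl_jacket_load_verify(SSL_CTX **ctx,
                                        struct dsc$descriptor_s *cafile_dsc)
    {
        char cafile[256];
        unsigned short len = cafile_dsc->dsc$w_length;

        if (len >= sizeof cafile)
            return 0;                  /* even status: failure, VMS-style */
        memcpy(cafile, cafile_dsc->dsc$a_pointer, len);
        cafile[len] = '\0';            /* OpenSSL wants an ASCIZ string */

        /* Returns 1 on success; map to an odd/even VMS-style status. */
        return SSL_CTX_load_verify_locations(*ctx, cafile, NULL) == 1;
    }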

For OpenSSL, David would have preferred something closer to the
DECwindows APIs where there are C native bindings
<http://h71000.www7.hp.com/doc/73final/5642/5642pro.html> and
VMS-native bindings
<http://h71000.www7.hp.com/doc/732final/documentation/pdf/dw_guide_nonc_bindings.pdf>,
and not having needed to create the C jackets at all.

Compared with David, I'm more interested in looking at languages
past BASIC, Fortran and C, for that matter. Having worked in each
of them, I know each has its issues. I'd prefer to find a
language that means less source code and more reliable and robust code,
and preferably with support and tools for easier code creation and
debugging and related tasks. Which is the comment that started this
off. I'd also prefer to see better and more capable and more
current APIs, which ties back into (part of) what David is looking at
with OpenSSL and the jackets.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2015-01-06 05:59:16 UTC
Permalink
Post by Bob Gezelter
Post by David Froble
As opposed to the VMS guy who spends a whole lot of time working on Unix
and C, and with C on VMS? (Having a more robust C environment would be
useful on VMS -- working on a list of the local snags with C99 and C11 to
send along to Mr Reagan as part of that -- and maybe access to some newer
tools like, maybe, Rust would be nice. The versions of Fortran or BASIC
available for VMS, alas, tend to be non-starters for the sort of work I
usually use C for. If not C on VMS, it'd be Bliss or Macro32, but then
more folks are familiar with C than with either Bliss or Macro32.)
Gee, and just why are some of the legacy (stuff that works)
non-starters?
I mean, I just figured there is / was this thing called the VMS calling
standard, that every language (except for C) seems to respect, and so
one should be able call code from any VMS language.
How naive could I be?
It's crap such as this that makes me think that VMS is good and *ix and
C are bad.
I still cannot understand why I could not call the SSL routines from
Basic, and I still cannot understand how VMS development could have
stooped so low as to throw things out there that could only be called
from C.
Insofar as I know, there is no reason that you cannot call SSL
routines from BASIC (I have been invoking C routines from BASIC for
decades, literally).
If the data structure definitions provided as C header files are not
supplied in forms compatible with BASIC, one may have to translate the
data structure/record definitions, but that is not a major job, just
tedious...
What David is likely grumbling about is the lack of VMS descriptor-based
interfaces for these and other calls.
Actually, no. I understand descriptors, and other structures that may
not be native to Basic. I can build such and use them.
Making the SSL calls directly from BASIC is certainly entirely possible,
it's just more work than creating a jacket with a descriptor-friendly
interface.
Most definitely! Write it once, and then call it from many places.
The current OpenSSL interface leaves folks the choice of either a more
complex API scattered around within the BASIC code (and the OpenSSL
interface is pretty ugly to start with),
Yes, and who is to blame for that? If I were asked, the finger would
point to HP, which didn't care enough to distribute usable tools. I
doubt this would have happened in the days of DEC software development.

And since I'm on another rant, documentation is like the grey / orange
wall, not a snippet of C code.
or creating a set of jackets
written in BASIC or in C to isolate the OpenSSL calls. Pushing the C
API incantations including the ASCIZ support and the pointers up into
some BASIC-based jacket code does make for somewhat more work and it can
mean having to translate the symbolic constants, which means somewhat
more complex code and more effort. Using C-based jackets
<http://labs.hoffmanlabs.com/node/1853> keeps the BASIC code simpler and
much more consistent, and it avoids having to generate BASIC versions of
the C definitions, where those BASIC definitions don't already exist.
For OpenSSL, David would have preferred something closer to the
DECwindows APIs where there are C native bindings
<http://h71000.www7.hp.com/doc/73final/5642/5642pro.html> and VMS-native
bindings
<http://h71000.www7.hp.com/doc/732final/documentation/pdf/dw_guide_nonc_bindings.pdf>,
and not having needed to create the C jackets at all.
What I believe I ran into was the data associated with the context
pointer. It's been a while, and I tend to forget things that cause me
pain. Nowhere could I find (talking about me, not things in general)
how to set up a structure for the context to point to. At least, I
think that was the problem.
Stephen Hoffman
2015-01-06 15:07:55 UTC
Permalink
Post by David Froble
What I believe I ran into was the data associated with the context
pointer. It's been a while, and I tend to forget things that cause me
pain. Nowhere could I find (talking about me, not things in general)
how to set up a structure for the context to point to. At least, I
think that was the problem.
That would be a longword (for APIs and compilations involving 32-bit
addressing) or a quadword (involving 64-bit addressing), usually zeroed
before its first use though that's dependent on the API definitions,
and passed by reference. There are traditional VMS APIs that use
similar contexts.
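LIB$FIND_FILE is one of those traditional APIs, and makes a handy
concrete example; a minimal C sketch, assuming the usual VMS headers
are available:

    #include <stdio.h>
    #include <descrip.h>
    #include <lib$routines.h>

    int main(void)
    {
        /* The context: a longword, zeroed before first use and passed by
           reference; LIB$FIND_FILE keeps its search state behind it. */
        unsigned int context = 0;
        $DESCRIPTOR(spec, "SYS$LOGIN:*.COM");
        char buf[256];
        struct dsc$descriptor_s result =
            { sizeof buf, DSC$K_DTYPE_T, DSC$K_CLASS_S, buf };

        while (lib$find_file(&spec, &result, &context, 0, 0, 0, 0) & 1) {
            int len = result.dsc$w_length;   /* class-S: blank-padded */
            while (len > 0 && buf[len - 1] == ' ')
                len--;
            printf("%.*s\n", len, buf);
        }
        lib$find_file_end(&context);   /* releases the context's storage */
        return 0;
    }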
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-24 11:30:11 UTC
Permalink
Post by Stephen Hoffman
For remote management, the OpenVMS management station (OMS, "Argus")
was around, but never really seemed to catch on.
Wasn't this the thing which ran only on Windows? Who would use it,
except system managers? Saying to VMS system managers "I can make your
life easier: just do your system-management stuff on VMS via a GUI
running on Windows" apparently didn't cut the mustard, and frankly I'm
not surprised.
Kerry Main
2014-12-24 15:54:36 UTC
Permalink
-----Original Message-----
Phillip Helbig (undress to reply)
Sent: 24-Dec-14 6:30 AM
Subject: Re: [New Info-vax] Databases, LDAP and the limitations of RMS-
file-based authentication
Post by Stephen Hoffman
For remote management, the OpenVMS management station (OMS, "Argus")
was around, but never really seemed to catch on.
Wasn't this the thing which ran only on Windows? Who would use it,
except system managers? Saying to VMS system managers "I can make your
life easier: just do your system-management stuff on VMS via a GUI
running on Windows" apparently didn't cut the mustard, and frankly I'm
not surprised.
It is actually kind of a nice utility - especially for those environments
where the user support staff are used to using GUIs for things like user
maintenance (add/delete/update), and point-and-click management of disks
and print queues etc. Level 1 support types typically do these tasks on Windows.

Most medium-to-large companies are looking for a way to simplify and
standardize how they do Level 1 activities, e.g. managing all of the
users on the various platforms they support.

As an example - sort the list of users by state (disabled or active),
by UIC, by name alphabetically, etc.

OMS (OpenVMS Management Station) was a nice way to do this,
especially if managing multiple OpenVMS systems not in a cluster.

Unfortunately, OMS has not seen any new updates for some time and,
per the link provided below, has just gone into retirement mode.

Reference the following for some screen shots:
http://www.openvms.compaq.com/openvms/products/argus/

It does provide a good example of what future GUI management
utilities should contain.

Regards,

Kerry Main
Back to the Future IT Inc.
.. Learning from the past to plan the future

Kerry dot main at backtothefutureit dot com
Paul Sture
2014-12-23 11:41:36 UTC
Permalink
Post by JF Mezei
Post by Stephen Hoffman
deletions here can leave you with odd states. These failures are rare
as the various authentication files aren't written all that often, but
it's easily possible to get into these states manually. Everybody here
remembers to remove the VMSMAIL_PROFILE record when a user's SYSUAF
entry is removed, for instance?
Is this an RMS problem or an application problem? Unless all is packed
in the same record in the same file/table, you will need to have an
application (such as authorize) manage deleting multiple related records
when you delete a user.
I think this has more to do with Digital not updating Authorize when
they added new stuff such as the VMSMAIL_PROFILE file.
(Or integrating the info from VMSMAIL_PROFILE into SYSUAF).
RIGHTSLIST.DAT getting out of step can have serious consequences, and
you can achieve that by legitimate commands; you don't need crashes to
mess it up.

I've a feeling that there are still pieces of AUTHORIZE behaviour with
respect to RIGHTSLIST.DAT which aren't documented clearly.
--
HAL 9000: Dave. Put down those Windows disks. Dave. DAVE!
Stephen Hoffman
2014-12-23 17:20:47 UTC
Permalink
Post by Paul Sture
RIGHTSLIST.DAT getting out of step can have serious consequences, and
you can achieve that by legitimate commands; you don't need crashes to
mess it up.
I've a feeling that there are still pieces of AUTHORIZE behaviour with
respect to RIGHTSLIST.DAT which aren't documented clearly.
Ayup.

Having rummaged in RIGHTSLIST for some security-related work, there are
most definitely some features <http://labs.hoffmanlabs.com/node/1809>
that are somewhere between arcane and undocumented lurking in RIGHTSLIST.

There is also an arcane feature or two lurking within SYSUAF, due to
how some "other" non-username-related password information is stored in
that file: the system password. The system password (which is not the
password of the SYSTEM username) is stored in a variant record.
(Which also means that extending the character set of usernames will
either have to omit certain characters, or the upgrade might be
somewhat more involved.)

Which also ties back to the inability to do something like a CREATE
TABLE easily, and to sanely store some additional data in the same
file, so the extra data just gets hacked into the existing files.
Technical debt piles up, after all.
--
Pure Personal Opinion | HoffmanLabs LLC
Phillip Helbig (undress to reply)
2014-12-22 18:19:51 UTC
Permalink
Post by David Froble
I've always figured that thing about moving stuff off the system disk to
some common disk was just asking for trouble. Like wearing the "please
kick me" sign.
In a cluster, anything else is a pain. Make a change to SYSUAF? Update
all 96 SYSUAF.DAT. Change something in TCPIP? Change it several times.
David Froble
2014-12-22 01:57:12 UTC
Permalink
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Post by Phillip Helbig (undress to reply)
The new concept is using a copy of an upgraded disk, rather than a)
upgrading each individually, or b) "changing the node name" of a copy.
Again: not a new concept.
Somehow, I don't think that people who have more than, say, 10 system
disks do individual upgrades. It would be nice if there were an
officially documented and supported replication process.
Changing a node name and related stuff isn't very difficult. If you've
got a checklist, it's rather simple. That said, I've never run a cluster,
and I have no idea what might be a problem in a cluster.
Post by Phillip Helbig (undress to reply)
On the other hand, upgrading from hard disk, rather than CD, should be
faster, so it might be quicker and safer to do three regular upgrades.
Post by Stephen Hoffman
Whether the VMS x86-64 port might support any box that you have is an
open question,
Certainly not any I have now, but perhaps one I can obtain when the time
comes.
Post by Stephen Hoffman
The open question being why you seek to run a mixed-architecture
cluster here, and to add more complexity here.
Actually, I wish to avoid it, but it might be the only path from Alpha
to x86.
I have to question this statement. What do you envision will be so hard
about moving your applications from Alpha to x86? Executables will need
to be rebuilt of course.

Seems simple to me. You've mentioned I believe that your system disks
have nothing else on them. Makes it even simpler.

1) Install VMS V9.0 on the x86 system
2) Configure the logicals and such that define the storage
3) Since you don't, if I remember correctly, believe in DECnet, FTP
4) Re-build your executables
5) Re-build your user accounts

Probably some small (or large) things I've forgotten ....
Post by Phillip Helbig (undress to reply)
I probably can't connect the SCSI disks I have on ALPHA
directly to an x86 box, so a mixed-architecture cluster migrating the
disks via volume shadowing seems the logical way to go (and would be
less work than copying them offline on some machine which supported both
the disks I have on Alpha and future disks which would work with x86).
Or, how about BACKUP to save sets, and then restore over the network
from the save sets still on the Alpha, or that were copied to the x86?
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
One Itanium box --- if
you can deal with the server noise, if you happen to get one of the
louder ones --- will greatly exceed the performance of your existing
collection of hardware.
While more performance would be nice, I do want to keep power and noise
down. More important is having enough replacements which I can
essentially hot swap. There is also the question whether learning the
ins and outs of the Itanium console is worth the trouble for some
intermediate solution.
For you, most definitely NOT! Maybe it's just me, but I don't like it.
Post by Phillip Helbig (undress to reply)
Also, I like a cluster with separate system disks because it allows one
to test things on one node and also allows for a node to drop out
(intentionally or not). Yes, a modern smartphone might have more
performance than my VMS cluster, but it's not the right tool for the job
here.
Post by Stephen Hoffman
I'd expect even a 900 MHz McKinley will, for
that matter --- the 900 MHz zx2000 box was about as fast as an EV6.
I have one EV67 and two EV56 in the cluster boot servers, and an EV6 on
a satellite.
Post by Stephen Hoffman
Assuming that there's a hobbyist license as VSI expects, then I'd
expect to see a complete transition over from that existing Alpha
configuration, maybe using translation or emulation as available if
there's some non-portable hunk of Alpha code in use.
I doubt there is any non-portable Alpha code; it's more a question of
the hardware transition.
You seem to have this capability to make things that should be simple
sound hard and complex ....
Phillip Helbig (undress to reply)
2014-12-22 18:17:10 UTC
Permalink
Post by David Froble
I have to question this statement. What do you envision will be so hard
about moving your applications from Alpha to x86? Executables will need
to be rebuilt of course.
It's not just rebuilding executables.
Post by David Froble
Seems simple to me. You've mentioned I believe that your system disks
have nothing else on them. Makes it even simpler.
Right.
Post by David Froble
1) Install VMS V9.0 on the x86 system
2) Configure the logicals and such that define the storage
3) Since you don't, if I remember correctly, believe in DECnet, FTP
Sure I do. I just don't see the need for DECnet WITHIN a cluster. I
even run the OSU server without it. (Yes, I plan to change this, but
more to learn DECnet than for any real reason.) As for DECnet to other
systems, apart from my cluster there are none on my LAN with DECnet, and
I can't reach the outside world with DECnet. I use FTP a lot.
Post by David Froble
Post by Phillip Helbig (undress to reply)
I probably can't connect the SCSI disks I have on ALPHA
directly to an x86 box, so a mixed-architecture cluster migrating the
disks via volume shadowing seems the logical way to go (and would be
less work than copying them offline on some machine which supported both
the disks I have on Alpha and future disks which would work with x86).
Or, how about BACKUP to save sets, and then re-store over the network
from the save sets still on the ALpha, or that were copied to the x86?
Certainly doable. But this supposes I have Alpha and x86 at the same
time, so why not a mixed cluster?
Post by David Froble
You seem to have this capability to make things that should be simple
sound hard and complex ....
Suppose I rebuild an executable and find that its behaviour is
different. At least one of the two (old or new) is wrong, maybe both.
Much easier to test in the same cluster.
Paul Sture
2014-12-22 07:15:28 UTC
Permalink
On 2014-12-21, Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Post by Stephen Hoffman
Assuming that there's a hobbyist license as VSI expects, then I'd
expect to see a complete transition over from that existing Alpha
configuration, maybe using translation or emulation as available if
there's some non-portable hunk of Alpha code in use.
I doubt there is any non-portable Alpha code; it's more a question of
the hardware transition.
"Non-portable" in this context probably falls into:

a) stuff you don't have sources for
b) stuff that's a horrible mess that you would rather not touch
c) stuff for which there is no suitable compiler on the target system,
   whether due to cost or other reasons
d) lack of suitably experienced folks to do the port
--
HAL 9000: Dave. Put down those Windows disks. Dave. DAVE!
JF Mezei
2014-12-21 21:43:45 UTC
Permalink
Post by Phillip Helbig (undress to reply)
TCPIP$CONFIGURATION
TCPIP$HOST
TCPIP$PRINTCAP
TCPIP$NETWORK
TCPIP$PROXY
TCPIP$ROUTE
TCPIP$SERVICE
Due to weather, I am not undressing to reply, hope you still get it :-)

Although I have not experienced this, it appears that TCPIP Services
moved its configs from "database" (indexed) files to text files circa
the 8.4 timeframe. So you will have to keep an eye out for this, as it
would change how you treat node-specific stuff.

Also be careful: the services database in TCPIP Services (old style)
could be common. The node NAME is part of the key for each service. So
when you start TCPIP Services on node "Cake", only records with
"CAKE<service>" are processed.

So when moving to a new node name, on the old style you need to dump
the indexed file to an editable form, change the node names to the new
one, save it, and recreate the indexed file.

If the upgrade to the current TCPIP Services converts the config files
from indexed to text, and you perform the upgrade on node Cake, it may
or may not generate the text configs for the other nodes.
Phillip Helbig (undress to reply)
2014-12-21 21:58:03 UTC
Permalink
Post by JF Mezei
Post by Phillip Helbig (undress to reply)
TCPIP$CONFIGURATION
TCPIP$HOST
TCPIP$PRINTCAP
TCPIP$NETWORK
TCPIP$PROXY
TCPIP$ROUTE
TCPIP$SERVICE
A bit of research shows that those above, minus PRINTCAP, appear to be
defined on my system, mentioned in the documentation, and visible in
TCPIP$CONFIG.COM. In addition, EXPORT is mentioned in the
documentation. The list above was from a test I did with VAX a while
back.
Post by JF Mezei
Although I have not experienced this, it appears that TCPIP services
moved configs from "database" to text files circa 8.4 timeframe. So you
will have to keep an eye out for this as it would change how you treat
node-specific stuff.
I'll have to read the manual or, rather, the release notes, as
apparently this is not completely documented in the upgrade manual. As
long as an automatic conversion from supported configuration to
supported configuration works, that's good enough. I can then read up
and customize. It would be unfortunate, though, if I had to upgrade,
edit stuff by hand, and then hope it works.

From one version to the next, TCPIP once removed a command or two
completely; they were not even left in undocumented, much less
documented as deprecated with a suggested replacement. This broke some
of my procedures which were using the documented interface. (IIRC it
was TCPIP SHOW INTERFACE/CLUSTER; interestingly, the corresponding SET
command was left in.)
Post by JF Mezei
Also be careful: the services database on TCPIP services (old style)
could be common. the node NAME is part of the key for each service. So
when you start TCPIP services on node "Cake", only records with
"CAKE<service>" are processed.
Right. However, if one moves the database from the SYS$COMMON on one
system disk to somewhere off the system disk, defines the logical names
above, then restarts TCPIP on the corresponding node, it works. So,
presumably, defining the logicals, shutting down, reconfiguring and
restarting on the other nodes should move the stuff from these nodes
into the new "database" files as well. (I don't want to think about
trying to merge them somehow at file/record level.)
Post by JF Mezei
So when moving to a new node name,on the old style you need to edit the
indexed file, change node names to the new one, save it and recreated
the indexed file.
It's stuff like this I want to avoid, hence moving these files off the
system disk.
Post by JF Mezei
If the upgrade to the current TCPIP services converts the config files
from indexed to text, abnd you perform the upgrade on node Cake, it may
or may not generate the text configs for the other nodes.
Surely it would. Some of the records might be for satellites booting
from the same system disk. Surely these would be updated, and I don't
see why there should be a difference between these, those for other boot
nodes using the same system disk, and nodes which don't actually use the
system disk.