Discussion:
VMS internals design, was: Re: BASIC and AST routines
Simon Clubley
2021-11-25 14:01:06 UTC
VMS should have been designed 5-10 years later on than when it was.
In a thread of backpedaling inanities, that has to be the most inane.
Hunter
+1
I am seriously annoyed by that comment Hunter because you have
completely missed (either accidentally or deliberately) the point
I am making (and have made before).

Compared to later operating system designs, the internal design
of VMS is a direct product of the 1970s mindset: it is
ugly, hard to alter, not modular, and full of internal hacks such
as jumping all over the place internally. It was also designed near
the end of the era when assembly language was still considered
acceptable both as a system implementation language
and as an application language.

VMS has given us great things such as world-leading clustering,
but that doesn't change the ugly nature of its internal design.

This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.

Another such example is playing out right now as we speak.

The engineers at VSI are talented, experienced and generally skilled
overall. However, due to how VMS was designed, it has taken even these
skilled people over 7 years so far to port VMS to x86-64 and they will
not be finished until the middle of next year at the earliest.

As far as porting operating systems to a new architecture goes, that's
pathetic (but due to no fault of the above skilled engineers I hasten to add).

And even then, the port is not finished. After that, they need to provide
a filesystem that's suitable for today's hardware and today's disk sizes.

They have already had two goes at this and abandoned them. At current
schedules, you can easily add another couple of years for a new filesystem.

For comparison, I would expect a port of Linux to a new architecture to
take about 6-12 months to achieve first boot (if you also had to do
the compiler work as well) and about another 6-9 months after that
to deliver initial versions of the port into the hands of the customers.

How many people would have stayed with VMS if they knew in 2014 that
it would take another 8 years before they had VMS on x86-64 and another
couple of years after that before they had a filesystem suitable for
today's hardware ?

I say things that people don't like to hear. They are also the same
things that need to be said.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Dave Froble
2021-11-25 15:48:27 UTC
Post by Simon Clubley
VMS should have been designed 5-10 years later on than when it was.
In a thread of backpedaling inanities, that has to be the most inane.
Hunter
+1
I am seriously annoyed by that comment Hunter because you have
completely missed (either accidentally or deliberately) the point
I am making (and have made before).
You may not be the only one who can be annoyed ...
Post by Simon Clubley
Compared to later operating system designs, the internal design
of VMS is a direct product of the 1970s mindset: it is
ugly, hard to alter, not modular, and full of internal hacks such
as jumping all over the place internally. It was also designed near
the end of the era when assembly language was still considered
acceptable both as a system implementation language
and as an application language.
Two statements clearly at odds, and both totally accurate:

1) The VAX 11/780 was a wonderful computer offering greatly enhanced
capabilities and performance.

2) The VAX 11/780 was a slow pig of a computer and wasted way too much floor
space and cost way too much, and had almost no memory.

It all depends on one's perspective, doesn't it? The perspective of 1978 or the
perspective of 2021. If ugly hacks and jumping around were implemented to
attempt to get a bit of performance out of the pig, well what's wrong with that?
But no, you Simon wish to judge VMS (which was written specifically for the
VAX) from the perspective of 2021. You're of course free to do so, but those
who lived the years of VMS can see and announce your prejudice.
Post by Simon Clubley
VMS has given us great things such as world-leading clustering,
but that doesn't change the ugly nature of its internal design.
This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.
Solving a "need" is a problem?
Post by Simon Clubley
Another such example is playing out right now as we speak.
The engineers at VSI are talented, experienced and generally skilled
overall. However, due to how VMS was designed, it has taken even these
skilled people over 7 years so far to port VMS to x86-64 and they will
not be finished until the middle of next year at the earliest.
VMS was designed and implemented for VAX, not generic computers.
Post by Simon Clubley
As far as porting operating systems to a new architecture goes, that's
pathetic (but due to no fault of the above skilled engineers I hasten to add).
And even then, the port is not finished. After that, they need to provide
a filesystem that's suitable for today's hardware and today's disk sizes.
They have already had two goes at this and abandoned them. At current
schedules, you can easily add another couple of years for a new filesystem.
For comparison, I would expect a port of Linux to a new architecture to
take about 6-12 months to achieve first boot (if you also had to do
the compiler work as well) and about another 6-9 months after that
to deliver initial versions of the port into the hands of the customers.
Don't want no stinkin Linux ...
Post by Simon Clubley
How many people would have stayed with VMS if they knew in 2014 that
it would take another 8 years before they had VMS on x86-64
Me! Me me me me me me ....
Post by Simon Clubley
and another
couple of years after that before they had a filesystem suitable for
today's hardware ?
ODS2 works for me ....
Post by Simon Clubley
I say things that people don't like to hear. They are also the same
things that need to be said.
What need? And what makes you think we don't already know?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2021-11-25 18:25:00 UTC
Post by Dave Froble
Post by Simon Clubley
VMS has given us great things such as world-leading clustering,
but that doesn't change the ugly nature of its internal design.
This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.
Solving a "need" is a problem?
In the way the underlying VMS design forced it to be done, yes, big time.

In other 64-bit operating systems, you have a mixture of pure 32-bit
ABI processes and pure 64-bit ABI processes running in the same
operating system. Far cleaner and more elegant.
Post by Dave Froble
Post by Simon Clubley
Another such example is playing out right now as we speak.
The engineers at VSI are talented, experienced and generally skilled
overall. However, due to how VMS was designed, it has taken even these
skilled people over 7 years so far to port VMS to x86-64 and they will
not be finished until the middle of next year at the earliest.
VMS was designed and implemented for VAX, not generic computers.
And that, along with the Macro-32 implementation language, is one of
the reasons why we still don't have a production-ready port for x86-64
after 7 years of porting effort, even though VMS has already been through
ports to 2 different architectures.
Post by Dave Froble
Post by Simon Clubley
As far as porting operating systems to a new architecture goes, that's
pathetic (but due to no fault of the above skilled engineers I hasten to add).
And even then, the port is not finished. After that, they need to provide
a filesystem that's suitable for today's hardware and today's disk sizes.
They have already had two goes at this and abandoned them. At current
schedules, you can easily add another couple of years for a new filesystem.
For comparison, I would expect a port of Linux to a new architecture to
take about 6-12 months to achieve first boot (if you also had to do
the compiler work as well) and about another 6-9 months after that
to deliver initial versions of the port into the hands of the customers.
Don't want no stinkin Linux ...
The comparison is to show what the expected timescale is for porting
to a new architecture.
Post by Dave Froble
Post by Simon Clubley
How many people would have stayed with VMS if they knew in 2014 that
it would take another 8 years before they had VMS on x86-64
Me! Me me me me me me ....
I doubt many would have joined you.

That would be like saying a port is being started here in 2021 and will
be ready in 2029 (probably). How many other people would have gone for that ?
Post by Dave Froble
Post by Simon Clubley
and another
couple of years after that before they had a filesystem suitable for
today's hardware ?
ODS2 works for me ....
What happens when you need to start working with multi-TB drives ?
Post by Dave Froble
Post by Simon Clubley
I say things that people don't like to hear. They are also the same
things that need to be said.
What need? And what makes you think we don't already know?
Your own comments above suggest you might not.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-25 19:20:23 UTC
Post by Simon Clubley
Post by Dave Froble
Post by Simon Clubley
VMS has given us great things such as world-leading clustering,
but that doesn't change the ugly nature of its internal design.
This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.
Solving a "need" is a problem?
In the way the underlying VMS design forced it to be done, yes, big time.
In other 64-bit operating systems, you have a mixture of pure 32-bit
ABI processes and pure 64-bit ABI processes running in the same
operating system. Far cleaner and more elegant.
I do not see that design as so clean.

Just look at Windows where the 64 bit stuff is in C:\Windows\System32
and the 32 bit stuff is in C:\Windows\SysWOW64 and in registry
HKEY_CLASSES_ROOT vs HKEY_CLASSES_ROOT\Wow6432Node and the fun with
ODBC drivers.

But even if we consider it a clean design, it really does
not matter. It is all facilitated by the fact that the x86-64
CPU has a 32 bit mode and a 64 bit mode. Alpha did not have
two modes. So it was not an option for VMS.

For a handful of reasons it was decided to support
both 32 bit pointers and 64 bit pointers on VMS.
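The mixed model shows up directly in C. A minimal sketch, assuming the
DEC C / VSI C pointer_size pragmas and the _malloc64 RTL variant that
become visible when 64-bit pointer support is compiled in; details vary
by compiler version, so treat it as illustrative only:

    /* Mixed 32/64-bit pointers in one OpenVMS C module.
       Assumes compilation with /POINTER_SIZE=32 so that malloc()
       stays 32-bit and the explicit _malloc64() variant is declared. */
    #include <stdio.h>
    #include <stdlib.h>

    #pragma pointer_size save
    #pragma pointer_size 32
    typedef char *ptr32_t;          /* 32-bit pointer: P0/P1 space only    */
    #pragma pointer_size 64
    typedef char *ptr64_t;          /* 64-bit pointer: can also address P2 */
    #pragma pointer_size restore

    int main(void)
    {
        ptr32_t low  = malloc(64);      /* default 32-bit addressable heap */
        ptr64_t high = _malloc64(64);   /* 64-bit allocation, may be in P2 */

        printf("sizeof low = %u, sizeof high = %u\n",
               (unsigned)sizeof low, (unsigned)sizeof high);

        free(low);
        free(high);     /* the C RTL accepts 64-bit pointers as arguments */
        return 0;
    }

Every RTL routine and system service that returns or accepts addresses has
to cope with both pointer sizes inside the same process, which is the
combined 32-bit/64-bit address space being argued about here.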
Post by Simon Clubley
Post by Dave Froble
Post by Simon Clubley
Another such example is playing out right now as we speak.
The engineers at VSI are talented, experienced and generally skilled
overall. However, due to how VMS was designed, it has taken even these
skilled people over 7 years so far to port VMS to x86-64 and they will
not be finished until the middle of next year at the earliest.
VMS was designed and implemented for VAX, not generic computers.
And that, along with the Macro-32 implementation language, is one of
the reasons why we still don't have a production-ready port for x86-64
after 7 years of porting effort, even though VMS has already been through
ports to 2 different architectures.
I am not so sure that Macro-32 is a major reason. Once the
Macro-32 compiler is done and no changes are needed,
it is just a recompile. It is only when the code needs to be modified
that it takes longer to understand and change Macro-32 than an HLL.

And regarding the 7 years, I would consider it very interesting
to know how many people worked on the VAX->Alpha migration and how
many people work on the Itanium->x86-64 migration. I have a strong
suspicion that the latest migration has consumed far fewer man-years.
Post by Simon Clubley
Post by Dave Froble
Post by Simon Clubley
How many people would have stayed with VMS if they knew in 2014 that
it would take another 8 years before they had VMS on x86-64
Me! Me me me me me me ....
I doubt many would have joined you.
I think most would.

Those running VMS today are mostly those without an easy way off
VMS.

Continuing with VMS is what they want.

They are very interested in that VMS has a future. If not then
migration becomes necessary at some point in time.

So the confidence in VSI completing the port is very important.
But the timeline is less important.

Sure a x86-64 box is way more powerful and way cheaper than
Itanium and old Alpha's. But the Itanium's and Alpha's can
still do the job those VMS users need.

The drop dead time is when IslandCo can no longer deliver
Itanium's and Alpha's.

Arne
Scott Dorsey
2021-11-25 19:51:21 UTC
Post by Arne Vajhøj
Just look at Windows where the 64 bit stuff is in C:\Windows\System32
and the 32 bit stuff is in C:\Windows\SysWOW64 and in registry
HKEY_CLASSES_ROOT vs HKEY_CLASSES_ROOT\Wow6432Node and the fun with
ODBC drivers.
But even if we consider it a clean design, it really does
not matter. It is all facilitated by the fact that the x86-64
CPU has a 32 bit mode and a 64 bit mode. Alpha did not have
two modes. So it was not an option for VMS.
Windows did it very very badly. Solaris is a much better example of
how it can be done right.

Remember the 11/780 had a 16-bit mode and could run RSX binaries too.
So we have been through all this before.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Bill Gunshannon
2021-11-25 21:04:42 UTC
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?

bill
Phillip Helbig (undress to reply)
2021-11-25 21:24:29 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
Rdb. Sure, if you use standard SQL, you could port it to another
database without much trouble. But there is also RMU for things such as
backups and unloads. Yes, can be done in another way. Any Turing
machine can emulate another. :-) But those who have worked with Rdb,
especially those who can compare it with other databases, know how good
it is.

Clustering. Real clustering.

The two together. The database is open on all nodes in the cluster,
processes running on all nodes. It works. Lose a node? Reconnect to a
generic interface and continue.

Good documentation.

Built-in file versions.

EDT. :-)
Arne Vajhøj
2021-11-25 22:12:26 UTC
Post by Phillip Helbig (undress to reply)
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
Rdb. Sure, if you use standard SQL, you could port it to another
database without much trouble. But there is also RMU for things such as
backups and unloads. Yes, can be done in another way. Any Turing
machine can emulate another. :-) But those who have worked with Rdb,
especially those who can compare it with other databases, know how good
it is.
I don't know if it is better.

But it is different.

You mention tools, but there is also the entire performance tuning.
Post by Phillip Helbig (undress to reply)
Clustering. Real clustering.
The two together. The database is open on all nodes in the cluster,
processes running on all nodes. It works. Lose a node? Reconnect to a
generic interface and continue.
Active/Active database clustering is not unique to VMS and Rdb.

Oracle DB RAC, IBM DB2 PureScale and various NoSQL databases
(Cassandra, HBase, Voldemort etc.) all do it.

But differently.

Arne
Dave Froble
2021-11-26 01:49:47 UTC
Post by Phillip Helbig (undress to reply)
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
Rdb. Sure, if you use standard SQL, you could port it to another
database without much trouble. But there is also RMU for things such as
backups and unloads. Yes, can be done in another way. Any Turing
machine can emulate another. :-) But those who have worked with Rdb,
especially those who can compare it with other databases, know how good
it is.
Clustering. Real clustering.
The two together. The database is open on all nodes in the cluster,
processes running on all nodes. It works. Lose a node? Reconnect to a
generic interface and continue.
Good documentation.
Built-in file versions.
EDT. :-)
Sorry Phillip, but none of your arguments really matter. Perhaps disaster
tolerant clusters. I doubt those are the majority of current VMS users.

What matters are the current applications.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
David Goodwin
2021-11-25 21:31:31 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
Same things people are doing on Windows or MacOS that make a move
to a totally different system like Linux difficult. Calling APIs provided by
the operating system.
Arne Vajhøj
2021-11-25 22:07:08 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system.  Any applications are almost
guaranteed to be written in an HLL.  Just what is it that
they are doing on VMS that can not be done on another
system?
It can be done on another OS.

But the cost and risk of migrating can be significant.

VMS specific language extensions, SYS$ and LIB$ calls,
reliance on special features in Rdb or index-sequential
files not available in other RDBMS or ISAM, huge amount
of DCL scripts, Macro-32 pieces etc.etc..
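A minimal sketch of the kind of code this is about - translating a logical
name through SYS$TRNLNM with string descriptors and an item list. The
logical name APP$DATA_DIR is made up for the example, and error handling is
reduced to a status check:

    /* Translate a logical name the VMS way: descriptors + item list.
       None of this has a direct equivalent on Linux, which is exactly
       why such calls add up during a migration. */
    #include <stdio.h>
    #include <descrip.h>
    #include <lnmdef.h>
    #include <starlet.h>

    int main(void)
    {
        $DESCRIPTOR(table,   "LNM$FILE_DEV");
        $DESCRIPTOR(logname, "APP$DATA_DIR");   /* hypothetical name */

        char value[256];
        unsigned short value_len = 0;

        struct {
            unsigned short buflen, itmcod;
            void *bufadr;
            unsigned short *retlen;
        } items[2] = {
            { sizeof value - 1, LNM$_STRING, value, &value_len },
            { 0, 0, 0, 0 }                      /* terminator */
        };

        unsigned int status = sys$trnlnm(0, &table, &logname, 0, items);
        if (status & 1) {                       /* odd status == success */
            value[value_len] = '\0';
            printf("APP$DATA_DIR = %s\n", value);
        } else {
            printf("SYS$TRNLNM failed, status %u\n", status);
        }
        return 0;
    }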

A thousand easily solvable problems can when combined
be a very problematic cost and risk (risk may very
well be considered a bigger problem than cost).

It obviously depends on how the code is written. Some
C or Cobol programs may be pretty easy to migrate.
Most Pascal and Basic programs would be a rewrite from
scratch to migrate.

Arne
Arne Vajhøj
2021-11-25 23:55:03 UTC
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system.  Any applications are almost
guaranteed to be written in an HLL.  Just what is it that
they are doing on VMS that can not be done on another
system?
It can be done on another OS.
But the cost and risk of migrating can be significant.
VMS specific language extensions, SYS$ and LIB$ calls,
reliance on special features in Rdb or index-sequential
files not available in other RDBMS or ISAM, huge amount
of DCL scripts, Macro-32 pieces etc.etc..
A thousand easily solvable problems can when combined
be a very problematic cost and risk (risk may very
well be considered a bigger problem than cost).
It obviously depends on how the code is written. Some
C or Cobol programs may be pretty easy to migrate.
Most Pascal and Basic programs would be a rewrite from
scratch to migrate.
The main reason that C programs are not ported off VMS
is probably that the authors went overboard with VMS
specifics instead of sticking to standard C.

I suspect that the main reason that Cobol programs are
not ported off VMS is that Cobol/VMS -> Cobol/Linux is
not considered good enough, so the migration becomes
Cobol/VMS -> X/Linux.

Arne
JP DEMONA
2022-02-16 19:28:44 UTC
No DEC language is EASY to migrate.
In 1M lines of VMS COBOL there will be (generally) 120,000 individual instances that need to be remediated either manually or automatically.
Just count how many BY DESCRIPTORs there are in your code, and that doesn't even scratch the surface.
For PASCAL we had to write a DEC PASCAL to C++ converter - does a great job - no other option.
For BASIC we wrote a VAX BASIC to C translator.
For FORTRAN we have FORTRAN partner, which is a mil-spec VMS FORTRAN to ifort translator - yes, ifort is very DEC compatible but dropped many of the old VMS "things" - like initialized common blocks - you can have multiple on VMS, only 1 on Linux.
Even DEC C in large quantities can be a real pain in the ass - &0, &sizeof, fopen extended to use RMS extensions.
We have a DEC COBOL to Fujitsu COBOL / MF COBOL translator - and it earns every penny.
Generally, you can assume that every large DEC 3GL application (1M to 2M LOC) will have 100K+ code modifications to even compile with an ANSI 3GL.
C++ is obviously the easiest - but not immune to DEC-isms (you can printf a "string" class).
DEC C - depending on age - can quickly become a quagmire - and we wrote a DEC C to ANSI C converter.
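A small illustration of the fopen() point, assuming the DEC C RTL extension
that accepts extra RMS attribute strings after the mode; the file name and
attribute values are just examples:

    /* DEC C lets you pass RMS creation attributes straight to fopen();
       standard C has no equivalent, so every such call needs attention
       when moving off VMS. */
    #include <stdio.h>

    int main(void)
    {
        /* Variable-length records, carriage-return carriage control,
           maximum record size 512 bytes. */
        FILE *fp = fopen("orders.dat", "w", "rfm=var", "rat=cr", "mrs=512");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(fp, "one record\n");
        fclose(fp);
        return 0;
    }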

/Jon
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
It can be done on another OS.
But the cost and risk of migrating can be significant.
VMS specific language extensions, SYS$ and LIB$ calls,
reliance on special features in Rdb or index-sequential
files not available in other RDBMS or ISAM, huge amount
of DCL scripts, Macro-32 pieces etc.etc..
A thousand easily solvable problems can when combined
be a very problematic cost and risk (risk may very
well be considered a bigger problem than cost).
It obviously depends on how the code is written. Some
C or Cobol programs may be pretty easy to migrate.
Most Pascal and Basic programs would be a rewrite from
scratch to migrate.
Arne
Dave Froble
2021-11-26 01:47:18 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
bill
I sort of don't like doing this, but, it is what it is ...

Your posts include working for the government and a university. Perhaps others,
but that is what I recall.

I feel that perhaps you've never worked in an environment where being profitable
is the first consideration. I may be wrong.

Increasingly, companies depend on computer applications, note, not computers,
but applications, to run their businesses. The cost is an expense, not
profitable, other than what such applications allow.

First, let's look at: "Any applications are almost guaranteed to be written in an
HLL."  I can point to at least one application where that is not true.  I've
seen references to others also. Just because you don't have one doesn't mean
others don't.

Many applications, maybe most, at least those still being used, running on VMS
would be costly to migrate/port to another environment. I know that that is
true for Basic, and I suspect it to be true for most or all languages.

Even if there might be some migration path other than total re-write, there will
be costs, there will be mistakes, and such. This costs money. Money that takes
away from profits, if not worse.

Just about anything can be done, with enough money and effort. Where would this
money and effort come from?  For those whose business is providing the effort,
perhaps they think the money should be available. Those who would bear the cost
might think otherwise.

Would VSI have customers now, if your conjecture had any merit?

In my case, an extensive application written almost exclusively in
Basic+/BP2/VAX Basic/DEC Basic used to run multiple companies. There is no
substitute that I'm aware of that could be used to run these applications. In
all cases, there would be design work, programming work, mistakes, business
disruption, failure, and such. Nothing insignificant either, we're talking 7 or
8 digits.

If it were so easy, why would software competitors not already have come forth
with replacement applications?  It's not like VMS users have been treated well
for many years by the OS vendor. Where are those supposed alternatives?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Bill Gunshannon
2021-11-26 13:51:58 UTC
Post by Dave Froble
Post by Bill Gunshannon
Post by Arne Vajhøj
Those running VMS today are mostly those without an easy way off
VMS.
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system.  Any applications are almost
guaranteed to be written in an HLL.  Just what is it that
they are doing on VMS that can not be done on another
system?
bill
I sort of don't like doing this, but, it is what it is ...
Your posts include working for the government and a university.  Perhaps
others, but that is what I recall.
I feel that perhaps you've never worked in an environment where being
profitable is the first consideration.  I may be wrong.
You would be wrong. I have also worked for Martin Marietta (now
Lockheed Martin) and TRW (IND, not the car parts division).
Post by Dave Froble
Increasingly, companies depend on computer applications, note, not
computers, but applications, to run their businesses.  The cost is an
expense, not profitable, other than what such applications allow.
First, let's look at: "Any applications are almost guaranteed to be
written in an HLL."  I can point to at least one application where that
is not true.  I've seen references to others also.  Just because you
don't have one doesn't mean others don't.
Considering that, like COBOL, no one is teaching any kind of assembler
they would have to be real legacy applications.
Post by Dave Froble
Many applications, maybe most, at least those still being used, running
on VMS would be costly to migrate/port to another environment.  I know
that that is true for Basic, and I suspect it to be true for most or all
languages.
Even if there might be some migration path other than total re-write,
there will be costs, there will be mistakes, and such.  This costs
money.  Money that takes away from profits, if not worse.
I never said it would be free. But as someone else mentioned cost is
important but so is risk.
Post by Dave Froble
Just about anything can be done, with enough money and effort.  Where
would this money and effort come from?  For those whose business is
providing the effort, perhaps they think the money should be available.
Those who would bear the cost might think otherwise.
Would VSI have customers now, if your conjecture had any merit?
But there is the rub. VSI does have customers but based on the
comments I am still seeing here the numbers may be decreasing.
And people may be getting concerned about the time scale.
Post by Dave Froble
In my case, an extensive application written almost exclusively in
Basic+/BP2/VAX Basic/DEC Basic used to run multiple companies.  There is
no substitute that I'm aware of that could be used to run these
applications.  In all cases, there would be design work, programming
work, mistakes, business disruption, failure, and such.  Nothing
insignificant either, we're talking 7 or 8 digits.
Yes, but what does it do that your customers can't find another
product to do?  It may take major changes in the way they handle
the nuts and bolts of their business, but it happens every day.
The University I worked at had all in-house applications when I
got there. Running on Big Blue. Moved the whole thing to VMS.
And then moved the whole thing to Banner. No more VMS. It can
be done and it is being done every day. Don't get me wrong, I
don't think the model is a good idea, but then, I am not a CIO.
They do.
Post by Dave Froble
If it was so easy, why would not software competitors already have come
forth with replacement applications.  It's not like VMS users have been
treated well for many years by the OS vendor.  Where are those supposed
alternatives?
VMS is a religion. Just read what is said here. But there have been
desertions. The VMS constant no longer exists. I expect there are
maybe 10% of that number left. And it is a self-fulfilling prophecy.
Every defection decreases the long term existence of the product.

And, it still avoids my original question. What can you do on VMS
that can not be done on another system? The effort and cost are
not a part of this equation. Only the task to be done. Cost
and effort decrease as risk increases.

bill
Jan-Erik Söderholm
2021-11-26 14:25:58 UTC
Post by Bill Gunshannon
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system.  Any applications are almost
guaranteed to be written in an HLL.  Just what is it that
they are doing on VMS that can not be done on another
system?
bill
An "application" is not only the core routines in Cobol or
some other HLL. There are usually also a lot of routines that
are written (for VMS) in DCL and that "run" the applications.

And the whole application infrastructure can rely on VMS unique
things like logical names, mailboxes and other stuff.

I guess the majority of a porting effort is moving off the
VMS-unique features.
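As one concrete (and hedged) illustration: a program publishing a job-wide
logical name for the surrounding DCL procedures to use. The name APP$ROOT
and its value are invented for the example; the call shown is SYS$CRELNM
with an item list, which is the usual pattern:

    /* Create APP$ROOT in the job logical name table so that both this
       program and the DCL that drives it can refer to the same place. */
    #include <stdio.h>
    #include <descrip.h>
    #include <lnmdef.h>
    #include <starlet.h>

    int main(void)
    {
        $DESCRIPTOR(table,   "LNM$JOB");
        $DESCRIPTOR(logname, "APP$ROOT");                /* hypothetical */
        static const char value[] = "DISK$DATA:[APP]";   /* hypothetical */

        struct {
            unsigned short buflen, itmcod;
            void *bufadr;
            unsigned short *retlen;
        } items[2] = {
            { sizeof value - 1, LNM$_STRING, (void *)value, 0 },
            { 0, 0, 0, 0 }                               /* terminator */
        };

        unsigned int status = sys$crelnm(0, &table, &logname, 0, items);
        if (!(status & 1))                               /* even = failure */
            printf("SYS$CRELNM failed, status %u\n", status);
        return 0;
    }

Multiply that pattern by mailboxes, lock manager calls, other item-list
system services and the DCL that glues them together, and the porting cost
being described here starts to take shape.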
Phillip Helbig (undress to reply)
2021-11-26 14:57:50 UTC
Post by Jan-Erik Söderholm
Post by Bill Gunshannon
I would really like to know what people could be running on
VMS on an Itanic that would be so difficult to move to a
totally different system. Any applications are almost
guaranteed to be written in an HLL. Just what is it that
they are doing on VMS that can not be done on another
system?
bill
An "application" is not only the core routines in Cobol or
some other HLL. There are usually also a lot of routines that
are written (for VMS) in DCL that "runs" the applications.
Indeed, e.g. job schedulers written in DCL.
Post by Jan-Erik Söderholm
And the whole application infrastructure can rely on VMS unique
things like logical names, mailboxes and other stuff.
Especially combinations such as cluster-wide logical names visible to
only a group.
Post by Jan-Erik Söderholm
I guess the majority of an porting effort is to move off the
VMS unique features.
Yes.
Phillip Helbig (undress to reply)
2021-11-26 14:32:17 UTC
Post by Bill Gunshannon
And, it still avoids my original question. What can you do on VMS
that can not be done on another system? The effort and cost are
not a part of this equation. Only the task to be done.
Any Turing machine can emulate another. So the answer is "nothing".
But important to most people are the time, effort, risk, and money
involved and how much enjoyment it brings.
Simon Clubley
2021-11-26 19:45:05 UTC
Post by Bill Gunshannon
But there is the rub. VSI does have customers but based on the
comments I am still seeing here the numbers may be decreasing.
And people may be getting concerned about the time scale.
Longer term, they have not done themselves any favours with those
time-limited production licences either.

I wonder how many potential VSI customers that decision has scared off.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Simon Clubley
2021-11-26 18:54:27 UTC
Post by Arne Vajhøj
Post by Simon Clubley
I doubt many would have joined you.
I think most would.
Those running VMS today are mostly those without an easy way off
VMS.
Continuing with VMS is what they want.
They are very interested in that VMS has a future. If not then
migration becomes necessary at some point in time.
So the confidence in VSI completing the port is very important.
But the timeline is less important.
The problem Arne is that in many companies, those who want to stay
on VMS are not those who actually make the final decision about whether
to stay on VMS.

Those kinds of decisions are usually made one or two levels higher up
and those people tend not to have an emotional bond to VMS and they also
want to take what they perceive as the safer decision (both for them and
for their pension.)

Saying that a port to x86-64 would be available in about 8 years is
not the kind of thing that endears you to those types of managers. :-)
Post by Arne Vajhøj
Sure a x86-64 box is way more powerful and way cheaper than
Itanium and old Alpha's. But the Itanium's and Alpha's can
still do the job those VMS users need.
The drop dead time is when IslandCo can no longer deliver
Itanium's and Alpha's.
Alpha not so much, because full system emulation is available.

But yes, for Itanium, when replacement physical boxes are no longer
available is when the real problems start to occur.

It really is a pity that a full system emulator was never developed
for the Itanium architecture. That would have given people a few more
options.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Chris Townley
2021-11-26 18:59:48 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
I doubt many would have joined you.
I think most would.
Those running VMS today are mostly those without an easy way off
VMS.
Continuing with VMS is what they want.
They are very interested in that VMS has a future. If not then
migration becomes necessary at some point in time.
So the confidence in VSI completing the port is very important.
But the timeline is less important.
The problem Arne is that in many companies, those who want to stay
on VMS are not those who actually make the final decision about whether
to stay on VMS.
Those kinds of decisions are usually made one or two levels higher up
and those people tend not to have an emotional bond to VMS and they also
want to take what they perceive as the safer decision (both for them and
for their pension.)
Saying that a port to x86-64 would be available in about 8 years is
not the kind of thing that endears you to those types of managers. :-)
Post by Arne Vajhøj
Sure a x86-64 box is way more powerful and way cheaper than
Itanium and old Alpha's. But the Itanium's and Alpha's can
still do the job those VMS users need.
The drop dead time is when IslandCo can no longer deliver
Itanium's and Alpha's.
Alpha not so much, because full system emulation is available.
But yes, for Itanium, when replacement physical boxes are no longer
available is when the real problems start to occur.
It really is a pity that a full system emulator was never developed
for the Itanium architecture. That would have given people a few more
options.
Simon.
Well there's an opportunity for you!
--
Chris
Simon Clubley
2021-11-26 19:16:53 UTC
Post by Chris Townley
Post by Simon Clubley
Alpha not so much, because full system emulation is available.
But yes, for Itanium, when replacement physical boxes are no longer
available is when the real problems start to occur.
It really is a pity that a full system emulator was never developed
for the Itanium architecture. That would have given people a few more
options.
Well there's an opportunity for you!
I already had a look a while back. :-)

Given the complexity of the architecture, it's a massive undertaking
and required quite a bit of access to documents, firmware and knowledge
that was not public (or at least I could not find the materials).

Start with the system firmware. That's locked up behind a HPE paywall
and is not freely available.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-26 23:38:48 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
I doubt many would have joined you.
I think most would.
Those running VMS today are mostly those without an easy way off
VMS.
Continuing with VMS is what they want.
They are very interested in that VMS has a future. If not then
migration becomes necessary at some point in time.
So the confidence in VSI completing the port is very important.
But the timeline is less important.
The problem Arne is that in many companies, those who want to stay
on VMS are not those who actually make the final decision about whether
to stay on VMS.
Those kinds of decisions are usually made one or two levels higher up
and those people tend not to have an emotional bond to VMS and they also
want to take what they perceive as the safer decision (both for them and
for their pension.)
Saying that a port to x86-64 would be available in about 8 years is
not the kind of thing that endears you to those types of managers. :-)
Most won't care.

They are only interested in whether there will continue to be HW
available to run VMS on. At a reasonable price.

Whether that is Itanium for 4 years and x86-64 after that or
Itanium for 8 years and x86-64 after that does not matter
much.

There can be exceptions like they want to move VMS to
their on-prem ESXi cluster or to public cloud - then
the date does matter.
Post by Simon Clubley
Post by Arne Vajhøj
Sure a x86-64 box is way more powerful and way cheaper than
Itanium and old Alpha's. But the Itanium's and Alpha's can
still do the job those VMS users need.
The drop dead time is when IslandCo can no longer deliver
Itanium's and Alpha's.
Alpha not so much, because full system emulation is available.
True.
Post by Simon Clubley
But yes, for Itanium, when replacement physical boxes are no longer
available is when the real problems start to occur.
It really is a pity that a full system emulator was never developed
for the Itanium architecture. That would have given people a few more
options.
Yes.

Arne
Félim Doyle
2021-11-26 05:57:08 UTC
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be the ideal OS and then the VAX hardware was designed and built to run it, not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during development of the hardware and software, but the facilities that this combination provided, especially in comparison to the price range of other systems, was revolutionary.

It's not proprietary in the traditional sense but it is difficult to port to hardware that was not designed to run it. It is certainly worth tidying up the legacy flaws to make future ports easier but until somebody designs new hardware that can utilise / exploit the full functionality of OpenVMS any port will be difficult, incomplete and imperfect.
Félim Doyle
2021-11-26 11:40:40 UTC
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever OS then the VAX hardware was designed and built to run it not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during parallel development of the hardware and software but the facilities that this combination provided, especially in comparison to the price range of other systems at the time, was revolutionary.

OpenVMS is not proprietary in the traditional sense but it is difficult to port to hardware that was not designed to run it. It is certainly worth tidying up the legacy flaws to make future ports easier but, until somebody designs new hardware that can utilise / exploit the full functionality of OpenVMS, any port will be difficult, incomplete and imperfect.
Clair Grant
2021-11-26 14:12:16 UTC
Post by Félim Doyle
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever OS then the VAX hardware was designed and built to run it not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during parallel development of the hardware and software but the facilities that this combination provided, especially in comparison to the price range of other systems at the time, was revolutionary.
OpenVMS is not proprietary in the traditional sense but it is difficult to port to hardware that was not designed to run it. It is certainly worth tidying up the legacy flaws to make future ports easier but, until somebody designs new hardware that can utilise / exploit the full functionality of OpenVMS, any port will be difficult, incomplete and imperfect.
Have not been here for a bit. My answer to the "how many people" question.......

If you include the compiler teams who were actually separate from VMS Engineering itself in the bad old days, there was somewhere in the range of 250 people in the two organizations combined when we ported from VAX to Alpha and then Alpha to IPF. Not all of these people worked on the port all at once but they were all available to pitch in as needed even when it was not their full-time job. Both of those ports took 3 years; they were very different technically but nonetheless, the total elapsed time was eerily the same.

With the same person power, porting to x86 would have been roughly the same, I believe. No reason to think otherwise (similar issues, same parts of the system needing architecture-specific work). But that is not the case. We have way fewer people now so it is not surprising to me, at least, that it is taking way longer.

Clair

BTW: Right after we ported to IPF, I was asked how long it would take to port VMS to x86. I said 3 years, given the same team that had just completed the port. I never heard another word about that topic.
Arne Vajhøj
2021-11-26 16:31:16 UTC
Post by Clair Grant
Have not been here for a bit. My answer to the "how many people" question.......
If you include the compiler teams who were actually separate from VMS
Engineering itself in the bad old days, there was somewhere in the
range of 250 people in the two organizations combined when we ported
from VAX to Alpha and then Alpha to IPF. Not all of these people
worked on the port all at once but they were all available to pitch
in as needed even when it was not their full-time job. Both of those
ports took 3 years; they were very different technically but
nonetheless, the total elapsed time was eerily the same.
With the same person power, porting to x86 would have been roughly
the same, I believe. No reason to think otherwise (similar issues,
same parts of the system needing architecture-specific work). But
that is not the case. We have way fewer people now so it is not
surprising to me, at least, that it is taking way longer.
People without experience in big software projects tend to think
that it is a few hard-to-solve problems that take the time. But
that is rarely the case. The time is typically spent on relatively
easy problems - there are just a lot of them. I do not expect
the fact that x86-64 has 2 modes instead of 4, or the fact
that a lot of VMS code is Macro-32 or Bliss, to be what is driving
the effort required. But you have N places where there is something
ISA-specific. And N is a large number. And for every one of those
somebody needs to understand the problem, make the fix, document it and
test it. It takes 1 day or 1 week or 1 month. But multiply that by
thousands of places. And then mix in the hassle of cross-developing on
a different platform, the coordination effort to ensure that all parts
are changed in a compatible way, and the interruptions of 8.4 support
and requests for new features / bug fixes from customers, and
the man-months start adding up quickly.

Arne
Simon Clubley
2021-11-26 19:41:57 UTC
Post by Félim Doyle
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever OS then the VAX hardware was designed and built to run it not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during parallel development of the hardware and software but the facilities that this combination provided, especially in comparison to the price range of other systems at the time, was revolutionary.
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it was
used.

That 4-mode architecture could have provided some really truly radical
internal security separation within VMS, but once you are in any of the
3 inner modes, you can get to any of the other inner modes so all those
extra modes were wasted from a security isolation point of view.

In case you are wondering, you can escalate from supervisor mode because
DCL has access to the privileges of the programs it runs even though it
doesn't actually need them. That kind of thing should have stayed within
the kernel so DCL never sees those privileges.

Just yet another VMS design "feature". :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
abrsvc
2021-11-26 20:31:14 UTC
Post by Simon Clubley
Post by Félim Doyle
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever OS then the VAX hardware was designed and built to run it not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during parallel development of the hardware and software but the facilities that this combination provided, especially in comparison to the price range of other systems at the time, was revolutionary.
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it was
used.
That 4-mode architecture could have provided some really truly radical
internal security separation within VMS, but once you are in any of the
3 inner modes, you can get to any of the other inner modes so all those
extra modes were wasted from a security isolation point of view.
In case you are wondering, you can escalate from supervisor mode because
DCL has access to the privileges of the programs it runs even though it
doesn't actually need them. That kind of thing should have stayed within
the kernel so DCL never sees those privileges.
Just yet another VMS design "feature". :-)
Simon.
--
Walking destinations on a map are further away than they appear.
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Dave Froble
2021-11-26 21:52:51 UTC
Post by abrsvc
Post by Simon Clubley
Post by Félim Doyle
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever OS then the VAX hardware was designed and built to run it not the other way around. There were probably some mistakes made, unforeseen implementation issues and some miscommunications during parallel development of the hardware and software but the facilities that this combination provided, especially in comparison to the price range of other systems at the time, was revolutionary.
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it was
used.
That 4-mode architecture could have provided some really truly radical
internal security separation within VMS, but once you are in any of the
3 inner modes, you can get to any of the other inner modes so all those
extra modes were wasted from a security isolation point of view.
In case you are wondering, you can escalate from supervisor mode because
DCL has access to the privileges of the programs it runs even though it
doesn't actually need them. That kind of thing should have stayed within
the kernel so DCL never sees those privileges.
Just yet another VMS design "feature". :-)
Simon.
--
Walking destinations on a map are further away than they appear.
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Yeah, hindsight can be 20/20.

If the Wright bros had known what is known now, they would have flown an F-22 Raptor,
or maybe a 747, at Kitty Hawk.

But if they had not flown the Wright Flyer in 1903, perhaps we would not
have F-22s and 747s today.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2021-11-29 18:51:46 UTC
Post by abrsvc
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Hello Dan,

The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible
to the called AST routine itself because that is an outright violation
of good modular design and that's as true back when VMS was designed
as it is now.

In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when implementing
an OS itself.

On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.

However, that information is always private to the interrupt dispatcher,
and is _never_ exposed to the interrupt handler itself and this is so
universally true, that I didn't even realise what the AST registers were
being used for until it was pointed out to me.

For example, in one bare metal assembly language interrupt dispatcher
I wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority limiter
register before programming a new value (this allows nested interrupts
to occur).

This is saved onto the stack by the dispatcher before calling the
C language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private
to the dispatcher, it is not visible to the interrupt handler, and
I would never design a system where it was, because that is an utter
violation of modular and good programming practice.

That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
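To make that concrete, here is a hedged sketch in C of what the AST call
looks like to the called routine: by the documented convention, the
dispatcher passes the saved R0, R1, PC and PSL in as arguments after the
AST parameter, so they sit right there in the routine's argument list.
Types are simplified, the context value 42 is arbitrary, and SYS$DCLAST is
used only because it is the simplest way to get an AST delivered:

    #include <stdio.h>
    #include <starlet.h>

    /* The five-argument AST signature: the AST parameter, then the four
       pieces of dispatcher state (saved R0, R1, PC, PSL) that leak into
       the routine's call frame - the design point being argued here. */
    static void my_ast(unsigned int astprm, unsigned int saved_r0,
                       unsigned int saved_r1, unsigned int saved_pc,
                       unsigned int saved_psl)
    {
        printf("AST: astprm=%u r0=%x r1=%x pc=%x psl=%x\n",
               astprm, saved_r0, saved_r1, saved_pc, saved_psl);
        sys$wake(0, 0);                 /* let the main line continue */
    }

    int main(void)
    {
        /* Queue an AST to ourselves with an arbitrary parameter of 42. */
        unsigned int status = sys$dclast((void (*)())my_ast, 42, 0);
        if (!(status & 1))
            return 1;

        sys$hiber();                    /* AST interrupts the hibernation */
        return 0;
    }

A well-behaved AST routine only ever looks at astprm; the other four
arguments are exactly the dispatcher-private state I am objecting to.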

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
abrsvc
2021-11-29 18:59:39 UTC
Post by Simon Clubley
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Hello Dan,
The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible
to the called AST routine itself because that is an outright violation
of good modular design and that's as true back when VMS was designed
as it is now.
In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when implementing
an OS itself.
On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.
However, that information is always private to the interrupt dispatcher,
and is _never_ exposed to the interrupt handler itself and this is so
universally true, that I didn't even realise what the AST registers were
being used for until it was pointed out to me.
For example, in one bare metal assembly language interrupt dispatcher
I wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority limiter
register before programming a new value (this allows nested interrupts
to occur).
This is saved onto the stack by the dispatcher before calling the
C language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private
to the dispatcher, it is not visible to the interrupt handler, and
I would never design a system where it was, because that is an utter
violation of modular and good programming practice.
That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
Simon.
--
Walking destinations on a map are further away than they appear.
I am so sick and tired of this "holier than thou" attitude along with "I dictate what is correct design" attitude. Look, while new technologies and techniques often make decisions made years ago seem incorrect, you MUST analyze decisions based upon the knowledge at the time. As I stated earlier, it is easy to criticize based on what is known now.

Can you for once provide constructive criticism? I find it hard to believe that you have not produced "the best" OS in the world given your opinion of yourself since you seem to be the only one that knows the right way to do anything ever.
V***@SendSpamHere.ORG
2021-11-30 01:30:10 UTC
Permalink
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Hello Dan,

The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible
to the called AST routine itself because that is an outright violation
of good modular design and that's as true back when VMS was designed
as it is now.

In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when implementing
an OS itself.

On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.

However, that information is always private to the interrupt dispatcher,
and is _never_ exposed to the interrupt handler itself and this is so
universally true, that I didn't even realise what the AST registers were
being used for until it was pointed out to me.

For example, in one bare metal assembly language interrupt dispatcher
I wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority limiter
register before programming a new value (this allows nested interrupts
to occur).

This is saved onto the stack by the dispatcher before calling the
C language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private
to the dispatcher, it is not visible to the interrupt handler, and
I would never design a system where it was, because that is an utter
violation of modular and good programming practice.

That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
Simon.

--
Walking destinations on a map are further away than they appear.
I am so sick and tired of this "holier than thou" attitude along with "I dictate what is correct design" attitude. Look, while new technologies and techniques often make decisions made years ago seem incorrect, you MUST analyze decisions based upon the knowledge at the time. As I stated earlier, it is easy to criticize based on what is known now.
Can you for once provide constructive criticism? I find it hard to believe that you have not produced "the best" OS in the world given your opinion of yourself since you seem to be the only one that knows the right way to do anything ever.
Harrumph! Harrumph! Harrumph!
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Galen
2021-11-30 04:42:30 UTC
Permalink
Post by abrsvc
I am so sick and tired of this "holier than thou" attitude along with "I
dictate what is correct design" attitude. Look, while new technologies
and techniques often make decisions made years ago seem incorrect, you
MUST analyze decisions based upon the knowledge at the time. As I stated
earlier, it is easy to criticize based on what is known now.
Can you for once provide constructive criticism? I find it hard to
believe that you have not produced "the best" OS in the world given your
opinion of yourself since you seem to be the only one that knows the
right way to do anything ever.
+1
Chris Scheers
2021-11-29 20:38:43 UTC
Permalink
I think this message shows the source of the confusion.

The AST dispatch/return mechanism has nothing to do with Macro32 or any
other language.

VMS and the VAX hardware were designed together. The implementation of
each affected the other and various decisions and trade offs were made
to produce a viable solution given the hardware limitations of the time.

A large part of what made VMS's capabilities unique in the day was the
AST. This provided capabilities that other OSes (including Linux and
Windows) have yet to provide. Likewise, it also required restrictions
that other OSes (including Linux and Windows) do not have.

In VAX/VMS, there is not a software AST "routine" dispatcher.

The AST dispatch/routine mechanism is implemented in the VAX hardware.

The extra "arguments" are the hardware context required by the VAX
hardware to correctly execute the AST return.

The programming practice has always been to ignore those arguments.

Would a pure software design have done it some other way? Of course!

But such a software redesign would have impacted more than just VMS.
ASTs are used in many user mode programs.

If you want to blame something, blame the VAX hardware design. But,
that very design is what made VMS viable in the 1970/1980s time frame.
Post by Simon Clubley
Post by abrsvc
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Hello Dan,
The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible
to the called AST routine itself because that is an outright violation
of good modular design and that's as true back when VMS was designed
as it is now.
In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when implementing
an OS itself.
On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.
However, that information is always private to the interrupt dispatcher,
and is _never_ exposed to the interrupt handler itself and this is so
universally true, that I didn't even realise what the AST registers were
being used for until it was pointed out to me.
For example, in one bare metal assembly language interrupt dispatcher
I wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority limiter
register before programming a new value (this allows nested interrupts
to occur).
This is saved onto the stack by the dispatcher before calling the
C language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private
to the dispatcher, it is not visible to the interrupt handler, and
I would never design a system where it was, because that is an utter
violation of modular and good programming practice.
That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
Simon.
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
Stephen Hoffman
2021-11-30 16:31:34 UTC
Permalink
Post by Chris Scheers
I think this message shows the source of the confusion.
The AST dispatch/return mechanism has nothing to do with Macro32 or any
other language.
VMS and the VAX hardware were designed together. The implementation of
each affected the other and various decisions and trade offs were made
to produce a viable solution given the hardware limitations of the time.
A large part of what made VMS's capabilities unique in the day was the
AST. This provided capabilities that other OSes (including Linux and
Windows) have yet to provide. Likewise, it also required restrictions
that other OSes (including Linux and Windows) do not have.
In VAX/VMS, there is not a software AST "routine" dispatcher.
The AST dispatch/routine mechanism is implemented in the VAX hardware.
The extra "arguments" are the hardware context required by the VAX
hardware to correctly execute the AST return.
The programming practice has always been to ignore those arguments.
Would a pure software design have done it some other way? Of course!
But such a software redesign would have impacted more than just VMS.
ASTs are used in many user mode programs.
If you want to blame something, blame the VAX hardware design. But,
that very design is what made VMS viable in the 1970/1980s time frame.
The VAX/VMS AST delivery code is not hardware. It's software. The code
is located in module ASTDEL (see routines SCH$QAST, SCH$ASTDEL,
EXE$ASTRET, etc), and the code had the choice of saving arguments onto
the argument list or saving arguments on the other side of the frame
pointer for the AST, and the developer chose oddly.

A REI instruction triggers an interrupt to go check for some pending
work, and that interrupt then runs a whole lot of software. Including
the CALLG used to pass control to the AST code. Since it was a CALLG
instruction used to pass control to the AST, passing one argument would
have worked as well as five. Not that use of a CALLS would have
significantly changed the flow, other than moving the AP/FP around a
little and wasting some stack. But we have five arguments.

AST delivery does need to preserve the registers involved (as CALLS and
CALLG do not preserve R0 and R1 per the calling standard), though the
visibility of those added arguments on the AST call is basically
useful only for causing corruptions in apps. Which is what Simon is
grumbling about. The whole design tends to point to latent
argument-mismatches in many uses of ASTs, too. That'll be fun to fix,
just as soon as better diagnostics are enabled in the various
programming languages the ASTs are written using.
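
Sketched in C purely to make the shape of that argument list concrete; this
is an illustration of the design point, not the actual ASTDEL code, and the
names are made up:

    /* What an AST routine receives under the VAX/VMS convention: the
       caller-supplied AST parameter plus the dispatcher's saved R0, R1,
       PC and PSL, which an application should never touch. */
    typedef void (*ast_routine)(unsigned long astprm,
                                unsigned long saved_r0, unsigned long saved_r1,
                                unsigned long saved_pc, unsigned long saved_psl);

    struct saved_context {
        unsigned long r0, r1, pc, psl;    /* must survive AST delivery */
    };

    static void deliver_ast(ast_routine ast, unsigned long astprm,
                            struct saved_context *ctx)
    {
        /* The saved context rides along as arguments 2..5, so it is
           visible to (and corruptible by) the called routine. */
        ast(astprm, ctx->r0, ctx->r1, ctx->pc, ctx->psl);

        /* The alternative argued for above would call ast(astprm) and
           keep ctx entirely on the dispatcher's side of the frame. */
    }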

And that AST argument list design is not changing soon. There's no
reason to change this current ASTDEL argument-list design ~forty years
on either, absent larger changes such as those involved with some
hypothetical implementation of object-oriented message-passing support
on OpenVMS.
Post by Chris Scheers
Post by Simon Clubley
Post by abrsvc
I find these kind of comments somewhat offensive since it is easy to
criticize the decisions of people made 40 years ago using the context
of knowledge today. VMS was designed as a cooperative pairing of both
hardware and software. The use of R0 and R1 was for consistency across
calls and had nothing to do with MACRO32 at all. Bliss used the same
register conventions. If the VMS and VAX engineers knew in the late
70's what was known now, I suspect things would have been done
differently.
Hello Dan,
The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible to
the called AST routine itself because that is an outright violation of
good modular design and that's as true back when VMS was designed as it
is now.
In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when
implementing an OS itself.
On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.
However, that information is always private to the interrupt
dispatcher, and is _never_ exposed to the interrupt handler itself and
this is so universally true, that I didn't even realise what the AST
registers were being used for until it was pointed out to me.
For example, in one bare metal assembly language interrupt dispatcher I
wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority
limiter register before programming a new value (this allows nested
interrupts to occur).
This is saved onto the stack by the dispatcher before calling the C
language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private to
the dispatcher, it is not visible to the interrupt handler, and I would
never design a system where it was, because that is an utter violation
of modular and good programming practice.
That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
As for some of the other replies in this thread...

Threading is now baked into OpenVMS (KP), into Linux (POSIX AIO since
2.6, io_uring/liburing more recently, etc), and similarly built into
most other platforms. ASTs aren't particularly unique in 2021.

Ignoring the hardware register stuff in the VAX/VMS AST argument list,
ASTs are somewhat a pain in the arse as compared with some other
designs too, but ASTs can and do work. And KP threading isn't all that
well integrated into the OpenVMS system service calls. There are some
discussions on mixing threads and ASTs around on OpenVMS, and there are
subtleties awaiting app developers here. The OpenVMS documentation did
not cover this area at all well, when last I checked.

As for whether Macro32 was involved in this design? Donno. Kernels
still tend to mess with registers in some corners, and assemblers are
still better at that than is C with asm or built-ins, or some other
alternative. Though we're getting pretty close with the alternatives.

There are other stupid ideas in OpenVMS, such as the successful access
violation, and using localtime in the system clock.

Oh, and there was a replacement created for VAX/VMS at DEC—built with
things learned from VMS and other work—and that replacement has been
quite successful in the market. That replacement? DEC MICA. For some
posters, a platform which largely sees references around here in the
comp.os.vms newsgroup as a source of examples of mistakes that OpenVMS
should... emulate? implement? something.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2021-11-29 22:58:46 UTC
Permalink
Post by Simon Clubley
Post by abrsvc
I find these kind of comments somewhat offensive since it is easy to criticize the decisions of people made 40 years ago using the context of knowledge today. VMS was designed as a cooperative pairing of both hardware and software. The use of R0 and R1 was for consistency across calls and had nothing to do with MACRO32 at all. Bliss used the same register conventions. If the VMS and VAX engineers knew in the late 70's what was known now, I suspect things would have been done differently.
Hello Dan,
The problem is not the preserving of R0/R1/PC/PS{L}, but the way in
which it was done. This information should be private to the AST
dispatcher that calls the AST routine. It should never be visible
to the called AST routine itself because that is an outright violation
of good modular design and that's as true back when VMS was designed
as it is now.
In case you are familiar with bare metal interrupt programming, you can
compare the calling of an AST routine with the way that an interrupt
handler is called when working with bare metal code or when implementing
an OS itself.
On more advanced MCUs, you can have an assembly language interrupt
dispatcher that calls the actual C language interrupt handler and you
can end up having to save more interrupt state than just the normal
registers in your assembly language interrupt dispatcher.
However, that information is always private to the interrupt dispatcher,
and is _never_ exposed to the interrupt handler itself and this is so
universally true, that I didn't even realise what the AST registers were
being used for until it was pointed out to me.
For example, in one bare metal assembly language interrupt dispatcher
I wrote a while back for an ARM processor and which I was looking at
recently, the dispatcher has to save what is called the priority limiter
register before programming a new value (this allows nested interrupts
to occur).
This is saved onto the stack by the dispatcher before calling the
C language interrupt handler and is restored by the interrupt dispatcher
upon return from the interrupt handler. This information is private
to the dispatcher, it is not visible to the interrupt handler, and
I would never design a system where it was, because that is an utter
violation of modular and good programming practice.
That priority limiter register can be compared to one of those private
AST registers and that's why I consider it so wrong that those private
registers are there in the AST call frame and hence visible to the
called routine as it's an utter violation of good modular design.
I agree with the sentiment here, but this is not something that has
anything to do with VMS.

This is a choice that was done by the people writing the language
runtime system. For some reason they thought it was a good idea to
expose these internal things explicitly in the language. I would not do
it, and it seems you wouldn't either.

But why are you blaming that on VMS?

Johnny
Simon Clubley
2021-11-30 19:22:03 UTC
Permalink
Post by Johnny Billquist
I agree with the sentiment here, but this is not something that has
anything to do with VMS.
This is a choice that was done by the people writing the language
runtime system. For some reason they thought it was a good idea to
expose these internal things explicitly in the language. I would not do
it, and it seems you wouldn't either.
But why are you blaming that on VMS?
In addition to the comments posted by Stephen, VMS has what is called
the Common Language Environment, which all DEC compilers must comply
with. The CLE is a standard which sets down rules to allow modules
written in different programming languages to interact with each other.

As another example of how VMS controls the compilers, VMS also supplies
the Structure Definition Language files and the SDL compiler to generate
the language-specific VMS headers from the VMS supplied SDL files. These
VMS-specific headers are not manually created by the compiler teams.

For these reasons, I have always regarded this kind of thing as being
a part of VMS itself and not just something done by the compiler teams.
The compiler teams do not have a free hand here and have always been
driven by standards and processes laid down by VMS engineering.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-30 19:46:52 UTC
Permalink
Post by Simon Clubley
Post by Johnny Billquist
I agree with the sentiment here, but this is not something that has
anything to do with VMS.
This is a choice that was done by the people writing the language
runtime system. For some reason they thought it was a good idea to
expose these internal things explicitly in the language. I would not do
it, and it seems you wouldn't either.
But why are you blaming that on VMS?
In addition to the comments posted by Stephen, VMS has what is called
the Common Language Environment, which all DEC compilers must comply
with. The CLE is a standard which sets down rules to allow modules
written in different programming languages to interact with each other.
As another example of how VMS controls the compilers, VMS also supplies
the Structure Definition Language files and the SDL compiler to generate
the language-specific VMS headers from the VMS supplied SDL files. These
VMS-specific headers are not manually created by the compiler teams.
For these reasons, I have always regarded this kind of thing as being
a part of VMS itself and not just something done by the compiler teams.
The compiler teams do not have a free hand here and have always been
driven by standards and processes laid down by VMS engineering.
Now I am confused.

I thought the argument was that VMS VAX was doing
(and VMS Alpha + VMS Itanium continued to for
compatibility reasons):

VMS---(5 args)--->AST function

and many people would have preferred:

VMS---(1 arg)--->AST function

The language and language runtimes of the AST function do
not matter for that. The arguments are there.

The difference is that some languages / language runtimes are
fine with 5 args being present and AST function expecting
1 arg while other languages (specifically Basic) complain
and AST function must declare 5 args.
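
In C, for concreteness, the two declaration styles look something like the
sketch below (names are illustrative only). C quietly ignores the extra
arguments being present on the call; a language that checks the argument
count at run time, such as Basic, objects instead:

    /* Declare everything the dispatcher passes: the AST parameter plus
       the saved R0, R1, PC and PSL.  Only the first argument means
       anything to the application; the rest are simply ignored. */
    void ast_handler_full(unsigned long astprm,
                          unsigned long r0, unsigned long r1,
                          unsigned long pc, unsigned long psl);

    /* Declare only the AST parameter and let the extra arguments fall on
       the floor.  This is the common C idiom for AST routines. */
    void ast_handler_short(unsigned long astprm);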

Arne
Phil Howell
2021-12-01 02:11:19 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
Post by Johnny Billquist
I agree with the sentiment here, but this is not something that has
anything to do with VMS.
This is a choice that was done by the people writing the language
runtime system. For some reason they thought it was a good idea to
expose these internal things explicitly in the language. I would not do
it, and it seems you wouldn't either.
But why are you blaming that on VMS?
In addition to the comments posted by Stephen, VMS has what is called
the Common Language Environment, which all DEC compilers must comply
with. The CLE is a standard which sets down rules to allow modules
written in different programming languages to interact with each other.
As another example of how VMS controls the compilers, VMS also supplies
the Structure Definition Language files and the SDL compiler to generate
the language-specific VMS headers from the VMS supplied SDL files. These
VMS-specific headers are not manually created by the compiler teams.
For these reasons, I have always regarded this kind of thing as being
a part of VMS itself and not just something done by the compiler teams.
The compiler teams do not have a free hand here and have always been
driven by standards and processes laid down by VMS engineering.
Now I am confused.
I thought the argument was that VMS VAX was doing
(and VMS Alpha + VMS Itanium continued to for
compatibility reasons):
VMS---(5 args)--->AST function
and many people would have preferred:
VMS---(1 arg)--->AST function
The language and language runtimes of the AST function do
not matter for that. The arguments are there.
The difference is that some languages / language runtimes are
fine with 5 args being present and AST function expecting
1 arg while other languages (specifically Basic) complain
and AST function must declare 5 args.
Arne
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
Stephen Hoffman
2021-12-01 15:52:39 UTC
Permalink
Post by Phil Howell
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
I'd forgotten about that thread.

What a wonderfully inconsistent trashfire ASTs are.

Somebody at VSI probably now has some (more) writing to do, and some
(more) of the existing documentation to review.

And it seems some BASIC declaration somewhere for the AST API is
arguably busted.

Ah, well.
--
Pure Personal Opinion | HoffmanLabs LLC
Dave Froble
2021-12-01 19:41:00 UTC
Permalink
Post by Stephen Hoffman
Post by Phil Howell
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
I'd forgotten about that thread.
What a wonderfully inconsistent trashfire ASTs are.
Somebody at VSI probably now has some (more) writing to do, and some (more) of
the existing documentation to review.
And it seems some BASIC declaration somewhere for the AST API is arguably busted.
Ah, well.
I'm a bit afraid to ask another question. The last question I asked seemed to
start weeks of, not sure what to call it, but some of it was rather nasty.

Oh, well, another question.

I haven't done any research, so the question might have a simple answer.

When an AST is specified while calling a system service, and an AST parameter
can be specified, other than following the docs, what causes me to need to
specify 5 parameters in the AST subroutine? Unless I declare the subroutine
with 5 parameters, I don't know what might enforce such a requirement.

Ok, I really should just go and try it myself, but, I'm lazy. Anyone have a
simple answer?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Dave Froble
2021-12-01 20:35:57 UTC
Permalink
Post by Dave Froble
Post by Stephen Hoffman
Post by Phil Howell
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
I'd forgotten about that thread.
What a wonderfully inconsistent trashfire ASTs are.
Somebody at VSI probably now has some (more) writing to do, and some (more)
of the existing documentation to review.
And it seems some BASIC declaration somewhere for the AST API is arguably busted.
Ah, well.
I'm a bit afraid to ask another question. The last question I asked seemed to
start weeks of, not sure what to call it, but some of it was rather nasty.
Oh, well, another question.
I haven't done any research, so the question might have a simple answer.
When an AST is specified while calling a system service, and an AST parameter
can be specified, other than following the docs, what causes me to need to
specify 5 parameters in the AST subroutine? Unless I declare the subroutine
with 5 parameters, I don't know what might enforce such a requirement.
Ok, I really should just go and try it myself, but, I'm lazy. Anyone have a
simple answer?
I usually declare the subroutine with one argument for an AST routine, and
that's the context pointer. That's worked in C and C++ for an aeon or three.
Though whether it breaks with x86-64 port?
And I usually use a pointer to some app-local data structure, as that's where I
stash the IOSB or whatever other connection-specific details are required for
the AST.
It's also where I stash the "unwind in progress" flag, if I'm cancelling some
operation and it's unclear whether the cancel or the AST will arrive first.
If that one-argument declaration is tolerated by BASIC, use it.
The Linker isn't particularly sensitive to API declarations, and will probably
not notice any API differences. API contract "enforcement" here is usually by
app failure.
Otherwise—if BASIC won't play nice with a one-argument AST declaration—specify
the context pointer and whatever other four values will be tolerated by BASIC
and the Linker.
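
A rough C sketch of that context-pointer pattern, with every name made up
for illustration; the struct holds whatever the application needs, and its
address is what gets passed as the astprm value on the $QIO or $SETIMR call:

    struct io_context {
        unsigned short iosb[4];       /* IOSB for the outstanding I/O      */
        int unwind_in_progress;       /* set when cancelling the operation */
        /* ... whatever else this connection needs ... */
    };

    /* AST routine declared with just the one argument: the pointer that
       was supplied as the astprm value when the operation was queued. */
    void io_completion_ast(struct io_context *ctx)
    {
        if (ctx->unwind_in_progress)
            return;                   /* a cancel won the race; do nothing */

        /* ... inspect ctx->iosb, start the next operation, etc. ... */
    }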
Ok, got a bit un-lazy, tried it.

This works:

1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************

SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )

CALL SYS$CANCEL( Loc(CH%) By Value )

SubEnd

This does not work:

1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************

SUB TCP_TIMER( LONG CH% )

CALL SYS$CANCEL( Loc(CH%) By Value )

SubEnd

It seems to have a problem when issuing a read on an I/O channel, not when
invoking the QIO that specifies the AST routine.

I'm not complaining, this was just a test. I'm a bit curious what caused the
error, there was no evident error code or such.

In the debugger, there was a report of "too many arguments" or something like
that. I'm just guessing that at some point Basic caused some count of arguments
and decided that there were too many arguments for the routine as declared.

I hate it when the computer thinks it's smarter than me. Of course that bar
isn't too high.

:-)
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Dave Froble
2021-12-02 02:48:17 UTC
Permalink
Post by Dave Froble
Post by Dave Froble
Post by Stephen Hoffman
Post by Phil Howell
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
I'd forgotten about that thread.
What a wonderfully inconsistent trashfire ASTs are.
Somebody at VSI probably now has some (more) writing to do, and some (more)
of the existing documentation to review.
And it seems some BASIC declaration somewhere for the AST API is arguably busted.
Ah, well.
I'm a bit afraid to ask another question. The last question I asked seemed to
start weeks of, not sure what to call it, but some of it was rather nasty.
Oh, well, another question.
I haven't done any research, so the question might have a simple answer.
When an AST is specified while calling a system service, and an AST parameter
can be specified, other than following the docs, what causes me to need to
specify 5 parameters in the AST subroutine? Unless I declare the subroutine
with 5 parameters, I don't know what might enforce such a requirement.
Ok, I really should just go and try it myself, but, I'm lazy. Anyone have a
simple answer?
I usually declare the subroutine with one argument for an AST routine, and
that's the context pointer. That's worked in C and C++ for an aeon or three.
Though whether it breaks with x86-64 port?
And I usually use a pointer to some app-local data structure, as that's where I
stash the IOSB or whatever other connection-specific details are required for
the AST.
It's also where I stash the "unwind in progress" flag, if I'm cancelling some
operation and it's unclear whether the cancel or the AST will arrive first.
If that one-argument declaration is tolerated by BASIC, use it.
The Linker isn't particularly sensitive to API declarations, and will probably
not notice any API differences. API contract "enforcement" here is usually by
app failure.
Otherwise—if BASIC won't play nice with a one-argument AST declaration—specify
the context pointer and whatever other four values will be tolerated by BASIC
and the Linker.
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
It seems to have a problem when issuing a read on an I/O channel, not when
invoking the QIO that specifies the AST routine.
I'm not complaining, this was just a test. I'm a bit curious what caused the
error, there was no evident error code or such.
In the debugger, there was a report of "too many arguments" or something like
that. I'm just guessing that at some point Basic caused some count of arguments
and decided that there were too many arguments for the routine as declared.
And there is your answer...
$ HELP/LIBRARY=BASICHELP RUN_TIME_ERRORS TOOMANARG
RUN_TIME_ERRORS
TOOMANARG
Too many arguments (ERR=89)
A function call or a SUB or FUNCTION statement passed more arguments
than were expected. Reduce the number of arguments. A SUB or
a function call can pass a maximum of eight arguments. This error
cannot be trapped with a BASIC error handler.
There is also an accompanying TOOFEWARG. BASIC does a lot of things to
protect your fingers from the saw. However, this is not like a blade
guard, more like a pack up the saw in the box and send it back!
Just another reason why BASIC is not one of my favourite languages...
Regards, Tim.
Thanks Tim, that really does clear things up. I got no problem following the
directions, (when all else fails read the directions), I just was curious about why.

I'm told that some compilers are more rigid than others in checking the
argument count when setting up the call to a subroutine or function.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
John Reagan
2021-12-02 16:36:04 UTC
Permalink
Post by Dave Froble
Post by Stephen Hoffman
You are not alone in your confusion
See this post from a long time ago
https://community.hpe.com/t5/Operating-System-OpenVMS/AST-routine-and-C-language-va-count-va-start-va-end-etc/td-p/4878940#.YabUqew8arU
I'd forgotten about that thread.
What a wonderfully inconsistent trashfire ASTs are.
Somebody at VSI probably now has some (more) writing to do, and some (more)
of the existing documentation to review.
And it seems some BASIC declaration somewhere for the AST API is arguably busted.
Ah, well.
I'm a bit afraid to ask another question. The last question I asked seemed to
start weeks of, not sure what to call it, but some of it was rather nasty.
Oh, well, another question.
I haven't done any research, so the question might have a simple answer.
When an AST is specified while calling a system service, and an AST parameter
can be specified, other than following the docs, what causes me to need to
specify 5 parameters in the AST subroutine? Unless I declare the subroutine
with 5 parameters, I don't know what might enforce such a requirement.
Ok, I really should just go and try it myself, but, I'm lazy. Anyone have a
simple answer?
I usually declare the subroutine with one argument for an AST routine, and
that's the context pointer. That's worked in C and C++ for an aeon or three.
Though whether it breaks with x86-64 port?
And I usually use a pointer to some app-local data structure, as that's where I
stash the IOSB or whatever other connection-specific details are required for
the AST.
It's also where I stash the "unwind in progress" flag, if I'm cancelling some
operation and it's unclear whether the cancel or the AST will arrive first.
If that one-argument declaration is tolerated by BASIC, use it.
The Linker isn't particularly sensitive to API declarations, and will probably
not notice any API differences. API contract "enforcement" here is usually by
app failure.
Otherwise—if BASIC won't play nice with a one-argument AST declaration—specify
the context pointer and whatever other four values will be tolerated by BASIC
and the Linker.
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
It seems to have a problem when issuing a read on an I/O channel, not when
invoking the QIO that specifies the AST routine.
I'm not complaining, this was just a test. I'm a bit curious what caused the
error, there was no evident error code or such.
In the debugger, there was a report of "too many arguments" or something like
that. I'm just guessing that at some point Basic caused some count of arguments
and decided that there were too many arguments for the routine as declared.
And there is your answer...
$ HELP/LIBRARY=BASICHELP RUN_TIME_ERRORS TOOMANARG
RUN_TIME_ERRORS
TOOMANARG
Too many arguments (ERR=89)
A function call or a SUB or FUNCTION statement passed more arguments
than were expected. Reduce the number of arguments. A SUB or
a function call can pass a maximum of eight arguments. This error
cannot be trapped with a BASIC error handler.
There is also an accompanying TOOFEWARG. BASIC does a lot of things to
protect your fingers from the saw. However, this is not like a blade
guard, more like a pack up the saw in the box and send it back!
Just another reason why BASIC is not one of my favourite languages...
Regards, Tim.
You can suppress that run-time check (and other heavy-handed BASIC features) with

OPTION INACTIVE=SETUP

In the routine. (Don't yell at me about that ugly syntax. It makes my skin crawl too.)
I saw a reference to a /SETUP and /NOSETUP qualifier while reading the comments, but I don't see that qualifier in the compiler at all.
Arne Vajhøj
2021-12-02 20:14:13 UTC
Permalink
Post by John Reagan
Post by Dave Froble
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
$ HELP/LIBRARY=BASICHELP RUN_TIME_ERRORS TOOMANARG
RUN_TIME_ERRORS
TOOMANARG
Too many arguments (ERR=89)
A function call or a SUB or FUNCTION statement passed more arguments
than were expected. Reduce the number of arguments. A SUB or
a function call can pass a maximum of eight arguments. This error
cannot be trapped with a BASIC error handler.
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
In the routine.
But Basic is correct - there are too many arguments
supplied (or too few arguments expected).

It must be better to fix that than to disable the check.

Arne
Dave Froble
2021-12-02 20:43:04 UTC
Permalink
Post by Arne Vajhøj
Post by John Reagan
Post by Dave Froble
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
$ HELP/LIBRARY=BASICHELP RUN_TIME_ERRORS TOOMANARG
RUN_TIME_ERRORS
TOOMANARG
Too many arguments (ERR=89)
A function call or a SUB or FUNCTION statement passed more arguments
than were expected. Reduce the number of arguments. A SUB or
a function call can pass a maximum of eight arguments. This error
cannot be trapped with a BASIC error handler.
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
In the routine.
But Basic is correct - there are too many arguments
supplied (or too few arguments expected).
It must be better to fix that than to disable the check.
Arne
You going to advocate "fixing" that "lack of argument count check" in C and
other languages too?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2021-12-02 20:55:52 UTC
Permalink
Post by Dave Froble
Post by Arne Vajhøj
Post by John Reagan
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
In the routine.
But Basic is correct - there are too many arguments
supplied (or too few arguments expected).
It must be better to fix that than to disable the check.
You going to advocate "fixing" that "lack of argument count check" in C
and other languages too?
I would not advocate changing C, but I would advocate not using
C unless one really needs the low level features.

C is a great language for code that 40 years ago would have been
written in Macro-32. It is a poor language for code that 40 years
ago would have been written in Pascal, Ada or Basic.

Unfortunately the dominance of C API's for native code is often
forcing the usage of C.

Arne
Bill Gunshannon
2021-12-02 21:20:06 UTC
Permalink
Post by Dave Froble
Post by Arne Vajhøj
Post by John Reagan
Post by Dave Froble
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
$ HELP/LIBRARY=BASICHELP RUN_TIME_ERRORS TOOMANARG
RUN_TIME_ERRORS
TOOMANARG
Too many arguments (ERR=89)
A function call or a SUB or FUNCTION statement passed more arguments
than were expected. Reduce the number of arguments. A SUB or
a function call can pass a maximum of eight arguments. This error
cannot be trapped with a BASIC error handler.
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
In the routine.
But Basic is correct - there are too many arguments
supplied (or too few arguments expected).
It must be better to fix that than to disable the check.
Arne
You going to advocate "fixing" that "lack of argument count check" in C
and other languages too?
They did fix it in C. They called it C++. :-)

bill
Dave Froble
2021-12-02 20:45:36 UTC
Permalink
Post by John Reagan
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
Awesome John. But I have to ask, is there any documentation of all the compiler
options such as that? Maybe I should just RTFM ? Nah, what's the fun in that?
Post by John Reagan
In the routine. (Don't yell at me about that ugly syntax. It makes my skin crawl too.)
I saw a reference to a /SETUP and /NOSETUP qualifier while reading the comments, but I don't see that qualifier in the compiler at all.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2021-12-02 20:59:25 UTC
Permalink
Post by John Reagan
You can suppress that run-time check (and other heavy-handed BASIC features) with
OPTION INACTIVE=SETUP
Awesome John.  But I have to ask, is there any documentation of all the
compiler options such as that?  Maybe I should just RTFM ?  Nah, what's
the fun in that?
The FM called "Basic Reference Manual" lists:

INTEGER OVERFLOW
DECIMAL OVERFLOW
SETUP
DECIMAL ROUNDING
SUBSCRIPT CHECKING

Arne
Bob Gezelter
2021-12-03 00:18:11 UTC
Permalink
Post by Dave Froble
Ok, got a bit un-lazy, tried it.
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% , &
LONG Z2% , &
LONG Z3% , &
LONG Z4% , &
LONG Z5% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
1 !************************************************
! Timer AST Timeout Handler to Cancel I/O
!************************************************
SUB TCP_TIMER( LONG CH% )
CALL SYS$CANCEL( Loc(CH%) By Value )
SubEnd
It seems to have a problem when issuing a read on an I/O channel, not
when invoking the QIO that specifies the AST routine.
I'm not complaining, this was just a test. I'm a bit curious what
caused the error, there was no evident error code or such.
In the debugger, there was a report of "too many arguments" or something
like that. I'm just guessing that at some point Basic caused some count
of arguments and decided that there were too many arguments for the
routine as declared.
I hate it when the computer thinks it's smarter than me. Of course that
bar isn't too high.
:-)
Basic is different from some more low level languages.
But I don't see a problem with Basic here.
The AST function is being called with 5 arguments and
the AST function is declared to receive 1 argument, so
something in the generated code does not like it.
Seems perfectly fair to me.
Obviously the documentation should be very clear
about the 5 arguments.
Arne
Arne,

I know this problem. Encountered it at a client.

BASIC checks the parameter count. Done. C and others do not.

- Bob Gezelter, http://www.rlgsc.com
hb
2021-12-01 21:23:41 UTC
Permalink
The Linker isn't particularly sensitive to API declarations, and will
probably not notice any API differences. API contract "enforcement" here
is usually by app failure.
Otherwise—if BASIC won't play nice with a one-argument AST
declaration—specify the context pointer and whatever other four values
will be tolerated by BASIC and the Linker.
The linker matches symbols, which represent references and definitions.
It complains if it can't find a matching definition for a reference. The
symbol name and the symbol type must match. That is, the linker knows
about data and routines. It will not let you define an object for a
routine reference. That's more or less all the linker does, here.

With C++ you get the API encoded in the symbol name, also known as
"decorated" or "mangled" name. With matching such symbols the linker
implicitly checks the API and does notice a difference, that is, it will
print an unresolved reference warning. For example, if you call (or take
the address of) "foo(int,int)" but only define a "foo(int)" you will see

%ILINK-I-UDFSYM, CX3$_Z3FOOII2INROLH
%ILINK-W-USEUNDEF, undefined symbol CX3$_Z3FOOII2INROLH referenced
source code name: "foo(int, int)"
Chris Townley
2021-12-01 23:24:05 UTC
Permalink
Post by hb
The Linker isn't particularly sensitive to API declarations, and will
probably not notice any API differences. API contract "enforcement" here
is usually by app failure.
Otherwise—if BASIC won't play nice with a one-argument AST
declaration—specify the context pointer and whatever other four values
will be tolerated by BASIC and the Linker.
The linker matches symbols, which represent references and definitions.
It complains if it can't find a matching definition for a reference. The
symbol name and the symbol type must match. That is, the linker knows
about data and routines. It will not let you define an object for a
routine reference. That's more or less all the linker does, here.
With C++ you get the API encoded in the symbol name, also known as
"decorated" or "mangled" name. With matching such symbols the linker
implicitly checks the API and does notice a difference, that is, it will
print an unresolved reference warning. For example, if you call (or take
the address of) "foo(int,int)" but only define a "foo(int)" you will see
%ILINK-I-UDFSYM, CX3$_Z3FOOII2INROLH
%ILINK-W-USEUNDEF, undefined symbol CX3$_Z3FOOII2INROLH referenced
source code name: "foo(int, int)"
ADA picks that up at compile time
--
Chris
Stephen Hoffman
2021-12-02 20:04:17 UTC
Permalink
Post by hb
The Linker isn't particularly sensitive to API declarations, and will
probably not notice any API differences. API contract "enforcement"
here is usually by app failure.
Otherwise—if BASIC won't play nice with a one-argument AST
declaration—specify the context pointer and whatever other four values
will be tolerated by BASIC and the Linker.
The linker matches symbols, which represent references and definitions.
It complains if it can't find a matching definition for a reference.
The symbol name and the symbol type must match. That is the linker
knows about data and routines. It will not let you define an object for
a routine reference. That's more or less all the linker does, here.
With C++ you get the API encoded in the symbol name, also known as
"decorated" or "mangled" name. With matching such symbols the linker
implicitly checks the API and does notice a difference, that is, it
will print an unresolved reference warning. For example, if you call
(or take the address of) "foo(int,int)" but only define a "foo(int)"
you will see
%ILINK-I-UDFSYM, CX3$_Z3FOOII2INROLH
%ILINK-W-USEUNDEF, undefined symbol CX3$_Z3FOOII2INROLH referenced
source code name: "foo(int, int)"
I'm aware of how the OpenVMS linker works here, as well as with the
OpenVMS implementation of mangling, hence my comment quoted above.

Past resolving the associated symbol—symbol resolution which would
necessarily happen with or without name mangling in use—the C and C++
name mangling doesn't involve the OpenVMS linker.

Nor does the OpenVMS linker check the APIs.

Mangling is in some ways used as a workaround for linker limits around
symbol resolution (as is used on OpenVMS), and also used as a
workaround for linker features lacking (across various languages and
compilers and linkers, including as used on OpenVMS).

Mangling is a hack, but it's the hack we have.
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2021-12-03 10:06:23 UTC
Permalink
Post by Stephen Hoffman
Mangling is in some ways used as a workaround for linker limits around
symbol resolution (as is used on OpenVMS), ...
If you refer to the length of the symbol name, there is no limit in the
current VMS linker other than the amount of memory available to the
linker when linking the image. If you refer to another limit or to the
VAX and Alpha linker, let us know.
John Reagan
2021-12-03 15:52:13 UTC
Permalink
Post by hb
Post by Stephen Hoffman
Mangling is in some ways used as a workaround for linker limits around
symbol resolution (as is used on OpenVMS), ...
If you refer to the length of the symbol name, there is no limit in the
current VMS linker other than the amount of memory available to the
linker when linking the image. If you refer to another limit or to the
VAX and Alpha linker, let us know.
On the other hand, the librarian does have some length limits but the max/default is 1024.

Almost all of the imposed limits are in the compiler dating back to the days of the VAX linker and the Alpha linker.
Simon Clubley
2021-12-01 18:32:26 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
Post by Johnny Billquist
I agree with the sentiment here, but this is not something that has
anything to do with VMS.
This is a choice that was done by the people writing the language
runtime system. For some reason they thought it was a good idea to
expose these internal things explicitly in the language. I would not do
it, and it seems you wouldn't either.
But why are you blaming that on VMS?
In addition to the comments posted by Stephen, VMS has what is called
the Common Language Environment, which all DEC compilers must comply
with. The CLE is a standard which sets down rules to allow modules
written in different programming languages to interact with each other.
As another example of how VMS controls the compilers, VMS also supplies
the Structure Definition Language files and the SDL compiler to generate
the language-specific VMS headers from the VMS supplied SDL files. These
VMS-specific headers are not manually created by the compiler teams.
For these reasons, I have always regarded this kind of thing as being
a part of VMS itself and not just something done by the compiler teams.
The compiler teams do not have a free hand here and have always been
driven by standards and processes laid down by VMS engineering.
Now I am confused.
I thought the argument was that VMS VAX was doing
(and VMS Alpha + VMS Itanium continued to for
compatibility reasons):
VMS---(5 args)--->AST function
and many people would have preferred:
VMS---(1 arg)--->AST function
The language and language runtimes of the AST function do
not matter for that. The arguments are there.
That is indeed exactly what this is about and you are right in
what you say above. This is behaviour defined at VMS level, not
at compiler level.

Johnny was thinking this may be a compiler issue that exposed the
extra arguments, not a VMS issue, based on something he appears to
be seeing in Unix. (I don't have enough internals knowledge in that
specific part of Unix to know if he's right about that or not.)

However, on VMS, this is very much a part of VMS, is based on VMS
standards, and is not something done in the compilers. It's just that
the various compilers react differently to what VMS itself is doing.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2021-12-02 16:12:28 UTC
Permalink
Post by Simon Clubley
Post by Arne Vajhøj
Now I am confused.
I thought the argument was that VMS VAX was doing
(and VMS Alpha + VMS Itanium continued to for
compatibility reasons):
VMS---(5 args)--->AST function
and many people would have preferred:
VMS---(1 arg)--->AST function
The language and language runtimes of the AST function do
not matter for that. The arguments are there.
That is indeed exactly what this is about and you are right in
what you say above. This is behaviour defined at VMS level, not
at compiler level.
Right. VMS was doing a bit more than I was expecting. But with that
said, it's still not purely a VMS issue, although I must admit VMS is
trying to bend you here.
Post by Simon Clubley
Johnny was thinking this may be a compiler issue that exposed the
extra arguments, not a VMS issue, based on something he appears to
be seeing in Unix. (I don't have enough internals knowledge in that
specific part of Unix to know if he's right about that or not.)
No, I wasn't thinking Unix at all. But I was thinking a bit RSX (as
usual). There, an AST doesn't really look like a normal call sequence.
So in C for example, you have to explicitly tell the compiler that the
routine is an AST handler, and the compiler will sort the rest out for you.
But even in VMS, if the language allows for variable number of argument
functions, you can certainly make it much more transparent.

But you could have done the same as in RSX, and have the functions be
tagged to be AST handlers, and then the compiler could have done all
kind of stuff under the hood to make it match just fine.

So even though VMS has one convention for how ASTs are called, any
language could easily have made it look any other way it wanted.

But yes, VMS encourages a certain pattern.


In Unix, it's simply that a signal handler is always just called with
one argument (the signal number) and is not supposed to return anything.

Whatever else is saved as a part of dispatching to the signal handler is
not exposed in the function signature, but of course, if someone wants
to be creative and reach up into the stack, all kind of stuff is there.
But it's not standardized, so I doubt anyone would write anything that
would do this.
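
For comparison, a minimal POSIX sketch of that: the handler is handed only
the signal number, and whatever the kernel saved in order to deliver the
signal never appears in its signature.

    #include <signal.h>
    #include <unistd.h>

    /* The handler sees nothing but the signal number. */
    static void on_sigint(int signo)
    {
        (void)signo;
        write(1, "caught SIGINT\n", 14);   /* async-signal-safe output */
    }

    int main(void)
    {
        struct sigaction sa;

        sa.sa_handler = on_sigint;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGINT, &sa, NULL);

        pause();                           /* wait for a signal to arrive */
        return 0;
    }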

And if people want to talk morass, signals in Unix are much worse than
ASTs. The behavior has changed over the years, as well as what kind of
function you are supposed to use to install a signal handler. Things
like what happens if a signal happens while you are in a signal handler,
and what happens when you return from a signal handler have been sort of
undefined and have changed over the years. In old times, you could just
lose signals when you were unlucky.
Post by Simon Clubley
However, on VMS, this is very much a part of VMS, is based on VMS
standards, and is not something done in the compilers. It's just that
the various compilers react differently to what VMS itself is doing.
Well, languages could still do whatever they want. But yeah, the easy
way is to just take what VMS gives straight up, and then you get those
extra arguments that VMS for some reason thought were good to have.

Johnny
Arne Vajhøj
2021-11-26 23:48:30 UTC
Permalink
Post by Simon Clubley
Post by Félim Doyle
Post by Dave Froble
VMS was designed and implemented for VAX, not generic computers.
As I remember it, VAX/VMS was designed by DEC to be its best ever
OS then the VAX hardware was designed and built to run it not the
other way around. There were probably some mistakes made,
unforeseen implementation issues and some miscommunications during
parallel development of the hardware and software but the
facilities that this combination provided, especially in comparison
to the price range of other systems at the time, was
revolutionary.
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it
was used.
That 4-mode architecture could have provided some really truly
radical internal security separation within VMS, but once you are in
any of the 3 inner modes, you can get to any of the other inner modes
so all those extra modes were wasted from a security isolation point
of view.
When that design was done, it was good protection against
accidental S and E mode code trashing something.

20-40 years later, it was realized that it would have been
good if it had also protected against malicious code that
tried to switch to an inner mode.

But so what?

It would also have been great if they had foreseen:
* future HW changes and had built a HAL
* the need for later switching from 32-bit to 64-bit
* etc., etc.

But looking back that way is a rather futile exercise.

There is an old quote:

"A man who never makes a mistake will never make anything."

Nobody can foresee the future and mistakes are made.

Looking back and figuring out what decisions should have been
made is pointless (unless there is a lesson to be learned for
decisions to be made now).

Arne
V***@SendSpamHere.ORG
2021-11-25 23:34:19 UTC
Permalink
Post by Simon Clubley
VMS should have been designed 5-10 years later on than when it was.
In a thread of backpedaling inanities, that has to be the most inane.
Hunter
+1
I am seriously annoyed by that comment Hunter because you have
completely missed (either accidentally or deliberately) the point
I am making (and have made before).
Compared to later operating system designs the internal design
of VMS is a direct product of the 1970s mindset because it is
ugly, hard to alter, not modular, full of internal hacks such
as jumping internally all over the place and was designed when
it was getting close to the end of when assembly language was
considered to be both an acceptable system implementation language
and an application language.
What's hard to alter? Many people say an alternator is difficult to
replace. I did one two weeks ago.
Post by Simon Clubley
VMS has given us great things such as world-leading clustering,
but that doesn't change the ugly nature of its internal design.
Its design is elegant. You just can't get past the fact that it's not
Unix.
Post by Simon Clubley
This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.
I don't find that all that much of an issue.
Post by Simon Clubley
Another such example is playing out right now as we speak.
The engineers at VSI are talented, experienced and generally skilled
overall. However, due to how VMS was designed, it has taken even these
skilled people over 7 years so far to port VMS to x86-64 and they will
not be finished until the middle of next year at the earliest.
LOL. I suppose you could do it in a week. I bow to your greatness.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Arne Vajhøj
2021-11-25 23:51:40 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
This has caused major problems going forward as people tried to
enhance VMS. One such example is the need for a combined 32-bit/64-bit
address space.
I don't find that all that much of an issue.
Having P0 and P2 is not an issue in itself.

But having 32- and 64-bit pointers can be an issue.

Maybe not that often, but it comes up occasionally.

Arne
V***@SendSpamHere.ORG
2021-11-27 00:04:21 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Compared to later operating system designs the internal design
of VMS is a direct product of the 1970s mindset because it is
ugly, hard to alter, not modular, full of internal hacks such
as jumping internally all over the place and was designed when
it was getting close to the end of when assembly language was
considered to be both an acceptable system implementation language
and an application language.
What's hard to alter? Many people say an alternator is difficult to
replace. I did one two weeks ago.
Have you ever looked inside the Linux internals to see how clean
they are compared to VMS internals ?
Statements such as yours are purely unfounded without proof. What is
so *unclean* in the VMS internals? I've never seen a book for Linux
written as well as the OpenVMS Internals and Data Structures Manual. I
demand proof, but not before you answer my question, which you have so
completely ignored or skirted around since the inception of this long
thread.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Johnny Billquist
2021-11-28 23:47:05 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by V***@SendSpamHere.ORG
Post by Simon Clubley
Compared to later operating system designs the internal design
of VMS is a direct product of the 1970s mindset because it is
ugly, hard to alter, not modular, full of internal hacks such
as jumping internally all over the place and was designed when
it was getting close to the end of when assembly language was
considered to be both an acceptable system implementation language
and an application language.
What's hard to alter? Many people say an alternator is difficult to
replace. I did one two weeks ago.
Have you ever looked inside the Linux internals to see how clean
they are compared to VMS internals ?
Statement such as yours are purely unfounded without proofs. What is
so *unclean* in the VMS internals? I've never seen a book written as
well as the OpenVMS Internal and Data Structures Manual for Linux. I
demand a proof but not before you answer my question that you have so
completely ignored or skirted around since the inception of this long
thread.
In a way, David is actually completely incorrect. The internals of Linux
are changing all the time. You'll have problems running software more
than a year or two old if it touches kernel internals, since the APIs inside
the kernel are constantly changing.

And why is that? Well, obviously because the people writing them and
using them constantly realize ways in which they are not good.

The various BSD systems would be better arguments here. They actually
try to think a little more before doing something, as opposed to Linux,
which is really a case of "do first, think later".

So, no. The claims Simon makes are totally unfounded. However, it is
fairly easy to change and evolve things in Linux or other Unix-like systems,
which shows that there is *something* right in there. But it's
not that the internals are well designed, stable, and well working.
There is a simplicity and modularity, which can be traced back to
Unix of old, and which has been a big reason for the good properties in there.
But had the Linux kids been doing things from scratch on their own, it
would most likely have been a mess that no one would have wanted to touch.

Johnny
Scott Dorsey
2021-11-29 00:54:40 UTC
Permalink
Post by Johnny Billquist
In a way, David is actually completely incorrect. The internals of Linux
is changing all the time. You'll have problems running software more
than a year or two old if it's kernel internal, since the APIs inside
the kernel constantly is changing.
This is absolutely true, and I am really pissed off about it.
Post by Johnny Billquist
And why is that? Well, obviously because the people writing them and
using them constantly realize ways in which they are not good.
It's because there is change for change's sake. I don't see dramatic
improvements taking place.
Post by Johnny Billquist
The various BSD systems would be better arguments here. They actually
try to think a little more before doing something, as opposed to Linux,
which is really a case of "do first, think later".
Yes, BSD is much more sane in their approach to making updates of any
sort, and has avoided fundamental changes away from the Unix model in
the user space as well as the kernel.
Post by Johnny Billquist
So, no. The claims Simon make are totally unfounded. However, it is
fairly easy to change and evolve in Linux or other Unix like systems,
which shows that there is *something* that is right in there. But it's
not the internals are well designed, stable, and well working.
But there is a simplicitly and modularity, which can be traced back to
Unix of old, which have been a big reason for the good properties in there.
But had the Linux kids been doing things from scratch on their own, it
would most likely have been a mess that noone would have wanted to touch.
Modularity is a major, major deal here.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Simon Clubley
2021-11-29 19:08:39 UTC
Permalink
Post by V***@SendSpamHere.ORG
Have you ever looked inside the Linux internals to see how clean
they are compared to VMS internals ?
Statement such as yours are purely unfounded without proofs. What is
so *unclean* in the VMS internals? I've never seen a book written as
well as the OpenVMS Internal and Data Structures Manual for Linux. I
demand a proof but not before you answer my question that you have so
completely ignored or skirted around since the inception of this long
thread.
Since I have answered your question, it's time for me to ask one in
return.

If the VMS internals are as clean as you say, then how can a person easily
and cleanly add a new filesystem to VMS with effort comparable to
adding one to Linux or another Unix variant that supports
modular filesystems ?

That is not an academic question BTW, Brian.
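To make "modular filesystems" concrete, this is roughly the shape of the
Linux side of it: a kernel module fills in a file_system_type and registers
it with the VFS. This is a skeleton only (the examplefs name and the empty
mount callback are made up, and all of the real work - superblock, inodes,
file operations - is omitted), but the point is that the hook exists and is
the same one every other filesystem uses:

/* Heavily simplified sketch of registering a filesystem type with the
 * Linux VFS from a kernel module.  "examplefs" and examplefs_mount are
 * made-up names; the real mount/superblock logic is omitted.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/err.h>
#include <linux/errno.h>

static struct dentry *examplefs_mount(struct file_system_type *fs_type,
                                      int flags, const char *dev_name,
                                      void *data)
{
    /* A real filesystem would call mount_bdev()/mount_nodev() here and
     * fill in a superblock with its inode and file operations. */
    return ERR_PTR(-ENOSYS);
}

static struct file_system_type examplefs_type = {
    .owner   = THIS_MODULE,
    .name    = "examplefs",
    .mount   = examplefs_mount,
    .kill_sb = kill_litter_super,
};

static int __init examplefs_init(void)
{
    return register_filesystem(&examplefs_type);
}

static void __exit examplefs_exit(void)
{
    unregister_filesystem(&examplefs_type);
}

module_init(examplefs_init);
module_exit(examplefs_exit);
MODULE_LICENSE("GPL");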

After the port is finally completed, what is desperately needed is a
brand new filesystem for VMS that matches the hardware of today and the
requirements of today. VSI have tried this twice now, and have abandoned
both efforts (at least for now).

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
V***@SendSpamHere.ORG
2021-11-25 23:42:58 UTC
Permalink
Post by Simon Clubley
VMS should have been designed 5-10 years later on than when it was.
In a thread of backpedaling inanities, that has to be the most inane.
Hunter
+1
I am seriously annoyed by that comment Hunter because you have
completely missed (either accidentally or deliberately) the point
I am making (and have made before).
Trolling and changing the thread subject line does not excuse you from
answering the question I asked. <crickets... in November nonetheless>

Troll, troll troll, the troll is marching...
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Simon Clubley
2021-11-26 19:26:31 UTC
Permalink
Post by V***@SendSpamHere.ORG
Trolling and changing the thread subject line does not excuse you from
answering the question I asked. <crickets... in November nonetheless>
Troll, troll troll, the troll is marching...
I already answered your question, Brian, as you would have seen if you
had actually read my replies.

It turns out that those arguments are available to the called
function because VMS needs to preserve those registers across the AST
call, and some bright spark during the design of VMS thought it was
acceptable to push those registers onto a user-visible call frame instead
of storing them in a private area that the called AST function did not
have access to.

That's the kind of design decision that was so insane it should never
have passed a design review but at least it appears to have been fixed
in later architectures with a private storage area, just like it should
have been on VAX.

And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-26 20:56:05 UTC
Permalink
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
So anything you don't know the reason for is by default a Macro-32 problem??

Arne
Simon Clubley
2021-11-29 18:57:21 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
So anything you don't know why is by default a Macro-32 problems??
No. Read my detailed reply to Dan.

In my eyes, there must have been a reason for those registers being
present, and only Macro-32 code could have made any use of them if
they were going to be used by the AST routine itself.

I simply had never considered the possibility that private registers
had been exposed to the called AST routine (instead of being stored
privately by the AST dispatcher) for the dispatcher to use on return
from the AST routine.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-29 19:13:53 UTC
Permalink
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
So anything you don't know why is by default a Macro-32 problems??
No. Read my detailed reply to Dan.
In my eyes, there must have been a reason for those registers being
present and only Macro-32 code could have made any use of them if
they were going to be used by the AST routine itself.
So do something only possible in Macro-32 and not possible in C?

And that is still relevant for "application programming"?

Arne
Simon Clubley
2021-11-29 19:34:00 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
So anything you don't know why is by default a Macro-32 problems??
No. Read my detailed reply to Dan.
In my eyes, there must have been a reason for those registers being
present and only Macro-32 code could have made any use of them if
they were going to be used by the AST routine itself.
So do something only possible in Macro-32 and not possible in C?
And that is still relevant for "application programming"?
That was the thinking yes, before I found out the real reason.

Before that, I _was_ thinking something along the lines of maybe
being able to disturb program state directly in a way that an HLL
would not, because the code generated by the HLL compiler would be
guaranteed to save and restore program state for you.

The problem was that I couldn't see what, and now I know the reason why. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Dave Froble
2021-11-26 21:56:09 UTC
Permalink
Post by Simon Clubley
it never even occurred to me. :-)
That seems to happen a lot ...
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
V***@SendSpamHere.ORG
2021-11-27 00:35:23 UTC
Permalink
Post by Simon Clubley
Post by V***@SendSpamHere.ORG
Trolling and changing the thread subject line does not excuse you from
answering the question I asked. <crickets... in November nonetheless>
Troll, troll troll, the troll is marching...
I already answered your question Brian as you would have seen if you
had actually read my replies.
It turns out that the reason those arguments are available to the called
function is because VMS needs to preserve those registers across the AST
call and some bright spark during the design of VMS thought it was
acceptable to push those registers onto a user-visible call frame instead
of storing them in a private area that the called AST function did not
have access to.
That's the kind of design decision that was so insane it should never
have passed a design review but at least it appears to have been fixed
in later architectures with a private storage area, just like it should
have been on VAX.
If you really must know, read the "VAX/VMS I&DS", Chapter 7, Section 7.5,
in the V5.2 edition. Oh, I keep forgetting, you despise processor modes,
so that'll just open this up to more "wank, wank, wank VMS IS NOT UNIX"
hatred you spew.
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
What? What? WHAT? You mean it wasn't because there's something you could
do using C that one could not do with Macro-32? Is that what you're saying?
Were you aware that all those "elegant" Murray Hill hieroglyphics you're so
fond of actually get converted to machine code by these things they call
compilers? On a VAX, that'd be VAX machine code whose assembly is called,
wait for it, Macro-32. The AST mechanism was devised to allow "high level"
languages or C to be used for the AST routine, so why would Macro-32 be ANY
less capable? I knew you couldn't answer this, but boy, you sure can exhale
volumes of Ye Olde Hot Air.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Andrew Commons
2021-11-27 05:46:14 UTC
Permalink
I've been watching this thread with a mixture of amusement and horror. The
triggering thread regarding AST routines was equally enlightening. In
fact, if I ever feel the urge to start a thread here, I will probably make
the subject line something like this:

<My Topic>, was: Re: Something Simon Clubley felt strongly about

Purely out of curiosity, to see if it stopped forking :)

So, for Simon...
Post by Simon Clubley
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it was
used.
Well, a bit like Intel implementing a 4-mode architecture and then having
Microsoft completely blow how it is used?

Note that the OS/2 update that Cutler and Co were originally hired to work
on used 3 of the modes. When Windows looked like becoming a success, the
project switched to a Windows upgrade instead. Gates wanted it to run on
consumer hardware, so things got dropped. There are still 4 modes available
and I'm sure VSI are using them.
Post by Simon Clubley
That 4-mode architecture could have provided some really truly radical
internal security separation within VMS, but once you are in any of the
3 inner modes, you can get to any of the other inner modes so all those
extra modes were wasted from a security isolation point of view.
Put your money where your mouth is. Prove it. Post examples that show a
fundamental flaw rather than an Ooops in a single privileged program.
Post by Simon Clubley
In case you are wondering, you can escalate from supervisor mode because
DCL has access to the privileges of the programs it runs even though it
doesn't actually need them. That kind of thing should have stayed within
the kernel so DCL never sees those privileges.
If this were such a huge fundamental problem, I would expect masses of
vulnerability reports. Where are they? Post examples.

When a program runs in a privileged context, those writing the program
obviously need to exercise care. Ideally you enable and disable privileges
on a just-in-time basis and, obviously, when you are operating in a mode
more privileged than the originating mode, any inputs from the less
privileged mode must be treated with caution. Failing to do this on one
occasion does not invalidate the security model.
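On VMS that just-in-time pattern is typically a pair of SYS$SETPRV calls
bracketing the operation. A rough sketch only (the routine names are
invented, and depending on header strictness the privilege-mask argument
may need a cast):

/* Sketch: enable SYSPRV only around the operation that needs it,
 * then drop it again.  do_privileged_work() is a placeholder for
 * whatever actually requires the privilege.
 */
#include <starlet.h>
#include <prvdef.h>

static void do_privileged_work(void) { /* placeholder */ }

int use_sysprv_briefly(void)
{
    unsigned int prvmsk[2] = { PRV$M_SYSPRV, 0 };   /* quadword mask */
    unsigned int status;

    status = sys$setprv(1, prvmsk, 0, 0);   /* enable, not permanent */
    if (!(status & 1))
        return status;                      /* not authorized, etc. */

    do_privileged_work();

    status = sys$setprv(0, prvmsk, 0, 0);   /* disable again */
    return status;
}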

I will now scrub my cookies and history back to bedrock, which I recommend
after logging in to anything Google-related.
John Doppke
2021-11-28 14:13:17 UTC
Permalink
Post by Andrew Commons
I've been watching this thread with a mixture of amusement and horror. The
triggering thread regarding AST routines was equally enlightening. In
fact if I ever feel the urge to start a thread here I will probably make
<My Topic>, was: Re: Something Simon Clubley felt strongly about
Purely curiosity to see if it stopped forking :)
So, for Simon...
Post by Simon Clubley
One of the biggest mistakes made is that DEC went to the trouble of
implementing a 4-mode architecture and then completely blew how it was
used.
Well, a bit like Intel implementing a 4-mode architecture and then having
Microsoft completely blow how it is used?
Note that the OS/2 update that Cutler and Co were originally hired to work
on used 3 of the modes. When Windows looked like becoming a success then
it switched to a Windows upgrade instead. Gates wanted it to run on
consumer hardware, so things got dropped. There are still 4 modes available
and I'm sure VSI are using them.
Post by Simon Clubley
That 4-mode architecture could have provided some really truly radical
internal security separation within VMS, but once you are in any of the
3 inner modes, you can get to any of the other inner modes so all those
extra modes were wasted from a security isolation point of view.
Put your money where your mouth is. Prove it. Post examples that show a
fundamental flaw rather than an Ooops in a single privileged program.
Post by Simon Clubley
In case you are wondering, you can escalate from supervisor mode because
DCL has access to the privileges of the programs it runs even though it
doesn't actually need them. That kind of thing should have stayed within
the kernel so DCL never sees those privileges.
If this was such a huge fundamental problem I would expect masses of
vulnerability reports. Where are they? Post examples.
When a program runs in a privileged context then those writing the program
obviously need to exercise care. Ideally you enable/disable privileges
in a Just In Time basis and, obviously, when you are operating in a mode
higher than the originating mode any inputs from the lower mode must be
treated with caution. Failing to do this on one occasion does not invalidate
the security model.
I will now scrub my cookies and history back to bedrock which I recommend
after logging in to anything Google related.
Every time I see these posts I think "My God, what have I started?"
Dave Froble
2021-11-28 14:26:47 UTC
Permalink
Post by John Doppke
Every time I see these posts I think "My God, what have I started?"
Actually, nothing. This general "conversation" has been going on for some time
now. It's like a tornado, touch down here, touch down there, etc.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2021-11-29 19:13:00 UTC
Permalink
Post by John Doppke
Every time I see these posts I think "My God, what have I started?"
Relax John, it's not _you_ they want to kill... :-)

Besides, people around here fantasize about killing me on a regular basis. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Johnny Billquist
2021-11-29 01:33:16 UTC
Permalink
Post by Simon Clubley
It turns out that the reason those arguments are available to the called
function is because VMS needs to preserve those registers across the AST
call and some bright spark during the design of VMS thought it was
acceptable to push those registers onto a user-visible call frame instead
of storing them in a private area that the called AST function did not
have access to.
It seems you understand neither how interrupts (hardware or software)
work, nor where the problem actually is here.
Post by Simon Clubley
That's the kind of design decision that was so insane it should never
have passed a design review but at least it appears to have been fixed
in later architectures with a private storage area, just like it should
have been on VAX.
No. You're nuts, and don't seem to understand some very simple things.
Post by Simon Clubley
And the reason I thought it _must_ be Macro-32 related was because the
real reason was so crazy it never even occurred to me. :-)
Well, after reading your rants, I'm not sure you even understand what
the problem actually is.


Let's first observe that saving information on the stack when an interrupt
happens is both common and normal, and is not unique to VAX or VMS.
All systems do it, even Unix.
What do you think happens when you get a signal? The previous context is
stored on your stack, and your signal handler is called. Exactly the
same as with an AST under VMS. There is literally no difference.

And when writing in assembler, it's all very simple and straightforward.
You don't normally care what's on the stack beyond the possible
arguments you have there that you might need to remove before
returning. In VMS, that would be the AST-dependent parameters, which are
there for your use, and which you should remove before returning (I
think you need to remove them manually, but I might be misremembering.
You need to do it manually in RSX at least).

Where things go a bit weird is that in some (maybe all?) high level
languages, some additional stack content is exposed in the function
signature. There is absolutely no requirement that it needs to be. But
this has nothing to do with VMS, and everything to do with the language
implementation in relation to ASTs.
Why DEC chose to expose these elements on the stack to the AST function
is beyond me. You could possibly argue that there could be some reason
for sometimes being able to examine these in order to do something
specific, but in general I can't really think of a good reason, and if I
had designed the language environment, I'd have kept them out of visibility.
It would be easy to do this, so it was obviously a conscious decision by
DEC for the high level languages to expose the information.

But again - this does not have anything to do with VMS itself, and VMS
isn't doing anything different from what Unix does (well, actually ASTs work
much better than signals, but the reason and context for that should be
a different thread).

Unix, on the other hand, made the decision that the similar kind of
information on the stack when a signal handler is called is not
exposed in the function signature.
That makes much more sense if you ask me, but this is something for the C
language, and doesn't have anything to do with Unix as such. The
information is there on the stack. It's just not exposed to the function.

Johnny
Simon Clubley
2021-11-29 19:27:02 UTC
Permalink
Post by Johnny Billquist
Post by Simon Clubley
It turns out that the reason those arguments are available to the called
function is because VMS needs to preserve those registers across the AST
call and some bright spark during the design of VMS thought it was
acceptable to push those registers onto a user-visible call frame instead
of storing them in a private area that the called AST function did not
have access to.
It seems you understand neither how interrupts (hardware or software)
work, nor where the problem actually is here.
I actually do understand how all this works (or should work). Read my
reply to Dan.

I write bare metal assembly language dispatchers that call C language
interrupt handlers on a regular basis, and they sometimes need to store
information outside of the usual registers. That information is always
private to the interrupt dispatcher, not the C language interrupt handler.
Post by Johnny Billquist
Post by Simon Clubley
That's the kind of design decision that was so insane it should never
have passed a design review but at least it appears to have been fixed
in later architectures with a private storage area, just like it should
have been on VAX.
No. You're nuts, and don't seem to understand some very simple things.
I'm not nuts. I just have some firm opinions about how it should work... :-)
Post by Johnny Billquist
Why DEC choose to expose these elements on the stack to the AST function
is beyond me. You could possibly argue that there could be some reason
for sometimes being able to examine these in order to do something
specific, but in general I can't really think of a good reason, and if I
had designed the language environment, I'd kept them out of visibility.
It would be easy to do this, so it was obviously a conscious decision by
DEC for the high level languages to expose the information.
This is the bit I simply don't get either.
Post by Johnny Billquist
But again - this do not have anything to do with VMS itself, and VMS
isn't doing anything different than Unix is (well, actually ASTs work
much better than signals, but the reason and context for that should be
a different thread).
Unix on the other hand made the decision that the similar kind of
information on the stack when a signal handler is called is not being
exposed in the function signature.
Makes much more sense if you ask me, but this is something for the C
language, and don't have anything to do with Unix as such. The
information is there on the stack. It's just not exposed to the function.
This is how I design my own bare metal setups and it works robustly
and cleanly. The information is still on the stack, but it is strictly
outside of the bounds of the stack space that the called routine is
allowed to touch or view. The information is considered to belong to
the dispatcher and _not_ to the routine the dispatcher calls.
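A rough C sketch of that separation, with every name invented purely for
illustration (a real dispatcher would be assembly and would save actual
machine registers):

/* Sketch of a dispatcher keeping its saved state private to itself. */
typedef void (*int_handler_t)(unsigned int cause);

struct saved_context {
    unsigned long scratch_regs[4];   /* stand-in for saved registers */
};

/* Storage owned by the dispatcher, never exposed to the handler. */
static struct saved_context dispatcher_private;

void dispatch_interrupt(unsigned int cause, int_handler_t handler,
                        const unsigned long live_regs[4])
{
    /* Save state the handler must not see or rely on. */
    for (int i = 0; i < 4; i++)
        dispatcher_private.scratch_regs[i] = live_regs[i];

    /* The handler's view of the world is exactly its declared argument. */
    handler(cause);

    /* Restore from the private area on the way out (details elided). */
}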

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
chris
2021-12-01 23:16:23 UTC
Permalink
Post by Simon Clubley
This is how I design my own bare metal setups and it works robustly
and cleanly. The information is still on the stack, but it is strictly
outside of the bounds of the stack space that the called routine is
allowed to touch or view. The information is considered to belong to
the dispatcher and _not_ to the routine the dispatcher calls.
Simon.
Let's qualify that by saying the information is not in any protected
memory space and could be accessed using an offset add / subtract
from the current stack pointer, whatever function the code is currently
running in. I seem to remember examples of that for context switching in
embedded OSs.

It's quite common in some work to specify a call interface as a
structure + pointer for some subgroup of functionality. Some of
the structure elements are used within the called functions and some
are call parameters, but the whole structure is visible to both
sides. I don't see a problem with that...
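Something like this generic sketch (all names invented): one structure
carries the caller's parameters and the fields the called side fills in or
uses internally, and both sides can see the whole thing:

/* Generic sketch of a structure-plus-pointer call interface. */
#include <stddef.h>

struct xfer_request {
    /* Call parameters supplied by the caller. */
    const void *buffer;
    size_t      length;
    unsigned    flags;

    /* Fields used and updated by the called side. */
    size_t      bytes_done;
    int         status;
    void       *internal_state;   /* visible to the caller, even if it
                                     is really none of its business */
};

int xfer_submit(struct xfer_request *req)
{
    /* The callee reads the parameters and updates its own fields;
     * the caller can see all of it. */
    req->bytes_done = 0;
    req->status = 0;
    req->internal_state = NULL;
    /* ... do the work ... */
    req->bytes_done = req->length;
    return req->status;
}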

Chris
Hunter Goatley
2021-11-29 04:47:31 UTC
Permalink
Post by Simon Clubley
VMS should have been designed 5-10 years later on than when it was.
In a thread of backpedaling inanities, that has to be the most inane.
Hunter
+1
I am seriously annoyed by that comment Hunter because you have
completely missed (either accidentally or deliberately) the point
I am making (and have made before).
I didn't miss your point. Yes, you're right, there are things in VMS
that make it difficult to port to other architectures.

But your comment I quoted above is inane. If VMS had been designed 5-10
years later, I doubt it ever would have existed. DEC wouldn't have been
what they were. VAX would have never been what it was.

VMS is what it is. You might not like it, but it did what it was
supposed to do when it was designed, and it did it very well.

Your comment is like saying that people shouldn't have developed
cell phones until the smart phone was designed. Building blocks. Sure,
that Nokia flip-phone isn't much now, but we wouldn't be where we are if
it hadn't existed.

VMS was designed when it was designed because they needed it, and its
design was exceptional for what it needed to do. You can't change
history or blame the original designers for not thinking of things that
were difficult to imagine at the time.
--
Hunter
------
Hunter Goatley, Process Software, http://www.process.com/
***@goatley.com http://hunter.goatley.com/