Discussion:
UNIX/Linux features I wish VMS had
Neil Rieck
2016-06-30 11:28:39 UTC
Some people reading this will not know that UNIX/Linux systems (depending upon the flavour) have up to 6 run levels which can be seen in this brief overview:

https://linuxonfire.wordpress.com/2012/10/19/what-are-init-0-init-1-init-2-init-3-init-4-init-5-init-6-2/

(on our Solaris systems "level = 3" means "fully running")

So to partially shut down a system running at level 3, the sysadmin need only type "init 1". This causes a series of KILL scripts to run, terminating the processes associated with levels 3 and then 2, but not level 1.

To go to single-user maintenance mode, the sysadmin types "init s".
To shut down, the sysadmin can type either "shutdown" or "init 0".
You can also boot to any level, including "s".

Contrast this with VMS, where you need to do a conversational boot and then edit certain parameters to prevent the startup scripts from running.

Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/
Henry Crun
2016-06-30 12:03:39 UTC
Post by Neil Rieck
https://linuxonfire.wordpress.com/2012/10/19/what-are-init-0-init-1-init-2-init-3-init-4-init-5-init-6-2/
(on our Solaris systems "level = 3" means "fully running")
So to partially shut down a system running at level 3, the sysadmin need only type "init 1". This causes a series of KILL scripts to run, terminating the processes associated with levels 3 and then 2, but not level 1.
To go to single-user maintenance mode, the sysadmin types "init s".
To shut down, the sysadmin can type either "shutdown" or "init 0".
You can also boot to any level, including "s".
Contrast this with VMS, where you need to do a conversational boot and then edit certain parameters to prevent the startup scripts from running.
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/
That reminds me of why Linux boxes -- mainly desktops -- are easier to shut down
than servers:

To shut down a desktop - you flip the switch.

To shut down a mini-computer - you get confirmation from all the users, then
you run the shutdown procedure.

To shut down a mainframe - wadda you mean, shut down!
--
Mike R.
Home: http://alpha.mike-r.com/
QOTD: http://alpha.mike-r.com/qotd.php
No Micro$oft products were used in the URLs above, or in preparing this message.
Recommended reading: http://www.catb.org/~esr/faqs/smart-questions.html#before
and: http://alpha.mike-r.com/jargon/T/top-post.html
Missile address: N31.7624/E34.9691

Kerry Main
2016-06-30 13:29:11 UTC
-----Original Message-----
Henry Crun via Info-vax
Sent: 30-Jun-16 8:04 AM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Neil Rieck
Some people reading this will not know that UNIX/Linux systems
(depending upon the flavour) have up to 6 run levels which can be seen
in this brief overview:
https://linuxonfire.wordpress.com/2012/10/19/what-are-init-0-init-1-init-2-init-3-init-4-init-5-init-6-2/
(on our Solaris systems "level = 3" means "fully running")
So to partially shut down a system running at level 3, the sysadmin need
only type "init 1". This causes a series of KILL scripts to run,
terminating the processes associated with levels 3 and then 2, but not
level 1.
To go to single-user maintenance mode, the sysadmin types "init s".
To shut down, the sysadmin can type either "shutdown" or "init 0".
You can also boot to any level, including "s".
Contrast this with VMS, where you need to do a conversational boot and
then edit certain parameters to prevent the startup scripts from running.
Neil Rieck
Waterloo, Ontario, Canada.
http://www3.sympatico.ca/n.rieck/
That reminds me of why Linux boxes -- mainly desktops -- are easier to
shut down than servers:
To shut down a desktop - you flip the switch.
To shut down a mini-computer - you get confirmation from all the users,
then you run the shutdown procedure.
To shut down a mainframe - wadda you mean, shut down!
While I have a healthy respect for mainframe computing, their uptime
reputation is not as good as one typically hears.

While I am sure it has changed now, for many decades the mainframe
culture (others used this as well, btw) had convinced its customers
that "planned" maintenance was not to be counted against their uptime
and availability stats.

Mainframes would typically go down for a number of hours at least once
per month for preventive maintenance, but their uptime was still
recorded as 100%.

In today's globally connected world, downtime is downtime, and customers
have little patience with "planned" downtime if the hosted service is
impacted.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
IanD
2016-07-09 19:51:15 UTC
On Thursday, June 30, 2016 at 11:35:04 PM UTC+10, Kerry Main wrote:

<snip>
Post by Kerry Main
While I have a healthy respect for mainframe computing, their uptime
reputation is not as good as one typically hears.
While I am sure it has changed now, but for many decades, mainframe
type culture (others used this as well btw) had convinced their
customers
that "planned" maint was not to be counted against their uptime and
availability stats.
Mainframes would typically go down for a number of hours at least once
per month for prev maint., but their uptime was still recorded as 100%.
In today's globally connected world, downtime is downtime and Custs
have little patience with "planned" downtimes if the hosted service is
impacted.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com
That was the same with most IT shops back in the '90s. I remember we had different uptime measurements, some that included every outage and some that didn't.

The difference I saw in mainframe versus mini was the amount of money thrown at the computer facilities. Mainframes had redundant everything while minis were not given the same redundancy in terms of supporting infrastructure.

Banks had deep pockets, and when IBM said your system needed a ton of backup everything, it was just part of operating that type of platform.

I remember a certain water-cooled mainframe having to have multiple water supplies coming into the building from different streets just to satisfy the redundancy provisions.

In large centres the customer would pay IBM to have an engineer sit there 24 x 7 just in case.

I remember DEC pushing the preventative maintenance schedule too. Every month we would take down our systems. Then you had the six-month ones where they supposedly did more. Eventually hardware got good enough, and there was good enough diagnostic stuff around, that this money-making practice was put to rest.

These days, for major applications, the customer will not accept any downtime, to the point where instead of accepting a lengthy outage for major changes they may even build a parallel system to run alongside the primary one so as to minimise the outage window. This is one of the reasons VMs are so popular: it's dead easy to migrate a live instance off to another hardware instance, so that hardware changes or faults can be fixed with zero downtime.

The more physically tied to a bit of hardware you are, the more inflexible you are seen to be.
Kerry Main
2016-07-10 00:51:04 UTC
-----Original Message-----
via Info-vax
Sent: 09-Jul-16 3:51 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
<snip>
Post by Kerry Main
While I have a healthy respect for mainframe computing, their uptime
reputation is not as good as one typically hears.
While I am sure it has changed now, but for many decades, mainframe
type culture (others used this as well btw) had convinced their
customers
that "planned" maint was not to be counted against their uptime and
availability stats.
Mainframes would typically go down for a number of hours at least once
per month for prev maint., but their uptime was still recorded as 100%.
In today's globally connected world, downtime is downtime and Custs
have little patience with "planned" downtimes if the hosted service is
impacted.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com
That was the same with most IT shops back in the 90's. I remember we
had different up time measurements, some that included every outage
and some that didn't.
The difference I saw in mainframe versus mini was the amount of money
thrown at the computer facilities. Mainframes had redundant everything
while minis were not given the same redundancy in terms of supporting
infrastructure.
Banks had deep pockets and when IBM said your system needed a ton of
backup everything, it was just part of operating that type of platform
I remember a certain water cooled mainframe having to have multiple
water supplies coming into the building from different streets just to
satisfy the redundancy provisions
In large centres the customer would pay IBM to have an engineer sit
there 24 x 7 just in case.
I remember DEC also pushing the preventative maintenance schedule
too. Every month we would take down our systems. Then you has the 6
month ones where they supposedly did more. Eventually hardware got
good enough and there was good enough diagnostic stuff around that
this money making practice was put to rest
Yep - back in my field days, I spent many an early AM doing these PMs as
well. Showed up in the early AM with scope and tool kit. Of course, back
then the disk drives had filters that needed replacement, disk head
alignments had to be checked, and diag scripts were run to shake out any
intermittents. We might also replace some HW that was showing up with
errors in the error log.

When done right, the morning PM was coordinated to happen just after
the previous evening's backups had completed (just in case, of course).

The reason for doing the PM after backups was to mitigate potential
issues, such as one support call I went on to assist with an upset
customer: the tech had asked the customer to remove all their disk packs.
They did (kinda), but the operator forgot two that were not in the same
line as the rest of the disks. The tech put in some scratch packs and
started running disk diags, which went out and started R/W'ing all
available packs .. you can guess what happened then.
These days, for major applications, the customer will not accept any
downtime, to the point where instead of accepting a lengthy outage for
major changes they may even build a parallel system to run alongside
the primary one so as to minimise the outage window. This is one of the
reasons VMs are so popular: it's dead easy to migrate a live instance
off to another hardware instance, so that hardware changes or faults
can be fixed with zero downtime.
The more physically tied to a bit of hardware you are, the more
inflexible you are seen to be.
Today it's all about service availability - not HW availability. The
business does not care if an OS/server/disk fails, as long as it is
transparent to the business and it is noticed and fixed asap.

Btw, unless you have set up proper alerting, it's not always easy to
notice when things break when technologies like RAID and clustering are
used in a lights-out DC.

This is another reason, btw, for the current high OpenVMS cluster
licensing levels to be addressed in the future. Ideally, you want all
medium-to-high-end VMS apps to be clustered - especially when you get
into the x86-64 world and one is competing with the likes of VMware.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
l***@gmail.com
2016-06-30 22:55:30 UTC
Post by Neil Rieck
Some people reading this will not know that UNIX/Linux systems (depending
upon the flavour) have up to 6 run levels ...
The whole concept of “run level” has been rendered obsolete by systemd <https://www.freedesktop.org/software/systemd/man/telinit.html>.
Johnny Billquist
2016-07-01 12:22:45 UTC
Post by l***@gmail.com
Post by Neil Rieck
Some people reading this will not know that UNIX/Linux systems (depending
upon the flavour) have up to 6 run levels ...
The whole concept of “run level” has been rendered obsolete by systemd <https://www.freedesktop.org/software/systemd/man/telinit.html>.
Run levels are a weird thing to start with, and they do not exist on BSD systems.

And anyone who uses systemd deserves whatever he gets...

Johnny
j***@yahoo.co.uk
2016-07-01 19:29:19 UTC
Post by l***@gmail.com
Post by Neil Rieck
Some people reading this will not know that UNIX/Linux systems (depending
upon the flavour) have up to 6 run levels ...
The whole concept of “run level” has been rendered obsolete by systemd <https://www.freedesktop.org/software/systemd/man/telinit.html>.
I was wondering if anyone would mention systemd vs. runlevels. The concept
hasn't entirely vanished, but it's certainly moved on incompatibly. Which is
good, right? It keeps sysadmins, devops people, RedHat devotees, etc. in
employment.
Richard Levitte
2016-07-01 14:47:14 UTC
If there's any feature I'd like to see on VMS, then it's runtime shared image resolution by name instead of (or as well as) symbol vector position.

Cheers,
Richard
Stephen Hoffman
2016-07-01 16:21:36 UTC
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime shared
image resolution by name instead of (or as well as) symbol vector
position.
The dlopen() routines and the lib$find_image_symbol() call and related
are about the best that's available for that sort of thing. Beyond
those, I haven't encountered any run-time tools within OpenVMS for
processing executables either on storage or activated and mapped, and
the image activator tends to be a pretty dark and complex place.
There is some open source in this area, but run-time tools for
introspection aren't particularly available. The innards of the
debugger, of symbol resolution and mangling, of the source code
analyzer, and related, just don't have a whole lot of documentation.
This outside of HPE or VSI, that is.
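
To make that concrete, here is a minimal sketch of the lib$find_image_symbol()
route. The image and routine names are placeholders, and error handling is
reduced to a status check; in real code the call can also signal, so a
condition handler is often established around it:

#include <descrip.h>
#include <lib$routines.h>

typedef int (*entry_t)(void);     /* assumed signature of the target routine */

int call_by_name(void)
{
    $DESCRIPTOR(image_d, "MYSHR");        /* shareable image name (placeholder) */
    $DESCRIPTOR(symbol_d, "MY_ROUTINE");  /* universal symbol name (placeholder) */
    entry_t entry = 0;
    int status;

    /* Activate the image (if needed) and look up the symbol's value. */
    status = lib$find_image_symbol(&image_d, &symbol_d, (int *) &entry, 0, 0);
    if (!(status & 1))
        return status;            /* image or symbol not found: return VMS status */

    return (*entry)();            /* call the dynamically resolved routine */
}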
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2016-07-01 23:21:19 UTC
Post by Stephen Hoffman
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime
shared image resolution by name instead of (or as well as) symbol
vector position.
I understand this as a request for a dynamic linker, which (in VMS
terms) activates and links shareable images.
Post by Stephen Hoffman
The dlopen() routines and the lib$find_image_symbol() call and related
are about the best that's available for that sort of thing. Beyond
those, I haven't encountered any run-time tools within OpenVMS for
processing executables either on storage or activated and mapped, and
the image activator tends to be a pretty dark and complex place. There
is some open source in this area, but run-time tools for introspection
aren't particularly available. The innards of the debugger, of symbol
resolution and mangling, of the source code analyzer, and related, just
don't have a whole lot of documentation. This outside of HPE or VSI,
that is.
The dlopen() routines and friends have the disadvantage that you have
to code them explicitly and that you have to call/use what was returned.
So source code for a printf("Hello world!") needs to change. And, if
you want to activate DECC$SHR.EXE this way, you need to know what
and how the compiler prefixed/changed the symbol "printf": something
like DECC$GXPRINTF.

With a dynamic linker you don't need to change your source code and you
get DECC$SHR.EXE activated to resolve the "printf" or - when the dynamic
linker supports symbol preemption - the "printf" is resolved by any
already activated shareable image.

The current image activator does no symbol processing at all; all it
needs are the shareable image names and the offsets and values in the
symbol vector (and some image relocation and fixup information).

For I64 and ELF the debugger is based on DWARF and there is public
information on that format. So it should be possible to look at and
understand debug information in I64 images.
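
For comparison, a sketch of what that explicit coding looks like through the
CRTL's dlopen()/dlsym() flavour. The image name and the prefixed symbol name
are illustrative only; as noted above, the exact decorated name of "printf"
inside DECC$SHR depends on the compiler's prefixing:

#include <dlfcn.h>

typedef int (*printf_like_t)(const char *, ...);

int hello_dynamic(void)
{
    void *handle = dlopen("DECC$SHR", RTLD_NOW);   /* image to activate (illustrative) */
    printf_like_t fn;

    if (handle == NULL)
        return -1;                                 /* activation failed */

    /* The caller must know the decorated name -- something like
       "DECC$GXPRINTF" per the text above -- rather than plain "printf". */
    fn = (printf_like_t) dlsym(handle, "DECC$GXPRINTF");
    if (fn == NULL) {
        dlclose(handle);
        return -1;                                 /* symbol not found */
    }

    fn("Hello world!\n");                          /* call through the pointer */
    dlclose(handle);
    return 0;
}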
Stephen Hoffman
2016-07-02 20:24:10 UTC
Post by hb
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime shared
image resolution by name instead of (or as well as) symbol vector
position.
I understand this as a request for a dynamic linker, which (in VMS
terms) activates and links shareable images.
...
The dlopen() routines and friends have the disadvantage, that you have
to explicitly code them and that you have to call/use what was
returned. So source code for a printf("Hello world!") needs to change...
Ah, okay. This seems akin to method overrides and inheritance in an
OO environment.
--
Pure Personal Opinion | HoffmanLabs LLC
John E. Malmberg
2016-07-02 13:51:20 UTC
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime
shared image resolution by name instead of (or as well as) symbol vector
position.
1. Create a stub library with the same entry points as the shared image.

2. Each entry point calls lib$find_image_symbol() to look up the shared
image.

3. If the shared image is not present, the routine exits with the
appropriate error status and errno set.

4. If the routine is present, it is run.

5. Build this as libname.olb or libname.a.
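
A single stub in such a library might look roughly like this (a hedged sketch
only: the image, routine, and argument list are placeholders, and the errno
value chosen for a missing image would depend on the real interface):

#include <errno.h>
#include <descrip.h>
#include <lib$routines.h>

typedef int (*my_routine_t)(int);

int my_routine(int arg)                /* same name as the real entry point */
{
    static my_routine_t real = 0;      /* resolved once, then cached */

    if (real == 0) {
        $DESCRIPTOR(image_d, "MYSHR");
        $DESCRIPTOR(symbol_d, "MY_ROUTINE");
        int status = lib$find_image_symbol(&image_d, &symbol_d,
                                           (int *) &real, 0, 0);
        if (!(status & 1)) {           /* step 3: shareable image not present */
            errno = ENOSYS;            /* placeholder errno value */
            return status;             /* return the failure status */
        }
    }
    return (*real)(arg);               /* step 4: the routine is present, run it */
}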

The next question is if any of Hoff's tools can automate creating this
library.

Regards,
-John
***@qsl.net_work
Stephen Hoffman
2016-07-02 20:48:45 UTC
Post by John E. Malmberg
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime shared
image resolution by name instead of (or as well as) symbol vector
position.
1. Create a stub library with the same entry points as the shared image.
2. Each entry point calls lib$find_image_symbol() to look up the shared image.
3. If the shared image is not present, the routine exits with the
appropriate error status and errno set.
4. If the routine is present, it is run.
5. Build this as libname.olb or libname.a.
The next question is if any of Hoff's tools can automate creating this library.
Shimmer reads the transfer vectors from the shareable image, and could
be cobbled into something different by replacing the existing code that
writes the transfer vector and generates the shim. A moderate-sized
project and certainly feasible, and shimmer is already emitting and
building code for the shim shareable image that'd be pretty close to
the FIS calls. But — for implementing something similar to this
within OpenVMS — this whole sequence is very reminiscent of
inheritance, method overrides, protocols and related in Objective C.
Which is an approach conceptually simpler and more flexible and much
more dynamic, but also vastly different from how OpenVMS presently
works, and none of the OpenVMS system APIs are OO nor does OpenVMS have
even the faintest idea of OO message-passing designs.
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2016-07-02 22:21:21 UTC
Post by Stephen Hoffman
Post by John E. Malmberg
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime
shared image resolution by name instead of (or as well as) symbol
vector position.
1. Create a stub library with the same entry points as the shared image.
2. Each entry point calls lib$find_image_symbol() to look up the shared image.
3. If the shared image is not present, the routine exits with the
appropriate error status and errno set.
4. If the routine is present, it is run.
5. Build this as libname.olb or libname.a.
The next question is if any of Hoff's tools can automate creating this library.
Shimmer reads the transfer vectors from the shareable image, and could
be cobbled into something different by replacing the existing code that
writes the transfer vector and generates the shim. A moderate-sized
project and certainly feasible, and shimmer is already emitting and
building code for the shim shareable image that'd be pretty close to the
FIS calls. But — for implementing something similar to this within
OpenVMS — this whole sequence is very reminiscent of inheritance, method
overrides, protocols and related in Objective C. Which is an approach
conceptually simpler and more flexible and much more dynamic, but also
vastly different from how OpenVMS presently works, and none of the
OpenVMS system APIs are OO nor does OpenVMS have even the faintest idea
of OO message-passing designs.
I haven't looked at shimmer, but you don't need FIS calls. If you want
to replace a shareable image or intercept some routines in a shareable
image, you can do it - more or less - with logical names. However - and
as far as I understand this is the same with the FIS approach - the
shareable image to be replaced/intercepted should not expose global
data; faking that in a replacement shareable image requires more
complex code.

I did that for my little xhb project, eXternal History Buffer, which
writes and reads the recall buffer to/from a file for any VMS
utility/program, based on smg$create_virtual_keyboard. That is, by
intercepting at most two smg routines you get command recall across
image activations.
m***@gmail.com
2016-07-12 00:34:45 UTC
Post by John E. Malmberg
Post by Richard Levitte
If there's any feature I'd like to see on VMS, then it's runtime
shared image resolution by name instead of (or as well as) symbol vector
position.
1. Create a stub library with the same entry points as the shared image.
2. Each entry point calls lib$find_image_symbol() to look up the shared
image.
3. If the shared image is not present, the routine exits with the
appropriate error status and errno set.
4. If the routine is present, it is run.
5. Build this as libname.olb or libname.a.
The next question is if any of Hoff's tools can automate creating this
library.
Regards,
-John
We didn't bother with a stub-library when we did this in my last position; it would have been too clumsy for what we were dealing with.

IIRC we had a couple of hundred shareable images and a couple of thousand entry points. Calling each one was simple. Whenever a routine (program or other shareable) linked and reported an undefined symbol that we recognised as an entry point in a shareable, we built a small "interface" routine of the same name. We used a C macro and built a source code file that we subsequently compiled and included when we linked the code a second time.

In the interface routine we either (a) looked up the name of the shareable and called lib$find_image_symbol to get our entry point or (b) called the already-found entry point.

At one stage I looked at making the entry-point address of the called routine a global value in C, initially set to a look-up routine that found the shareable name and called lib$signal, then replaced the entry-point address with the true address in memory. This would have avoided repeating the test to see if we had previously found the address.

In practice it meant that in the calling routine there was a call to X that would go to a local interface routine called X, which would take the arguments passed to it and use them to call the real X in a shareable image.

With some monitoring added to the in-memory table of entry point names and shareable image names, we were able to identify performance bottlenecks, and that helped guide us in our mapping of entry points to shareable images.

FWIW we also used this technique with the new version of OpenSSL when the VMS version we were running still used the old one. We also looked at interfacing VMS software like callable Backup and callable FDL routines so that even these were activate-on-demand.
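
For what it's worth, a sketch of that "global entry-point value" idea might
look like the following (all names are illustrative, and the error path is
simplified to returning the status rather than signalling):

#include <descrip.h>
#include <lib$routines.h>

typedef int (*widget_open_t)(int);

static int widget_open_resolver(int arg);       /* first-call lookup routine */

/* Callers go through this pointer; it initially aims at the resolver. */
widget_open_t widget_open_ptr = widget_open_resolver;

static int widget_open_resolver(int arg)
{
    $DESCRIPTOR(image_d, "WIDGETSHR");
    $DESCRIPTOR(symbol_d, "WIDGET_OPEN");
    widget_open_t real = 0;
    int status = lib$find_image_symbol(&image_d, &symbol_d,
                                       (int *) &real, 0, 0);

    if (!(status & 1))
        return status;                          /* or lib$signal(), as described */

    widget_open_ptr = real;                     /* patch the global for next time */
    return (*real)(arg);                        /* complete this first call */
}

/* Usage in the caller:  result = widget_open_ptr(42);  */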
Stephen Hoffman
2016-07-12 12:55:54 UTC
Post by m***@gmail.com
We didn't bother with a stub-library when we did this in my last
position; it would have been too clumsy for what we were dealing with.
IIRC we had a couple of hundred shareable images and a couple of
thousand entry points. The calling to each was simple. Whenever a
routine (program or other shareable) linked and reported an undefined
symbol that we recognised as an entry point in a shareable we built a
small "interface" routine of the same name. We used a C macro and
built a source code file that we subsequently compiled and included
when we linked the code a second time.
In the interface routine we either (a) looked up the name of the
shareable and called lib$find_image_symbol to get our entry point or (b)
called the already-found entry point.
This is heading toward microservices, FWIW.
--
Pure Personal Opinion | HoffmanLabs LLC
hb
2016-07-12 15:48:30 UTC
Post by m***@gmail.com
In practice it meant that in the calling routine there was a call to
X that would go to a local interface routine called X, which would
take the arguments passed to it and use them to call the real X in a
shareable image.
"local" means local to the main image, statically linked.

Just curious, what were the reason(s) - besides the monitoring, which
probably means intercepting the calls - to implement such a dynamic
loader in your main image? As you describe it, at link time of the main
image you already knew the names of the routines.

Obviously you didn't have to maintain the order of the entries in the
symbol vector of the shareable image. But in one way or another you had
to define symbol vector entries anyway. OK, I know that there are
ways/tools/... to "export everything", which is not recommended on VMS.
And as you probably know, with lib$fis there is no check for GSMATCH
(SYSTEM-F-SHRIDMISMAT).

Or was the reason to avoid overhead like image relocation, fixups for
all/many shareable images your main image would link against?
Stephen Hoffman
2016-07-12 16:32:51 UTC
Permalink
Post by hb
Or was the reason to avoid overhead like image relocation, fixups for
all/many shareable images your main image would link against?
I've done more limited versions of these lib$find_image_symbol
shenanigans to avoid linking against shareables that might not exist in
some configurations (e.g. X), or that get dynamically mapped in to
service specific protocols or requests.

Jacketing the calls would allow the target sharables to be mapped out,
but then you're headed toward rolling your own RPC / X / microservice
scheme.

The ability to unmap and reload shareable images would be nice, but
that — much like unloading and reloading device drivers — isn't
something I'd expect anytime soon.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2016-07-12 17:16:39 UTC
The ability to unmap and reload shareable images would be nice, but that
— much like unloading and reloading device drivers — isn't something I'd
expect anytime soon.
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.

Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?

It sounds surprising, considering that RSX can...

Johnny
Paul Sture
2016-07-12 18:50:05 UTC
Post by Johnny Billquist
The ability to unmap and reload shareable images would be nice, but that
— much like unloading and reloading device drivers — isn't something I'd
expect anytime soon.
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.

However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
--
When you have eliminated the JavaScript, whatever remains must be an
empty page. Enable JavaScript to see Google Maps. -- Google Maps

Uh? -- me
Jan-Erik Soderholm
2016-07-12 18:56:13 UTC
Post by Paul Sture
Post by Johnny Billquist
The ability to unmap and reload shareable images would be nice, but that
— much like unloading and reloading device drivers — isn't something I'd
expect anytime soon.
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
I have never missed it. But then I've mainly developed end user
applications, not drivers. How many would be helped by this
feature *today*? Apart from VSI?
Simon Clubley
2016-07-12 19:24:38 UTC
Post by Jan-Erik Soderholm
Post by Paul Sture
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
I have never missed it. But then I've mainly developed end user
applications, not drivers. How many would be helped by this
feature *today*? Apart from VSI?
Anyone doing low level custom hardware interfacing for example.

However, most of that market has moved away from VMS to either bare
metal or a RTOS (for the hard realtime stuff) or maybe Linux if
you don't have any hard realtime (or tight soft realtime) requirements.

Either way, it's a lot quicker to develop a driver for those
environments than it is for VMS.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Jan-Erik Soderholm
2016-07-12 22:22:10 UTC
Post by Simon Clubley
Post by Jan-Erik Soderholm
Post by Paul Sture
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
I have never missed it. But then I've mainly developed end user
applications, not drivers. How many would be helped by this
feature *today*? Apart from VSI?
Anyone doing low level custom hardware interfacing for example.
However, most of that market has moved away from VMS...
Yes, that was my thought. Today you do not have all those fancy
special hardware interfaces; everything (more or less) you'd like
to talk to has a network interface anyway.

Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSI's customers.
Post by Simon Clubley
to either bare
metal or a RTOS (for the hard realtime stuff) or maybe Linux if
you don't have any hard realtime (or tight soft realtime) requirements.
Either way, it's a lot quicker to develop a driver for those
environments than it is for VMS.
Does it really matter? Today? And matter that much that it is worth
a redesign of core parts of VMS?
Post by Simon Clubley
Simon.
Craig A. Berry
2016-07-13 00:12:31 UTC
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the roads
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either in
currency or in time and aggravation spent scrounging exotic parts. Just
as an example, the interminable time it takes an rx2660 to boot from DVD
might not be quite so interminable if the USB mass storage support had
required a bit less wizardry to develop and debug.
John Reagan
2016-07-13 01:33:09 UTC
Post by Craig A. Berry
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the roads
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either in
currency or in time and aggravation spent scrounging exotic parts. Just
as an example, the interminable time it takes an rx2660 to boot from DVD
might not be quite so interminable if the USB mass storage support had
required a bit less wizardry to develop and debug.
In my opinion (disclosure: I've never developed a driver, but sat near people who did), the "benefit" for driver reload was slow booting. Back in the early VAX days, it took a while for a 780 to boot from those RX01. I won't even discuss booting a 750 from TU58s. Nowadays, what is the time for booting a standalone system? I'm not talking about booting from a DVD (yep, that sucks), but booting from a local disk.
Craig A. Berry
2016-07-13 03:22:02 UTC
Post by John Reagan
Post by Craig A. Berry
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the roads
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either in
currency or in time and aggravation spent scrounging exotic parts. Just
as an example, the interminable time it takes an rx2660 to boot from DVD
might not be quite so interminable if the USB mass storage support had
required a bit less wizardry to develop and debug.
In my opinion (disclosure: I've never developed a driver, but sat
near people who did), the "benefit" for driver reload was slow
booting. Back in the early VAX days, it took a while for a 780 to
boot from those RX01. I won't even discuss booting a 750 from TU58s.
I don't follow. VMS does not have unloadable drivers yet boots more
slowly than other OS's that do. If, twenty or thirty or forty years ago,
non-reloadable drivers made the boot faster, then clearly that was the
right thing to do at the time. Doesn't mean it should be done that way now.

Booting an rx2600 from a SATA DVD is not nearly as slow as booting an
rx2660 from a USB DVD. So a newer, somewhat faster machine takes much
longer to boot the same DVD only because, as far as I can tell, the boot
volume is USB-based rather than SATA (and there have been complaints
about the speed of the SATA driver as well).

Usually the USB-based DVD boot volume goes into mount verification
multiple times during the boot, which seems to be a testament to the
boot loader that it can keep going when the media from which it's
reading the boot image disappears periodically, but it's not such a
glowing recommendation of the ability of the USB driver to handle a boot
device. My now somewhat vague memory is that the USB driver was only
intended to deal with things like keyboards and mice and missile
launchers and never really tuned to move significant amounts of data
quickly.

My reason for bringing this up here is that, just maybe, if writing VMS
device drivers were easy and quick and fun, then this driver might have
seen more development than it did. If you can crash and reload the
driver 10 times a minute rather than a couple times an hour, you're
simply going to get farther in developing and testing that driver.
Post by John Reagan
Nowadays, what is the time for booting a standalone system? I'm not
talking about booting from a DVD (yep, that sucks), but booting from
a local disk.
A few seconds for my recent Mac laptop with an SSD drive or a Linux
instance under VirtualBox on that laptop. Minutes for any VMS system
I've ever seen, though I've never had my hands on any i2 or i4 systems
nor SSD boot volumes nor a particularly fast SAN or network boot option.
Craig A. Berry
2016-07-13 11:54:44 UTC
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Robert A. Brooks
2016-07-13 13:21:42 UTC
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
--
-- Rob
Stephen Hoffman
2016-07-13 17:14:09 UTC
Post by Robert A. Brooks
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
DQDRIVER isn't bad. It's pretty straightforward. I've had my
fingers in that one several times. For grins, I did a 48-bit version
of DQ, and one that was instrumented heavily for development work.

Where DQDRIVER gets ugly involves the IDE/ATA/ATAPI hardware in the
Alpha boxes, as some of that had... errata.

DQDRIVER is far from the most hardware-sensitive device drivers around,
too. There's more than a little hardware (and device firmware) that
doesn't always follow the published and documented specifications.

This is also where scheduling and deprecating the older hardware bits
is the usual strategy. Is hanging on to support for those old boxes
and old bridges the best use of development? (Tough to answer that,
too.)

The Alpha IDE/ATA/ATAPI hardware is all ~ten to ~twenty years old.
Even the last generation of Alpha used USB, not IDE/ATA/ATAPI.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2016-07-14 00:07:47 UTC
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
David Froble
2016-07-14 01:27:43 UTC
Post by Craig A. Berry
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?

From things I've read in the past, driver load and unload isn't necessarily all
that difficult, just not now implemented. Perhaps VSI will surprise us with x86
VMS.

Why do I consider this? Because as some have mentioned, it would help with
driver development work. Now, for the bonus question. Does anybody think that
driver development on x86 is going to be a recurring issue for VSI? I think so.
Craig A. Berry
2016-07-14 02:01:02 UTC
Post by David Froble
Post by Craig A. Berry
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?
From things I've read in the past, driver load and unload isn't
necessarily all that difficult, just not now implemented. Perhaps VSI
will surprise us with x86 VMS.
Why do I consider this? Because as some have mentioned, it would help
with driver development work.
If you can talk about driver development, why is it off-topic when I do?
The whole point I was trying to make is that one of the relatively newer
drivers in VMS is abysmally slow, at least under certain conditions.
Drivers that are easier to develop would get their performance bugs
worked out during development. An incomplete or bare-bones
implementation would be fleshed out more fully if doing so were easier.
David Froble
2016-07-14 05:19:47 UTC
Post by Craig A. Berry
Post by David Froble
Post by Craig A. Berry
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?
From things I've read in the past, driver load and unload isn't
necessarily all that difficult, just not now implemented. Perhaps VSI
will surprise us with x86 VMS.
Why do I consider this? Because as some have mentioned, it would help
with driver development work.
If you can talk about driver development, why is it off-topic when I do?
The whole point I was trying to make is that one of the relatively newer
drivers in VMS is abysmally slow, at least under certain conditions.
Drivers that are easier to develop would get their performance bugs
worked out during development. An incomplete or bare bones
implementations would be more complete if it were easier to do.
What I meant was that the topic was driver development, including loading and
unloading. Yes, difficulty of development might produce poor drivers. I just
thought it would be more helpful to stick to the need for loading and unloading
drivers.
Robert A. Brooks
2016-07-14 02:50:22 UTC
Post by David Froble
Post by Craig A. Berry
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?
From things I've read in the past, driver load and unload isn't necessarily all
that difficult, just not now implemented. Perhaps VSI will surprise us with x86
VMS.
Given the current driver model, yeah, it would be difficult, especially for the
system disk. The concept of disappearing disk UCB's is not something with which
VMS deals well. Yes, I have been giving this some thought.

For those who are following along at home with their source listings, DUDRIVER
does support disappearing UCB's, but only under relatively rare situations, all
of which I cannot recall right now.
--
-- Rob
Stephen Hoffman
2016-07-14 16:04:06 UTC
Post by Robert A. Brooks
Post by David Froble
Post by Craig A. Berry
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?
From things I've read in the past, driver load and unload isn't
necessarily all that difficult, just not now implemented. Perhaps VSI
will surprise us with x86 VMS.
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
Drivers don't always stash those I/O request packets where generic
code can find and flush them, and drivers can have memory and other
system or controller or device resources allocated that would have to
be freed. Some of that processing could be independent of the driver
and its code (if that's what the design was), but other constructs
would require driver assistance to quiesce and release.
Post by Robert A. Brooks
Given the current driver model, yeah, it would be difficult, especially
for the system disk. The concept of disappearing disk UCB's is not
something with which VMS deals well.
The system disk is special in various ways, not the least of which is
in the boot path. But cloned-UCB disk devices such as DN, LD and NF
do work and can vaporize UCBs.
Post by Robert A. Brooks
Yes, I have been giving this some thought.
For those who are following along at home with their source listings,
DUDRIVER does support disappearing UCB's, but only under relatively
rare situations, all of which I cannot recall right now.
Was qioserver involved? 😀 I'd hope that DUDRIVER is on the "we're
eventually going to deprecate that" list. 😀 But I digress.

Faster I/O devices are already giving Linux I/O some problems, and
the driver code and the operating system code are being examined and
reworked and restructured to better handle the speeds of some newer
I/O devices. There were pointers to discussions of the Linux network
driver changes here a while back IIRC, around adding support for 40 GbE
and faster connections. You already know this, and it'll get more
common on the x86-64 boxes — Integrity servers didn't and don't tend to
have bleeding-edge I/O.
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2016-07-16 15:09:20 UTC
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device is
zero. You could also optionally use a flag on the unload command to allow
existing requests to complete but deny any new requests after the unload
command was issued. This is how Linux works; check out the man page for
rmmod.
Post by Stephen Hoffman
Drivers don't always stash those I/O request packets where generic
code can find and flush them, and drivers can have memory and other
system or controller or device resources allocated that would have to
be freed. Some of that processing could be independent of the driver
and its code (if that's what the design was), but other constructs
would require driver assistance to quiesce and release.
On Linux, the OS calls an unload routine within the driver itself as
part of unloading the driver; this routine can then cleanly handle any
shutdown of the hardware the driver controls.

There is some serious potential for being able to reload a driver which
is in active use with outstanding I/Os but I would be happy with simply
being able to unload the driver once the reference count is zero.

You could then reload the driver using a normal driver load command.
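
For reference, the Linux shape being described is roughly the following (the
module name and messages are illustrative): the __exit routine is the
driver-supplied cleanup hook that an rmmod ultimately invokes once the
reference count allows it.

#include <linux/init.h>
#include <linux/module.h>

static int __init demo_init(void)
{
    pr_info("demo: loaded\n");
    return 0;                   /* nonzero would fail the load */
}

static void __exit demo_exit(void)
{
    pr_info("demo: unloading, releasing driver resources\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");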

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2016-07-16 17:37:03 UTC
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device
is zero. You could also optionally use a flag on the unload command to
allow existing requests to complete but deny any new requests after the
unload command was issued.
Which does little for IRPs initiated by the driver itself (drivers can
autonomously communicate with their associated devices and with other
device drivers).

Drivers can allocate system or device memory or other resources for use
as buffers or mapping windows or caches, and those do not necessarily
exist at known and fixed offsets in the UCB.

There are any number of other details that can arise within some
drivers. Not all device I/O activity involves user I/O channels.

Any unload request has to notify the driver to request the clean up.
Post by Simon Clubley
This is how Linux works; check out the man page for rmmod.
OpenVMS is not Linux.
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2016-07-16 19:22:05 UTC
Post by Stephen Hoffman
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device
is zero. You could also optionally use a flag on the unload command to
allow existing requests to complete but deny any new requests after the
unload command was issued.
Which does little for IRPs initiated by the driver itself (drivers can
autonomously communicate with their associated devices and with other
device drivers).
Why not ?

You could have a mechanism to reject the queueing of new IRPs in the
same way as you could reject user level I/Os and you should have a
mechanism to tell the other kernel components to rundown the existing
I/O requests in a controlled manner.

Conceptually, it's exactly the same thing regardless of whether
another kernel component issues the I/O request or whether a user
application issues the I/O request.

Also, any new unload mechanism could still be used for all the drivers
out there which don't have any special hardware requirements which stop
them from being safely unloaded. Notice I said hardware requirements
BTW; an IRP which cannot be cancelled purely because the software
design does not allow it is a flaw in that software and not in any
unload mechanism.
Post by Stephen Hoffman
Drivers can allocate system or device memory or other resources for use
as buffers or mapping windows or caches, and those do not necessarily
exist at known and fixed offsets in the UCB.
And that's exactly the kind of thing you take care of during the
driver's unload routine.
Post by Stephen Hoffman
There are any number of other details that can arise within some
drivers. Not all device I/O activity involves user I/O channels.
Any unload request has to notify the driver to request the clean up.
Exactly. Having a driver rundown routine within the driver itself is
a perfectly normal thing to have when your OS supports unloadable
drivers.
Post by Stephen Hoffman
Post by Simon Clubley
This is how Linux works; check out the man page for rmmod.
OpenVMS is not Linux.
No, it most certainly is not.

At the user visible level, VMS has got some _really_ nice features
which don't exist elsewhere, but down inside the privileged guts
of the code, it's clearly a tightly bound monolithic mess of
intertwined code with interdependencies which simply shouldn't exist
to the extent that they do in today's world.

I can understand why it was done that way when the design was
implemented in the 1970s with the limited hardware of the day (when
you had to have a really good reason to implement various levels of
potentially resource consuming abstractions) and when portability
simply was not a concern (because VMS was designed to take full
advantage of the VAX architecture).

I also actually think the decisions taken at the time in the 1970s were
fully valid for the time because they resulted in a _very_ solidly
designed product for the hardware of the time but here in the 21st
century that mess of interdependencies is now a major problem.

For example, it's now been over 18 months since the VMS port to x86-64
was announced.

I appreciate the hard work that VSI are doing (and I sincerely hope
they succeed; I look forward to VMS on x86-64 and even maybe other
architectures), but by now a port of Linux (or any other OS implemented
with portability in mind) to a new architecture would almost certainly
have had some early testing versions out of the door and into the
hands of customers.

And as you yourself point out at every opportunity you can, VMS is now
competing in today's world, not the world of the 1970s/1980s/1990s.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2016-07-18 13:20:41 UTC
Post by Simon Clubley
Post by Stephen Hoffman
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device
is zero. You could also optionally use a flag on the unload command to
allow existing requests to complete but deny any new requests after the
unload command was issued.
Which does little for IRPs initiated by the driver itself (drivers can
autonomously communicate with their associated devices and with other
device drivers).
Why not ?
Because there are no I/O channels associated with synthesized IRPs.
There's no last channel deassign with these. There are no CCBs.
There's no user process associated with these IRPs. There are
variously multiple layers of device drivers — class and port drivers —
or drivers communicating with ACPs. It's not entirely clear to me
that some of the drivers I've worked on could even find and clean up
those IRPs, not without adding some additional tracking for the IRPs.
It's also the case that the driver would have to shut down the target
device or deallocate certain system resources, and it's not always
appropriate to have a last channel deassign do that. Some of the
drivers I've written would have to be notified to clean up pending I/O
as part of a reload or an unload. So either there's an added
callback, or there's an added I/O function code, and the logic to deal
with that. All of which is possible, but — as I've repeatedly stated
here — some drivers will require notification and local processing
prior to the unload; it's not a process that can occur in isolation and
outside of the context of these more complex drivers.
--
Pure Personal Opinion | HoffmanLabs LLC
David Froble
2016-07-18 15:18:36 UTC
Post by Stephen Hoffman
Post by Simon Clubley
Post by Stephen Hoffman
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this.
The driver needs to be told (and routines in the driver written
to) gather up and flush all the pending I/O requests back to the
requesting code.
You stall any driver unloading until the reference count on the
device is zero. You could also optionally use a flag on the unload
command to allow existing requests to complete but deny any new
requests after the unload command was issued.
Which does little for IRPs initiated by the driver itself (drivers
can autonomously communicate with their associated devices and with
other device drivers).
Why not ?
Because there are no I/O channels associated with synthesized IRPs.
There's no last channel deassign with these. There are no CCBs.
There's no user process associated with these IRPs. There are variously
multiple layers of device drivers — class and port drivers — or drivers
communicating with ACPs. It's not entirely clear to me that some of
the drivers I've worked on could even find and clean up those IRPs, not
without adding some additional tracking for the IRPs. It's also the
case that the driver would have to shut down the target device or
deallocate certain system resources, and it's not always appropriate to
have a last channel deassign do that. Some of the drivers I've written
would have to be notified to clean up pending I/O as part of a reload or
an unload. So either there's an added callback, or there's an added
I/O function code, and the logic to deal with that. All of which is
possible, but — as I've repeatedly stated here — some drivers will
require notification and local processing prior to the unload; it's not
a process that can occur in isolation and outside of the context of
these more complex drivers.
I think you guys are both saying basically the same thing: that unloading a
driver is not solely the job of the OS; it's the job of a driver that has been
written to keep track of everything necessary to shut itself down, with that
capability built into the driver.

Or am I not understanding???

Of course, this would require one of two things: either each driver is flagged
as capable or not capable of being unloaded, or every driver on VMS is
re-designed to be able to stop itself. Either case would still require some
modification of every VMS driver.

Is that what would make it so difficult to implement?
V***@SendSpamHere.ORG
2016-07-16 22:18:40 UTC
Permalink
Post by Stephen Hoffman
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device
is zero. You could also optionally use a flag on the unload command to
allow existing requests to complete but deny any new requests after the
unload command was issued.
Which does little for IRPs initiated by the driver itself (drivers can
autonomously communicate with their associated devices and with other
device drivers).
Drivers can allocate system or device memory or other resources for use
as buffers or mapping windows or caches, and those do not necessarily
exist at known and fixed offsets in the UCB.
There are any number of other details that can arise within some
drivers. Not all device I/O activity involves user I/O channels.
Any unload request has to notify the driver to request the clean up.
Post by Simon Clubley
This is how Linux works; check out the man page for rmmod.
OpenVMS is not Linux.

--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
V***@SendSpamHere.ORG
2016-07-16 22:16:28 UTC
Permalink
Post by Simon Clubley
Post by Stephen Hoffman
Coordinating with the device driver code is a fair part of this. The
driver needs to be told (and routines in the driver written to) gather
up and flush all the pending I/O requests back to the requesting code.
You stall any driver unloading until the reference count on the device is
zero. You could also optionally use a flag on the unload command to allow
existing requests to complete but deny any new requests after the unload
command was issued. This is how Linux works; check out the man page for
rmmod.
Post by Stephen Hoffman
Drivers don't always stash those I/O request packets where generic
code can find and flush them, and drivers can have memory and other
system or controller or device resources allocated that would have to
be freed. Some of that processing could be independent of the driver
and its code (if that's what the design was), but other constructs
would require driver assistance to quiesce and release.
On Linux, the OS calls an unload routine within the driver itself as
part of unloading the driver; this routine can then cleanly handle any
shutdown of the hardware the driver controls.
There is some serious potential for being able to reload a driver which
is in active use with outstanding I/Os but I would be happy with simply
being able to unload the driver once the reference count is zero.
You could then reload the driver using a normal driver load command.
Loadable images can have specified unloader routines. They can be loaded and
unloaded at will. If you look at how a driver image is constructed, it looks
a lot like an executive image. So much of the infrastructure could be there!
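For comparison, the Linux mechanism being referred to (the routine that
rmmod ends up invoking) is just a registered exit function. A minimal
sketch of a standard loadable module, assuming an ordinary kbuild
environment:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* Minimal Linux loadable module: module_exit() registers the "unloader
 * routine" the kernel calls when rmmod removes the module, and that
 * routine is where the driver undoes whatever its init routine set up. */

static int __init demo_init(void)
{
        pr_info("demo: loaded\n");
        return 0;               /* a nonzero return would fail the insmod */
}

static void __exit demo_exit(void)
{
        /* free memory, unregister devices, cancel pending work here */
        pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Loaded with insmod, removed with rmmod; the kernel refuses the rmmod
while the module's reference count is nonzero.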

I am finding this whole discussion rather amusing because I am seeing some of
its participants discussing something which they've never done on VMS but now
complain that it needs to be done differently/better/magically/like-*ix/etc.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Simon Clubley
2016-07-17 01:26:41 UTC
Permalink
Post by V***@SendSpamHere.ORG
I am finding this whole discussion rather amusing because I am seeing some of
its participants discussing something which they've never done on VMS but now
complain that it needs to be done differently/better/magically/like-*ix/etc.
If that's directed towards me, I should point out that I _have_
done a VMS device driver project in the past for Alpha. The most
complicated VMS device driver I wrote was for a PCI based WinTV
card (back when we still had analogue television in the UK) so
that I could extract the teletext data stream out of the video
signal and display teletext pages. It was quite interesting
seeing Ceefax pages displayed within a VT emulator from a program
running on VMS.

I have also written various device drivers for at least Linux, RTEMS
and various bare metal architectures and I can tell you that writing
a VMS driver was easily the most painful of all the environments that
I've worked in for device driver work.

IOW Brian, I haven't written lots of VMS drivers like you have but at
the same time, I am actually describing things I have done in the past
and comparing them to other development environments which I also have
direct experience of.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2016-07-17 03:19:36 UTC
Permalink
Post by V***@SendSpamHere.ORG
Loadable images can have specified unloader routines. They can be loaded and
unloaded at will. If you look at how a driver image is constructed, it looks
a lot like an executive image. So much of the infrastructure could be there!
Excellent.
Post by V***@SendSpamHere.ORG
I am finding this whole discussion rather amusing because I am seeing some of
its participants discussing something which they've never done on VMS but now
complain that it needs to be done differently/better/magically/like-*ix/etc.
IIRC it started with the observation that unloading a driver could be done on
RT-11 and RSX but not on VMS.
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
Chris Scheers
2016-07-14 21:25:44 UTC
Permalink
Post by Robert A. Brooks
Post by David Froble
Post by Craig A. Berry
Post by Robert A. Brooks
Post by Craig A. Berry
Booting an rx2600 from a SATA DVD...
Sorry, I think that should have been IDE, not SATA. Whatever the DQ
driver is.
Yeah, DQDRIVER is nobody's favourite driver to touch. I've been lucky
to avoid it so far. . .
OK, but what I was whingeing about is that DVD boot using DNDRIVER is
slower than DVD boot using DQDRIVER. Whatever the problems with
DQDRIVER, things got worse with USB-based optical media.
Getting a bit off topic, are we?
From things I've read in the past, driver load and unload isn't necessarily all
that difficult, just not now implemented. Perhaps VSI will surprise us with x86
VMS.
Given the current driver model, yeah, it would be difficult, especially
for the system disk. The concept of disappearing disk UCB's is not
something with which VMS deals well. Yes, I have been giving this some
thought.
For those who are following along at home with their source listings,
DUDRIVER does support disappearing UCB's, but only under relatively rare
situations, all of which I cannot recall right now.
VAX did not support driver unload. It supported driver reload.

In particular, the data structures were not touched. Just the code was
reloaded.

If you made a change that required a data structure change, you had to
reboot.

Even with this restriction, I found driver reload worthwhile.
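A rough way to picture the distinction, as a purely hypothetical C sketch
rather than anything resembling the real VMS structures: if the persistent
data keeps its layout, a reload only has to swap routine addresses; change
the layout and every structure the old code left in memory is wrong, which
is the reboot case.

#include <stdio.h>

/* Hypothetical sketch: persistent per-device state stays in memory
 * across a code-only reload, and only the table of routine pointers is
 * replaced.  If the layout of dev_state changed, the state left behind
 * by the old code would no longer match, hence the reboot.            */

struct dev_state {
    unsigned long io_count;          /* layout must not change on reload */
};

struct dispatch {
    void (*start_io)(struct dev_state *);
};

static void start_io_v1(struct dev_state *s) { s->io_count += 1; }

static void start_io_v2(struct dev_state *s)
{
    s->io_count += 1;                /* same state, new code */
    printf("start_io v2 called\n");
}

int main(void)
{
    struct dev_state state = { 0 };          /* survives the "reload"  */
    struct dispatch  table = { start_io_v1 };

    table.start_io(&state);                  /* old code               */
    table.start_io = start_io_v2;            /* code-only reload       */
    table.start_io(&state);                  /* new code, same state   */

    printf("io_count = %lu\n", state.io_count);
    return 0;
}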
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
Simon Clubley
2016-07-16 15:10:31 UTC
Permalink
Post by Chris Scheers
VAX did not support driver unload. It supported driver reload.
In particular, the data structures were not touched. Just the code was
reloaded.
If you made a change that required a data structure change, you had to
reboot.
Even with this restriction, I found driver reload worthwhile.
I knew there was something hackish about the VAX reload driver support
but I couldn't remember what it was. Thanks for the information.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
David Froble
2016-07-16 18:48:55 UTC
Permalink
Post by Simon Clubley
Post by Chris Scheers
VAX did not support driver unload. It supported driver reload.
In particular, the data structures were not touched. Just the code was
reloaded.
If you made a change that required a data structure change, you had to
reboot.
Even with this restriction, I found driver reload worthwhile.
I knew there was something hackish about the VAX reload driver support
but I couldn't remember what it was. Thanks for the information.
Simon.
At this point I've got to ask, do you really need the Rolls-Royce, or is the
Chevy perhaps good enough?

If the data structures don't change, then re-loading the code might be good
enough for some driver development. And if you have to change the data
structures, there is always the reboot.

I doubt driver development will occur on some large cluster running a
company; more likely on a small standalone system. If so, just how much I/O
is the unload going to have to wait for? How much of a problem is it if
something unexpected does happen?

I don't do driver development, so this isn't a question I can address. Those
doing so will need to decide what might be rather helpful, and what just might
not be worth the additional effort.
Kerry Main
2016-07-16 19:30:45 UTC
Permalink
-----Original Message-----
David Froble via Info-vax
Sent: 16-Jul-16 2:49 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
[snip..]
I'm going to doubt driver development will occur on some large cluster running a
company. More likely on a small stand alone system. If so, just how much I/O
is the unload going to have to wait for? How much of a problem if something
unexpected does happen?
I don't do driver development, so this isn't a question I can address.
Those
doing so will need to decide what might be rather helpful, and what
just
might
not be worth the additional effort.
The real benefit is to do driver patching without rebooting - especially in
Prod environments.

This is just a part of the overall goal of not having to impact service
availability with a reboot for any OS / LP patches.

While this can be done with clusters (with a few caveats), it would be
really, really great to be able to do this on standalone systems as well.

As far as I know, no OS can do this today for both driver & kernel patches
on standalone systems. They all need a reboot.

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Simon Clubley
2016-07-16 20:07:50 UTC
Permalink
Post by Kerry Main
The real benefit is to do driver patching without rebooting - especially
in
Prod environments.
This is just a part of the overall goal of not having to impact service
availability reboot for any OS / LP patches.
While this can be done with clusters (with a few caveats), it would be
really, really great to be able to this on standalone systems as well.
Exactly. This is a _very_ valid reason for a full formally supported
driver unload and reload mechanism which exists independently of
whatever people like myself might desire.
Post by Kerry Main
As far as I know, no OS can do this today for both driver & kernel
patches
on standalone systems. They all need a reboot.
$ set response/mode=good_natured

Tut, Tut, Kerry! :-)

I noticed you combined kernel _and_ drivers in the same statement
above because you know that reloadable drivers are a fully supported
thing in some other environments.

However, to address the kernel issue, some microkernels are perfectly
capable of reloading many of their kernel components without a reboot.
For example, I believe QNX has this capability, but I can't remember
to what extent; I stopped using it when its owners changed the licence
terms.

I should also point out that Linux kernel components which are not
hardware device drivers but which are still implemented as loadable
modules (such as filesystem drivers for example) are also fully
reloadable if they are not in use.
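For readers following along without a Linux box handy, the "not in use"
check is the module reference count. A minimal character-device sketch,
assuming the usual kbuild environment: setting .owner is what makes the
kernel hold a reference for every open file, so rmmod is refused until
the count drops back to zero.

#include <linux/fs.h>
#include <linux/module.h>

/* Minimal character-device module: because fops.owner is THIS_MODULE,
 * the kernel takes a reference on every open and drops it on release,
 * so the module cannot be unloaded while anything still has it open.  */

static int demo_open(struct inode *inode, struct file *file)
{
        return 0;
}

static int demo_release(struct inode *inode, struct file *file)
{
        return 0;
}

static const struct file_operations demo_fops = {
        .owner   = THIS_MODULE,
        .open    = demo_open,
        .release = demo_release,
};

static int major;

static int __init demo_init(void)
{
        major = register_chrdev(0, "demo", &demo_fops);
        return (major < 0) ? major : 0;
}

static void __exit demo_exit(void)
{
        unregister_chrdev(major, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");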

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Kerry Main
2016-07-16 21:04:28 UTC
Permalink
-----Original Message-----
Simon Clubley via Info-vax
Sent: 16-Jul-16 4:08 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Kerry Main
The real benefit is to do driver patching without rebooting - especially
in
Prod environments.
This is just a part of the overall goal of not having to impact service
availability reboot for any OS / LP patches.
While this can be done with clusters (with a few caveats), it would be
really, really great to be able to this on standalone systems as well.
Exactly. This is a _very_ valid reason for a full formally supported
driver unload and reload mechanism which exists independently of
whatever people like myself might desire.
Post by Kerry Main
As far as I know, no OS can do this today for both driver & kernel
patches
on standalone systems. They all need a reboot.
$ set response/mode=good_natured
Tut, Tut, Kerry! :-)
I noticed you combined kernel _and_ drivers in the same statement
above because you know that reloadable drivers are a fully supported
thing in some other environments.
However, to address the kernel issue, some microkernels are perfectly
capable of reloading many of their kernel components without a reboot.
For example, I believe QNX has this capability, but I can't remember
to what extent; I stopped using it when it's owners changed the licence
terms.
Reloadable drivers are but one part of the requirement for prod Cust's.

The point is that what Customers are looking for is a prod OS platform
that does not need to be rebooted for ANY OS/LP/Security patches.

The drive for service availability is increasing all the time. Being unable
to reboot systems without impacting service availability is also a major
reason why many Customers do not apply patches in the timeframes that they
should.

I am not aware of any OS platform that can claim this "ANY" capability, but
that is the ultimate target for some future advanced OS ..
I should also point out that Linux kernel components which are not
hardware device drivers but which are still implemented as loadable
modules (such as filesystem drivers for example) are also fully
reloadable if they are not in use.
All OS platforms are getting "better" at reducing the number of reboots
required, but Linux still requires reboots for many kernel patches - just
look at the monthly security notices on the RH security web site that
state "server must be rebooted for this kernel patch to take effect.."

(sorry, could not resist ..)

:-)

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Simon Clubley
2016-07-17 01:01:23 UTC
Permalink
Post by Kerry Main
Reloadable drivers is but on part of the requirement for prod Cust's.
The point is that what Customers are looking for is a prod OS platform
that does not need to be rebooted for ANY OS/LP/Security patches.
The drive for service availability is increasing all the time. Being
unable
to reboot systems if it impacts service availability is also a major
reason
why many Customers do not apply patches in the timeframes that they
should.
I am not aware of any OS platforms that can state this ANY capability,
but
that is the ultimate target for some future advanced OS ..
Is Ksplice the kind of thing you are thinking of ?

See http://www.ksplice.com/ and https://en.wikipedia.org/wiki/Ksplice
for some information. There are also other efforts going on to implement
this type of functionality within the Linux kernel in addition to the
Oracle driven Ksplice efforts but I don't know what the current status
of those efforts is.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Scott Dorsey
2016-07-17 02:00:51 UTC
Permalink
Post by Kerry Main
The point is that what Customers are looking for is a prod OS platform
that does not need to be rebooted for ANY OS/LP/Security patches.
From the standpoint of reliability, I'd really want a statically-linked
kernel so that any changes to be made to any driver would require a new
sysgen. There is less to go wrong and this has security advantages.

However, this also means it's a real pain to make any driver or kernel
changes, and not only does the system need to be rebooted but there's a
considerable amount of building needed before rebooting.

A good solution would be a third ring in between system and user, so that
drivers couldn't corrupt the kernel or vice-versa. Sadly, though, that is
not likely to happen in the x86 world.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Stephen Hoffman
2016-07-18 13:44:23 UTC
Permalink
Post by Scott Dorsey
Post by Kerry Main
The point is that what Customers are looking for is a prod OS platform
that does not need to be rebooted for ANY OS/LP/Security patches.
From the standpoint of reliability, I'd really want a statically-linked
kernel so that any changes to be made to any driver would require a new
sysgen. There is less to go wrong and this has security advantages.
This is usually referred to as a unikernel in recent parlance. In
"recent" DEC environments, VAX ELN is one of the closest analogs.
Post by Scott Dorsey
However, this also means it's a real pain to make any driver or kernel
changes, and not only does the system need to be rebooted but there's a
considerable amount of building needed before rebooting.
Ayup. Not just driver or kernel changes, but also any hardware
changes. Which proved to be unpopular with many folks, as it tends to
require sufficiently identical hardware configurations.
Post by Scott Dorsey
A good solution would be a third ring in between system and user, so
that drivers couldn't corrupt the kernel or vice-versa. Sadly, though,
that is not likely to happen in the x86 world.
Some of the various approaches here — ignoring the VM negative rings that
already exist but aren't usually used for this — involve either SeL4
or Qubes OS, and also have a look at Intel SGX work and the
counterattacks against same.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2016-07-17 03:04:22 UTC
Permalink
-----Original Message-----
Simon Clubley via Info-vax
Sent: 16-Jul-16 9:01 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Kerry Main
Reloadable drivers is but on part of the requirement for prod Cust's.
The point is that what Customers are looking for is a prod OS platform
that does not need to be rebooted for ANY OS/LP/Security patches.
The drive for service availability is increasing all the time. Being
unable
to reboot systems if it impacts service availability is also a major
reason
why many Customers do not apply patches in the timeframes that they
should.
I am not aware of any OS platforms that can state this ANY
capability,
Post by Kerry Main
but
that is the ultimate target for some future advanced OS ..
Is Ksplice the kind of thing you are thinking of ?
Thx - had not heard of this, but yes, imho, this is heading to the point
where all OS's that want to claim "mission critical" need to be within the
next 5 years or so ..

Key point though - "Ksplice supports only the patches that do not make
significant semantic changes to kernel's data structures." [4]

http://manpages.ubuntu.com/manpages/trusty/man8/ksplice-create.8.html
See http://www.ksplice.com/ and https://en.wikipedia.org/wiki/Ksplice
for some information. There are also other efforts going on to
implement
this type of functionality within the Linux kernel in addition to the
Oracle driven Ksplice efforts but I don't know what the current status
of those efforts is.
Well, given the animosity I have heard between Linux and Oracle, I
seriously doubt Larry and company are going to cooperate too much,
so Linus and company will be on their own.

Larry likely already has an IP lawsuit waiting as well.

As the Wikipedia reference says, when Oracle bought Ksplice, they
dropped RH Linux right away.

Let's not forget Oracle Linux tends to lag RH Linux by a few releases,
and there was no love lost when Oracle spun off its own release.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2016-07-18 13:56:14 UTC
Permalink
Post by Kerry Main
Thx - had not heard of this,
Some previous threads on KSplice here:

https://groups.google.com/d/msg/comp.os.vms/ViltJzYBl1Q/6s6DNMLJAgAJ
https://groups.google.com/d/msg/comp.os.vms/tu2xX5lzdQ0/iWCss2ysIJwJ
Post by Kerry Main
but yes, imho, this is heading to point where all OS's who want to
claim "mission critical" need to be within the next 5 years or so ..
The Linux kernel has been working on and shipping infrastructure for
Kpatch and some other tools for a year or so, and it works slightly
differently than Oracle KSplice does.

https://en.wikipedia.org/wiki/Kpatch

This isn't an easy thing to retrofit, and it'll be a pain in the arse
to try implementing anything like this in OpenVMS. The code just isn't
structured for this sort of thing, akin to the "fun" that can be
involved when removing physical memory or such. And adding this
would likely run afoul of software compatibility, as swapping out a shareable
image that's in use is enough of a hassle already and swapping out or
patching parts of the kernel is a bigger project. Which is why I'd
expect to see VSI pursue rolling upgrades and VM migrations as their
approach for this problem, if (when?) they get around to looking at
this topic.
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2016-07-16 19:50:25 UTC
Permalink
Post by David Froble
At this point I've got to ask, do you really need the ROllsRoyce, or is the
Chevy perhaps good enough.
A full unload mechanism would be highly preferred, but doing a code-only
reload would also be very helpful and would significantly reduce the
number of reboots required (at least based on how a typical driver
development cycle goes for me).

That's also an OS-independent observation BTW; I haven't written a
VMS driver for a good number of years because I don't use VMS for
those kinds of low-level hardware hobbyist things anymore but my
Linux/RTOS/bare-metal observations are equally valid here because
the development cycle is pretty much OS-independent.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2016-07-17 09:01:55 UTC
Permalink
Post by Simon Clubley
Post by Chris Scheers
VAX did not support driver unload. It supported driver reload.
In particular, the data structures were not touched. Just the code was
reloaded.
If you made a change that required a data structure change, you had to
reboot.
Even with this restriction, I found driver reload worthwhile.
I knew there was something hackish about the VAX reload driver support
but I couldn't remember what it was. Thanks for the information.
Simon.
At this point I've got to ask, do you really need the RollsRoyce, or is the
Chevy perhaps good enough.
Something in the middle, i.e. something well engineered but not
ludicrously expensive. I've often thought that car analogies are wrong
when we are talking about operating system quality. Maybe we should
talk about commercial trucks instead. :-)
If the data structures don't change, then re-loading the code might be good
enough for some driver development. And if you got to change the data
structures, there is always the re-boot.
Yes. In principle this is no different from a business application
which uses shared data structures (anything from in memory tables to
transaction files); you can make as many mods as you wish to one
program's code, but as soon as you change the data structures you have
to recompile and link everything which uses them.
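A tiny illustration of the point, assuming nothing more than a shared C
structure definition rather than any particular application: as soon as a
field is added, sizes and offsets shift, so anything still built against
the old layout reads the wrong bytes until it is recompiled and relinked.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical shared record layout, e.g. from a common header file. */
struct record_v1 { int id; double amount; };
struct record_v2 { int id; char flag; double amount; };  /* field added */

int main(void)
{
    printf("v1: size %zu, amount at offset %zu\n",
           sizeof(struct record_v1), offsetof(struct record_v1, amount));
    printf("v2: size %zu, amount at offset %zu\n",
           sizeof(struct record_v2), offsetof(struct record_v2, amount));
    /* Any program still compiled against v1 reads 'amount' from the old
       offset and gets garbage, which is why every user of the structure
       has to be recompiled and relinked after the change.              */
    return 0;
}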
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
Jan-Erik Soderholm
2016-07-17 09:23:03 UTC
Permalink
Post by Paul Sture
Post by Simon Clubley
Post by Chris Scheers
VAX did not support driver unload. It supported driver reload.
In particular, the data structures were not touched. Just the code was
reloaded.
If you made a change that required a data structure change, you had to
reboot.
Even with this restriction, I found driver reload worthwhile.
I knew there was something hackish about the VAX reload driver support
but I couldn't remember what it was. Thanks for the information.
Simon.
At this point I've got to ask, do you really need the RollsRoyce, or is the
Chevy perhaps good enough.
Something in the middle, i.e. something well engineered but not
ludicrously expensive. I've often though that car analogies are wrong
when we are talking about operating system quality. Maybe we should
talk about commercial trucks instead. :-)
If the data structures don't change, then re-loading the code might be good
enough for some driver development. And if you got to change the data
structures, there is always the re-boot.
Yes. In principle this is no different from a business application
which uses shared data structures (anything from in memory tables to
transaction files); you can make as many mods as you wish to one
program's code, but as soon as you change the data structures you have
to recompile and link everything which uses them.
Not 100% sure when using relational databases (not Rdb anyway).
You can change the table layout (add new fields and do some types
of modifications to current fields) and only recompile those
applications that use the new fields.

If one application does "SELECT A, B, C from TABX", it doesn't
matter if there is a field D also in the database. Only apps
that use field D have to be updated and recompiled, probably
brand new apps anyway...

I'm not sure what you meant by "transaction files". Maybe
flat datafiles between different systems such as bank
transaction files. Then yes... :-)

Jan-Erik.
Paul Sture
2016-07-17 10:16:54 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Paul Sture
Post by Simon Clubley
Post by Chris Scheers
VAX did not support driver unload. It supported driver reload.
In particular, the data structures were not touched. Just the code was
reloaded.
If you made a change that required a data structure change, you had to
reboot.
Even with this restriction, I found driver reload worthwhile.
I knew there was something hackish about the VAX reload driver support
but I couldn't remember what it was. Thanks for the information.
Simon.
At this point I've got to ask, do you really need the RollsRoyce, or is the
Chevy perhaps good enough.
Something in the middle, i.e. something well engineered but not
ludicrously expensive. I've often though that car analogies are wrong
when we are talking about operating system quality. Maybe we should
talk about commercial trucks instead. :-)
If the data structures don't change, then re-loading the code might be good
enough for some driver development. And if you got to change the data
structures, there is always the re-boot.
Yes. In principle this is no different from a business application
which uses shared data structures (anything from in memory tables to
transaction files); you can make as many mods as you wish to one
program's code, but as soon as you change the data structures you have
to recompile and link everything which uses them.
Not 100% sure when using reletional databases (not Rdb anyway).
You can change the table layout (add new fields and do some types
of modifications to current fields) and only recomplie those
applications that uses the new fields.
I deliberately avoided the word "database" :-)
Post by Jan-Erik Soderholm
If one application does "SELECT A, B, C from TABX", it doesn't
matter if there is a field D also in the database. Only apps
that uses field D has to be updated and recompiled, probably
brand new apps anyway...
Even in a relational database you have a potential problem with
fields changing in size or allowed values.

<http://www.theregister.co.uk/2016/07/13/coding_error_costs_citigroup_7m/>

"It turned out that the error was a result of how the company
introduced new alphanumeric branch codes.

When the system was introduced in the mid-1990s, the program code
filtered out any transactions that were given three-digit branch
codes from 089 to 100 and used those prefixes for testing purposes.

But in 1998, the company started using alphanumeric branch codes as
it expanded its business. Among them were the codes 10B, 10C and so
on, which the system treated as being within the excluded range, and
so their transactions were removed from any reports sent to the SEC."
Post by Jan-Erik Soderholm
I'm not sure what you ment with "transaction files". Maybe
flat datafiles between different systems such as bank
transaction files. Then yes... :-)
Yes, data that flows from one system to another. That could be something
internal such as an interface from a sales system to an accounting one.
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
Robert A. Brooks
2016-07-14 02:55:15 UTC
Permalink
Does anybody think that driver development on x86 is going to be a recurring
issue for VSI? I think so.
Perhaps, but virtual machines make this a much simpler area, and we expect that
the virtual machine will be a frequently-used platform for VMS on X86_64.

Besides, David, wouldn't you like to see some time spent implementing an
unsigned data type in BASIC?

It's on my (long) to-do list. Its existence would have made implementing my
crude disk defragger written in BASIC 20+ years ago a bit easier.
--
-- Rob
David Froble
2016-07-14 05:16:17 UTC
Permalink
Post by Robert A. Brooks
Does anybody think that driver development on x86 is going to be a recurring
issue for VSI? I think so.
Perhaps, but virtual machines make this a much simpler area, and we
expect that the virtual machine will be a frequently-used platform for
VMS on X86_64.
I'd hope you're not putting too many eggs in that basket. In such a situation,
you're hostage to the VM, and whatever it runs on ....
Post by Robert A. Brooks
Besides, David, wouldn't you like to see some time spent implementing an
unsigned data type in BASIC.
After living without it for over 40 years, and coming up with workarounds, I'm
not in pain ....

:-)
Post by Robert A. Brooks
It's on my (long) to-do list. Its existence would have made
implementing my crude disk defragger written in BASIC 20+ years ago a
bit easier.
I am looking for many great things from VSI. I'd expect an unsigned data type
in Basic would be rather easy, since VMS already supports them.
Stephen Hoffman
2016-07-13 12:53:50 UTC
Permalink
Post by Craig A. Berry
A few seconds for my recent Mac laptop with an SSD drive or a Linux
instance under VirtualBox on that laptop. Minutes for any VMS system
I've ever seen, though I've never had my hands on any i2 or i4 systems
nor SSD boot volumes nor a particularly fast SAN or network boot option.
Ayup. Other systems boot faster or much faster than OpenVMS, and EFI
takes massively too long and spends far too much time chattering. An
OpenVMS on an i4-class box booting off a 3PAR SSD is very speedy and
masks many of the latent performance problems in the OpenVMS and the
local system startup, but the EFI sequence makes the whole thing
glacially slow. EFI is the waiting-for-the-TU58 experience for the
current generation of OpenVMS system managers. HDD bootstraps are
slower, and optical bootstraps are increasingly reminiscent of TK50
bootstraps.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2016-07-13 13:10:48 UTC
Permalink
-----Original Message-----
A. Berry via Info-vax
Sent: 12-Jul-16 11:22 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by John Reagan
Post by Craig A. Berry
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the roads
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either in
currency or in time and aggravation spent scrounging exotic parts. Just
as an example, the interminable time it takes an rx2660 to boot
from
DVD
Post by John Reagan
Post by Craig A. Berry
might not be quite so interminable if the USB mass storage support
had
Post by John Reagan
Post by Craig A. Berry
required a bit less wizardry to develop and debug.
In my opinion (disclosure: I've never developed a driver, but sat
near people who did), the "benefit" for driver reload was slow
booting. Back in the early VAX days, it took a while for a 780 to
boot from those RX01. I won't even discuss booting a 750 from TU58s.
I don't follow. VMS does not have unloadable drivers yet boots more
slowly than other OS's that do. If, twenty or thirty or forty years ago,
non-reloadable drivers made the boot faster, then clearly that was the
right thing to do at the time. Doesn't mean it should be done that way now.
Booting an rx2600 from a SATA DVD is not nearly as slow as booting an
rx2660 from a USB DVD. So a newer, somewhat faster machine takes much
longer to boot the same DVD only because, as far as I can tell, the boot
volume is USB-based rather than SATA (and there have been complaints
about the speed of the SATA driver as well).
Usually the USB-based DVD boot volume goes into mount verification
multiple times during the boot, which seems to be a testament to the
boot loader that it can keep going when the media from which it's
reading the boot image disappears periodically, but it's not such a
glowing recommendation of the ability of the USB driver to handle a boot
device. My now somewhat vague memory is that the USB driver was only
intended to deal with things like keyboards and mice and missile
launchers and never really tuned to move significant amounts of data
quickly.
My reason for bringing this up here is that, just maybe, if writing VMS
device drivers were easy and quick and fun, then this driver might have
seen more development than it did. If you can crash and reload the
driver 10 times a minute rather than a couple times an hour, you're
simply going to get farther in developing and testing that driver.
Post by John Reagan
Nowadays, what is the time for booting a standalone system? I'm not
talking about booting from a DVD (yep, that sucks), but booting from
a local disk.
A few seconds for my recent Mac laptop with an SSD drive or a Linux
instance under VirtualBox on that laptop. Minutes for any VMS system
I've ever seen, though I've never had my hands on any i2 or i4 systems
nor SSD boot volumes nor a particularly fast SAN or network boot option.
To quote Jim Gray (RIP - well known computer scientist) from 2006:

- Tape is Dead
- Disk is Tape
- Flash is Disk
- RAM Locality is King

Considering he wrote this 10 years ago, it's pretty impressive:
http://research.microsoft.com/~Gray/talks/Flash_Is_Good.ppt

So, this raises the question of what is the impact on boot, recovery
times and drivers when the 3D XPoint / PCM non-volatile memories are
available (apparently later this year)?

http://hexus.net/tech/news/storage/85940-micron-working-second-gen-3d-xpoint-non-volatile-memory/



These memory technologies have the same capacity as SSD, but according
to Intel/Micron - up to 1,000 times less latency.

As Gretzky used to say - skate to where the puck is going to be - not
where it is now.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
John Reagan
2016-07-13 13:53:47 UTC
Permalink
Post by Johnny Billquist
-----Original Message-----
A. Berry via Info-vax
Sent: 12-Jul-16 11:22 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by John Reagan
Post by Craig A. Berry
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the
roads
Post by John Reagan
Post by Craig A. Berry
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff"
whatever
Post by John Reagan
Post by Craig A. Berry
that might be. The costs of device acquisition might be lower,
either in
Post by John Reagan
Post by Craig A. Berry
currency or in time and aggravation spent scrounging exotic parts.
Just
Post by John Reagan
Post by Craig A. Berry
as an example, the interminable time it takes an rx2660 to boot
from
DVD
Post by John Reagan
Post by Craig A. Berry
might not be quite so interminable if the USB mass storage support
had
Post by John Reagan
Post by Craig A. Berry
required a bit less wizardry to develop and debug.
In my opinion (disclosure: I've never developed a driver, but sat
near people who did), the "benefit" for driver reload was slow
booting. Back in the early VAX days, it took a while for a 780 to
boot from those RX01. I won't even discuss booting a 750 from
TU58s.
I don't follow. VMS does not have unloadable drivers yet boots more
slowly than other OS's that do. If, twenty or thirty or forty years
ago,
non-reloadable drivers made the boot faster, then clearly that was the
right thing to do at the time. Doesn't mean it should be done that way now.
Booting an rx2600 from a SATA DVD is not nearly as slow as booting an
rx2660 from a USB DVD. So a newer, somewhat faster machine takes much
longer to boot the same DVD only because, as far as I can tell, the
boot
volume is USB-based rather than SATA (and there have been complaints
about the speed of the SATA driver as well).
Usually the USB-based DVD boot volume goes into mount verification
multiple times during the boot, which seems to be a testament to the
boot loader that it can keep going when the media from which it's
reading the boot image disappears periodically, but it's not such a
glowing recommendation of the ability of the USB driver to handle a
boot
device. My now somewhat vague memory is that the USB driver was only
intended to deal with things like keyboards and mice and missile
launchers and never really tuned to move significant amounts of data
quickly.
My reason for bringing this up here is that, just maybe, if writing
VMS
device drivers were easy and quick and fun, then this driver might
have
seen more development than it did. If you can crash and reload the
driver 10 times a minute rather than a couple times an hour, you're
simply going to get farther in developing and testing that driver.
Post by John Reagan
Nowadays, what is the time for booting a standalone system? I'm not
talking about booting from a DVD (yep, that sucks), but booting from
a local disk.
A few seconds for my recent Mac laptop with an SSD drive or a Linux
instance under VirtualBox on that laptop. Minutes for any VMS system
I've ever seen, though I've never had my hands on any i2 or i4 systems
nor SSD boot volumes nor a particularly fast SAN or network boot
option.
- Tape is Dead
- Disk is Tape
- Flash is Disk
- RAM Locality is King
http://research.microsoft.com/~Gray/talks/Flash_Is_Good.ppt
So, this raises the question of what is the impact on boot, recovery
times and drivers when the 3D XPoint / PCM non-volatile memories are
available (apparently later this year)?
http://hexus.net/tech/news/storage/85940-micron-working-second-gen-3d-xpoint-non-volatile-memory/
http://youtu.be/IWsjbqbkqh8
These memory technologies have same capacity as SSD, but according
to intel/Micron - up to 1,000 times less latency.
As Gretzky used to say - skate to where the puck is going to be - not
where it is now.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com
And don't forget HPE's Memristor and The Machine.
Kerry Main
2016-07-13 14:52:54 UTC
Permalink
-----Original Message-----
Reagan via Info-vax
Sent: 13-Jul-16 9:54 AM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Kerry Main
-----Original Message-----
Craig
Post by Kerry Main
A. Berry via Info-vax
Sent: 12-Jul-16 11:22 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
[snip]
Post by Kerry Main
- Tape is Dead
- Disk is Tape
- Flash is Disk
- RAM Locality is King
http://research.microsoft.com/~Gray/talks/Flash_Is_Good.ppt
So, this raises the question of what is the impact on boot, recovery
times and drivers when the 3D XPoint / PCM non-volatile memories are
available (apparently later this year)?
http://hexus.net/tech/news/storage/85940-micron-working-second-gen-3d-xpoint-non-volatile-memory/
Post by Kerry Main
http://youtu.be/IWsjbqbkqh8
These memory technologies have same capacity as SSD, but according
to intel/Micron - up to 1,000 times less latency.
As Gretzky used to say - skate to where the puck is going to be - not
where it is now.
Regards,
Kerry Main
Kerry dot main at starkgaming dot com
And don't forget HPE's Memresitor and The Machine.
Let's take this one step further -

The majority of apps today are bound together using the traditional
"N-Tier" model where the app servers and DB servers are separate
servers with each server sharded or with different data on each DB
server linked by network technologies (switches, routers, FW's, LB's
etc).

In the typical shared nothing model today, it is the App developers that
are responsible for addressing all of the HA/DT/data consistency and
network based data replication - not the OS/infr level as is the case with
a shared disk (everything) cluster model in OpenVMS.

Keep in mind larger #'s of cores, huge and extremely fast non-volatile
memories are about to hit the market as well.

As mentioned in another thread, in relative people times:
- cpu - cache = seconds
- cpu - memory = minutes
- cpu - SSD = 2 days
- network - network (read) = 3-4 days
- network - network (write) = months

Assumes a time proven design principle of "bring data closer to the
compute core".

If one recognizes that network LAN latency is one of the biggest bottlenecks
in solutions today (not to mention solution management of so many separate
individual systems), then is it not a reasonable assumption that N-Tier days
are numbered and, due to changes in technology, it's time to go back to 2/3
tier computing whereby the App and DB server are physically located on the
same OS instance, so that all App-DB references are either local memory or
direct IO references - not very long network references?

Imho, this is where App stacking is going to become much more critical
in the future .. yes, this is where the shared everything cluster model
that scales up first (64 cores, 1.5TB memory) then scales out as required
should be a good fit for not only transactional models, but also
analytical models as well.

Combine this change in industry trends with a new file system, new
TCPIP stack, new X86-64 platform, new virtualization technologies, then
put a new name on this ..

Blue oceans type stuff ..

[yes, insert standard disclaimer that there is lots to do.. everyone
gets it]

:-)

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2016-07-13 17:04:55 UTC
Permalink
Post by Kerry Main
So, this raises the question of what is the impact on boot, recovery
times and drivers when the 3D XPoint / PCM non-volatile memories are
available (apparently later this year)?
...
As Gretzky used to say - skate to where the puck is going to be - not
where it is now.
Where the puck will be? Well, let us start with some consideration of
where the puck is now. That's flash-targeted file systems (past
bus-based SSD), integrated relational databases, online patch
applications without reboots (reloadable OS giblets), online patch and
support access, loose clustering and integrated distributed LDAP
support (beyond passwords), sandboxes, disk partitioning, packaged DVCS
and (vastly improved) development tools, OO interfaces for OS APIs, and
a whole lot more... All of these are available right now, though not
(yet?) all on all systems.

Sorry VSI folks, but I just don't see y'all pushing the margins here
and pushing the boundaries and skating to where the puck will be,
skating out onto the deepest ultramarine oceans of future operating
system products. Not for a few years, at least. Y'all got more than
a little work on the foundations, before then. Building revenues,
too, but preferably in a way that does not tie your venture
inextricably into what the existing installed base wants without
consideration and without flexibility to provide more of what the
larger market wants from a server operating system.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2016-07-13 19:08:06 UTC
Permalink
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: 13-Jul-16 1:05 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Kerry Main
So, this raises the question of what is the impact on boot, recovery
times and drivers when the 3D XPoint / PCM non-volatile memories are
available (apparently later this year)?
...
As Gretzky used to say - skate to where the puck is going to be - not
where it is now.
Where the puck will be? Well, let us start with some consideration of
where the puck is now. That's flash-targeted file systems (past
bus-based SSD), integrated relational databases, online patch
applications without reboots (reloadable OS giblets), online patch and
support access, loose clustering and integrated distributed LDAP
support (beyond passwords), sandboxes, disk partitioning, packaged DVCS
and (vastly improved) development tools, OO interfaces for OS APIs, and
a whole lot more... All of these are available right now, though not
(yet?) all on all systems.
Sorry VSI folks, but I just don't see y'all pushing the margins here
and pushing the boundaries and skating to where the puck will be,
skating out onto the deepest ultramarine oceans of future operating
system products. Not for a few years, at least. Y'all got more than
a little work on the foundations, before then. Building revenues,
too, but preferably in a way that does not tie your venture
inextricably into what the existing installed base wants without
consideration and without flexibility to provide more of what the
larger market wants from a server operating system.
Yes, all of us here in c.o.v. have heard all about how much greener the
grass is on the other side.

Based on all of the statements here, it seems like the grass is getting
greener every day.

[btw, I am not part of VSI]

:-)

For a practical, real-world view of the hectic life in the distributed,
stateless, multiple-network-tier shared nothing model, check out
this presentation: (posted Feb 27, 2016)

https://www.infoq.com/presentations/data-integrity-distributed-systems?utm_campaign=rightbar_v2&utm_source=infoq&utm_medium=presentations_link&utm_content=link_text
(click on InfoQ start arrow)

While viewing, keep in mind that every server-to-server network update,
regardless of tier or server replication, has a relative IO latency in
people time of months, vs minutes and days for memory references
and/or local IO's.

All of a sudden, the grass is not as green as you think it is.

Multiple network tier and server replication solutions that most of the
Industry has in play now are where the puck is today.

Due to changing memory / storage technology enhancements, consolidating
some of the network tiers is where the puck is going to be with next gen
systems.

[again, yes lots of work to do ..]

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2016-07-14 15:42:55 UTC
Permalink
Post by Kerry Main
Yes, all of us here in c.o.v. have heard all about how much greener the
grass is on the other side.
This isn't about "greener", it's about getting features and
capabilities that are at least competitive with other systems, and with
marketable features into OpenVMS.
Post by Kerry Main
Based on all of the statements here, It seems like the grass is getting
greener every day.
Yes, the grass is increasingly getting green over there. Particularly
given major development efforts underway, and new efforts that are
starting up. That's the nature of business.

That's the nature of small teams competing with large teams, too — VSI
has to be exceedingly careful what they pick to work on, due to
staffing and scheduling and budget.

OpenVMS competes with other server platforms, and OpenVMS attracts new
and growing businesses and new applications and new deployments, or —
as the existing apps get ported or get retired — OpenVMS ends.

It's not clear what a post on the Druid real-time analytics engine, which
is based on Apache Hadoop, particularly has to do with this
thread. Did that all get ported to OpenVMS?

BTW: Shorter URL:
https://www.infoq.com/presentations/data-integrity-distributed-systems

Whether or not you're part of VSI won't change my opinions around the
competitive issues and limitations with OpenVMS, and where OpenVMS
falls short of other server platforms.
--
Pure Personal Opinion | HoffmanLabs LLC
Kerry Main
2016-07-14 19:56:32 UTC
Permalink
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: 14-Jul-16 11:43 AM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
[snip...]
It's not clear what a post on the Druid real-time analytics engine and
which is based on Apache Hadoop particularly has to do with this
thread. Did that all get ported to OpenVMS?
https://www.infoq.com/presentations/data-integrity-distributed-systems
The point was to show how the folks on the other side of the fence are
struggling big time supporting their distributed shared nothing env. from
a HA/DT/data consistency perspective.

Keep in mind it is the distributed Application developers that take care of
HA/DT/data consistency / server replication in their application code in the
shared nothing model.

With OpenVMS, as long as they follow established practices, the primary
HA/DT/data consistency responsibility is at the OS/infr level.

Perhaps it's just me, but I would prefer to see App coders focus on code
quality and optimizations and not have to worry about the impact of a
server failure or node add to the env. or local/multi-site data replication
synchronization.
Whether or not you're part of VSI won't change my opinions around the
competitive issues and limitations with OpenVMS, and where OpenVMS
falls short of other server platforms.
No one here is saying OpenVMS does not have features to add or improve
or optimize.

However, as noted in the above link, the same is true for the other platforms.

You are painting a rosy picture for the other platforms as if they are not also
struggling to support and/or maintain their environments.

Those of us who have been in the DC Operations trenches know this very
well. Heck, last year I was doing a major DC migration project and their
mission critical SOA environment (Solaris/Oracle/Linux) was so messed up
that there was really only 1 person who could restart their environment
from scratch properly. It was a "call Jxxx" whenever something crashed
and they had to restart their SOA/Oracle servers.

And this is not a unique example.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
David Froble
2016-07-15 03:51:49 UTC
Permalink
Post by Kerry Main
-----Original Message-----
Stephen Hoffman via Info-vax
Sent: 14-Jul-16 11:43 AM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
[snip...]
It's not clear what a post on the Druid real-time analytics engine and
which is based on Apache Hadoop particularly has to do with this
thread. Did that all get ported to OpenVMS?
https://www.infoq.com/presentations/data-integrity-distributed-
systems
The point was to show how the folks on the other side of the fence are
struggling big time supporting their distributed shared nothing env. from
a HA/DT/data consistency perspective.
Keep in mind it is the distributed Application developers that take care of
HA/DT/data consistency / server replication in their application code in the
shared nothing model.
With OpenVMS, as long as they follow established practices, the primary
HA/DT/data consistency responsibility is at the OS/infr level.
Perhaps it's just me, but I would prefer to see App coders focus on code
quality and optimizations and not have to worry about the impact of a
server failure or node add to the env. or local/multi-site data replication
synchronization.
Whether or not you're part of VSI won't change my opinions around the
competitive issues and limitations with OpenVMS, and where OpenVMS
falls short of other server platforms.
No one here is saying OpenVMS does not have features to add or improve
or optimize.
However, as noted in the above link, the same is true for the other platforms.
You are painting a rosy picture for the other platforms as if they are not also
struggling to support and/or maintain their environments.
Those of us who have been in the DC Operations trenches know this very
well. Heck, last year I was doing a major DC migration project and their
mission critical SOA environment (Solaris/Oracle/Linux) was so messed up
that there was really only 1 person who could restart their environment
from scratch properly. It was a "call Jxxx" whenever something crashed
and they had to restart their SOA/Oracle servers.
Which then begets the question, what happens when the beer truck gets Jxxx ???
Kerry Main
2016-07-15 11:55:56 UTC
Permalink
-----Original Message-----
David Froble via Info-vax
Sent: 14-Jul-16 11:52 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
[snip...]
Post by Kerry Main
well. Heck, last year I was doing a major DC migration project and their
mission critical SOA environment (Solaris/Oracle/Linux) was so
messed
up
Post by Kerry Main
that there was really only 1 person who could restart their
environment
Post by Kerry Main
from scratch properly. It was a "call Jxxx" whenever something crashed
and they had to restart their SOA/Oracle servers.
Which then begets the question, what happens when the beer truck gets Jxxx ???
The question came up all the time. I raised it and even formally documented
it as a significant risk for our project.

I just could not believe this mission critical app startup was not automated,
but apparently "there are too many factors to consider" ...

Part of the challenge was that other groups had similar issues with "hero"
resources.

As the old saying goes, "You can lead a horse to water .."

Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Simon Clubley
2016-07-13 19:16:30 UTC
Permalink
Post by Stephen Hoffman
Where the puck will be? Well, let us start with some consideration of
where the puck is now. That's flash-targeted file systems (past
bus-based SSD), integrated relational databases, online patch
applications without reboots (reloadable OS giblets), online patch and
support access, loose clustering and integrated distributed LDAP
support (beyond passwords), sandboxes, disk partitioning, packaged DVCS
and (vastly improved) development tools, OO interfaces for OS APIs, and
a whole lot more... All of these are available right now, though not
(yet?) all on all systems.
I just hope those new development tools don't turn out to be massive
resource hogs on VMS however...

I've just found out over the last week how long it now takes to build
the current version of LLVM plus the current versions of its core
projects (clang/libcxx/lld/lldb, etc) from source... :-(

To be fair, the core libraries were not _too_ bad; it's the projects
which seem to be the resource hogs.

I also don't like how what should be a generic project (it's a compiler
toolkit for goodness sake) has had references to things only found
in the latest versions of Linux inserted into it when building on Linux.
Once again, that appears to be more in the projects than in the core
libraries however.

[I've actually looked around for a more lightweight but modular compiler
toolkit over the last week but didn't find anything else suitable.]

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Stephen Hoffman
2016-07-13 12:43:44 UTC
Permalink
Post by John Reagan
In my opinion (disclosure: I've never developed a driver, but sat near
people who did), the "benefit" for driver reload was slow booting.
Back in the early VAX days, it took a while for a 780 to boot from
those RX01. I won't even discuss booting a 750 from TU58s. Nowadays,
what is the time for booting a standalone system? I'm not talking
about booting from a DVD (yep, that sucks), but booting from a local
disk.
Having developed and worked on more than a few drivers, that's part of
it. Maintaining your mental context and system context is another
factor. Faster system reboots do mean you no longer have time to go
get a snack at the cafeteria waiting for the box to run its self-tests
— though EFI approaches VAX levels of slow, if you have to power-cycle
to reset the hardware — but you still have to log in and re-establish
your testing environment.

The other factor here pointing toward unloadable or reloadable drivers
is online upgrades. Some of the common products and patches involve
drivers or execlets, and that means a production reboot.

Linux boxes can increasingly perform these upgrades online, and that
platform is one of the major competitors for OpenVMS.

OpenVMS seeks to solve this upgrade via clustering and rolling
upgrades, but then the system infrastructure and end-user and developer
documentation for setting that up and maintaining it is...
comparatively weak. Cluster failover and rolling upgrades are not as
easy as they should be. And clustering is Oracle-level expensive,
which means more than a few developers and end-user sites will avoid
that dependency.

Which means the sites have to reboot to patch, which means either
OpenVMS is down-revision and insecure, or there are outages, and
OpenVMS and the applications usually get blamed for that.

But this is one of many areas where OpenVMS has room for improvement,
reloadable drivers and easier upgrades are likely not a short-term
revenue opportunity, and VSI is clearly rather busy with the port...
--
Pure Personal Opinion | HoffmanLabs LLC
Chris Scheers
2016-07-14 02:31:25 UTC
Permalink
Post by John Reagan
Post by Craig A. Berry
Post by Jan-Erik Soderholm
Could VSI be helped by unloadable drivers? Maybe... But I do not
see it as a major priority amongst VSIs customers.
That makes no sense at all. That's like saying that paving the roads
helps the highway department but does nothing for the motorists. If
driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either in
currency or in time and aggravation spent scrounging exotic parts. Just
as an example, the interminable time it takes an rx2660 to boot from DVD
might not be quite so interminable if the USB mass storage support had
required a bit less wizardry to develop and debug.
In my opinion (disclosure: I've never developed a driver, but sat near people who did), the "benefit" for driver reload was slow booting. Back in the early VAX days, it took a while for a 780 to boot from those RX01. I won't even discuss booting a 750 from TU58s. Nowadays, what is the time for booting a standalone system? I'm not talking about booting from a DVD (yep, that sucks), but booting from a local disk.
When I did driver work on a 750 (circa VMS 4.7), it took about 20
minutes after a crash to get to the point where I could look at the
crash log to see what happened.

This was a 750 with the ROMs to directly boot an RA60.

When I did driver work on a VAXstation 4000-60 (Turbochannel no less!),
I used the fast boot options in both the BIOS and VMS and went from
power on to logged in with DECwindows in about 40 seconds.

I found the ability to reload a driver extremely helpful. There are
times you can't use it, but most driver changes can be tested immediately.

Of course, your driver must be written to correctly support this. So
there are additional requirements on the driver, too.
--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ***@applied-synergy.com
Fax: 817-237-3074
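[The "additional requirements on the driver" Chris mentions are easier to
picture on a platform that already supports unloading. A minimal sketch of a
Linux loadable-module skeleton, shown only as an illustration of the explicit
load and unload entry points such a driver must provide; it is not a VMS
driver and does no real work:]

    /* Minimal sketch of a loadable/unloadable driver skeleton on Linux,
     * shown only to illustrate the explicit load/unload hooks such a
     * driver must provide. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init demo_init(void)
    {
        pr_info("demo: loaded\n");      /* acquire resources here */
        return 0;                       /* nonzero would fail the load */
    }

    static void __exit demo_exit(void)
    {
        pr_info("demo: unloading\n");   /* release everything acquired in init */
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

[Everything the module acquires in its init routine must be released in its
exit routine, which is essentially the discipline described above: the driver
has to be written to support being torn down while the system keeps running.]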
Stephen Hoffman
2016-07-13 12:32:21 UTC
Permalink
Post by Craig A. Berry
Could VSI be helped by unloadable drivers? Maybe... But I do not see it
as a major priority amongst VSIs customers.
That makes no sense at all.
If you're running bespoke non-driver applications using Oracle
databases on AlphaServer DS20 configurations, it makes sense.
Post by Craig A. Berry
That's like saying that paving the roads helps the highway department
but does nothing for the motorists. If driver development were easier,
there just might be more or better drivers, thus enabling more things
to plug in and "do stuff" whatever that might be. The costs of device
acquisition might be lower, either in currency or in time and
aggravation spent scrounging exotic parts.
Ayup. But most sites aren't doing device driver work, so that's not
something they'll encounter, and — because the driver work all happens
elsewhere — not something they'll usually consider. The same approach
can arise around integrated databases — some sites have Oracle and use
that. But integrated products and layered products and third-party
products may not want to add that licensing cost and that dependency.
Integrating a common database was one of the motivations around RMS
back in the 1970s too, but I digress.

Reloadable drivers would make for easier upgrades with less application
disruption, though.
Post by Craig A. Berry
Just as an example, the interminable time it takes an rx2660 to boot
from DVD might not be quite so interminable if the USB mass storage
support had required a bit less wizardry to develop and debug.
rx2800 i4 DVD boot is glacial, too. Optical needs to die as a distro.
The EFI shenanigans are slow and complex and far too arcane too, but I
digress. Host-based InfoServer boot was pretty speedy, though I've
not run a comparison of that and DVD bootstraps on the same rx2800 i4
configuration.
--
Pure Personal Opinion | HoffmanLabs LLC
d***@gmail.com
2016-07-13 21:21:42 UTC
Permalink
Post by Stephen Hoffman
Host-based InfoServer boot was pretty speedy, though I've
not run a comparison of that and DVD bootstraps on the same rx2800 i4
configuration.
--
Pure Personal Opinion | HoffmanLabs LLC
The I4 Blades need a firmware update to prevent glacial download from the InfoServer and V8.4-2 for a couple of LOM-related boot fixes but network upgrade works just fine. The 2800 I4s will network boot V8.4-1H1.

My personal cluster contains a 2600 (fast to the boot menu), a 3600 (only slightly slower) and an I4 Blade (slowest to get to the boot menu, fastest to actually install.) I upgrade one or both of the older machines from the InfoServer weekly. All three are way faster than DVD installation.

Overall boot simplification is in the works.
Kerry Main
2016-07-14 01:05:22 UTC
Permalink
-----Original Message-----
dgordonatvsi--- via Info-vax
Sent: 13-Jul-16 5:22 PM
Subject: Re: [Info-vax] UNIX/Linux features I wish VMS had
Post by Stephen Hoffman
Host-based InfoServer boot was pretty speedy, though I've
not run a comparison of that and DVD bootstraps on the same rx2800 i4
configuration.
--
Pure Personal Opinion | HoffmanLabs LLC
The I4 Blades need a firmware update to prevent glacial download from
the InfoServer and V8.4-2 for a couple of LOM-related boot fixes but
network upgrade works just fine. The 2800 I4s will network boot V8.4-1H1.
My personal cluster contains a 2600 (fast to the boot menu), a 3600 (only
slightly slower) and an I4 Blade (slowest to get to the boot menu, fastest
to actually install.) I upgrade one or both of the older machines from the
InfoServer weekly. All three are way faster than DVD installation.
Overall boot simplification is in the works.
Good stuff ..

Btw, an interesting feature with the new HPE Synergy Blades (X86-64
only) is HW based OS provisioning:

HPE Synergy Image Streamer:



Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Stephen Hoffman
2016-07-14 15:49:17 UTC
Permalink
Post by d***@gmail.com
Host-based InfoServer boot was pretty speedy, though I've not run a
comparison of that and DVD bootstraps on the same rx2800 i4
configuration.
The I4 Blades need a firmware update to prevent glacial download from
the InfoServer and V8.4-2 for a couple of LOM-related boot fixes but
network upgrade works just fine.
BTW: Just went looking for the VSI OpenVMS IA64 Integrity firmware
recommendations yesterday. Came up empty at both HPE and VSI.
Post by d***@gmail.com
Overall boot simplification is in the works.
Related to the memory disk boot that's used for some configurations?
Or something new?

At some far future date, I'd hope for an in-(non-volatile-)memory,
reentrant-boot version of OpenVMS. But we're not there yet.

If you haven't already done so as part of this, have a look at the
caching bootstraps used on Linux and macOS.
--
Pure Personal Opinion | HoffmanLabs LLC
Paul Sture
2016-07-14 05:44:00 UTC
Permalink
If driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either
in currency or in time and aggravation spent scrounging exotic parts.
Agreed.
Just as an example, the interminable time it takes an rx2660 to boot
from DVD might not be quite so interminable if the USB mass storage
support had required a bit less wizardry to develop and debug.
I'm seeing this at the moment using a (non-VMS) bootloader which
delivers stuff at USB 1 speeds. Almost 10 minutes to boot, a good deal
of which is copying ~270MB into RAM. The same operation in a VM using an
emulated SCSI disk as the boot device is approximately 45 seconds.

Anything which makes it easier for driver developers to improve that
situation gets my vote.

This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
--
When you have eliminated the JavaScript, whatever remains must be an
empty page. Enable JavaScript to see Google Maps. -- Google Maps

Uh? -- me
John E. Malmberg
2016-07-14 12:51:04 UTC
Permalink
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.

And/Or you can use a CI tool like Jenkins to control tests on a VMS box
from a Linux box. Not too hard to set up with shared NFS storage.
See screen shot at https://sourceforge.net/projects/gnv/

Gnu Awk is being built directly from the 4.1-stable branch of the git
repository every time any developer makes a commit.

The rest are triggered by check-ins into the GNV mercurial repositories.

Regards,
-John
***@qsl.net_work
Paul Sture
2016-07-14 13:38:09 UTC
Permalink
Post by John E. Malmberg
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.
Sorry I wasn't clear enough here. I was thinking of the ability to
test drivers via Load and Unload functionality. Something like

* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot

I know one could make things restartable so that testing continues after
a reboot, but that means any other work a machine is performing gets
interrupted and maybe clobbered.
Post by John E. Malmberg
And/Or you can use a CI tool like Jenkins to control tests on a VMS box
from a Linux box. Not too hard to set up with shared NFS storage.
See screen shot at https://sourceforge.net/projects/gnv/
Did you resolve the problem(s) you were having with NFS not seeing files
there?
Post by John E. Malmberg
Gnu Awk is being built directly from the 4.1-stable branch of the git
repository every time any developer makes a commit.
The rest are triggered by check-ins into the GNV mercurial repositories.
That solves something I didn't like about the examples I saw in a
SaltStack demo. I wasn't happy that a mere modification of an Apache
config file (complete with any typos), for example, could trigger an
Apache restart. The concept of a commit or check-in triggering an
action sounds much better.

Version control is useful for system configuration files as well as
code :-)
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
John E. Malmberg
2016-07-14 23:09:46 UTC
Permalink
Post by Paul Sture
Post by John E. Malmberg
And/Or you can use a CI tool like Jenkins to control tests on a VMS box
from a Linux box. Not too hard to set up with shared NFS storage.
See screen shot at https://sourceforge.net/projects/gnv/
Did you resolve the problem(s) you were having with NFS not seeing files
there?
I updated the encompasserve note on that. I set the IA64 up as an NTP
client, and that has possibly made things better. I seem to be seeing
it less.

Also Jenkins is deleting and recreating the directory immediately before
making the ssh connection. I am going to look at a less aggressive
cleanup to see if that helps.

Regards,
-John
***@qsl.net_work
Bob Koehler
2016-07-18 14:59:00 UTC
Permalink
Post by Paul Sture
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
What the heck is a "driver crash"?

I've seen images crash, processes crash, and OS's crash. Never just
a driver.
David Froble
2016-07-18 15:21:38 UTC
Permalink
Post by Bob Koehler
Post by Paul Sture
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
What the heck is a "driver crash"?
I've seen images crash, processes crash, and OS's crash. Never just
a driver.
Probably because when one does it takes the OS with it ???

V***@SendSpamHere.ORG
2016-07-14 16:30:53 UTC
Permalink
Post by Paul Sture
Post by John E. Malmberg
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.
Sorry I wasn't clear enough here. I was thinking of the ability to
test drivers via Load and Unload functionality. Something like
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
If you crash, you won't need to unload and reload!!!
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Paul Sture
2016-07-14 18:32:22 UTC
Permalink
Post by Paul Sture
Post by John E. Malmberg
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.
Sorry I wasn't clear enough here. I was thinking of the ability to
test drivers via Load and Unload functionality. Something like
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
If you crash, you won't need to unload and reload!!!
Bad choice of word. How about "something gone wrong which only a
fresh start will fix"?
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
V***@SendSpamHere.ORG
2016-07-14 18:40:34 UTC
Permalink
Post by Paul Sture
Post by Paul Sture
Post by John E. Malmberg
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.
Sorry I wasn't clear enough here. I was thinking of the ability to
test drivers via Load and Unload functionality. Something like
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
If you crash, you won't need to unload and reload!!!
Bad choice of word. How about "something gone wrong which only a
fresh start will fix"?
I'd prefer the crash. I can then analyze the crash dump and see where "something
has gone wrong." I've been doing this for a quarter+ of a century; I've seen a
lot of crash dumps.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Paul Sture
2016-07-16 18:24:05 UTC
Permalink
Post by V***@SendSpamHere.ORG
Post by Paul Sture
Post by Paul Sture
Post by John E. Malmberg
Post by Paul Sture
This isn't just about driver developers themselves. I for one would be
more than willing to get involved in testing if I could simply script a
bunch of tests and leave them running without intervention.
DecSet includes DEC Test for doing that.
Sorry I wasn't clear enough here. I was thinking of the ability to
test drivers via Load and Unload functionality. Something like
* No driver crash -> carry on testing
* Driver crash -> unload and reload, carry on testing without a
reboot
If you crash, you won't need to unload and reload!!!
Bad choice of word. How about "something gone wrong which only a
fresh start will fix"?
I'd prefer the crash. I can then analyze the crash dump and see where "something
has gone wrong." I've been doing this for a quarter+ of a century; I've seen a
lot of crash dumps.
So you more than most would benefit from the ability to get a dump and
reload a driver without having to wait for a reboot :-)

Yes, as part of a driver unload/reload I would envisage some form of
dump snapshot feature. Might be tricky, but we already know that.
--
A software-enabled, network connected, crowd funded, smart toaster is,
when all is said and done, still just a toaster. -- Elad Gil
Stephen Hoffman
2016-07-18 13:09:19 UTC
Permalink
Post by Paul Sture
So you more than most would benefit from the ability to get a dump and
reload a driver without having to wait for a reboot :-)
Yes, as part of a driver unload/reload I would envisage some form of
dump snapshot feature. Might be tricky, but we already know that.
The existing driver debug and driver logging support is weak and
arcane, but it works.

If the remote access capabilities and the system code debugger are
updated for virtual machine support as part of x86-64 port, then the
basic abilities for local debugging will be present; the VM guest can
or will crash, though the other VMs remain. It wouldn't surprise me to
learn this is part of the porting plan, if it's not already available.

Getting OpenVMS to broadcast its outages and crashes to a central
server would be useful both for local users and for VSI. This logging
can happen now with the CLUE-related bits but it's not nearly as
integrated nor as central to how OpenVMS works and is maintained as it
should be, and it's also feasible to roll your own crash logging.
{Insert "made from scratch" joke}



Typo in the debugger manual: "See the OVMS_ALPHA_SYS_ANALYS_TLS_MAN for
information on debugging operating system code." Whoops.
--
Pure Personal Opinion | HoffmanLabs LLC
Craig A. Berry
2016-07-16 02:45:59 UTC
Permalink
Post by Paul Sture
If driver development were easier, there just might be more or better
drivers, thus enabling more things to plug in and "do stuff" whatever
that might be. The costs of device acquisition might be lower, either
in currency or in time and aggravation spent scrounging exotic parts.
Agreed.
Just as an example, the interminable time it takes an rx2660 to boot
from DVD might not be quite so interminable if the USB mass storage
support had required a bit less wizardry to develop and debug.
I'm seeing this at the moment using a (non-VMS) bootloader which
delivers stuff at USB 1 speeds. Almost 10 minutes to boot, a good deal
of which is copying ~270MB into RAM. The same operation in a VM using an
emulated SCSI disk as the boot device is approximately 45 seconds.
Anything which makes it easier for driver developers to improve that
situation gets my vote.
Indeed. I had the opportunity today to measure the time from choosing
"Internal Bootable DVD" from the boot options menu to the seeing the VMS
installer menu appear. Here's what I got in minutes and seconds:

rx2600/DQDRIVER: 4:15
rx2660/DNDRIVER: 8:24

So for the newer, nominally faster system with a USB-based DVD drive, it
takes almost exactly twice as long to boot the same DVD as it does for
the older, slower system with the IDE-based DVD drive.
Simon Clubley
2016-07-12 19:16:47 UTC
Permalink
Post by Paul Sture
Post by Johnny Billquist
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
Well for one thing, you can reload subsystems (after an update for
example) without needing a reboot to reload their drivers.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Paul Sture
2016-07-12 21:06:55 UTC
Permalink
Post by Simon Clubley
Post by Paul Sture
Post by Johnny Billquist
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
Well for one thing, you can reload subsystems (after an update for
example) without needing a reboot to reload their drivers.
That's probably where I noticed its absence with VMS. On a PDP-11
unloading an unneeded driver was also a quick way to free some
memory, useful if things were tight.
--
When you have eliminated the JavaScript, whatever remains must be an
empty page. Enable JavaScript to see Google Maps. -- Google Maps

Uh? -- me
Stephen Hoffman
2016-07-12 20:33:44 UTC
Permalink
Post by Paul Sture
Post by Johnny Billquist
Post by Stephen Hoffman
The ability to unmap and reload shareable images would be nice, but
that — much like unloading and reloading device drivers — isn't
something I'd expect anytime soon.
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
NumberOfTimesThatJohnnyWasRemindedThatOpenVMSIsNotRSX++
Post by Paul Sture
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
In recent years, layered product software upgrades and driver patches
(preferably) without reboots, mostly. For uptime, for those folks
that haven't yet dispensed with the brute-force single-server
application designs. For driver development, recent OpenVMS boxes tend
to boot fast enough that this is less of a time sink than it once was,
and at least some of the associated debugging and mental context can
(usually) be maintained through the use of the System Code Debugger.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2016-07-15 19:18:33 UTC
Permalink
Post by Stephen Hoffman
Post by Paul Sture
Post by Johnny Billquist
Post by Stephen Hoffman
The ability to unmap and reload shareable images would be nice, but
that — much like unloading and reloading device drivers — isn't
something I'd expect anytime soon.
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.
Is this really the case? You cannot unload and reload device drivers
in VMS? Has that always been the case?
It sounds surprising, considering that RSX can...
NumberOfTimesThatJohnnyWasRemindedThatOpenVMSIsNotRSX++
Thanks for the reminder.
I'll also add:

NumberOfThingsThatRSXCanDoThatVMSCant++

:-)

Considering that the internal structures and design are so similar, it
should not have been that difficult to carry this functionality forward
into VMS, which is more what I was thinking.
Post by Stephen Hoffman
Post by Paul Sture
It's a feature I certainly missed in VMS after using it in RT-11 and RSX.
However, I can no longer remember _why_ I found it such a desirable
feature. I know I did, but the why escapes me.
In recent years, layered product software upgrades and driver patches
(preferably) without reboots, mostly. For uptime, for those folks
that haven't yet dispensed with the brute-force single-server
application designs. For driver development, recent OpenVMS boxes tend
to boot fast enough that this is less of a time sink than it once was,
and at least some of the associated debugging and mental context can
(usually) be maintained through the use of the System Code Debugger.
There are plenty of times and examples where it can be useful.

My TCP/IP for RSX is all in device drivers, and I've often spotted something
I wanted to fix, and just unloaded and reloaded the driver to get that done,
without having to reboot. Both much faster and much nicer.

Johnny
Simon Clubley
2016-07-12 19:13:46 UTC
Permalink
Post by Johnny Billquist
Maybe hijacking the thread here, but I've seen the reference about
device driver unloading/loading in VMS several times now, suggesting
that it's not possible.
Is this really the case? You cannot unload and reload device drivers in
VMS? Has that always been the case?
It's been the case from Alpha onwards. VAX apparently had some sort
of hackish basic reload mechanism which apparently worked most of
the time but that was dropped during the VAX to Alpha transition.

However, I've no experience with writing drivers for VAX; I've only
ever developed a driver on Alpha and it was a _really_ painful and
slow experience compared to writing one for Linux.
Post by Johnny Billquist
It sounds surprising, considering that RSX can...
Tell me about it...

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
hb
2016-07-12 18:38:31 UTC
Permalink
Post by Stephen Hoffman
Jacketing the calls would allow the target sharables to be mapped out,
but then you're headed toward rolling your own RPC / X / microservice
scheme.
Showing my native tongue: what is meant by "mapped out" in this
context? Unmap? Jacketing the calls does not guarantee that the
shareable image is "freshly" activated. It can only guarantee that
lib$fis will see this image for the first time. When lib$fis requests
activation of the image, the image activator may find it in its list of
already activated images and just return the requested info. (And then
lib$fis will read the GST from the image file to determine/calculate the
VA of whatever the jacket wanted to be lib$fis-ed.)
Stephen Hoffman
2016-07-12 20:11:24 UTC
Permalink
Post by hb
Post by Stephen Hoffman
Jacketing the calls would allow the target sharables to be mapped out,
but then you're headed toward rolling your own RPC / X / microservice
scheme.
Showing my native tongue: what is meant with "mapped out" in this
context? Unmap?
Deleted from virtual address space.
Post by hb
Jacketing the calls does not guarantee that the shareable image is
"freshly" activated.
The mechanism used to unmap the shareables in the cases I've
implemented was (grumble) process-based rundown, using one or both of
shared memory or network connections for communications. Ugly, but
functional. Hence the references to RPC and microservices. Which can
freshly activate the shareable each time, as it's a process. I'd
likely structure the application code differently these days, but
that's another discussion.
--
Pure Personal Opinion | HoffmanLabs LLC
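[A minimal sketch of that shape of solution, assuming a hypothetical "worker"
image set up as a foreign command that takes a request on its command line
and prints one result line: each call runs in its own subprocess, so the
worker's shareable images are activated fresh every time, at the cost of
process-creation overhead.]

    /* Minimal sketch of the process-per-call approach described above.
     * "worker" is a hypothetical foreign command; each invocation is a
     * fresh process, so its shareable images are mapped anew on every
     * call.  Error handling is simplified. */
    #include <stdio.h>

    int call_worker(const char *request, char *result, size_t result_len)
    {
        char command[256];
        FILE *pipe;

        snprintf(command, sizeof command, "worker %s", request);

        pipe = popen(command, "r");      /* spawn a subprocess to run it */
        if (pipe == NULL)
            return -1;

        if (fgets(result, (int)result_len, pipe) == NULL)
            result[0] = '\0';            /* no output: return empty result */

        return pclose(pipe);             /* reap the subprocess */
    }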
m***@gmail.com
2016-07-13 08:12:29 UTC
Permalink
Post by hb
Post by m***@gmail.com
In practice it meant that in the calling routine there was a call to
X that would go to a local interface routine called X, which would
take the arguments passed to it and use them to call the real X in a
shareable image.
"local" means local to the main image, statically linked.
Just curious, what was the reason(s) - besides the monitoring, that
probably means intercepting the calls - to implement such a dynamic
loader in your main image? As you describe, at link time of the main
image you already knew the names of the routines.
Obviously you didn't have to maintain the order of the entries in the
symbol vector of the shareable image. But in one or another way you had
to define symbol vector entries, anyway. OK, I know that there are
ways/tools/... to "export everything", which is not recommended for VMS.
And as you probably know with lib$fis, there is no check for GSMATCH
(SYSTEM-F-SHRIDMISMAT).
Or was the reason to avoid overhead like image relocation, fixups for
all/many shareable images your main image would link against?
The reason was pretty simple. It was a large suite of programs, many of which could do a lot of different things, and updates/enhancements were very common.

Monolithic images would have been messy because changes to any commonly called routine would have meant relinking everything that called that routine, then testing and handing over every one of those relinked modules to the production system. Monolithic images also would have meant very large images, slow activation if not installed and very memory hungry. Activate-on-demand avoided those issues.
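[A minimal sketch of one such jacket routine, with illustrative names
(MYAPP_SHR, REAL_X) and error handling reduced to passing back the condition
code: the statically linked X resolves the real routine on first call with
LIB$FIND_IMAGE_SYMBOL, so the shareable image is only activated when, and if,
X is actually used.]

    /* Minimal sketch of the jacket-routine scheme described above.
     * MYAPP_SHR and REAL_X are illustrative; a real jacket would also
     * guard against concurrent first calls and enforce its own GSMATCH
     * conventions, since LIB$FIND_IMAGE_SYMBOL does not check them. */
    #include <descrip.h>
    #include <lib$routines.h>

    typedef int (*x_routine_t)(int);

    int X(int arg)
    {
        static x_routine_t real_x = 0;

        if (real_x == 0)
        {
            $DESCRIPTOR(image_dsc,  "MYAPP_SHR");   /* shareable image name  */
            $DESCRIPTOR(symbol_dsc, "REAL_X");      /* universal symbol name */
            int symval = 0;
            int status = lib$find_image_symbol(&image_dsc, &symbol_dsc,
                                               &symval, 0, 0);
            if (!(status & 1))                      /* low bit clear: failure */
                return status;                      /* hand back the condition code */

            real_x = (x_routine_t)symval;           /* 32-bit address model assumed */
        }
        return real_x(arg);                         /* call the real routine */
    }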
Kerry Main
2016-07-10 07:26:03 UTC
Permalink
-- Pyffle HQ -=- London, England -=- http://pyffle.com