Discussion:
Large mailboxes
Marc Van Dyck
2020-11-26 16:30:29 UTC
Unless I misread it, the OpenVMS documentation does not state any size
limitation for permanent mailboxes. There is apparently just a
limitation on the size of each message; the number of messages can
apparently be arbitrarily high. I made a call to $CREMBX to
create a mailbox for 1,000,000 messages of 200 bytes each, and VMS did
not complain.
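A minimal sketch of that kind of $CREMBX call in C (the logical name and
quotas are illustrative, not Marc's actual code; permanent mailboxes
need the PRMMBX privilege):

#include <starlet.h>     /* sys$crembx */
#include <descrip.h>     /* $DESCRIPTOR */
#include <stdio.h>

int main(void)
{
    $DESCRIPTOR(lognam, "BIG_MBX");    /* hypothetical logical name */
    unsigned short chan;
    unsigned int status;

    status = sys$crembx(1,             /* prmflg: permanent mailbox */
                        &chan,         /* returned channel          */
                        200,           /* maxmsg: bytes per message */
                        200000000,     /* bufquo: ~1M x 200 bytes   */
                        0, 0,          /* promsk, acmode: defaults  */
                        &lognam);
    if (!(status & 1))
        printf("sys$crembx failed, status = %08X\n", status);
    return status;
}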

I understand from a Digital Technical Journal article that mailbox
space is not reserved at mailbox creation time, but allocated each time
a new message is dropped in the mailbox. This I have been able to verify
by loading a mailbox and watching the values in $ SHOW MEM/POOL decrease
accordingly (loading chunks of 10,000 messages at a time...).

The same documentation also says that it is possible to obtain the
number of messages stored in a mailbox with a call to $GETDVI,
specifying the item DVI$_DEVDEPEND. This call returns a longword
of which only the low-order two bytes are significant, so the maximum
number of outstanding messages that can be reported is 65,535.
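A minimal sketch of that probe in C, assuming a channel already assigned
to the mailbox and a C environment that provides iledef.h:

#include <starlet.h>     /* sys$getdviw */
#include <dvidef.h>      /* DVI$_DEVDEPEND */
#include <iledef.h>      /* ILE3 item-list entries */

static unsigned int mbx_msg_count(unsigned short chan)
{
    unsigned int devdepend = 0;
    unsigned short iosb[4];
    ILE3 itmlst[2] = {
        { sizeof devdepend, DVI$_DEVDEPEND, &devdepend, 0 },
        { 0, 0, 0, 0 }               /* list terminator */
    };
    unsigned int status = sys$getdviw(0, chan, 0, itmlst,
                                      (void *)iosb, 0, 0, 0);
    if (!(status & 1))
        return 0;
    return devdepend & 0xFFFF;       /* only the low word is used */
}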

I have been able to verify that too, with the same test as above:
the number of outstanding messages growing steadily to 60,000 and
then dropping back to 4,000 or so after the next chunk of 10,000
messages was loaded.

So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?

Many thanks in advance for your help!
--
Marc Van Dyck
Arne Vajhøj
2020-11-26 18:27:25 UTC
Post by Marc Van Dyck
Unless I misread it, the OpenVMS documentation does not state any size
limitation for permanent mailboxes. There is apparently just a
limitation on the size of each message; the number of messages can
apparently be arbitrarily high. I made a call to $CREMBX to
create a mailbox for 1,000,000 messages of 200 bytes each, and VMS did
not complain.
You mean you tried maxmsg=200 and bufquo=200_000_000?
Post by Marc Van Dyck
I understand from a Digital Technical Journal article that mailbox space
is not reserved at mailbox creation time, but allocated each time a
new message is dropped in the mailbox. This I have been able to verify
by loading a mailbox and watching the values in $ SHOW MEM/POOL decrease
accordingly (loading chunks of 10,000 messages at a time...).
The same documentation also says that it is possible to obtain the
number of messages stored in a mailbox with a call to $GETDVI,
specifying the item DVI$_DEVDEPEND. This call returns a longword
of which only the low-order two bytes are significant, so the maximum
number of outstanding messages that can be reported is 65,535.
The SYS$QIO(W) IO$_SENSEMODE IOSB also has only 16 bits for the
number of messages in the mailbox.
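A minimal sketch of that sense-mode probe in C (assuming a channel
already assigned to the mailbox; the message count comes back in the
second word of the IOSB):

#include <starlet.h>     /* sys$qiow */
#include <iodef.h>       /* IO$_SENSEMODE */

static unsigned short mbx_depth(unsigned short chan)
{
    unsigned short iosb[4];
    unsigned int status = sys$qiow(0, chan, IO$_SENSEMODE, (void *)iosb,
                                   0, 0, 0, 0, 0, 0, 0, 0);
    return (status & 1) ? iosb[1] : 0;   /* 16-bit message count */
}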
Post by Marc Van Dyck
I have been able to verify that too, with the same test as above:
the number of outstanding messages growing steadily to 60,000 and
then dropping back to 4,000 or so after the next chunk of 10,000 messages
was loaded.
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.

But it seems likely that no one imagined it being a problem.

It uses non-paged pool. How big was the available non-paged pool on
VAX systems?

My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.

Arne
Marc Van Dyck
2020-11-27 17:11:29 UTC
Post by Arne Vajhøj
You mean you tried maxmsg=200 and bufquo=200_000_000?
Yes. And I loaded more than 100,000 messages into it, and the system never
complained; I always got SS$_NORMAL back...
--
Marc Van Dyck
Arne Vajhøj
2020-11-27 22:41:26 UTC
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.

If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).

Arne
Jan-Erik Söderholm
2020-11-28 02:00:06 UTC
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Arne
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.

If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
Arne Vajhøj
2020-11-28 02:47:28 UTC
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
Yes. That was sort of my point.

But if the writer is not blocking and the reader is
slower than the writer then some messages can queue up.

Arne
Jan-Erik Söderholm
2020-11-28 13:22:30 UTC
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
Yes. That was sort of my point.
But if the writer is not blocking and the reader is
slower than the writer then some messages can queue up.
Arne
Right. But now, I didn't really understand the original question.
I thought that, in a normal setup, if the "reader" stops reading
from the queue, the writer can rather quickly get into RWMBX (I think
that is the state) where the process is blocked due to a "full mailbox".
And that is *way* before you reach 64K messages.

Where the message is non-critical and a lost message is no big deal, we
write using IO$M_NORSWAIT so that the writer does not go into RWMBX if the
mailbox device is "full". And we also write using IO$M_NOW.

IO$M_NOW — Completes the I/O operation immediately without waiting for
another process to read the mailbox message.

IO$M_NORSWAIT — If the mailbox is full, the I/O operation fails with a
status return of SS$_MBFULL rather than placing the process in resource
wait mode. (We just ignore the SS$_MBFULL return code...)
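A minimal sketch of that drop-on-full write in C, assuming a channel
already assigned to the mailbox (not Jan-Erik's actual code):

#include <starlet.h>     /* sys$qiow */
#include <iodef.h>       /* IO$_WRITEVBLK, IO$M_NOW, IO$M_NORSWAIT */
#include <ssdef.h>       /* SS$_MBFULL */
#include <string.h>

static int mbx_write_or_drop(unsigned short chan, const char *msg)
{
    unsigned short iosb[4];
    unsigned int status = sys$qiow(0, chan,
                                   IO$_WRITEVBLK | IO$M_NOW | IO$M_NORSWAIT,
                                   (void *)iosb, 0, 0,
                                   (char *)msg, strlen(msg), 0, 0, 0, 0);
    if (status & 1)
        status = iosb[0];            /* final status is in the IOSB */
    if (status == SS$_MBFULL)
        return 0;                    /* mailbox full: just drop it  */
    return (status & 1);
}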

Jan-Erik.
Arne Vajhøj
2020-11-28 23:41:34 UTC
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
Yes. That was sort of my point.
But if the writer is not blocking and the reader is
slower than the writer then some messages can queue up.
Right. But now, I didn't really understand the original question.
I thought that, in a normal setup, if the "reader" stops reading
from the queue, the writer can rather quickly get into RWMBX (I think
that is the state) where the process is blocked due to a "full mailbox".
And that is *way* before you reach 64K messages.
Yes - that is the normal scenario. But Marc had configured his
mailbox for a million messages.

Arne
Dave Froble
2020-11-28 03:08:19 UTC
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Arne
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
That is how I use them. Trust the mailbox to hold data, and what
happens when there is a system crash?
Post by Jan-Erik Söderholm
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
For that type of volume, perhaps a database?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Jan-Erik Söderholm
2020-11-28 13:09:53 UTC
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Marc Van Dyck
So my question is, why this limitation? Is it just because, when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages? Or am I really going
to break something other than this counter if I try loading more
than 64k messages?
I don't know.
But it seems likely that no one imagined it being a problem.
It uses non-paged pool. How big was the available non-paged pool on
VAX systems?
My guess is that available non-paged pool divided by a
normal message size would fit into 16 bits.
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Arne
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
That is how I use them.  Trust the mailbox to hold data, and what happens
when there is a system crash?
Post by Jan-Erik Söderholm
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
For that type of volume, perhaps a database?
I was thinking more along the lines of a message queuing tool that has
persistent storage for the queue content. There are many; two
older ones are DEC/MessageQ and IBM MQ. And there are a number
of open source tools today that serve the same purpose.
Arne Vajhøj
2020-11-28 18:55:32 UTC
Post by Jan-Erik Söderholm
Post by Dave Froble
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
For that type of volume, perhaps a database?
Something is needed on top of a database to
mimic a mailbox / message queue.
Post by Jan-Erik Söderholm
I was thinking more along the lines of a message queuing tool that has
persistent storage for the queue content. There are many; two
older ones are DEC/MessageQ and IBM MQ. And there are a number
of open source tools today that serve the same purpose.
Yes.

ActiveMQ/ArtemisMQ comes to mind.

They can be configured to use a database for persistence.

Arne
Dave Froble
2020-11-28 20:07:31 UTC
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
Post by Dave Froble
Post by Jan-Erik Söderholm
Post by Arne Vajhøj
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousand messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Isn't the normal way to use a VMS mailbox as an on-line interface
between one (or more) senders and one receiver? That is, the
mailbox as such is never intended to "store" anything apart from
a very short time during the transmission.
If the intention is to buffer 10000's or 100000's of messages,
VMS mailboxes look like the wrong tool from the toolbox...
For that type of volume, perhaps a database?
Something is needed on top of a database to
mimic a mailbox / message queue.
Ya know, I didn't specify some generic database product. One might
consider, from prior discussions, that anything storing data could be
called a database, right?

I have a maybe 40-year-old product that is a messaging system, with its
own custom database, designed for the product. Persistent storage for a
messaging system, as Jan-Erik points out below, can be sort of important.
Post by Arne Vajhøj
Post by Jan-Erik Söderholm
I was thinking more along the lines of a message queuing tool that has
persistent storage for the queue content. There are many; two
older ones are DEC/MessageQ and IBM MQ. And there are a number
of open source tools today that serve the same purpose.
Yes.
ActiveMQ/ArtemisMQ comes to mind.
They can be configured to use a database for persistence.
Arne
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Marc Van Dyck
2020-11-28 17:37:37 UTC
Post by Arne Vajhøj
I could add that as a rule of thumb I would
only use VMS mailboxes (or Windows pipes or
*nix unix sockets) to buffer hundreds or a
few thousands of messages.
If I needed hundreds of thousands or
millions I would look for a message queue
(and if I needed billions I would look at
Kafka).
Arne
Long story... Do you remember DECps, the POLYCENTER performance
management product? Well, it is still around, and we are still using
it. CA ported it to Itanium 12 years ago, but hasn't done any functional
updates since receiving it from DEC. There are large parts of it
that can't be used anymore because it has no idea about what current
hardware can do.

So now we think it's time to replace it, for these two reasons:
- it is functionally obsolete (doesn't know anything about TCP/IP and
fibre channel, for example)
- it will probably never be ported to x86

We use it for the following purposes:
- Display system performance in real-time (dashboard)
- Publish performance graphs on a web site
- Deep dive into performance related issues (with the Motif interface)
- Long term capacity planning (export data to Excel via CSV files)

The last 3 uses can easily be taken over by Perfdat, but not the first
one. Perfdat locks the file it uses to store collected data, so there is
no way to exploit the data in real time. T4 and TDC do the same. So we
decided to roll our own...

Collecting the data we need is not difficult, either from a DCL loop
or from a program issuing $GETRMI calls. To display them, we're going to
use an in-house developed web service that can store and display any
metric that one cares to throw at it. The interface to send the data is
cURL, which under VMS has a tendency to get stuck when used intensively.
So we'll decouple the collector and the sender, and put a large mailbox
between the two. If the mailbox is large enough, we can tolerate a stall
of the sender for several minutes, and it will catch up with the mailbox
contents very quickly once restarted.
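A minimal sketch of the sender side of that design in C (the logical
name is illustrative, and the curl hand-off is left as a stub):

#include <starlet.h>     /* sys$assign, sys$qiow */
#include <descrip.h>
#include <iodef.h>

#define MSG_MAX 200

static void sender_loop(void)
{
    $DESCRIPTOR(mbxname, "PERF_MBX");  /* hypothetical mailbox name */
    unsigned short chan, iosb[4];
    char buf[MSG_MAX];

    if (!(sys$assign(&mbxname, &chan, 0, 0) & 1))
        return;
    for (;;) {
        /* blocks until a message arrives */
        unsigned int status = sys$qiow(0, chan, IO$_READVBLK,
                                       (void *)iosb, 0, 0,
                                       buf, MSG_MAX, 0, 0, 0, 0);
        if (!(status & 1) || !(iosb[0] & 1))
            break;
        /* iosb[1] holds the byte count read; hand buf off to the
           HTTP sender (the curl step) here */
    }
}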

This kind of data does not need to survive a reboot (there will always
be Perfdat to look at what happened anyway), and a mailbox is a cheap
and easy-to-use buffer for this kind of purpose. We have plenty of
memory to dedicate to that. Probably not what was in the heads of the
people who designed it 40 years ago, but I don't see why it would not work.
If there really is a hard limit of 64k entries, we'll just put
some mailboxes in parallel. My question was basically to determine
whether I need to do that or not.
--
Marc Van Dyck
Stephen Hoffman
2020-11-28 18:40:21 UTC
Post by Marc Van Dyck
Collecting the data we need is not difficult, either from a DCL loop,
or from a program issuing $GETRMI calls. To display them, we're going
to use an in-house developed web service that can store and display any
metric that one cares to throw at it. The interface to send the data is
cURL, which under VMS has a tendency to get stuck when used
intensively. So we'll decouple the collector and the sender, and put a
large mailbox between the two. If the mailbox is large enough, we can
tolerate a stall of the sender for several minutes, and it will catch
up with the mailbox contents very quickly once restarted.
This kind of data does not need to survive a reboot (there will always
be perfdat to look at what happened anyway) and a mailbox is a cheap
and easy to use buffer for this kind of purpose....
A couple of different approaches...

Have the collector send a UDP multicast, and have the dashboard request,
receive, and process that data. That entirely decouples
the sender and receiver. And the use of the multicast inherently
discards old data.
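A minimal sketch of that collector-side send, using the BSD socket API
available with the OpenVMS TCP/IP stacks (group address and port are
illustrative):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static void publish_sample(const char *payload, int len)
{
    struct sockaddr_in grp;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&grp, 0, sizeof grp);
    grp.sin_family = AF_INET;
    grp.sin_addr.s_addr = inet_addr("239.1.2.3");  /* example group */
    grp.sin_port = htons(5000);                    /* example port  */

    /* fire and forget: if no dashboard is listening, the datagram
       is simply dropped, which is the desired behaviour here */
    sendto(s, payload, len, 0, (struct sockaddr *)&grp, sizeof grp);
    close(s);
}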

A queue, whether through a mailbox or otherwise, is a problematic approach
here, both because it preserves previously-sent data and because it
preserves old data in preference to new.

I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.

An alternative approach is to have the web dashboard query the
collector either as needed or periodically, which also avoids the
unnecessary and problematic queuing. That could be an HTTP or HTTPS GET
to the collector (REST, et al), with approaches ranging from a blob of
data or JSON or XML issued directly by the collector to using Tomcat or
Python with Twisted or Tornado. Python also has extensions allowing
system service access, including $getrmi.
https://wiki.vmssoftware.com/VMS_specific_Python_modules

HPE went heavily toward Redfish / DMTF for these and related tasks
(though that work largely post-dates HPE interest in OpenVMS
enhancements), so there are parallels to this approach:
https://www.hpe.com/us/en/servers/restful-api.html — which can allow
extending the display to other collectors. https://github.com/DMTF and
https://dmtf.github.io/python-redfish-utility/#overview etc — and it
means there are potentially other tools and dashboard apps that can be
integrated.

An approach using message queuing (MQ, Mosquitto, AMQP, etc.) and DEC
RTR tools and frameworks is among the follow-ons to the frameworks that
various of us were developing years ago, but again this particular case
is one where queuing the traffic is a problem: dropping (old)
messages is appropriate here, and message queues don't (typically) do
that. OpenVMS mailboxes definitely don't. From the oldest of the
dashboards we were doing many years ago, that dashboard was listening
for periodically-sent Ethernet multicast frames, and more recently that
dashboard approach would be UDP/DTLS or ilk, or an approach based on
REST polling or ilk.

DCL stinks at handling IP, more recently also stinks at handling TLS,
and DTLS, and knows zilch about REST and JSON and other data
structures. Which has all been a longstanding issue. Python and some
other tools are vastly better at that and related tasks.

TL;DR: I'd not use a mailbox nor queue here, but rather a datagram or
maybe polling. That also gets curl out of the path. And can be
extensible.
--
Pure Personal Opinion | HoffmanLabs LLC
Dave Froble
2020-11-28 20:41:26 UTC
This sounds like fun. Yes, some people have a weird concept of "fun".

The first suggestion I'd make is to keep the design/requirements separate,
and complete them first. Don't throw in "I can use this" at the early stage.
Post by Stephen Hoffman
Post by Marc Van Dyck
Collecting the data we need is not difficult, either from a DCL loop,
or from a program issuing $GETRMI calls.
I'd go for some performance at this step. A decent language, such as Basic.

:-)
Post by Stephen Hoffman
Post by Marc Van Dyck
To display them, we're going
to use an in-house developed web service that can store and display any
metric that one cares to throw at it.
The display could be real time, historical, or both. Good arguments for
both.
Post by Stephen Hoffman
Post by Marc Van Dyck
The interface to send the data
is cURL, which under VMS has a tendency to get stuck when used
intensively.
I'd consider sockets. (At one time I'd consider DECnet, but not now.)
TCP/IP on VMS will get better, or, quite possibly there will be no VMS.
So perhaps a fairly safe gamble.
Post by Stephen Hoffman
Post by Marc Van Dyck
So we'll decouple the collector and the sender, and put a
large mailbox between the two. If the mailbox is large enough, we can
tolerate a stall of the sender for several minutes, and it will catch
up with the mailbox contents very quickly once restarted.
This kind of data does not need to survive a reboot (there will always
be perfdat to look at what happened anyway) and a mailbox is a cheap
and easy to use buffer for this kind of purpose....
But if storing the data happens at an early stage, then less might be lost.
Post by Stephen Hoffman
A couple of different approaches...
Have the collector send a UDP multicast, and have the dashboard request,
receive, and process that data. That entirely decouples
the sender and receiver. And the use of the multicast inherently
discards old data.
A queue, whether through a mailbox or otherwise, is a problematic approach
here, both because it preserves previously-sent data and because it
preserves old data in preference to new.
I'd agree, VMS mailboxes would not be my choice for this task.
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs. There are sockets at the bottom of any of them. Just
forget the top heavy junk and use sockets.
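As an illustration of that point, a bare-sockets "connect and write a
line" in C is roughly the following (host and port are illustrative,
error handling trimmed):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(12345);

    if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0) {
        const char *line = "This is a test\n";
        send(s, line, strlen(line), 0);
    }
    close(s);
    return 0;
}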
Post by Stephen Hoffman
An alternative approach is to have the web dashboard query the collector
either as needed or periodically, which also avoids the unnecessary and
problematic queuing. That could be an HTTP or HTTPS GET to the collector
(REST, et al), with approaches ranging from a blob of data or JSON or
XML issued directly by the collector to using Tomcat or Python with
Twisted or Tornado. Python also has extensions allowing system service
access, including $getrmi.
https://wiki.vmssoftware.com/VMS_specific_Python_modules
HPE went heavily toward Redfish / DMTF for these and related tasks
(though that work largely post-dates HPE interest in OpenVMS
enhancements), so there are parallels to this approach:
https://www.hpe.com/us/en/servers/restful-api.html — which can allow
extending the display to other collectors. https://github.com/DMTF and
https://dmtf.github.io/python-redfish-utility/#overview etc — and it
means there are potentially other tools and dashboard apps that can be
integrated.
An approach using message queuing (MQ, Mosquitto, AMQP, etc.) and DEC RTR
tools and frameworks is among the follow-ons to the frameworks that various
of us were developing years ago, but again this particular case is one
where queuing the traffic is a problem: dropping (old) messages
is appropriate here, and message queues don't (typically) do that.
OpenVMS mailboxes definitely don't. From the oldest of the dashboards we
were doing many years ago, that dashboard was listening for
periodically-sent Ethernet multicast frames, and more recently that
dashboard approach would be UDP/DTLS or ilk, or an approach based on
REST polling or ilk.
DCL stinks at handling IP, more recently also stinks at handling TLS,
and DTLS, and knows zilch about REST and JSON and other data structures.
Which has all been a longstanding issue. Python and some other tools are
vastly better at that and related tasks.
TL;DR: I'd not use a mailbox nor queue here, but rather a datagram or
maybe polling. That also gets curl out of the path. And can be extensible.
Pretty much agree.

#1 concept. Keep it modular. That way there can be multiple collection
methods, multiple data storage methods, multiple display methods. Much
easier to replace or upgrade specific modules than the whole system.

Do you envision most of the application residing and executing on the
running VMS node(s), or gathering the data and getting it to alternate
node(s) as quickly as possible? Running the application will itself be
a load on the running system. I see you're looking to use browsers for
display. Will the server be on the running system? Lots of options.

Are you looking to keep your project closely held, or possibly
collaborate with those who might already have some stuff that can be
used, or modified to use?
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2020-11-28 23:45:12 UTC
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs.  There are sockets at the bottom of any of them.  Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.

Arne
Dave Froble
2020-11-29 01:49:32 UTC
Post by Arne Vajhøj
Post by Dave Froble
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs. There are sockets at the bottom of any of them. Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.
Totally disagree.

I've seen some of the stuff needed to use higher level stuff. Lots and
lots of coding to meet the requirements. They are for those who don't
know how to use sockets, which is rather easy.

YMMV, but, I get done much quicker if I skip all the overhead coding.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2020-11-29 03:26:29 UTC
Post by Dave Froble
Post by Arne Vajhøj
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs.  There are sockets at the bottom of any of them.  Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.
Totally disagree.
I've seen some of the stuff needed to use higher level stuff.  Lots and
lots of coding to meet the requirements.  They are for those who don't
know how to use sockets, which is rather easy.
YMMV, but, I get done much quicker if I skip all the overhead coding.
I am not convinced.

Let me give an example.

Task: get the page at https://www.google.com/

In VB.NET you do:

Dim wc As WebClient = New WebClient
Dim s As String = wc.DownloadString("https://www.google.com/")

How many lines does it take to do that using socket API??

Arne
Arne Vajhøj
2020-11-29 03:47:52 UTC
Post by Arne Vajhøj
Post by Dave Froble
Post by Arne Vajhøj
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs.  There are sockets at the bottom of any of them.  Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.
Totally disagree.
I've seen some of the stuff needed to use higher level stuff.  Lots
and lots of coding to meet the requirements.  They are for those who
don't know how to use sockets, which is rather easy.
YMMV, but, I get done much quicker if I skip all the overhead coding.
I am not convinced.
Let me give an example.
Task: get the page at https://www.google.com/
Dim wc As WebClient =  New WebClient
Dim s As String = wc.DownloadString("https://www.google.com/")
How many lines does it take to do that using socket API??
Non-HTTP requires a few more lines, but still not bad - connect
to localhost port 12345 and write text:

Using tc As New TcpClient("localhost", 12345)
Using nstm As NetworkStream = tc.GetStream()
Using sw As New StreamWriter(nstm)
sw.WriteLine("This is a test")
End Using
End Using
End Using

Arne
Arne Vajhøj
2020-11-29 19:24:23 UTC
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Dave Froble
Post by Arne Vajhøj
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs.  There are sockets at the bottom of any of them.
Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.
Totally disagree.
I've seen some of the stuff needed to use higher level stuff.  Lots
and lots of coding to meet the requirements.  They are for those who
don't know how to use sockets, which is rather easy.
YMMV, but, I get done much quicker if I skip all the overhead coding.
I am not convinced.
Let me give an example.
Task: get the page at https://www.google.com/
Dim wc As WebClient =  New WebClient
Dim s As String = wc.DownloadString("https://www.google.com/")
How many lines does it take to do that using socket API??
Non-HTTP requires a few more lines, but still not bad - connect
        Using tc As New TcpClient("localhost", 12345)
            Using nstm As NetworkStream = tc.GetStream()
                Using sw As New StreamWriter(nstm)
                    sw.WriteLine("This is a test")
                End Using
            End Using
        End Using
Post by Dave Froble
Arne, understand, what you're using has a whole bunch of coding behind
that you're not counting.
Of course.

But the point is that the vendor (in this case Microsoft)
has developed it and provides ongoing maintenance. The
companies writing the code do not have to.
Post by Dave Froble
Well, I've got a whole bunch of coding that I reuse, so that's pretty
much a wash. Just because I've written it doesn't make it less useful
than what someone else has written. In my mind it's more so.
The difference is that you paid for the development and you pay
for maintenance of your own libraries.
Post by Dave Froble
Nor were we discussing WEENDOZE and VB. Let's keep it to VMS maybe.
We were discussing that high-level APIs are missing on VMS.

I showed an example.

I could not use a VMS example for something missing on VMS.
Post by Dave Froble
And one more thing. My code does things the way I think they should be
done, not what someone else thinks should happen.
Yes.

But if somebody else is going to take over maintenance of your
code, then it may be much easier to do the handover if it
uses standard libraries rather than custom libraries.

There must be 10-20 million developers worldwide who
know the WebClient and TcpClient classes.

Arne
Phillip Helbig (undress to reply)
2020-11-29 07:21:03 UTC
Post by Arne Vajhøj
Post by Dave Froble
YMMV, but, I get done much quicker if I skip all the overhead coding.
I am not convinced.
Let me give an example.
Task: get the page at https://www.google.com/
Dim wc As WebClient = New WebClient
Dim s As String = wc.DownloadString("https://www.google.com/")
How many lines does it take to do that using socket API??
$ lynx -dump https://www.google.com/

:-)

Of course, with higher-level stuff one has to take into account the
overhead of setting it up and so on, but if one has done that anyway for
other reasons, then it does make sense to count only the incremental
effort involved.
Arne Vajhøj
2020-11-29 19:11:01 UTC
Post by Phillip Helbig (undress to reply)
Post by Arne Vajhøj
Post by Dave Froble
YMMV, but, I get done much quicker if I skip all the overhead coding.
I am not convinced.
Let me give an example.
Task: get the page at https://www.google.com/
Dim wc As WebClient = New WebClient
Dim s As String = wc.DownloadString("https://www.google.com/")
How many lines does it take to do that using socket API??
$ lynx -dump https://www.google.com/
:-)
:-)

Lynx is not an API.
Post by Phillip Helbig (undress to reply)
Of course, with higher-level stuff one has to take into account the
overhead of setting it up and so on, but if one has done that anyway for
other reasons, then it does make sense to count only the incremental
effort involved.
In the example above there is really no setup. WebClient is just
a class that you use. No config files.

Arne
Marc Van Dyck
2020-11-29 11:15:36 UTC
Post by Arne Vajhøj
Post by Stephen Hoffman
I'd use DTLS here for sensitive data, though the available TCP/IP,
UDP/IP, TLS, and DTLS APIs on OpenVMS are less than friendly.
Screw the APIs.  There are sockets at the bottom of any of them.  Just
forget the top heavy junk and use sockets.
I suspect he wants a higher level API for TCP/IP communication
for the same reason you use Basic and not Macro-32. You get more
functionality implemented per hour.
Arne
It has to do with limited resources. And remember, the display tool is
something I have no control over. So it will be cURL and DCL, because I
have no other choice.
--
Marc Van Dyck
Marc Van Dyck
2020-11-29 11:13:11 UTC
Post by Dave Froble
I'd go for some performance at this step. A decent language, such as Basic.
When it becomes possible, yes. Pascal or C, but no Basic, though. But
I'm not going to embark on the programming complexity required to
obtain traffic counters on HBA and NIC interfaces. I'm not even sure
this is documented. But for what I can get from $GETRMI, fine.
--
Marc Van Dyck
Marc Van Dyck
2020-11-29 11:20:12 UTC
Post by Dave Froble
The display could be real time, historical, or both. Good arguments for
both.
Post by Marc Van Dyck
The interface to send the data
is cURL, which under VMS has a tendency to get stuck when used
intensively.
I'd consider sockets. (At one time I'd consider DECnet, but not now.) TCP/IP
on VMS will get better, or, quite possibly there will be no VMS. So perhaps a
fairly safe gamble.
The display tool that we will use is highly configurable; once the
data are loaded you can basically display them any way you want.
Historical, histograms, lines, pie charts, anything. It's based on
OpenTSDB. But I have no control over that; it's maintained in another
part of the company. They have a published interface and I have to use
it as it is. There are thousands of other systems in the company using
that service, so they are not going to change the specs for me.
--
Marc Van Dyck
Galen
2020-11-29 15:51:22 UTC
Post by Arne Vajhøj
(and if I needed billions I would look at
Kafka).
Hmm. I haven’t read much of his work (“The Castle”, “The Metamorphosis”, “In The Penal Colony”, and probably a few other short pieces), but I don’t recall such a voluminous flow of messages being a theme. But it might be amusing to sketch out or even write something short and Kafka-esque around such an idea.
Phillip Helbig (undress to reply)
2020-11-29 16:32:11 UTC
(and if I needed billions I would look at Kafka).
Hmm. I haven't read much of his work (_The Castle_, _The
Metamorphosis_, _In The Penal Colony_, and probably a few other short
pieces), but I don't recall such a voluminous flow of messages being a
theme. But it might be amusing to sketch out or even write something
short and Kafka-esque around such an idea.
Not sure who intended what and who gets what, but Kafka is some sort of
newfangled messaging system or whatever. Not sure of the origin of the
name. (Python refers to Monty.)
Bill Gunshannon
2020-11-29 16:53:57 UTC
Post by Phillip Helbig (undress to reply)
(Python refers to Monty.)
That explains a lot. :-)

bill
David Jones
2020-11-29 17:00:46 UTC
... (Python refers to Monty.)
As in "I came here for an argument."?
Chris Townley
2020-11-29 17:27:59 UTC
Post by David Jones
... (Python refers to Monty.)
As in "I came here for an argument."?
No you didn't

;)

Chris
Phillip Helbig (undress to reply)
2020-11-29 17:49:06 UTC
Post by David Jones
... (Python refers to Monty.)
As in "I came here for an argument."?
Or something completely different.

Don't mention the dead-parrot sketch in reference to VMS, VAX, Alpha,
or Itanium. :-)
Phillip Helbig (undress to reply)
2020-11-29 19:37:57 UTC
Post by Phillip Helbig (undress to reply)
Post by David Jones
... (Python refers to Monty.)
As in "I came here for an argument."?
Or something completely different.
Don't mention the dead-parrot sketch in reference to VMS, VAX, Alpha,
or Itanium. :-)
Of course, VMS referenced Monty Python long ago:

$ WRITE SYS$OUTPUT F$MESSAGE(2930)
Phillip Helbig (undress to reply)
2020-11-29 17:52:30 UTC
Post by David Jones
... (Python refers to Monty.)
As in "I came here for an argument."?
Somewhere on the internet is "if Klingons wrote software", where one can
read that their subroutines don't have parameters, they have
arguments---and always win them. Also, Klingon software is not
released---it escapes.
Galen
2020-11-29 17:29:47 UTC
Not sure who intended.
Sorry, that was meant in humor, and perhaps should have had a laughing emoji. It is an allusion (perhaps a bit too obscure) to Franz Kafka, Bohemian novelist and short-story writer (1883–1924), whose name has entered the English language as the adjective “Kafkaesque”. I probably shouldn’t take up any more bandwidth here on the subject.
Phillip Helbig (undress to reply)
2020-11-29 17:51:14 UTC
Post by Galen
Not sure who intended.
Sorry, that was meant in humor, and perhaps should have had a laughing
emoji. It is an allusion (perhaps a bit too obscure) to Franz Kafka,
Bohemian novelist and short-story writer (1883-1924), whose name has
entered the English language as the adjective _Kafkaesque_. I probably
shouldn't take up any more bandwidth here on the subject.
I think that most people here have an idea who Kafka was. On the other
hand, most probably haven't heard of, and fewer still worked with, Kafka
software. Considering the connotations, is that a good name? There is
also a software company called SOS.
Arne Vajhøj
2020-11-29 18:27:50 UTC
Post by Phillip Helbig (undress to reply)
Post by Galen
Not sure who intended.
Sorry, that was meant in humor, and perhaps should have had a laughing
emoji. It is an allusion (perhaps a bit too obscure) to Franz Kafka,
Bohemian novelist and short-story writer (1883-1924), whose name has
entered the English language as the adjective _Kafkaesque_. I probably
shouldn't take up any more bandwidth here on the subject.
I think that most people here have an idea who Kafka was. On the other
hand, most probably haven't heard of, and fewer still worked with, Kafka
software.
Here? Probably not so many.

But it is actually widely used.

It is an Apache project.

It originated at LinkedIn.

LinkedIn still uses Kafka. They have 100 Kafka clusters with
4000 servers processing 7 trillion messages per day.
Post by Phillip Helbig (undress to reply)
Considering the connotations, is that a good name?
Per Wikipedia:

<quote>
Jay Kreps chose to name the software after the author Franz Kafka
because it is "a system optimized for writing", and he liked Kafka's work.
</quote>

Arne
Arne Vajhøj
2020-11-29 20:15:35 UTC
Post by Arne Vajhøj
Post by Galen
Not sure who intended.
Sorry, that was meant in humor, and perhaps should have had a laughing
emoji. It is an allusion (perhaps a bit too obscure) to Franz Kafka,
Bohemian novelist and short-story writer (1883-1924), whose name has
entered the English language as the adjective _Kafkaesque_. I probably
shouldn't take up any more bandwidth here on the subject.
I think that most people here have an idea who Kafka was.  On the other
hand, most probably haven't heard of, and fewer still worked with, Kafka
software.
Here? Probably not so many.
But it is actually widely used.
It is an Apache project.
It originated at LinkedIn.
LinkedIn still uses Kafka. They have 100 Kafka clusters with
4000 servers processing 7 trillion messages per day.
Other big Kafka users are:

Pinterest - 2000 servers processing 800 billion
messages totalling 1.2 PB per day.

Netflix - 36 clusters with 4000 servers
processing 700 billion messages per day.

Arne
Jean-François Piéronne
2020-11-29 18:19:52 UTC
Post by Phillip Helbig (undress to reply)
(and if I needed billions I would look at Kafka).
Hmm. I haven't read much of his work (_The Castle_, _The
Metamorphosis_, _In The Penal Colony_, and probably a few other short
pieces), but I don't recall such a voluminous flow of messages being a
theme. But it might be amusing to sketch out or even write something
short and Kafka-esque around such an idea.
Not sure who intended what and who gets what, but Kafka is some sort of
newfangled messaging system or whatever. Not sure of the origin of the
name. (Python refers to Monty.)
I know OpenVMS sites that have been using RabbitMQ for years and are very
happy with it.

JFP
Arne Vajhøj
2020-11-29 19:18:40 UTC
Post by Jean-François Piéronne
Post by Phillip Helbig (undress to reply)
(and if I needed billions I would look at Kafka).
Hmm. I haven't read much of his work (_The Castle_, _The
Metamorphosis_, _In The Penal Colony_, and probably a few other short
pieces), but I don't recall such a voluminous flow of messages being a
theme. But it might be amusing to sketch out or even write something
short and Kafka-esque around such an idea.
Not sure who intended what and who gets what, but Kafka is some sort of
newfangled messaging system or whatever. Not sure of the origin of the
name. (Python refers to Monty.)
I know OpenVMS sites that have been using RabbitMQ for years and are very
happy with it.
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).

RabbitMQ is probably the most popular non-Java
open source message queue.

Arne
Jean-François Piéronne
2020-11-30 06:37:53 UTC
On 29/11/2020 at 20:18, Arne Vajhøj wrote:
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.

JF
Jean-François Piéronne
2020-11-30 06:46:24 UTC
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
And for those who want to take a look at the difference between the two:

https://tanzu.vmware.com/developer/blog/understanding-the-differences-between-rabbitmq-vs-kafka/
Jan-Erik Söderholm
2020-11-30 09:35:35 UTC
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)

Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?

I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
Jean-François Piéronne
2020-11-30 13:17:52 UTC
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
There is an old port of the server, but my best advice is to deploy the
server part on Linux; this is what all the sites I know have done.
The last one I installed used a cluster of 3 RabbitMQ nodes and 2
ha-proxy front-ends.

Client part, no problem, including on old VMS versions.
https://foss.vmsgenerations.org/openvms/libraries/rabbitmq-c
https://foss.vmsgenerations.org/openvms/wasd/rmqplus
Python module:
https://foss.vmsgenerations.org/openvms/python/modules/pika
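For a flavour of the C client linked above, a minimal publish with the
rabbitmq-c library might look like this (broker, credentials, and queue
name are illustrative; error handling trimmed):

#include <amqp.h>
#include <amqp_tcp_socket.h>

int main(void)
{
    amqp_connection_state_t conn = amqp_new_connection();
    amqp_socket_t *sock = amqp_tcp_socket_new(conn);

    amqp_socket_open(sock, "localhost", 5672);
    amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
               "guest", "guest");
    amqp_channel_open(conn, 1);

    /* publish one message to the default exchange, routed to "perf" */
    amqp_basic_publish(conn, 1, amqp_cstring_bytes(""),
                       amqp_cstring_bytes("perf"), 0, 0, NULL,
                       amqp_cstring_bytes("This is a test"));

    amqp_channel_close(conn, 1, AMQP_REPLY_SUCCESS);
    amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
    amqp_destroy_connection(conn);
    return 0;
}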

JFP
Jan-Erik Söderholm
2020-11-30 14:27:42 UTC
Post by Jean-François Piéronne
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
There is an old port of the server, but my best advice is to deploy the
server part on Linux; this is what all the sites I know have done.
The last one I installed used a cluster of 3 RabbitMQ nodes and 2
ha-proxy front-ends.
Client part, no problem, including on old VMS versions.
https://foss.vmsgenerations.org/openvms/libraries/rabbitmq-c
https://foss.vmsgenerations.org/openvms/wasd/rmqplus
https://foss.vmsgenerations.org/openvms/python/modules/pika
JFP
OK.
I have no idea if RabbitMQ is used within my client environment.
They have IBM MQ/WebSphere as their main messaging tool.

To have something simple to replace/complement our MBX based
IPC right now, I'd not like to involve any Linux in the picture.
It has to be VMS only if it is going to be used for VMS only.
Jean-François Piéronne
2020-11-30 14:44:28 UTC
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
There is an old port of the server, but my best advice is to deploy the
server part on Linux; this is what all the sites I know have done.
The last one I installed used a cluster of 3 RabbitMQ nodes and 2
ha-proxy front-ends.
Client part, no problem, including on old VMS versions.
https://foss.vmsgenerations.org/openvms/libraries/rabbitmq-c
https://foss.vmsgenerations.org/openvms/wasd/rmqplus
https://foss.vmsgenerations.org/openvms/python/modules/pika
JFP
OK.
I have no idea if RabbitMQ is used within my client environment.
They have IBM MQ/WebSphere as their main messaging tool.
To have something simple to replace/complement our MBX based
IPC right now, I'd not like to involve any Linux in the picture.
It has to be VMS only if it is going to be used for VMS only.
Brett Cameron did a port of the RabbitMQ server a few years ago.
I never tried it; maybe you can ask him.

JFP
Marc Van Dyck
2020-11-30 17:12:08 UTC
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
There is an old port of the server, but my best advice is to deploy the
server part on Linux; this is what all the sites I know have done.
The last one I installed used a cluster of 3 RabbitMQ nodes and 2
ha-proxy front-ends.
Client part, no problem, including on old VMS versions.
https://foss.vmsgenerations.org/openvms/libraries/rabbitmq-c
https://foss.vmsgenerations.org/openvms/wasd/rmqplus
https://foss.vmsgenerations.org/openvms/python/modules/pika
JFP
OK.
I have no idea if RabbitMQ is used within my client environment.
They have IBM MQ/WebSphere as their main messaging tool.
To have something simple to replace/complement our MBX based
IPC right now, I'd not like to involve any Linux in the picture.
It has to be VMS only if it is going to be used for VMS only.
If you have to interact with an IBM MQ environment, you might also
consider using PAHO_C, which is supported and distributed by VSI,
and happily interacts with an IBM MQ server using the MQTT protocol.
We use it, under heavy loads, and it works perfectly. Be sure to use
the latest version.
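For a flavour of it, a minimal synchronous publish with the Paho C
client looks roughly like this (broker URI, client id, and topic are
illustrative):

#include <string.h>
#include "MQTTClient.h"

int main(void)
{
    MQTTClient client;
    MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message msg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;
    char payload[] = "This is a test";

    MQTTClient_create(&client, "tcp://broker.example.com:1883",
                      "vms-collector", MQTTCLIENT_PERSISTENCE_NONE, NULL);
    MQTTClient_connect(client, &opts);

    msg.payload    = payload;
    msg.payloadlen = (int)strlen(payload);
    msg.qos        = 1;              /* at-least-once delivery */

    MQTTClient_publishMessage(client, "perf/metrics", &msg, &token);
    MQTTClient_waitForCompletion(client, token, 10000L);

    MQTTClient_disconnect(client, 10000);
    MQTTClient_destroy(&client);
    return 0;
}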
--
Marc Van Dyck
Jan-Erik Söderholm
2020-11-30 17:40:58 UTC
Post by Marc Van Dyck
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
Post by Jan-Erik Söderholm
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
JF
Now, without reading through a lot of docs, can one get
a quick answer to an easy question? :-)
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
There is an old port of the server, but my best advice is to deploy the
server part on Linux; this is what all the sites I know have done.
The last one I installed used a cluster of 3 RabbitMQ nodes and 2
ha-proxy front-ends.
Client part, no problem, including on old VMS versions.
https://foss.vmsgenerations.org/openvms/libraries/rabbitmq-c
https://foss.vmsgenerations.org/openvms/wasd/rmqplus
https://foss.vmsgenerations.org/openvms/python/modules/pika
JFP
OK.
I have no idea if RabbitMQ is used within my client environment.
They have IBM MQ/WebSphere as their main messaging tool.
To have something simple to replace/complement our MBX based
IPC right now, I'd not like to involve any Linux in the picture.
It has to be VMS only if it is going to be used for VMS only.
If you have to interact with an IBM MQ environment, you might also
consider using PAHO_C, which is supported and distributed by VSI,
and happily interacts with an IBM MQ server using the MQTT protocol.
We use it, under heavy loads, and it works perfectly. Be sure to use
the latest version.
We use the IBM MQ client for VMS. From Cobol and (now) from Python.
Marc Van Dyck
2020-12-01 17:31:59 UTC
Permalink
Post by Jan-Erik Söderholm
[snip]
We use the IBM MQ client for VMS. From Cobol and (now) from Python.
We were too, but IBM stopped developing that, so we're stuck with an
old version and without support...
--
Marc Van Dyck
Arne Vajhøj
2020-11-30 14:08:53 UTC
Permalink
Post by Jan-Erik Söderholm
[snip]
Is the VMS port of RabbitMQ both server and client?
Or does the VMS port need an "external" RabbitMQ server?
I mean, if I'd like to look at a solution with RabbitMQ
to replace our current VMS/mailbox IPC setup, can that be
done locally within our VMS Alpha systems?
I believe VSI has released the following.

Itanium:
* librdkafka 0.9-5 [C client]
* RabbitMQ C Client 2.6-0
* ActiveMQ 5.15.0A [I assume server + Java client]

Alpha:
* RabbitMQ C Client 2.6-0

From old 2019 roadmap.

Client-wise there is plenty of additional stuff available, as
RabbitMQ and ActiveMQ both speak standard protocols: AMQP and STOMP.

Arne
Jan-Erik Söderholm
2020-11-30 15:01:19 UTC
Permalink
Post by Arne Vajhøj
[snip]
Client-wise there is plenty of additional stuff available, as
RabbitMQ and ActiveMQ both speak standard protocols: AMQP and STOMP.
Arne
OK, have to look at these:

https://vmssoftware.com/products/librabbitmq/
http://rabbitmqonopenvs.blogspot.com/

Thanks!

Jan-Erik.
Arne Vajhøj
2020-11-30 15:29:15 UTC
Permalink
Post by Jan-Erik Söderholm
[snip]
https://vmssoftware.com/products/librabbitmq/
http://rabbitmqonopenvs.blogspot.com/
For native languages you would want to use librabbitmq as the
client library.

But for non-native languages (Python, Java, PHP etc.) I would
go for a pure library implementing AMQP or STOMP directly on
sockets, if such is available, and not use a C library.
Fewer dependencies.

Arne
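To make that concrete: a minimal sketch of publishing to RabbitMQ from Python
with pika, one of the pure-Python AMQP libraries mentioned in this thread;
the broker host and queue name are placeholders:

    # Minimal AMQP publish sketch using pika (pure Python, no C library).
    # Host and queue name are placeholders.
    import pika

    conn = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbit.example.com"))
    ch = conn.channel()
    ch.queue_declare(queue="vms_ipc")        # idempotent; creates if missing
    ch.basic_publish(exchange="", routing_key="vms_ipc",
                     body=b"This is message number 1")
    conn.close()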
Arne Vajhøj
2020-11-30 15:37:01 UTC
Permalink
Post by Jan-Erik Söderholm
https://vmssoftware.com/products/librabbitmq/
BTW that page sounds very confusing to me.

Unless VSI has done something unusual:
* the RabbitMQ server (broker) is written in Erlang
* the librabbitmq client is written in C

Arne
Arne Vajhøj
2020-11-30 13:53:36 UTC
Permalink
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
Not that much.

Kafka is written in Java and Scala.

But there are client libraries available in:

C/C++
.NET = C# + VB.NET
Java = Java + Scala + Kotlin + Groovy + Clojure
PHP
Swift
Free Pascal
Go
Rust
Erlang
Python
Ruby
Perl
JavaScript

That covers a lot (even though no Fortran and no Cobol).

Arne
Jean-François Piéronne
2020-11-30 14:32:54 UTC
Permalink
Post by Arne Vajhøj
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
Not that much.
Post by Jean-François Piéronne
Kafka is written in Java and Scala.
Correct, but Kafka is still Java oriented, especially if you want to use
all the functionality.
Post by Arne Vajhøj
C/C++
.NET = C# + VB.NET
Java = Java + Scala + Kotlin + Groovy + Clojure
PHP
Swift
Free Pascal
Go
Rust
Erlang
Python
Ruby
Perl
JavaScript
That covers a lot (even though no Fortran and no Cobol).
A few years ago I worked on porting the Python Kafka client
module. It was a WIP.
With Kafka, a lot of the logic is not server side but client side, and
this makes the client libraries more complex than the RabbitMQ ones.

Any VMS sites using Kafka? Have you deployed it? I'm very interested
in any reference; from time to time I have customers who ask me if such
references exist. I know of none.

Kafka is a (very) good product, and has some very interesting features,
for example the ability to query the logs, but for most of the customers
the routing in RabbitMQ, using exchanges, is very useful. And
performance is very good.

Both products have pros and cons.
Just what I have learned over the last 15 to 20 years using RabbitMQ
with OpenVMS applications.

JF
Arne Vajhøj
2020-12-01 02:22:59 UTC
Permalink
Post by Jean-François Piéronne
Post by Arne Vajhøj
Post by Jean-François Piéronne
[snip]
Post by Arne Vajhøj
RabbitMQ is a fine traditional message queue (Kafka is not
a traditional message queue - it is often called a
message stream processing service).
RabbitMQ is probably the most popular non-Java
open source message queue.
Correct, and is language agnostic. Kafka is Java oriented.
Not that much.
Post by Jean-François Piéronne
Kafka is written in Java and Scala.
Correct, but Kafka is still Java oriented, especially if you want to use
all the functionality.
Producer and consumer APIs are very similar across all
languages.

Not all languages have implemented the streams API yet.
Besides Java, I believe .NET and JavaScript have,
while C/C++, PHP and Python are still waiting for it,
but if someone was willing to do the work, then it
should be possible. There should not be anything
Java specific in it.
Post by Jean-François Piéronne
Post by Arne Vajhøj
C/C++
.NET = C# + VB.NET
Java = Java + Scala + Kotlin + Groovy + Clojure
PHP
Swift
Free Pascal
Go
Rust
Erlang
Python
Ruby
Perl
JavaScript
That covers a lot (even though no Fortran and no Cobol).
A few years ago I worked on porting the Python Kafka client
module. It was a WIP.
With Kafka, a lot of the logic is not server side but client side, and
this makes the client libraries more complex than the RabbitMQ ones.
The producer does some batching/buffering, the consumer does some
async work, and both have to support clustering.

Given how Kafka is used, those are probably must-haves.
Post by Jean-François Piéronne
Any VMS sites using Kafka? Have you deployed it? I'm very interested
in any reference; from time to time I have customers who ask me if such
references exist. I know of none.
I am not aware of anyone using Kafka from VMS.

I would expect the Java client to run as is, so anyone
could be using that.

Native, PHP and Python depend on librdkafka. But given that
VSI has spent money porting that to VMS Itanium, it
seems likely that someone is using it, because I don't
think VSI would have ported it unless someone asked for it.

I just don't know who.

Arne
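For reference, the Python client that sits on top of librdkafka is
confluent-kafka; a minimal producer sketch looks like the following. The
broker address and topic name are placeholders:

    # Minimal Kafka producer sketch with confluent-kafka (librdkafka-based).
    # Broker address and topic are placeholders.
    from confluent_kafka import Producer

    p = Producer({"bootstrap.servers": "kafka.example.com:9092"})

    def on_delivery(err, msg):
        # Invoked from poll()/flush() once the broker acknowledges the message.
        if err is not None:
            print("delivery failed:", err)

    p.produce("vms-events", value=b"hello", callback=on_delivery)
    p.flush()                                # wait for outstanding deliveries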
Jean-François Piéronne
2020-12-02 11:22:26 UTC
Permalink
Post by Arne Vajhøj
[snip]
Not all languages have implemented the streams API yet.
Exactly what I have said...
Post by Arne Vajhøj
Besides Java, I believe .NET and JavaScript have,
You believe, or you have used...?
Post by Arne Vajhøj
while C/C++, PHP and Python are still waiting for it,
but if someone was willing to do the work, then it
should be possible. There should not be anything
Java specific in it.
[snip]

Nothing specific, except that if you want all the features only the Java
client library has them, and a few you believe...


JF
Arne Vajhøj
2020-12-02 15:31:49 UTC
Permalink
Post by Jean-François Piéronne
Post by Arne Vajhøj
[snip]
Not all languages have implemented the streams API yet.
Exactly what I have said...
I do not consider the fact that not all languages have
implemented an API to be "exactly" the same as being oriented
towards one of those languages that has.
Post by Jean-François Piéronne
Post by Arne Vajhøj
Besides Java, I believe .NET and JavaScript have,
You believe, or you have used...?
Believe, I guess.

They are supposedly available via those languages'
main package managers:

https://www.nuget.org/packages/Streamiz.Kafka.Net/

https://www.npmjs.com/package/kafka-streams

I have not looked at the code or used the software, so
it is theoretically possible that these packages are hoaxes
uploaded by drunk students or malware uploaded by foreign
intelligence services or something else.

But I believe they are OK based on where they
are available.

Arne
Arne Vajhøj
2020-11-29 18:20:56 UTC
Permalink
Post by Galen
(and if I needed billions I would look at Kafka).
Hmm. I haven’t read much of his work (“The Castle”, “The
Metamorphosis”, “In The Penal Colony”, and probably a few other short
pieces), but I don’t recall such a voluminous flow of messages being
a theme. But it might be amusing to sketch out or even write
something short and Kafka-esque around such an idea.
:-)

Apache Kafka

Arne
Dave Froble
2020-11-26 22:55:14 UTC
Permalink
Post by Marc Van Dyck
Unless I mis-read it, the OpenVMS documentation does not state any size
limitation for permanent mailboxes. There is apparently just a
limitation on the size of each message, but the number of messages can
apparently be arbitrarily high. I have made a call to $CREMBX to
create a mailbox for 1.000.000 messages of 200 bytes each, and VMS did
not complain.
I understand from a Digital Technical Journal article that mailbox space
is not reserved at mailbox creation time, but allocated each time a
new message is dropped in the mailbox. This I have been able to verify
by loading a mailbox and see the values in $SHOW MEM/POOL decrease
accordingly (loading chunks of 10.000 messages at a time...).
The same documentation also says that it is possible to obtain the
number of messages stored in a mailbox with a call to $GETJPI,
specifying the item DVI$_DEVDEPEND. This call returns a longword
of which only the two last bytes are significant, so the maximum
number of outstanding messages can be 65535.
Which most likely seemed a huge number, back in 1978.

What can be reported, and what can exist, just might not be the same.

Run a simple test. Store the messages, each one unique, then read them
back, ensuring each message was stored and forwarded. Might be interesting.
Post by Marc Van Dyck
I have been able to verify that too, with the same test as above,
the number of outstanding messages growing steadily till 60.000 and
then dropping back to 4000 or so after the next chunk of 10.000 messages
were loaded.
So my question is, why this limitation ? Is it just because when this
interface was written, noone imagined that there could ever be a
mailbox with more than 64k outstanding messages ? Or am I really going
to break something other than this counter if I try loading more
than 64k messages ?
What was the max memory on an 11/780? Something like 8 MB? Yes, it was
a virtual memory machine. But how many messages might the devs have
anticipated, back then?

There just might be a word integer storing the # of messages, or, maybe
VMS doesn't care, and it's some type of list.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Marc Van Dyck
2020-11-27 17:09:18 UTC
Permalink
Post by Dave Froble
Which most likely seemed a huge number, back in 1978.
What can be reported, and what can exist, just might not be the same.
Run a simple test. Store the messages, each one unique, then read them back,
ensuring each message was stored and forwarded. Might be interesting.
Yes, I will do that next Monday and report the results here.
Post by Dave Froble
What was the max memory on an 11/780? Something like 8 MB? Yes, it was a
virtual memory machine. But how many messages might the devs have
anticipated, back then?
There just might be a word integer storing the # of messages, or, maybe VMS
doesn't care, and it's some type of list.
That's what I remembered from the Digital Technical Journal article I
mentioned.

BTW, the first VAX I worked with had 1.5 MB of DEC memory and another
MB from a less expensive third party company. This is the kind of
figure that shows how far we have come, and how fast
it has evolved. Today it's the memory size you get in a wristwatch...
--
Marc Van Dyck
Phillip Helbig (undress to reply)
2020-11-27 17:37:14 UTC
Permalink
Post by Marc Van Dyck
BTW, the first VAX I worked with had 1.5 MB of DEC memory and another
MB from a less expensive third party company. This is the kind of
figure that shows how far we have come, and how fast
it has evolved. Today it's the memory size you get in a wristwatch...
I remember, about 15 years ago, unpacking a mobile phone with 32 MB and
noticing that I had some VAXen with less memory.

About 5 years ago, my son, learning to be a tailor (no, not VMSTAILOR),
said that 8 GB RAM was the minimum he needed in a laptop.

About 25 years ago, we had DM 150,000 to spend on workstations (and went
with DEC after benchmarking our own applications) and calculated 100 DM
per MB for RAM.

I've been given hardware that cost 50,000---100,000 (pounds, francs,
dollars, marks, euros---doesn't really matter much) new after only about
5 years or so.

And a VMS cluster used to be in the Top-500 list.
Richard Loken
2020-11-27 17:32:56 UTC
Permalink
Post by Marc Van Dyck
BTW, the first VAX I worked with had 1.5 MB of DEC memory and another
MB from a less expensive third party company. This is the kind of
figure that shows how far we have come, and how fast
it has evolved. Today it's the memory size you get in a wristwatch...
My employer got a VAX11/780 in 1980 which had AFAIR 1.5Mbyte. A year or
two later they replaced the whole module with a 4Mbyte module - I saw
the 1.5Mbyte before it was taken away; it was the size of a 1980s-era
microwave oven.
--
Richard Loken VE6BSV : "...underneath those tuques we wear,
Athabasca, Alberta Canada : our heads are naked!"
** ***@telus.net ** : - Arthur Black
Bill Gunshannon
2020-11-27 18:05:27 UTC
Permalink
Post by Marc Van Dyck
BTW, the first VAX I worked with had 1.5 MB of DEC memory and another
MB from a less expensive third party company. This is the kind of
figure that shows how far we have come, and how fast
it has evolved. Today it's the memory size you get in a wristwatch...
My employer got a VAX11/780 in 1980 which had AFAIR 1.5Mbyte. A year or
two later they replaced the whole module with a 4Mbyte module - I saw
the 1.5Mbyte before it was taken away; it was the size of a 1980s-era
microwave oven.
Don't have to go back that far. I can remember sitting in our VMS
Admin's office (Hi Lori, if you're still around) and we were laughing about
the fact that, given the size and form factor of the original "SIMMs"
for the first Alphas to come out, it would take an entire room just
to hold the memory the draft spec said it could address.

bill
Dave Froble
2020-11-27 18:29:38 UTC
Permalink
Post by Marc Van Dyck
[snip]
BTW, the first VAX I worked with had 1.5 MB of DEC memory and another
MB from a less expensive third party company. This is the kind of
figure that shows how far we have come, and how fast
it has evolved. Today it's the memory size you get in a wristwatch...
This is something it's easy to forget. HW has made immense advances.
We've watched, and forgotten the past.

It is quite unlikely that software developers would not be influenced by
the current (at that time) state of the HW. And so we have software that
has some limitations based upon the HW available at the time the
software was developed.

Let's understand, much of VMS is ancient. First released in 1978. As
has been discussed, VMS has not had nearly the upgrades that it should
have had. It's not just what the security bigots have been
complaining about, it's the whole OS. Ports for the most part do not
include modernization. Bytes. Words. Longwords. Lots of structures
that are no longer adequate.

VMS has lots of good ideas. But rarely can things stand still. They
either advance, or by doing nothing, regress.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2020-11-27 18:42:05 UTC
Permalink
Post by Dave Froble
Let's understand, much of VMS is ancient. First released in 1978. As
has been discussed, VMS has not had nearly the upgrades that it should
have had. It's not just what the security bigots have been
complaining about, it's the whole OS.
VSI management are the ones wrongly claiming that VMS is the most
secure operating system on the planet. You will find that much of
the security discussion is in response to that.

And as I have said before, VSI are just painting a huge target on
the backs of the VMS user community if the right people notice what
VSI are saying. VSI really need to stop saying idiotic things like that.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Marc Van Dyck
2020-11-30 17:25:38 UTC
Permalink
Post by Marc Van Dyck
Yes, I will do that next Monday and report the results here.
OK, as promised, I tested. It does not work.

I ran the procedure below. I tested it interactively with a small
number of messages, and then submitted it in batch with 1.000.000
messages, so as to have a log file.

$ SET NOVERIFY
$ total = P1
$ disp_mem = P2
$ disp_count = P3
$ MBX /CREATE /MES = 100 /POSITIONS = 'total' /PERM mvd$test1
$ count = 0
$ text = "This is message number "
$ WRITE SYS$OUTPUT "Start Writing at ''F$TIME()'..."
$ SHOW MEMORY /POOL
$snd:
$ count = count + 1
$ IF ((count/disp_count)*disp_count) .EQ. count THEN WRITE SYS$OUTPUT "''F$TIME()' - messages sent : ''count'"
$ IF ((count/disp_mem)*disp_mem) .EQ. count THEN SHOW MEMORY /POOL
$ message = "''text'''count'"
$ MBX /SEND /SYMBOL = message /NOLOG mvd$test1
$ IF count .LT. total THEN GOTO snd
$ WRITE SYS$OUTPUT "Finished writing ''total' messages"
$ count = 0
$ WRITE SYS$OUTPUT "Start Reading at ''F$TIME()'..."
$rec:
$ count = count + 1
$ MBX /RECEIVE /END = done mvd$test1 message
$ IF F$INTEGER (message - text) .NE. count THEN GOTO ERROR
$ IF ((count/disp_count)*disp_count) .EQ. count THEN WRITE SYS$OUTPUT "''F$TIME()' - messages read : ''count'"
$ IF ((count/disp_mem)*disp_mem) .EQ. count THEN SHOW MEMORY /POOL
$ GOTO rec
$error:
$ WRITE SYS$OUTPUT "Error encountered at ''F$TIME()'"
$ WRITE SYS$OUTPUT "Last message read = ''message' - counter = ''count'"
$done:
$ SHOW MEMORY /POOL
$ WRITE SYS$OUTPUT "Finished reading ''total' messages at ''F$TIME()'"
$ WRITE SYS$OUTPUT "Don't forget to execute MBX /DELETE mvd$test1"
$ EXIT

I won’t post the whole log file here, it’s too large. But here are the
beginning and the end of the writing phase. The system happily accepts
the 1.000.000 messages, and one can see the free nonpaged dynamic memory
decrease while the messages are sent:

Start Writing at 30-NOV-2020 16:23:06.06...
System Memory Resources on 30-NOV-2020 16:23
Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 356.07 34.54
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 109.20 67.82
30-NOV-2020 16:23:09.95 - messages sent : 1000
30-NOV-2020 16:23:13.44 - messages sent : 2000
30-NOV-2020 16:23:17.10 - messages sent : 3000
30-NOV-2020 16:23:20.49 - messages sent : 4000
30-NOV-2020 16:23:24.08 - messages sent : 5000
30-NOV-2020 16:23:27.61 - messages sent : 6000
30-NOV-2020 16:23:31.24 - messages sent : 7000
30-NOV-2020 16:23:34.94 - messages sent : 8000
30-NOV-2020 16:23:38.65 - messages sent : 9000
30-NOV-2020 16:23:42.18 - messages sent : 10000
System Memory Resources on 30-NOV-2020 16:23
Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 354.85 35.76
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 109.21 67.81
30-NOV-2020 16:23:46.24 - messages sent : 11000



30-NOV-2020 17:26:18.05 - messages sent : 990000
System Memory Resources on 30-NOV-2020 17:26

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 235.22 155.39
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 110.46 66.56
30-NOV-2020 17:26:22.07 - messages sent : 991000
30-NOV-2020 17:26:25.99 - messages sent : 992000
30-NOV-2020 17:26:29.73 - messages sent : 993000
30-NOV-2020 17:26:34.16 - messages sent : 994000
30-NOV-2020 17:26:38.16 - messages sent : 995000
30-NOV-2020 17:26:41.70 - messages sent : 996000
30-NOV-2020 17:26:45.81 - messages sent : 997000
30-NOV-2020 17:26:50.05 - messages sent : 998000
30-NOV-2020 17:26:54.14 - messages sent : 999000
30-NOV-2020 17:26:57.83 - messages sent : 1000000
System Memory Resources on 30-NOV-2020 17:26

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 234.00 156.62
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 110.36 66.66
Finished writing 1000000 messages

But, when messages are read, an end of file marker is received
after some 16.000 messages, probably the first time the message count
goes through zero:

Start Reading at 30-NOV-2020 17:26:57.85...
30-NOV-2020 17:27:01.89 - messages read : 1000
30-NOV-2020 17:27:06.46 - messages read : 2000
30-NOV-2020 17:27:10.24 - messages read : 3000
30-NOV-2020 17:27:13.98 - messages read : 4000
30-NOV-2020 17:27:18.13 - messages read : 5000
30-NOV-2020 17:27:21.89 - messages read : 6000
30-NOV-2020 17:27:26.35 - messages read : 7000
30-NOV-2020 17:27:30.68 - messages read : 8000
30-NOV-2020 17:27:34.73 - messages read : 9000
30-NOV-2020 17:27:39.10 - messages read : 10000
System Memory Resources on 30-NOV-2020 17:27

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 235.18 155.43
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 109.03 68.00
30-NOV-2020 17:27:42.79 - messages read : 11000
30-NOV-2020 17:27:46.92 - messages read : 12000
30-NOV-2020 17:27:50.76 - messages read : 13000
30-NOV-2020 17:27:54.35 - messages read : 14000
30-NOV-2020 17:27:58.51 - messages read : 15000
30-NOV-2020 17:28:02.39 - messages read : 16000
System Memory Resources on 30-NOV-2020 17:28

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 236.03 154.58
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.98 133.63
Lock Manager Dyn Memory (MB) 177.03 109.02 68.00
Finished reading 1000000 messages at 30-NOV-2020 17:28:06.05
Don't forget to execute MBX /DELETE mvd$test1
VANDYCK job terminated at 30-NOV-2020 17:28

The mailbox is indeed indicated as empty, but one can see that its
deletion still frees up a lot of nonpaged dynamic memory, a clear
indication that the mailbox was not really empty:

ROCK> mbx/info mvd$test1
Current status of mailbox named MVD$TEST1 :

Physical name : _MBA459:
Owner UIC : [000001,000034]
UIC Protection mask : (S:RWPL,O:RWPL,G:RWPL,W:RWPL)
Message size : 100
Processes attached : 0
Pending messages : 0

ROCK>
ROCK> show mem/pool
System Memory Resources on 30-NOV-2020 17:35

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 236.06 154.56
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.99 133.63
Lock Manager Dyn Memory (MB) 177.03 109.12 67.90
ROCK> mbx/att mvd$test1
ROCK> mbx/del mvd$test1
ROCK> show mem/pool
System Memory Resources on 30-NOV-2020 17:35

Dynamic Memory Usage: Total Free In Use
Nonpaged Dynamic Memory (MB) 390.62 356.05 34.57
Bus Addressable Memory (MB) 1.49 1.35 0.13
Paged Dynamic Memory (MB) 390.62 256.99 133.63
Lock Manager Dyn Memory (MB) 177.03 109.12 67.90

So it is clear that sending more than 64k messages to a mailbox cannot
be trusted and should therefore be avoided. I think it should be stated
explicitly in the documentation.
--
Marc Van Dyck
Jan-Erik Söderholm
2020-11-30 17:42:55 UTC
Permalink
Post by Marc Van Dyck
[snip]
So it is clear that sending more than 64k messages to a mailbox cannot
be trusted and should therefore be avoided. I think it should be stated
explicitly in the documentation.
As long as someone reads them, it should be OK... :-)
Phillip Helbig (undress to reply)
2020-11-30 20:33:55 UTC
Permalink
In article <rq3auv$kka$***@dont-email.me>,
Jan-Erik Söderholm <jan-***@telia.com>
writes:

Maybe quote fewer lines if you write just one line in your post?

And reading that one line, nomen est omen!
Post by Jan-Erik Söderholm
[snip]
As long as someone reads them, it should be OK... :-)
Hein RMS van den Heuvel
2020-12-01 17:44:25 UTC
Permalink
Post by Marc Van Dyck
Post by Marc Van Dyck
Yes, I will do that next Monday and report the results here.
OK, as promised, I tested. It does not work.
Thanks for that! - I was tempted to do just that.
Post by Marc Van Dyck
But, when messages are read, an end of file marker is received
after some 16.000 messages, probably the first time the message count
Yup - 1000000%65536 = 16960
Post by Marc Van Dyck
30-NOV-2020 17:28:02.39 - messages read : 16000
Too bad there is no indication of the actual last message number read.
Post by Marc Van Dyck
Finished reading 1000000 messages at 30-NOV-2020 17:28:06.05
That appears to be incorrect. It simply displays what was desired, not what happened.
Post by Marc Van Dyck
So it is clear that sending more than 64k messages to a mailbox cannot
be trusted and should therefore be avoided. I think it should be stated
explicitly in the documentation.
Agreed. And an error returned until fixed.

Hein.
Marc Van Dyck
2020-12-01 17:47:13 UTC
Permalink
Post by Hein RMS van den Heuvel
Post by Marc Van Dyck
Post by Marc Van Dyck
Yes, I will do that next Monday and report the results here.
OK, as promised, I tested. It does not work.
Thanks for that! - I was tempted to do just that.
Post by Marc Van Dyck
But, when messages are read, an end of file marker is received
after some 16.000 messages, probably the first time the message count
Yup - 1000000%65536 = 16960
Post by Marc Van Dyck
30-NOV-2020 17:28:02.39 - messages read : 16000
Too bad there is no indication of the actual last message number read.
Post by Marc Van Dyck
Finished reading 1000000 messages at 30-NOV-2020 17:28:06.05
That appears to be incorrect. It simply displays what was desired, not what happened.
Yes, glitch in the testing code. I did not expect that to happen, so
did not account for it.
Post by Hein RMS van den Heuvel
Post by Marc Van Dyck
So it is clear that sending more than 64k messages to a mailbox cannot
be trusted and should therefore be avoided. I think it should be stated
explicitly in the documentation.
Agreed. And an error returned until fixed.
Hein.
--
Marc Van Dyck
Michael Moroney
2020-12-01 20:02:51 UTC
Permalink
The discussion of mailboxes and MQ kind of reminded me of an old problem:

Consider an application which produces a fairly large number of small (100-200
byte) messages. The messages are sent via TCP to another system, and this can't
be changed. (which is why something like MQ wasn't used)

We don't want to lose any messages.
In order to do that, the code was changed to write the messages to an indexed
file (1 key, the timestamp). A second process reads records from the file,
sends to the remote system and upon acknowledgement, deletes the record.
(if no record, it sleeps and tries again) If the link/remote system was down,
messages would accumulate in the indexed files. Once the link was back, the
second process would empty it.

In other words, an RMS indexed file was just a FIFO. By being a file, records
are not lost. That worked, but despite the file being almost always empty or at
most 1-2 records, the file grew and grew and that part of the system slowed. A
workaround was to create a new version of the file once in a while, and delete
the old one (once known to be empty). A $ CONVERT/RECLAIM on an empty or
nearly empty file would chew on it for a while and would make the file faster
[note this was done only to understand what was going on]. The slowness was
from accumulated RMS cruft from all the deleted records.

So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
Arne Vajhøj
2020-12-01 20:13:41 UTC
Permalink
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
Fully dynamic size or fixed max size?

If fixed max size, then a circular buffer in a fixed-length-record
(FIX) file with a write pointer and a read pointer.

If willing to go more modern then an SQLite database
would probably do everything you need.

Arne
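A minimal sketch of the fixed-max-size idea, in Python rather than RMS terms:
one header slot holding ever-increasing read/write pointers, followed by N
fixed-size slots that are reused in place forever, so nothing accumulates.
Single writer, single reader, no interprocess locking - a real implementation
would need that, and the slot/header layout here is invented for illustration:

    # File-backed circular FIFO: fixed slots, reused in place, no 'cruft'.
    import os, struct

    SLOT_SIZE = 256                   # fixed slot size; first 4 bytes = length
    N_SLOTS   = 1024                  # fixed maximum queue depth
    HDR = struct.Struct('<QQ')        # (read_ptr, write_ptr), ever increasing

    class FileFifo:
        def __init__(self, path):
            new = not os.path.exists(path)
            self.f = open(path, 'w+b' if new else 'r+b')
            if new:
                self.f.truncate(SLOT_SIZE * (N_SLOTS + 1))
                self._put_ptrs(0, 0)

        def _get_ptrs(self):
            self.f.seek(0)
            return HDR.unpack(self.f.read(HDR.size))

        def _put_ptrs(self, rd, wr):
            self.f.seek(0)
            self.f.write(HDR.pack(rd, wr))
            self.f.flush()
            os.fsync(self.f.fileno())     # records survive a crash/reboot

        def push(self, payload: bytes):
            assert len(payload) <= SLOT_SIZE - 4
            rd, wr = self._get_ptrs()
            if wr - rd >= N_SLOTS:
                raise OSError('FIFO full')
            self.f.seek(SLOT_SIZE * (1 + wr % N_SLOTS))
            self.f.write(struct.pack('<I', len(payload)) + payload)
            self._put_ptrs(rd, wr + 1)

        def pop(self):
            rd, wr = self._get_ptrs()
            if rd == wr:
                return None               # empty
            self.f.seek(SLOT_SIZE * (1 + rd % N_SLOTS))
            n, = struct.unpack('<I', self.f.read(4))
            payload = self.f.read(n)
            self._put_ptrs(rd + 1, wr)    # slot is simply overwritten later
            return payload

The SQLite variant gets the same effect with less code: INSERT on the writer
side, SELECT ... ORDER BY rowid LIMIT 1 plus DELETE on the reader side; freed
pages go on SQLite's free list and are reused, so the file stops growing once
it reaches its steady-state size.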
Arne Vajhøj
2020-12-01 20:17:04 UTC
Permalink
Post by Michael Moroney
Consider an application which produces a fairly large number of small (100-200
byte) messages. The messages are sent via TCP to another system, and this can't
be changed. (which is why something like MQ wasn't used)
We don't want to lose any messages.
[snip]
But maybe an MQ could be used!

writer---MQ with persistence---reader---socket---destination

The MQ would ensure the FIFO characteristics with little effort.

Arne
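A sketch of that setup with RabbitMQ and pika: a durable queue plus
persistent messages on the writer side, and manual acks on the reader side so
a message is only removed once the forward succeeded. The queue name is a
placeholder and send_over_tcp is a hypothetical stand-in for the existing TCP
transfer:

    # Persistent-queue sketch: nothing is lost across broker or reader restarts.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="outbound", durable=True)   # survives broker restart

    # Writer side: delivery_mode=2 marks each message persistent (on disk).
    ch.basic_publish(exchange="", routing_key="outbound", body=b"payload",
                     properties=pika.BasicProperties(delivery_mode=2))

    # Reader side: ack only after the message reached the remote system,
    # so an unacked message is redelivered if the reader dies mid-transfer.
    def handle(channel, method, properties, body):
        send_over_tcp(body)                # hypothetical forwarding function
        channel.basic_ack(delivery_tag=method.delivery_tag)

    ch.basic_consume(queue="outbound", on_message_callback=handle)
    ch.start_consuming()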
Hein RMS van den Heuvel
2020-12-01 20:26:11 UTC
Permalink
Post by Michael Moroney
In other words, an RMS indexed file was just a FIFO. By being a file, records
are not lost. That worked, but despite the file being almost always empty or at
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
No, not easily. This may warrant a dedicated topic.
There are two things causing 'crud':
1) Each indexed file data bucket is assigned a high-key value which is used in the index above it.
With ever-growing key values, once a bucket cannot hold a record with the next key value, a new bucket is allocated.
The old bucket's highest key value is now frozen and entered in the index.
The next bucket gets the new record with a 'high key' (0xffffffff or 'zzzzzzzz' - so to speak - yes, this is sloppy wording).
Going forward, no new record will ever be targeted at any older bucket; even though over time they become empty, they still cover a fixed key range.
2) Each record gets a new 16-bit ID within the bucket. So even when records are deleted before the bucket is filled, the IDs are consumed, and after 64K records a new bucket is allocated and the old one is never re-used.

You can postpone the growth by creating larger buckets (63 blocks max), but the price there is disk IO bandwidth on the write and delete (reads are from cache).
You can try to reuse buckets better, for example by coming up with round-robin PK values, say by making them HHMMSSYYYYMMDD - but now the application needs to know whether the prior date was truly processed.

CONVERT/RECLAIM can help, but needs exclusive file access, so you might as well convert, or just put a new file in place and move in the stragglers.

Nothing pretty out there.

Hein
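The round-robin key idea in a few lines of Python, for illustration (the
exact HHMMSSYYYYMMDD layout is Hein's suggestion above, not an RMS
requirement):

    # Time-of-day first, date last: key values cycle every 24 hours instead
    # of growing forever, so new records fall back into existing key ranges.
    from datetime import datetime

    def round_robin_key(ts: datetime) -> str:
        return ts.strftime('%H%M%S%Y%m%d')

    print(round_robin_key(datetime(2020, 11, 30, 16, 23, 6)))  # '16230620201130'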
Michael Moroney
2020-12-02 17:35:10 UTC
Permalink
Post by Hein RMS van den Heuvel
There are two things causing 'crud'
...

Thanks for the peek under the hood regarding indexed files. I thought of this
problem since it seemed somewhat like a persistent "mailbox" which could
survive reboots/crashes while data was in the pipe. Sometimes old vexing
problems which "seem" simple at first glance continue to occupy space in my
mind.
Dave Froble
2020-12-02 22:16:39 UTC
Permalink
Post by Michael Moroney
Post by Hein RMS van den Heuvel
There are two things causing 'crud'
...
Thanks for the peek under the hood regarding indexed files. I thought of this
problem since it seemed somewhat like a persistant "mailbox" which could
survive reboots/crashes while data was in the pipe. Sometimes old vexing
problems which "seem" simple at first glance continue to occupy space in my
mind.
Michael, what are you attempting to do? Perhaps solutions already
exist. If you want to review such, let me know.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Dave Froble
2020-12-01 20:39:55 UTC
Permalink
Post by Michael Moroney
[snip]
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
Just about any file system, RMS, RDBMS, etc. would see a FIFO queue as
its most difficult task. I'd guess the "cruft" would be universal in
such a solution.

Without thinking much about the task, I'd suggest that a linked list
might be a decent solution. No cruft.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Jan-Erik Söderholm
2020-12-01 23:04:37 UTC
Permalink
Post by Michael Moroney
Consider an application which produces a fairly large number of small (100-200
byte) messages. The messages are sent via TCP to another system, and this can't
be changed. (which is why something like MQ wasn't used)
We don't want to lose any messages.
In order to do that, the code was changed to write the messages to an indexed
file (1 key, the timestamp). A second process reads records from the file,
sends to the remote system and upon acknowledgement, deletes the record.
(if no record, it sleeps and tries again) If the link/remote system was down,
messages would accumulate in the indexed files. Once the link was back, the
second process would empty it.
In other words, an RMS indexed file was just a FIFO. By being a file, records
are not lost. That worked, but despite the file being almost always empty or at
most 1-2 records, the file grew and grew and that part of the system slowed. A
workaround was to create a new version of the file once in a while, and delete
the old one (once known to be empty). A $ CONVERT/RECLAIM on an empty or
nearly empty file would chew on it for a while and would make the file faster
[note this was done only to understand what was going on]. The slowness was
from accumulated RMS cruft from all the deleted records.
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer. It will reuse deleted space if the process
doing the delete just disconnects and reconnects now and then (to free the
deleted space). Could be once a day or so. But on the other hand, there
is no real performance issue with locked free space, Rdb will just not
use it.

Is there an indication on what "a fairly large number" of messages is?
Arne Vajhøj
2020-12-02 15:34:17 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Michael Moroney
Consider an application which produces a fairly large number of small (100-200
byte) messages. The messages are sent via TCP to another system, and this can't
be changed. (which is why something like MQ wasn't used)
We don't want to lose any messages.
In order to do that, the code was changed to write the messages to an indexed
file (1 key, the timestamp).  A second process reads records from the
file,
sends to the remote system and upon acknowledgement, deletes the record.
(if no record, it sleeps and tries again)  If the link/remote system
was down,
messages would accumulate in the indexed files.  Once the link was
back, the
second process would empty it.
In other words, an RMS indexed file was just a FIFO. By being a file, records
are not lost. That worked, but despite the file being almost always empty or at
most 1-2 records, the file grew and grew and that part of the system slowed. A
workaround was to create a new version of the file once in a while, and delete
the old one (once known to be empty).  A $ CONVERT/RECLAIM on an empty or
nearly empty file would chew on it for a while and would make the file faster
[note this was done only to understand what was going on].  The
slowness was
from accumulated RMS cruft from all the deleted records.
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?  That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer. It will reuse deleted space if the process
doing the delete just disconnect and reconnect now and then (to free the
deleted space). Could be once a day or so. But on the other hand, there
is no real performance issue with locked free space, Rdb will just not
use it.
I have mentioned it already, but if one uses a message queue that
can use a relational database for message storage, then one can
get the queue functionality out of the box on top of the database
(Rdb or other).

Arne
Michael Moroney
2020-12-02 17:38:46 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer.
I kind of meant RMS file, something accessed via RMS $GET/$PUT/$DELETE or their
high-level equivalents. I know if it's just disk space, anything could be used.
Post by Jan-Erik Söderholm
Is there an indication on what "a fairly large number" of messages is?
Maybe 6000 messages/day, it could vary quite a bit.
Jan-Erik Söderholm
2020-12-02 21:19:30 UTC
Permalink
Post by Michael Moroney
Post by Jan-Erik Söderholm
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer.
I kind of meant RMS file, something accessed via RMS $GET/$PUT/$DELETE or their
high-level equivalents. I know if it's just disk space, anything could be used.
Post by Jan-Erik Söderholm
Is there an indication on what "a fairly large number" of messages is?
Maybe 6000 messages/day, it could vary quite a bit.
OK, right... I thought it was at least 100 messages/sec or such.
*Any* solution should handle approx 4 messages/minute.

And then I'd take an SQL relational database over a plain RMS file
any day, just for the ease of management.
Dave Froble
2020-12-02 21:26:58 UTC
Permalink
Post by Michael Moroney
Post by Jan-Erik Söderholm
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer.
I kind of meant RMS file, something accessed via RMS $GET/$PUT/$DELETE or their
high-level equivalents. I know if it's just disk space, anything could be used.
Post by Jan-Erik Söderholm
Is there an indication on what "a fairly large number" of messages is?
Maybe 6000 messages/day, it could vary quite a bit.
I believe I mentioned linked lists earlier.

If you desire to use an RMS file, a relative file could easily be used
for a linked list. Multiple ways to design it. One might be a file
with say 10,000 records, where part of the data is a link to the next
record in the list. Could have reverse links also. Have two lists, a
list of unused records, and a list of records with active messages. Use
simple logic to pop the next available unused record for use, and to add
a used record back to the list. Front or back should not matter.
Always add new messages at the end of the list of messages.

Really simple, really quick, no cruft, ....

Considerations:

If the list needs to be persistent, use record 1 for the list pointers.
Otherwise keep the pointers in memory.

No reason there cannot be more than one message list.
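As a rough sketch in C, with plain file I/O standing in for
relative-file access by record number (record 1 holds the list
pointers as above; the cell size, layout, and names are all made up,
and RMS $GET/$PUT/$UPDATE by relative record number would replace
rd/wr):

#include <stdio.h>
#include <string.h>

#define CELL 256L                      /* fixed record size */

struct hdr  { long free_head, msg_head, msg_tail; };  /* record 1 */
struct cell { long next; char data[CELL - sizeof(long)]; };

static void rd(FILE *f, long rec, void *b, size_t n)
{ fseek(f, (rec - 1) * CELL, SEEK_SET); fread(b, n, 1, f); }

static void wr(FILE *f, long rec, const void *b, size_t n)
{ fseek(f, (rec - 1) * CELL, SEEK_SET); fwrite(b, n, 1, f); fflush(f); }

/* pop a cell off the free list, fill it, append to the message list;
   assumes the file was pre-created with all cells on the free list */
int put_msg(FILE *f, const char *msg)
{
    struct hdr h; struct cell c, t;
    long rec;

    rd(f, 1, &h, sizeof h);
    if ((rec = h.free_head) == 0) return 0;     /* no free cells */
    rd(f, rec, &c, sizeof c);
    h.free_head = c.next;                       /* unlink from free list */
    c.next = 0;
    strncpy(c.data, msg, sizeof c.data - 1);
    c.data[sizeof c.data - 1] = '\0';
    wr(f, rec, &c, sizeof c);
    if (h.msg_tail) {                           /* link old tail to us */
        rd(f, h.msg_tail, &t, sizeof t);
        t.next = rec;
        wr(f, h.msg_tail, &t, sizeof t);
    } else
        h.msg_head = rec;                       /* list was empty */
    h.msg_tail = rec;
    wr(f, 1, &h, sizeof h);                     /* persist the pointers */
    return 1;
}

The get side is the mirror image: unlink the head of the message list,
hand back its data, and push the cell onto the free list.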
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Jan-Erik Söderholm
2020-12-02 21:40:28 UTC
Permalink
Post by Dave Froble
Post by Michael Moroney
Post by Jan-Erik Söderholm
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?  That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer.
I kind of meant RMS file, something accessed via RMS $GET/$PUT/$DELETE or their
high-level equivalents.  I know if it's just disk space, anything could be used.
Post by Jan-Erik Söderholm
Is there an indication on what "a fairly large number" of messages is?
Maybe 6000 messages/day, it could vary quite a bit.
I believe I mentioned linked lists earlier.
If you desire to use an RMS file, a relative file could easily be used for
a linked list.  Multiple ways to design it.  One might be a file with say
10,000 records, where part of the data is a link to the next record in the
list.  Could have reverse links also.  Have two lists, a list of unused
records, and a list of records with active messages.  Use simple logic to
pop the next available unused record for use, and to add a used record back
to the list.  Front or back should not matter. Always add new messages at
the end of the list of messages.
Really simple, really quick, no cruft, ....
If the list needs to be persistent, use record 1 for the list pointers.
Otherwise keep the pointers in memory.
No reason there cannot be more than one message list.
Still much more application logic than a simple SQL INSERT... from one
application and SQL SELECT..., <process>, SQL DELETE... from the other.
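For illustration, roughly what that looks like from C, assuming SQLite
and a made-up table msgq(id INTEGER PRIMARY KEY, body BLOB); the same
shape applies to Rdb or any other SQL database:

#include <sqlite3.h>

/* producer: one INSERT per message */
static void enqueue(sqlite3 *db, const void *msg, int len)
{
    sqlite3_stmt *st;
    sqlite3_prepare_v2(db, "INSERT INTO msgq(body) VALUES(?1)",
                       -1, &st, NULL);
    sqlite3_bind_blob(st, 1, msg, len, SQLITE_TRANSIENT);
    sqlite3_step(st);
    sqlite3_finalize(st);
}

/* consumer: SELECT the oldest row, send it, DELETE only on the ack */
static int dequeue(sqlite3 *db, int (*send)(const void *, int))
{
    sqlite3_stmt *st;
    sqlite3_int64 id = 0;
    int sent = 0;

    sqlite3_prepare_v2(db, "SELECT id, body FROM msgq "
                           "ORDER BY id LIMIT 1", -1, &st, NULL);
    if (sqlite3_step(st) == SQLITE_ROW &&
        send(sqlite3_column_blob(st, 1), sqlite3_column_bytes(st, 1))) {
        id = sqlite3_column_int64(st, 0);
        sent = 1;
    }
    sqlite3_finalize(st);
    if (sent) {
        sqlite3_prepare_v2(db, "DELETE FROM msgq WHERE id = ?1",
                           -1, &st, NULL);
        sqlite3_bind_int64(st, 1, id);
        sqlite3_step(st);
        sqlite3_finalize(st);
    }
    return sent;
}

Since the row is deleted only after the acknowledgement, a crash
between send and delete can produce a duplicate, never a loss.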
Dave Froble
2020-12-02 22:39:30 UTC
Permalink
Post by Jan-Erik Söderholm
Post by Dave Froble
Post by Michael Moroney
Post by Jan-Erik Söderholm
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
It depends on the definition of a "VMS file". I see no issues with having
an Rdb table as the buffer.
I kind of meant RMS file, something accessed via RMS
$GET/$PUT/$DELETE or their
high level equivalents. I know if just disk space anything could be used.
Post by Jan-Erik Söderholm
Is there an indication on what "a fairly large number" of messages is?
Maybe 6000 messages/day, it could vary quite a bit.
I believe I mentioned linked lists earlier.
If you desire to use an RMS file, a relative file could easily be used
for a linked list. Multiple ways to design it. One might be a file
with say 10,000 records, where part of the data is a link to the next
record in the list. Could have reverse links also. Have two lists, a
list of unused records, and a list of records with active messages.
Use simple logic to pop the next available unused record for use, and
to add a used record back to the list. Front or back should not
matter. Always add new messages at the end of the list of messages.
Really simple, really quick, no cruft, ....
If the list needs to be persistent, use record 1 for the list
pointers. Otherwise keep the pointers in memory.
No reason there cannot be more than one message list.
Still much more application logic than a simple SQL INSERT... from one
application and SQL SELECT..., <process>, SQL DELETE... from the other.
Jan-Erik

Did you ever hear the story about the devil's demo system?

Yes, what you mention can work. I'm not too sure about its
performance. I know, we don't worry about performance with today's
hardware. But still, 100 is always going to be more than 10.

If there needs to be some logic involved in managing the list, then some
programming will be required to impose that logic, thus applications
could not go straight to the database.

Not every system will have an RDBMS available. Sometimes that will be a
consideration.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
David Jones
2020-12-03 18:02:54 UTC
Permalink
On Wednesday, December 2, 2020 at 5:39:34 PM UTC-5, Dave Froble wrote:

While discussing options, there's always the file-system-is-a-database approach. Each
message is a file and the filename is a date key. Files-11 is generally a poor choice for
that access pattern (small files, lots of deletes), but it's not that uncommon in Unix.
Unix has specialized file systems designed to accommodate such poor choices of
the application writers.

OpenSSL implements its certificate store as a 'certs' directory where each file holds
a certificate and the name is a hash of the signature.
Hein RMS van den Heuvel
2020-12-03 16:41:23 UTC
Permalink
Post by Dave Froble
Post by Michael Moroney
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?
Maybe 6000 messages/day, it could vary quite a bit.
For Michael's 6000 message/day case, where the reader can likely keep up with the writer (unless writes are in bursts, or reads can be stalled), an indexed file with a largish bucketsize able to hold the normal maximum of outstanding records is an easy solution.
Just refresh every month or every year or so.

For busier files, if you can, try NOT to just read the file from the beginning in the loop as you may have to wade through empty buckets. Use greater than some last-read PK value instead - stashing a high PK away once a day if desired.
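For illustration, the positioning step from C might look like this
(field and constant names per <rms.h>; the file and the key handling
are made up for the example):

#include <rms.h>
#include <starlet.h>
#include <string.h>

static struct FAB fab;
static struct RAB rab;
static char recbuf[512];

/* open the indexed file and position past the stashed last-read key;
   afterwards plain sequential $GETs walk forward from there */
int open_after(const char *fname, char *lastkey, unsigned char keylen)
{
    fab = cc$rms_fab;
    fab.fab$l_fna = (char *) fname;
    fab.fab$b_fns = (unsigned char) strlen(fname);
    fab.fab$b_fac = FAB$M_GET | FAB$M_DEL;
    if (!(sys$open(&fab) & 1)) return 0;

    rab = cc$rms_rab;
    rab.rab$l_fab = &fab;
    rab.rab$l_ubf = recbuf;
    rab.rab$w_usz = sizeof recbuf;
    if (!(sys$connect(&rab) & 1)) return 0;

    rab.rab$b_rac = RAB$C_KEY;        /* keyed access ...            */
    rab.rab$b_krf = 0;                /* ... on the primary key      */
    rab.rab$l_kbf = lastkey;
    rab.rab$b_ksz = keylen;
    rab.rab$l_rop = RAB$M_KGT;        /* match: key greater than     */
    if (!(sys$get(&rab) & 1)) return 0;   /* first unread record     */

    rab.rab$b_rac = RAB$C_SEQ;        /* then just read forward      */
    return 1;
}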
Post by Dave Froble
I believe I mentioned linked lists earlier.
If you desire to use an RMS file, a relative file could easily be used for a linked list.
Sure Dave, but for a doubly linked list there are several writes involved, and for a singly linked list you may have to scan for a while to find the end to add to it. There again, some last-read pointer stashed away may help.
Relative files typically need a special tool to maintain/analyze them, as system-provided converts and re-creates do not work.
Typically there is a desire for an initialized master record, and links with record numbers are lost on convert.
In case of trouble you'll have some serious dumping and debugging to do, versus a simple TYPE or DUMP/RECORD.

Now for the original question (Marc's), replacing mailboxes: the reads are destructive.
Once read there is no going back, and there should be no gaps.
For that, a relative file or just a fixed-length sequential file could work, with some round-robin scheme allowing for the max outstanding records. To avoid reading from the beginning all the time, just use a simple master record with a sloppy last-read record number. Update that only every 100 or 1000 reads, and when hitting the configured max of course. When updating, be sure to add the current time, process ID, and process name. With a little extra work, update a little last-5-updates array, allowing one to look back and see how often the buffer goes around and what the write rate is per 1000 messages or whatever interval is chosen. Btw... go ahead, waste a few more bytes and make the header row and the stash readable, not binary, for your own comfort. You have my permission :-).
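As a sketch of just the stashing part, in C (the layout and the
interval are made up):

#include <time.h>
#include <unistd.h>

#define SLOP 100    /* persist the pointer once per SLOP reads */

struct master {              /* record 1 of the round-robin file */
    long   last_read;        /* may lag reality by up to SLOP    */
    long   pid;              /* who updated it, and ...          */
    time_t stamp;            /* ... when                         */
};

static long reads_since_stash;

/* call after every successful destructive read; write_master is
   whatever routine rewrites record 1 */
void note_read(struct master *m, long recno,
               void (*write_master)(const struct master *))
{
    m->last_read = recno;
    if (++reads_since_stash >= SLOP) {
        m->pid = getpid();
        m->stamp = time(NULL);
        write_master(m);
        reads_since_stash = 0;
    }
}

The cost of the sloppiness is only that, after a crash, the reader
re-examines at most SLOP records it had already consumed.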

Cheers,
Hein
Dave Froble
2020-12-03 21:46:46 UTC
Permalink
Post by Hein RMS van den Heuvel
Post by Dave Froble
Post by Michael Moroney
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?
Maybe 6000 messages/day, it could vary quite a bit.
For Michael's 6000 message/day case, where the reader can likely keep up with the writer (unless writes are in bursts, or reads can be stalled), an indexed file with a largish bucketsize able to hold the normal maximum of outstanding records is an easy solution.
Just refresh every month or every year or so.
For busier files, if you can, try NOT to just read the file from the beginning in the loop as you may have to wade through empty buckets. Use greater than some last-read PK value instead - stashing a high PK away once a day if desired.
Post by Dave Froble
I believe I mentioned linked lists earlier.
If you desire to use an RMS file, a relative file could easily be used for a linked list.
Sure Dave, but for a double linked list there are several writes involved
That sort of depends upon the design.
Post by Hein RMS van den Heuvel
and for a single linked list you may have to scan for a while to find the end to add to it.
Not if you have begin and end pointers for each list. If actually
searching the list, yeah, going from node to node will require reading.
Post by Hein RMS van den Heuvel
There again, some last-read pointer stashed away may help.
Relative files typically need a special tool to maintain/analyze them, as system-provided converts and re-creates do not work.
As far as I know, relative files do not require maintenance.

My 40+ year old messaging system uses a Virtual file type and one disk
block for each message, but that is a rather complex utility.
Post by Hein RMS van den Heuvel
Typically there is a desire for an initialized master record, and links with record numbers are lost on convert.
No CONVERT required.
Post by Hein RMS van den Heuvel
In case of trouble you'll have some serious dumping and debugging to do, versus a simple TYPE or DUMP/RECORD.
Now for the original question (Marc's), replacing mailboxes: the reads are destructive.
Once read there is no going back, and there should be no gaps.
For that, a relative file or just a fixed-length sequential file could work, with some round-robin scheme allowing for the max outstanding records. To avoid reading from the beginning all the time, just use a simple master record with a sloppy last-read record number. Update that only every 100 or 1000 reads, and when hitting the configured max of course. When updating, be sure to add the current time, process ID, and process name. With a little extra work, update a little last-5-updates array, allowing one to look back and see how often the buffer goes around and what the write rate is per 1000 messages or whatever interval is chosen. Btw... go ahead, waste a few more bytes and make the header row and the stash readable, not binary, for your own comfort. You have my permission :-).
Cheers,
Hein
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Michael Moroney
2020-12-04 16:22:39 UTC
Permalink
Post by Hein RMS van den Heuvel
Post by Michael Moroney
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?
Maybe 6000 messages/day, it could vary quite a bit.
For Michael's 6000 message/day case, where the reader can likely keep up with the writer (unless writes are in bursts, or reads can be stalled), an indexed file with a largish bucketsize able to hold the normal maximum of outstanding records is an easy solution.
Just refresh every month or every year or so.
This problem appeared since the portion using this method was running for quite
some time (well over a year) and someone noticed a process was using loads of
CPU or disk IO (maybe both) and wondered why a trivial process was using so
much. It was tracked down to the very old file. I was tasked with finding out
what was going on and to fix it but not spend much time on it. I was surprised
to find a huge but empty file, and pondered what the proper fix was (it appears
the easiest is using a relative file, which I knew little about). The final
"fix" was a DCL procedure to replace the file with a fresh one during
"maintenance" times.

Again, this is from a previous job; nobody need worry about this beyond a "how to get
VMS to do X the best way" amusement puzzle. It just bugged me that simple
repeated RMS put/get/delete sequences worked out like this. I initially
searched for some RMS incantation for an FDL file. But I didn't know the
difference between an RMS bucket and a pail.

Is there a reason why the equivalent of CONVERT/RECLAIM can't be run for one
bucket only when it becomes empty/unusable?
Stephen Hoffman
2020-12-04 16:48:06 UTC
Permalink
Post by Michael Moroney
Is there a reason why the equivalent of CONVERT/RECLAIM can't be run
for one bucket only when it becomes empty/unusable?
RMS enhancements including online file vacuuming, and app
synchronization for online backups, and better instrumentation and
reporting, have yet to make the schedule.

I'd expect that the existing bucket-processing code effectively
prohibits online free space coalescence, and that resolving that was
considered complex and disruptive.

The tools and documentation and instrumentation necessary for detecting
and reporting clogged files are still somewhat spotty with RMS, too.
(e.g. DUMP /HEADER and "map area words in use", etc.)

You can use a different database with the necessary support. DEC
certainly would have been interested in selling Rdb to you. SQLite now
works, and you'll prolly want to use UPDATE here and not DELETE.
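For instance, something along these lines (table and column names made
up):

/* consumer marks the row instead of deleting it ... */
const char *take_sql =
    "UPDATE msgq SET status = 'SENT' "
    "WHERE id = (SELECT MIN(id) FROM msgq WHERE status = 'NEW')";
/* ... and acknowledged rows are purged in bulk during quiet hours */
const char *purge_sql = "DELETE FROM msgq WHERE status = 'SENT'";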

Upgrading the built-in support—giving away features which were once
charged for—can be a contentious change in many organizations.

Or put differently, write your request on the back of a hundred-dollar
bill 💵 and send 💸 it to VSI...
--
Pure Personal Opinion | HoffmanLabs LLC
Dave Froble
2020-12-04 20:40:25 UTC
Permalink
Post by Michael Moroney
Post by Hein RMS van den Heuvel
Post by Michael Moroney
Post by Michael Moroney
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently?
Maybe 6000 messages/day, it could vary quite a bit.
For Michael's 6000 message/day case, where the reader can likely keep up with the writer (unless writes are in bursts, or reads can be stalled), an indexed file with a largish bucketsize able to hold the normal maximum of outstanding records is an easy solution.
Just refresh every month or every year or so.
This problem appeared since the portion using this method was running for quite
some time (well over a year) and someone noticed a process was using loads of
CPU or disk IO (maybe both) and wondered why a trivial process was using so
much. It was tracked down to the very old file. I was tasked with finding out
what was going on and to fix it but not spend much time on it. I was surprised
to find a huge but empty file, and pondered what the proper fix was (appears
the easiest is using a relative file which I knew little about) The final
"fix" was a DCL procedure to replace the file with a fresh one during
"maintenance" times.
Again, this is from a previous job; nobody need worry about this beyond a "how to get
VMS to do X the best way" amusement puzzle. It just bugged me that simple
repeated RMS put/get/delete sequences worked out like this. I initially
searched for some RMS incantation for an FDL file. But I didn't know the
difference between an RMS bucket and a pail.
Is there a reason why the equivalent of CONVERT/RECLAIM can't be run for one
bucket only when it becomes empty/unusable?
It begins with a fundamental design issue in RMS, at least in my opinion.

Data records in an RMS indexed file are ordered by the primary key.
Thus any activity that adds or deletes data records must affect that
ordered location.

Then secondary keys are not in a one-to-one correspondence with data
records, but rather one key record for each unique key value and a list
of pointers to the data records with that key value.

I've disliked this design from when RMS was first introduced on the
PDP-11 systems.

In the competing database I've been involved with, with design going
back to 1974, each data record has unique key records for each key
structure on a one for one basis. I have found this to be much more
flexible and efficient. Perhaps that's just my opinion.

But consider, in RMS, when the primary key changes, that is actually a
delete and re-add of the data record, not just a delete and re-add of
the key record. Keys are normally small, a data record can be quite large.

Would I use either of them in a new product? Most likely not. An RDBMS
today works well with new fast hardware, and is both standard (sort of)
and flexible.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Hein RMS van den Heuvel
2020-12-05 17:02:02 UTC
Permalink
Post by Michael Moroney
The final "fix" was a DCL procedure to replace the file with a fresh one during "maintenance" times.
Well done Michael.
Post by Michael Moroney
It just bugged me that simple repeated RMS put/get/delete sequences worked out like this.
But I didn't know the difference between an RMS bucket and a pail.
That made me smile.
A pail does not have an associated key value; other than that, they are pretty much the same.
They both hold stuff; smaller pails are lighter and more nimble but you need more of them; bigger stuff needs bigger pails; there is a limit to the size of a pail you can get.
:-)
Post by Michael Moroney
Is there a reason why the equivalent of CONVERT/RECLAIM can't be run for one
bucket only when it becomes empty/unusable?
It's probably the old OpenVMS adage... if you can't fix a problem 100% for 100% of the anticipated cases, then do not bother.
I've always disliked that approach. I lean towards welcoming solutions which fix 90% of the problem 90% of the time.
Convert(/reclaim) currently just operates on-disk.
It would be wonderful if notably reclaim had an online variant, even if that was limited to 'data level' only.
To work online one would either have to add (a) new RMS call(s), or the standalone utility would need to run with CMEXEC privs and learn to take out bucket and area locks. To avoid online locking conflicts, a standalone tool could just try to gather all the locks it needs (the empty bucket, the bucket pointing to it, the area to return the bucket to) and be willing to just release, go to sleep, and try again when a blocking AST is triggered.
I suspect the original engineers were concerned with applications holding on to RFAs, or fast deletes leaving RRVs in secondary indexes, leading to false-positive reads when a bucket and record ID are recycled into use.
Post by Dave Froble
It begins with a fundamental design issue in RMS, at least in my opinion.
Data records in an RMS indexed file are ordered by the primary key.
Thus any activity that adds or deletes data records must affect that ordered location.
Dave, indeed there could have been other/better choices.
Overall, RMS held up pretty nicely over 40 years, from a few megabytes
being a big file to tens or hundreds of gigabytes in a file.
Production RMS files now are 1000 times bigger than folks expected back in RMS-11 days.

Nitpicking: RMS itself does NOT support changing primary keys. Applications have to code that themselves with delete + insert, as you write. (Only) the COBOL RTL hides this for the end user.
Post by Dave Froble
Then secondary keys are not on a one to one correspondence with data
records, but rather one key for each unique key and a list of pointers
for each data record with the same key value.
Ok, but I see that just as extreme key compression.
Post by Dave Froble
In the competing database I've been involved with, with design going
back to 1974, each data record has unique key records for each key
structure on a one for one basis.
The record RFA (= RRV) can be considered that key, to some degree.
It is a GUID of sorts; just too bad they (understandably) picked 16 bits for the ID at the time.
The 32-bit VBN is of course also one such unfortunate pick at the time.
Both are hard to fix, as they are externally exposed in the RFA.
The 63-block bucket size limit is just an internal 16-bit buffer size limit.
It was equally understandable coming from RMS-11 days, but it should have been fixed during the Alpha port (just before my time :-).

Rambling on a Snowy day in Nashua, NH.
Hein.
Craig A. Berry
2020-12-05 17:44:55 UTC
Permalink
Post by Hein RMS van den Heuvel
The record RFA (= RRV) can be considered that key, to some degree.
It is a GUID of sorts; just too bad they (understandably) picked 16 bits for the ID at the time.
The BASIC manual claims that the RFA data type is 6 bytes:

<http://h30266.www3.hpe.com/odl/vax/progtool/cpqbasic39/bas_ref_001.htm#334dts2_global>

So is that the combination of VBN plus the 16-bit ID you are talking about?
Hein RMS van den Heuvel
2020-12-05 18:08:22 UTC
Permalink
Post by Craig A. Berry
Post by Hein RMS van den Heuvel
The record RFA (= RRV) can be considered that key, to some degree.
It is a GUID of sorts; just too bad they (understandably) picked 16 bits for the ID at the time.
<http://h30266.www3.hpe.com/odl/vax/progtool/cpqbasic39/bas_ref_001.htm#334dts2_global>
So is that the combination of VBN plus the 16-bit ID you are talking about?
Correct.
Of course it is officially an opaque type one is not supposed to look into.
The structure is somewhat exposed as rab$l_rfa0 + rab$w_rfa4, or the rab$w_rfa[3] variants.
Interestingly (to some) it could readily grow to 4 bytes as there is an adjacent unnamed fill word to allow for aligned quad-word moves.

RFAs (Record File Address): for an indexed file it is VBN + ID.
The IDs are handed out sequentially, not in key order, with the bucket header holding the next-ID value.

The RRVs (Record Retrieval Vectors) are the same but with a leading flag byte.
They are the links in the alternate (aka secondary) keys and are not exposed directly.
Interestingly (to some :-), the RRVs can be just the 1 flag byte with no RFA for deleted records, in order to keep track of 'read next'.

For a sequential file, the RFA is the VBN plus the byte offset of the start of a record.

For relative files I seem to recall - but am not sure anymore - that the RFA is just the record number, which can be mapped directly to a VBN and offset in a bucket for the (fixed-length) record cell flag byte.
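Spelled out as a speculative C struct (the 6-byte shape described
above, not an official definition):

#include <stdint.h>

struct rfa {
    uint32_t vbn;   /* rab$l_rfa0: virtual block number           */
    uint16_t id;    /* rab$w_rfa4: ID handed out sequentially,    */
};                  /* not in key order; an unnamed fill word in  */
                    /* the RAB pads this to an aligned quadword   */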

Hein
Stephen Hoffman
2020-12-05 21:31:06 UTC
Permalink
Post by Hein RMS van den Heuvel
Interestingly (to some) it could readily grow to 4 bytes as there is an
adjacent unnamed fill word to allow for aligned quad-word moves.
An investment in migrating RMS indexed files from a word to a longword
seems unlikely to substantially postpone our reaching the limitations
of the current design.
This also given that the (hopefully) coming migration from longword logical
block addresses (2 TiB limit) to something longer will be disruptive, and given
we're using other databases for many apps.
Maybe once we're past 2 TiB and headed for exabyte-scale volumes, RMS
indexed updates will be interesting.
Or maybe we'll be using some other database or other (new) RMS
key-value file format support then.
GFS+ or otherwise, and Rdb or SQLite or otherwise.
--
Pure Personal Opinion | HoffmanLabs LLC
Michael Moroney
2020-12-06 17:14:13 UTC
Permalink
Post by Stephen Hoffman
This also given that the (hopefully) coming migration from longword logical
block addresses (2 TiB limit) to something longer will be disruptive, and given
we're using other databases for many apps.
Maybe once we're past 2 TiB and headed for exabyte-scale volumes, RMS
indexed updates will be interesting.
You'll be happy to know that VMS support for addressing drives larger than 2 TB
has already been done (largely by myself) and will be in V9.x, and is in a V8.x
version which may or may not see the light of day. This will not help with
Files-11, since ODS-2/5 also has a 2 TB limit and really can't be updated.
Using the changes will require a new file system.
Stephen Hoffman
2020-12-06 18:46:29 UTC
Permalink
Post by Michael Moroney
Post by Stephen Hoffman
This also given that the (hopefully) coming migration from longword logical
block addresses (2 TiB limit) to something longer will be disruptive, and given
we're using other databases for many apps.
Maybe once we're past 2 TiB and headed for exabyte-scale volumes, RMS
indexed updates will be interesting.
You'll be happy to know that VMS support for addressing drives larger than 2 TB
has already been done (largely by myself) and will be in V9.x, and is in a V8.x
version which may or may not see the light of day. This will not help with
Files-11, since ODS-2/5 also has a 2 TB limit and really can't be updated.
Using the changes will require a new file system.
Trivia: OpenVMS has supported sector addressing past 2 TB starting with
V8.4, for over a decade now. You've added sector addressing support for
storage capacities past 2 TiB.

Addressing support past 2 TiB likely isn't widely useful for those of
us with RMS dependencies, though. SQLite, Rdb, and such can likely be
modified, depending on what interfaces the XQP presents.

Might want to chat with the VSI developer-relations folks, around what
changes our apps are headed for with (presumably) 64-bit LBNs, so that
we might ponder what changes will be necessary. This as part of the
upcoming second beta for OpenVMS x86-64, presumably. e.g. $io_perform
XQP access only? Maybe with some example code?

And FWIW, a supremely ugly hack-around for the ongoing 2 TiB RMS limits
would involve a GPT-aware partitioned device driver, possibly mixed
with my ever-favorite bound-volume support. 🤮 As has undoubtedly been
suggested before.
--
Pure Personal Opinion | HoffmanLabs LLC
Tad Winters
2020-12-02 04:11:38 UTC
Permalink
Post by Michael Moroney
Consider an application which produces a fairly large number of small (100-200
byte) messages. The messages are sent via TCP to another system, and this can't
be changed. (which is why something like MQ wasn't used)
We don't want to lose any messages.
In order to do that, the code was changed to write the messages to an indexed
file (1 key, the timestamp). A second process reads records from the file,
sends to the remote system and upon acknowledgement, deletes the record.
(if no record, it sleeps and tries again) If the link/remote system was down,
messages would accumulate in the indexed files. Once the link was back, the
second process would empty it.
In other words, an RMS indexed file was just a FIFO. By being a file, records
are not lost. That worked, but despite the file being almost always empty or at
most 1-2 records, the file grew and grew and that part of the system slowed. A
workaround was to create a new version of the file once in a while, and delete
the old one (once known to be empty). A $ CONVERT/RECLAIM on an empty or
nearly empty file would chew on it for a while and would make the file faster
[note this was done only to understand what was going on]. The slowness was
from accumulated RMS cruft from all the deleted records.
So in theory, is it possible to implement a VMS file as a FIFO and work
efficiently? That is one process can write records in order, a second
process can read and delete records in order, without RMS cruft from
accumulating?
I wouldn't be inclined to use indexed files because of the overhead. If
I had to use them, maybe I'd design the solution with two indexed files,
one for the first half of a day and the other for the second half. If
the consumer has removed all the records from one of the indexed files,
and the current time is now after the time the other indexed file should
be in use, it renames and then deletes the current file. It can then
look for the other indexed file from which to read. Meanwhile, the
producer writes to the proper indexed file, according to the time of
day, creating a new file, if necessary.
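For instance, the file choice could be as simple as this hypothetical
helper (names made up):

#include <stdio.h>
#include <time.h>

/* pick the queue file by half of day */
void queue_file(char *buf, size_t n)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    snprintf(buf, n, "MSGQ_%s.IDX", t->tm_hour < 12 ? "AM" : "PM");
}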

I think I would prefer to use relative files, with record number 1
containing the values of the head, tail, and total number of records
used so that it would be a circular buffer. I think with one more
value, you could make it possible for the total number of records to
increase, when the buffer becomes empty or on the next cycle. Relative
files are much faster than indexed files, in my experience.
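As a rough sketch in C, with plain file I/O standing in for
relative-file access by record number (record 1 is the control record;
the cell size and names are made up):

#include <stdio.h>

#define CELL 128L                       /* fixed record size */

struct hdr { long head, tail, cap; };   /* record number 1 */

static void rd(FILE *f, long rec, void *b, size_t n)
{ fseek(f, (rec - 1) * CELL, SEEK_SET); fread(b, n, 1, f); }

static void wr(FILE *f, long rec, const void *b, size_t n)
{ fseek(f, (rec - 1) * CELL, SEEK_SET); fwrite(b, n, 1, f); fflush(f); }

static long nxt(long i, long cap)       /* data records live in 2..cap */
{ return i == cap ? 2 : i + 1; }

int cb_put(FILE *f, const char *msg)    /* msg is one CELL-sized record */
{
    struct hdr h;
    rd(f, 1, &h, sizeof h);
    if (nxt(h.tail, h.cap) == h.head) return 0;   /* buffer full */
    wr(f, h.tail, msg, CELL);
    h.tail = nxt(h.tail, h.cap);
    wr(f, 1, &h, sizeof h);
    return 1;
}

int cb_get(FILE *f, char *msg)          /* the "delete" is just head++ */
{
    struct hdr h;
    rd(f, 1, &h, sizeof h);
    if (h.head == h.tail) return 0;               /* buffer empty */
    rd(f, h.head, msg, CELL);
    h.head = nxt(h.head, h.cap);
    wr(f, 1, &h, sizeof h);
    return 1;
}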
Stephen Hoffman
2020-11-27 19:41:07 UTC
Permalink
Post by Marc Van Dyck
Unless I mis-read it, the OpenVMS documentation does not state any size
limitation for permanent mailboxes. There is apparently just a
limitation on the size of each message, but the number of messages can
apparently be arbitrarily high. I have made a call to $CREMBX to create
a mailbox for 1.000.000 messages of 200 bytes each, and VMS did not
complain.
...the maximum number of outstanding messages can be 65535.
Lob a bug report / enhancement suggestion at VSI. Doc could seemingly
use some updates, if not also mailbox driver enhancements within
OpenVMS.
Post by Marc Van Dyck
So my question is, why this limitation ? Is it just because when this
interface was written, no one imagined that there could ever be a
mailbox with more than 64k outstanding messages ? Or am I really going
to break something other than this counter if I try loading more than
64k messages ?
I'd assume the mailbox device driver simply hasn't been overhauled /
updated / enhanced for 64-bit support, but haven't looked at that
source code in a while. Various drivers haven't been updated.

Akin to apps using event flags and logical names, the mailbox API
doesn't really scale all that well.

Back when I was using the mailbox driver more centrally
for app comms, I ended up splitting the app communications API layer
into code that used shared memory (for speed) and networking (using
DECnet and "more recently" IP) for distribution. The mailbox work
largely became secondary to network mailbox traffic and ancillary
discussions, and not as the primary communications path. Now I
typically go directly for sockets, though OpenVMS updates there for
better support of TLS and DTLS, of DNS, authentication, and related
tasks would be welcome. I've written and worked with various apps that
use one or more files as a ginormous ring buffer, too.

Back a couple of decades ago, shared memory comms was a more common
topic. This often mixed with interlocked bitlocks and/or DLM locks for
coordination and notifications. In more recent years, IP (TLS, DTLS,
etc) is what most folks are seemingly using for new work, outside of
legacy app maintenance and related remediation.
--
Pure Personal Opinion | HoffmanLabs LLC
David Jones
2020-11-28 19:19:46 UTC
Permalink
Post by Marc Van Dyck
Unless I mis-read it, the OpenVMS documentation does not state any size
limitation for permanent mailboxes. There is apparently just a
limitation on the size of each message, but the number of messages can
apparently be arbitrarily high. I have made a call to $CREMBX to
create a mailbox for 1.000.000 messages of 200 bytes each, and VMS did
not complain.
I understand from a Digital Technical Journal article that mailbox
space
is not reserved at mailbox creation time, but allocated each time a
new message is dropped in the mailbox. This I have been able to verify
by loading a mailbox and see the values in $SHOW MEM/POOL decrease
accordingly (loading chunks of 10.000 messages at a time...).
The UCB field where the driver tracks the message count is only 16 bits. The
mailbox driver sources I have, circa 1999, show it doing no boundary checks
against a 65K limit for pending message count. It does, however, check for
zero message count in a few places, which could potentially be an issue if
the counter rolls over at the wrong time. I suppose the authors couldn't imagine
a case where you'd get 65K messages before running out of buffer quota
for the mailbox.
Marc Van Dyck
2020-11-29 11:25:32 UTC
Permalink
Post by David Jones
The UCB field where the driver tracks the message count is only 16 bits. The
mailbox driver sources I have, circa 1999, show it doing no boundary checks
against a 65K limit for pending message count. It does, however, check for
zero message count in a few places, which could potentially be an issue if
the counter rolls over at the wrong time. I suppose the authors couldn't imagine
a case where you'd get 65K messages before running out of buffer quota
for the mailbox.
OK, that was the information I needed. So internally the count can't go
higher than 64K, otherwise I risk screwing things up. It's a pity that this
is not clearly stated in the documentation. But OK, for us it will just
be a minor annoyance. We'll just use several mailboxes in parallel
rather than a single big one. Many thanks !
--
Marc Van Dyck