Discussion:
March 2017 Roadmap
c***@gmail.com
2017-03-06 23:25:54 UTC
Permalink
The quarterly update to the roadmap is now on our website. When we posted the December 2016 roadmap I mentioned that we had not yet fully digested the effects of the Alpha release. We now have and you will see some projects have been moved out as we expected they likely would.

The next State of the Port will be the first week in April.
Richard Maher
2017-03-06 23:28:55 UTC
Permalink
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Jan-Erik Soderholm
2017-03-06 23:35:47 UTC
Permalink
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
http://www.vmssoftware.com/products_roadmap.html

And that URL doesn't change for a new roadmap, AFAIK...
David Froble
2017-03-07 02:03:00 UTC
Permalink
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Do your own work, you lazy fuck!

:-)

How's that Richard, up to your standards?

:-)
m***@googlemail.com
2017-03-07 02:26:39 UTC
Permalink
Post by David Froble
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Do your own work, you lazy fuck!
:-)
How's that Richard, up to your standards?
:-)
I think you're very hurtful David :-)
David Froble
2017-03-07 03:07:05 UTC
Permalink
Post by m***@googlemail.com
Post by David Froble
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Do your own work, you lazy fuck!
:-)
How's that Richard, up to your standards?
:-)
I think you're very hurtful David :-)
You ... you ... you ... you hypocrite! After some of the things you say to
others on this venue, you can write that?

:-)
m***@googlemail.com
2017-03-07 03:37:52 UTC
Permalink
Post by David Froble
Post by m***@googlemail.com
Post by David Froble
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Do your own work, you lazy fuck!
:-)
How's that Richard, up to your standards?
:-)
I think you're very hurtful David :-)
You ... you ... you ... you hypocrite! After some of the things you say to
others on this venue, you can write that?
:-)
Also, please be advised that profanity is neither big nor clever!
IanD
2017-03-07 22:48:21 UTC
Permalink
Post by David Froble
Post by Richard Maher
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
A hot-link would have killed you?
Do your own work, you lazy fuck!
:-)
How's that Richard, up to your standards?
:-)
too funny...
Stephen Hoffman
2017-03-06 23:42:52 UTC
Permalink
The quarterly update to the roadmap is now on our website...
http://www.vmssoftware.com/pdfs/VSI_Roadmap_20170306.pdf
--
Pure Personal Opinion | HoffmanLabs LLC
m***@googlemail.com
2017-03-07 01:22:29 UTC
Permalink
Post by Stephen Hoffman
The quarterly update to the roadmap is now on our website...
http://www.vmssoftware.com/pdfs/VSI_Roadmap_20170306.pdf
Re: "Additional Scripting Language"

Consolidating Python would be good.

Probably more of a community effort (and I don't personally believe in JavaScript on the server), but Node.js is used more and more these days in server examples.
IanD
2017-03-08 08:05:45 UTC
Permalink
Post by m***@googlemail.com
Post by Stephen Hoffman
The quarterly update to the roadmap is now on our website...
http://www.vmssoftware.com/pdfs/VSI_Roadmap_20170306.pdf
Re: "Additional Scripting Language"
Consolidating Python would be good.
Probably more community effort (and I don't personally believe in JavaScript on the server) but Node.js is used more and more these days for server examples.
With no disrespect to all those who have given, and who currently give, their blood, sweat and tears, open source on VMS appears to be virtually stalled, pending some much-needed help to move it forward on the platform

Do I have the skills to know what I'm talking about? No, but I see the results. Projects stalled waiting for core functionality to be implemented

How much of the work done on bringing Java up to date could be released and transitioned for the long time suffering VMS open source folk to get their hands on I wonder?

Python 3 is already mainstream and is pitched as the default download when people are learning Python. The VMS port is waiting for some fundamental aspects to be worked on before it can continue, as I understand it?

I really wish I knew enough to be of real help, but I don't, and I do see VMS getting further and further behind. Open source is often the first port of call now when companies look at software deployment; it's no longer seen as the backyard hack crap it used to be

In my place, I'd desperately love to investigate RabbitMQ as I believe it could be of use but when I look at the Erlang port and read comments it's as though everything fell in a hole and was not just hidden but was buried :-(

I can only hope that Python being dropped from the list of VMS's future was an oversight by someone way too busy because I think Python is almost as important as Java is to have supported on VMS
Craig A. Berry
2017-03-08 15:52:18 UTC
Permalink
Post by IanD
How much of the work done on bringing Java up to date could be released and
transitioned for the long time suffering VMS open source folk to get their
hands on I wonder?
Somewhere between none and a little, I suspect. The skills used are the skills needed (someone who can get the JVM's JIT working on IA64 could certainly do the same for libffi) but I doubt there is very much code that could be applied elsewhere for reasons both licensing and technical.
Post by IanD
I can only hope that Python being dropped from the list of VMS's future was
an oversight by someone way too busy because I think Python is almost as
important as Java is to have supported on VMS
What was it dropped from? It's on the latest roadmap under future investigations.
IanD
2017-03-09 10:42:41 UTC
Permalink
Post by Craig A. Berry
Post by IanD
How much of the work done on bringing Java up to date could be released and
transitioned for the long time suffering VMS open source folk to get their
hands on I wonder?
Somewhere between none and a little, I suspect. The skills used are the skills needed (someone who can get the JVM's JIT working on IA64 could certainly do the same for libffi) but I doubt there is very much code that could be applied elsewhere for reasons both licensing and technical.
Post by IanD
I can only hope that Python being dropped from the list of VMS's future was
an oversight by someone way too busy because I think Python is almost as
important as Java is to have supported on VMS
What was it dropped from? It's on the latest roadmap under future investigations.
So it is

I looked where it was last time, but it appears to have moved; it's now under open source to be investigated rather than tested
Arne Vajhøj
2017-03-09 01:40:04 UTC
Permalink
Post by IanD
With no disrespect the whole who have given and who currently give
their blood sweat and tears now, open source on VMS appears to be
virtually stalled pending some much needed help to move it forward on
the platform
Do I have the skills to know what I'm talking about? No, but I see
the results. Projects stalled waiting for core functionality to be
implemented
I really wish I knew enough to be of real help but I don't but I do
see VMS getting further and further behind. Open source is often the
first port of call now when companies look for software deployment,
it's no longer seen as the backyard hack crap it used to be
It is an unfortunate state that the ratio

number of people that know what DEC/CQP/HP/VSI should do : number of
people willing to take over maintenance of open source project XYZ

is like

100:1

But it is what it is.

Arne
Arne Vajhøj
2017-03-09 01:42:55 UTC
Permalink
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port and
read comments it's as though everything fell in a hole and was not
just hidden but was buried :-(
Any reason why you particular want RabbitMQ.

I am asking because the roadmap that started this thread
claims that ActiveMQ is available.

Arne
Kerry Main
2017-03-09 01:53:30 UTC
Permalink
-----Original Message-----
Arne Vajhøj via Info-vax
Sent: March 8, 2017 8:43 PM
Subject: Re: [Info-vax] March 2017 Roadmap
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe
Post by IanD
it could be of use but when I look at the Erlang port and read
comments it's as though everything fell in a hole and was not just
hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims that
ActiveMQ is available.
Arne
Likely already known, but just in case:

RabbitMQ on OpenVMS: (albeit a bit dated)
http://www.johndapps.com/rabbitmqonopenvms


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Arne Vajhøj
2017-03-09 02:18:10 UTC
Permalink
Post by m***@googlemail.com
-----Original Message-----
Arne Vajhøj via Info-vax
Sent: March 8, 2017 8:43 PM
Subject: Re: [Info-vax] March 2017 Roadmap
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe
Post by IanD
it could be of use but when I look at the Erlang port and read
comments it's as though everything fell in a hole and was not just
hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims that
ActiveMQ is available.
RabbitMQ on OpenVMS: (albeit a bit dated)
http://www.johndapps.com/rabbitmqonopenvms
I did not know about it, but I don't think it changes anything.

Its arguments are basically that:
* a MQ is needed for integration
* RabbitMQ supports AMQP

ActiveMQ does that as well.

Arne
IanD
2017-03-09 10:59:17 UTC
Permalink
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port and
read comments it's as though everything fell in a hole and was not
just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread
claims that ActiveMQ is available.
Arne
Since I'm starting out learning, everything I have come across has said Rabbit is the most popular

Here is one such source, there's plenty of others

https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq

It seems in my preliminary looking around, Rabbit has more frequently updated python support as well

Whether true or not, I've run across people saying they have found Rabbit to be more stable, which they attribute to Erlang's OTP design principles

But hey, I'm just starting out and consequently I'm swayed by the masses at the moment which favour Rabbit
IanD
2017-03-09 11:11:40 UTC
Permalink
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port and
read comments it's as though everything fell in a hole and was not
just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread
claims that ActiveMQ is available.
Arne
Since I'm starting out learning everything I have come across has said Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It seems in my preliminary looking around, Rabbit has more frequently updated python support as well
Whether true or not, I've run across people saying they have found Rabbit to be more stable which they attribute to Erlang's OTP design principals
But hey, I'm just starting out and consequently I'm swayed by the masses at the moment which favour Rabbit
Why I'm looking to investigate message queueing is to see if it can be used to take things forward where I work

Information is being distributed using ftp at present to the tune of 10,000's of files per day from different sources which are basically event information. The events themselves are not large chunks of data, perhaps a few k to less than 5mb in size. The data is processed, then bundled and shipped downstream to be consumed elsewhere. It's bundled because ftp'ing all the individual files all over again would be too taxing - maybe.

It seems to me that message queueing would be a good solution for this type of information distribution

Am I right in believing message queueing via something like Rabbit would allow us to scale up ? There is talk of a lot more transactions happening and I'm somewhat nervous that ftp may not take us to where we want to go
Richard Maher
2017-03-09 11:21:10 UTC
Permalink
Post by IanD
Why I'm looking to investigate message queueing is to see if it can
be used to take things forward where I work
Information is being distributed using ftp at present to the tune of
10,000's of files per day from different sources which are basically
event information. The events themselves are not large chunks of
data, perhaps a few k to less than 5mb in size. The data is
processed, then bundled and shipped downstream to be consumed
elsewhere. It's bundled because ftp'ing all the individual files all
over again would be too taxing - maybe.
The best way to replicate data is not at all. Step outside the square
and re-think your problem. BTW You don't have data shares where you are?
Post by IanD
It seems to me that message queueing would be a good solution for
this type of information distribution
But you're a sys admin so what would you know?
Post by IanD
Am I right in believing message queueing via something like Rabbit
would allow us to scale up ? There is talk of a lot more
transactions happening and I'm somewhat nervous that ftp may not take
us to where we want to go
What next Enterprise Service Bus? Rendezvous? talk about Back to the
Future :-(
Jan-Erik Soderholm
2017-03-09 11:28:14 UTC
Permalink
Post by IanD
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port and
read comments it's as though everything fell in a hole and was
not just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims that
ActiveMQ is available.
Arne
Since I'm starting out learning everything I have come across has said
Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It seems in my preliminary looking around, Rabbit has more frequently
updated python support as well
Whether true or not, I've run across people saying they have found
Rabbit to be more stable which they attribute to Erlang's OTP design
principals
But hey, I'm just starting out and consequently I'm swayed by the
masses at the moment which favour Rabbit
Why I'm looking to investigate message queueing is to see if it can be
used to take things forward where I work
Information is being distributed using ftp at present to the tune of
10,000's of files per day from different sources which are basically
event information. The events themselves are not large chunks of data,
perhaps a few k to less than 5mb in size. The data is processed, then
bundled and shipped downstream to be consumed elsewhere. It's bundled
because ftp'ing all the individual files all over again would be too
taxing - maybe.
It seems to me that message queueing would be a good solution for this
type of information distribution
Am I right in believing message queueing via something like Rabbit would
allow us to scale up ? There is talk of a lot more transactions
happening and I'm somewhat nervous that ftp may not take us to where we
want to go
Having a good message queuing infrastructure takes away the issues
of network disturbances from the applications. Just post your data
to the queue and it will arrive at the other end, sometime.

We use IBM-MQ for similar uses, integration of the VMS system with
other systems (running on every other platform you can think of).
This is handled by a corporate messaging group that uses IBM
WebSphere and IBM-MQ for the communication. So there is a central
messaging gateway, so to speak.

On our VMS system, we use the "MQ client for VMS" parts, that is
just an API and we are still dependent on a connection to one of
the WebSphere servers to be able to post data. To receive data,
our applications are waiting in an MQ READ call, so as soon as
anything arrives on the queue, it is processed at once.
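
As a rough illustration of that post-and-blocking-read pattern, here is a minimal
Python sketch. It uses the pika RabbitMQ client purely as a stand-in (an
assumption on my part; the setup described above uses the IBM MQ client API,
which is different), but the shape is the same: post to a named queue, and block
in a consume loop until something arrives.

import pika

# Producer: connect to a local broker and post one message on the "events" queue.
conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='events', durable=True)
ch.basic_publish(exchange='', routing_key='events', body='event payload goes here')
conn.close()

# Consumer: sit in a blocking read; the callback runs as soon as a message arrives.
def handle(channel, method, properties, body):
    print('got: %s' % body)

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='events', durable=True)
ch.basic_consume(queue='events', on_message_callback=handle, auto_ack=True)
ch.start_consuming()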

I have used DEC-DMQ earlier (now BEA MessageQ owned by Oracle)
https://docs.oracle.com/cd/E13203_01/tuxedo/msgq/intro/i_chap1.htm
but that was in the Digital time frame. Nice product, anyway.

I have not looked closer at the different OSS solutions available today.

Jan-Erik.
Richard Maher
2017-03-09 12:25:01 UTC
Permalink
Post by Jan-Erik Soderholm
Having a good message queuing infrastructure take avay the issues
of network disturbances from the applications. Just post your data
to the queue and it will arrive at the other end, sometime.
Having a good BATCH queue makes up for pesky problems with the UI :-(
Jan-Erik Soderholm
2017-03-09 12:51:05 UTC
Permalink
Post by Richard Maher
Post by Jan-Erik Soderholm
Having a good message queuing infrastructure take avay the issues
of network disturbances from the applications. Just post your data
to the queue and it will arrive at the other end, sometime.
Having a good BATCH queue makes up for pesky problems with the UI :-(
Does it? Interesting, I must make a note about that!

But seriously, I saw your other reply to IanD. You seem to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.

And that message queuing is something from the past.

Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the system is shut down for "batch updates".

Yes, SOA, WebServices and so on were meant to solve all that. But that
doesn't really work in (all) real life environments.
David Froble
2017-03-09 17:22:07 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Richard Maher
Post by Jan-Erik Soderholm
Having a good message queuing infrastructure take avay the issues
of network disturbances from the applications. Just post your data
to the queue and it will arrive at the other end, sometime.
Having a good BATCH queue makes up for pesky problems with the UI :-(
Does it? Interesting, I must make a note about that!
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
Hmmm ....

That wasn't how I read Richard's comment. I thought that he was saying that
using FTP to make multiple copies of data was poor practice.

I'm more used to using messaging to send the data, and yes, one could consider
that multiple copies, but I do not. Of course, the message could just point to
the data. I guess how things are done will depend greatly on the task(s) to be
performed.
Post by Jan-Erik Soderholm
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Arne Vajhøj
2017-03-09 22:56:37 UTC
Permalink
Post by Jan-Erik Soderholm
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Web services and message queues complement each other.

Web services for sync and message queues for async.

Both are technical building blocks for SOA.

The SOA purists will even claim that message queues are more
suited for SOA than web services.

Arne
Richard Maher
2017-03-10 00:33:49 UTC
Permalink
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Web services and message queues complement each other.
Web services for sync and message queues for async.
I question your terminology. You are not talking about "async"; you are
recommending "store and forward". There are huge differences!

Web Services can be, and mostly are, async. I like the new Promise(resolve,
fail) design for virtual inlining/waiting for async services.

Now, with ServiceWorkers, UAs and WebApps can be instantiated on server
demand for push messages etc.
Post by Arne Vajhøj
Both are technical building blocks for SOA.
Horses for courses.
Post by Arne Vajhøj
The SOA purists will even claim that message queues are more
suited for SOA than web services.
Anecdotal bollocks. "Experts say" really? What next "Friends of the couple"?
Post by Arne Vajhøj
Arne
Arne Vajhøj
2017-03-10 00:51:53 UTC
Permalink
Post by Richard Maher
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Web services and message queues complement each other.
Web services for sync and message queues for async.
I question your terminology. You are not talking about "async" you are
recommending "store and forward" there are huge differences!
The sender/producer writes the data and the receiver/consumer
reads the data later. And the sender/producer does not know what
happened (unless acks are used).

That is async.
Post by Richard Maher
Web Services can, and mostly are, Async. I like the new Promise(resolve,
fail) design for virtual inlining/waiting for async services.
Now with ServiceWorkers UAs and WebApps can be instantiated on Server
demand for Push Messages etc.
The HTTP protocol is fundamentally sync. A request is sent and
then there is a wait for the response.

Various smart libraries have been made to make web service
calls appear async by having somebody other than the caller wait for
the response.

That is not true async. And probably less suited for long delays.
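
A stdlib-only Python sketch of the distinction (illustrative only; queue.Queue
stands in for a real broker): the sync caller blocks until it has the answer,
while the async producer just hands the message off and carries on, not knowing
when, or by what, it will be processed.

import queue
import threading
import time

def sync_call(request):
    # Synchronous: the caller sits here until the "service" has produced the reply.
    time.sleep(1)                  # pretend this is the remote service working
    return request.upper()

broker = queue.Queue()             # stand-in for a durable queue / broker

def consumer():
    while True:
        msg = broker.get()         # the reader picks messages up whenever it gets to them
        print('processed %s' % msg)
        broker.task_done()

threading.Thread(target=consumer, daemon=True).start()

print(sync_call('hello'))          # blocks; the caller knows the outcome
broker.put('hello')                # returns at once; outcome is unknown to the caller
broker.join()                      # demo only: wait so the output is visible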
Post by Richard Maher
Post by Arne Vajhøj
The SOA purists will even claim that message queues are more
suited for SOA than web services.
Anecdotal bollocks. "Experts say" really?
The argument is very simple: message queue solutions are less
coupled than web service solutions, as message queues do not come
with the same availability and response time requirements as
web services.

Arne
Richard Maher
2017-03-10 01:00:38 UTC
Permalink
Post by Arne Vajhøj
Post by Richard Maher
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Web services and message queues complement each other.
Web services for sync and message queues for async.
I question your terminology. You are not talking about "async" you are
recommending "store and forward" there are huge differences!
The sender/producer write the data and the receiver/consumer
read the data later. And the sender/producer does not know what
happened (unless using acks).
"It rubs the lotion on its skin"? What are you talking about?
https://en.wikipedia.org/wiki/Asynchrony_(computer_programming)
http://activemq.apache.org/networks-of-brokers.html
https://docs.oracle.com/cd/E13222_01/wls/docs90/saf_admin/overview.html
Post by Arne Vajhøj
That is async.
Post by Richard Maher
Web Services can, and mostly are, Async. I like the new Promise(resolve,
fail) design for virtual inlining/waiting for async services.
Now with ServiceWorkers UAs and WebApps can be instantiated on Server
demand for Push Messages etc.
The HTTP protocol is fundamentally sync. Requests is send and
a wait for response happen.
There has been made various smart libraries to make web service
calls appear async by having somebody else than caller wait for
the response.
That is not true async. And probably less suited for long delays.
Post by Richard Maher
Post by Arne Vajhøj
The SOA purists will even claim that message queues are more
suited for SOA than web services.
Anecdotal bollocks. "Experts say" really?
The arguments is very simple: message queue solutions are less
coupled than web service solutions as message queues does not come
with the same availability and response time requirements as
web services.
Arne
For Pete's sake stop waffling and stick to referenced facts
David Froble
2017-03-10 01:25:08 UTC
Permalink
Post by Richard Maher
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
But seriously, I saw you other reply to IanD. You seems to think
that all systems should read/access the data right from the source
between different systems. Yes, a nice thought, I guess.
And that message queuing is something from the past.
Doesn't work when you have a number of systems and you have to have
well defined interfaces, timings for the transfers and so on. You have
to match the "nightly batch window" on other systems, a several hour
period when the the system is shut down for "batch updates".
Yes, SOA, WebServices and so on was ment to solve all that. But that
doesn't really work in (all) real life environments.
Web services and message queues complement each other.
Web services for sync and message queues for async.
I question your terminology. You are not talking about "async" you are
recommending "store and forward" there are huge differences!
Can you explain the differences? As far as I can see, either sender and
receiver must be ready to communicate at the same time, sync, or they can
process the message at different times, async, and both do not have to be
running at the same time.

Or maybe I don't understand "async" ....
Richard Maher
2017-03-10 01:49:29 UTC
Permalink
Post by David Froble
Can you explain the differences? As far as I can see, either sender and
receiver must be ready to communicate at the same time, sync, or they
can process the message at different times, async, and both do not have
to be running at the same time.
Or maybe I don't understand "async" ....
FFS see the post immediately before this one in tree view. 25 mins
before yours with links to definitions.
David Froble
2017-03-10 04:33:40 UTC
Permalink
Post by Richard Maher
Post by David Froble
Can you explain the differences? As far as I can see, either sender and
receiver must be ready to communicate at the same time, sync, or they
can process the message at different times, async, and both do not have
to be running at the same time.
Or maybe I don't understand "async" ....
FFS see the post immediately before this one in tree view. 25 mins
before yours with links to definitions.
Ok, I read a bit. The best impression I came away with is that some people are
hijacking the word "asynchronous". For many years, it basically was the
opposite of "synchronous". Now, sync means both sides of any interaction must
be happening at the same time. I'll stick with that, and wiki and the rest can
pound salt.

It's like a letter through the mail. You write it, the entity you're writing to
doesn't know anything about it, and won't until it is delivered to them. Once
you send it via the post office, you no longer have anything to do with getting
it to the recipient. Now, to me, that is async messaging. On the other hand,
a phone call is a sync communication. Sort of.
IanD
2017-03-10 05:04:45 UTC
Permalink
Post by David Froble
Post by Richard Maher
Post by David Froble
Can you explain the differences? As far as I can see, either sender and
receiver must be ready to communicate at the same time, sync, or they
can process the message at different times, async, and both do not have
to be running at the same time.
Or maybe I don't understand "async" ....
FFS see the post immediately before this one in tree view. 25 mins
before yours with links to definitions.
Ok, I read a bit. The best impression I came away with is that some people are
hijacking the word "asynchronous". For many years, it basically was the
opposite of "synchronous". Now, sync means both sides of any interaction must
be happening at the same time. I'll stick with that, and wiki and the rest can
pound salt.
It's like a letter through the mail. You write it, the entity you're writing to
doesn't know anything about it, and won't until it is delivered to them. Once
you send it via the post office, you no longer have anything to do with getting
it to the recipient. Now, to me, that is async messaging. On the other haand,
a phone call is a sync communication. Sort of.
So the sender and the receiver are not in direct contact / sharing the same medium of contact with one another at the same point in time?

This starts to go down the path of blocking / non-blocking I/O perhaps?

I was recently told by someone who does web-related programming that they were having all sorts of excited times over languages with non-blocking I/O abilities

I too thought wow, this sounds great. When I looked a bit further, I thought to myself, isn't this like VMS system services that you can call that are non '_w' calls? (i.e. without the wait?). It seemed to me that these so called non-blocking I/O wonders were nothing more than this and worse, they seem to require the caller to constantly poll for some event to see if their request is complete or not, versus being interrupted

Perhaps it's all over my head and I don't know what the hell I'm talking about, but it did, at first glance, appear to me that the web world is getting excited about something that VMS has been doing for a long time ???
David Froble
2017-03-10 13:44:43 UTC
Permalink
Post by IanD
Post by David Froble
Post by Richard Maher
Post by David Froble
Can you explain the differences? As far as I can see, either sender and
receiver must be ready to communicate at the same time, sync, or they
can process the message at different times, async, and both do not have
to be running at the same time.
Or maybe I don't understand "async" ....
FFS see the post immediately before this one in tree view. 25 mins
before yours with links to definitions.
Ok, I read a bit. The best impression I came away with is that some people are
hijacking the word "asynchronous". For many years, it basically was the
opposite of "synchronous". Now, sync means both sides of any interaction must
be happening at the same time. I'll stick with that, and wiki and the rest can
pound salt.
It's like a letter through the mail. You write it, the entity you're writing to
doesn't know anything about it, and won't until it is delivered to them. Once
you send it via the post office, you no longer have anything to do with getting
it to the recipient. Now, to me, that is async messaging. On the other haand,
a phone call is a sync communication. Sort of.
So the sender and the receiver are not in direct contact /sharing the same
medium of contact, with one another at the same point in time?
Yes, that's basically async.
Post by IanD
This starts to go down the path of blocking / non-blocking I/O perhaps?
Don't know about that. Nor is this "blocking" something I'd heard about until I
started doing some weendoze work.

On weendoze, at least in the past, TCP/IP could block other operations if the
current operation was "blocking". Now, that's sort of like the UPS driver's
truck is in your driveway delivering a package, and the FedEx driver also has a
package, but he has to wait in the street until the UPS driver is finished with
the delivery and leaves.
Post by IanD
I was recently told by someone who does web related programming and they were
having all sorts of excited times over languages with non-blocking I/O
abilities
The tape drive imitating weendoze weenies most likely never heard about things
such as ASTs and other things that real programmers using real OS have been
using for many years. They must be easily impressed, but only when something
isn't completely over their head and they can actually understand it.
Post by IanD
I too thought wow, this sounds great. When I looked a bit further, I thought
to myself, isn't this like VMS system services that you can call that are non
'_w' calls? (i.e. without the wait?). It seemed to me that these so called
non-blocking I/O wonders were nothing more than this and worse, they seem to
require the caller to constantly poll for some event to see if their request
is complete or not, versus being interrupted
A QIOW is basically a sync operation. It does not complete until it, well, is
complete. If you invoke it, you then sit and wait for that completion, you
don't do other things. A QIO on the other hand is queued up to execute when it
can, the caller then can move on and do other things and can get a notification
when the QIO operation is completed. That's sort of async, from some perspectives.
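
Roughly the same split exists in Python's standard library; a small sketch
(illustrative only, not VMS code): result() behaves like the $QIOW-style wait,
while add_done_callback() gives the $QIO-style "tell me when it's done"
behaviour, leaving the caller free to do other work.

from concurrent.futures import ThreadPoolExecutor
import time

def slow_io(n):
    time.sleep(1)                  # stand-in for an I/O operation
    return n * 2

pool = ThreadPoolExecutor(max_workers=2)

# $QIOW-like: queue the work, then sit and wait right here for completion.
print(pool.submit(slow_io, 21).result())

# $QIO-like: queue the work, ask to be notified on completion, and keep going.
fut = pool.submit(slow_io, 21)
fut.add_done_callback(lambda f: print('completed with %s' % f.result()))
print('doing other things while the I/O runs...')
pool.shutdown(wait=True)           # demo only: wait before the program exits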
Post by IanD
Perhaps it's all over my head and I don't know what the hell I'm talking
about but it did, at first hand appear to me that the web world is getting
excited about something that VMS has been doing for a long time ???
That's nothing new ....
Craig A. Berry
2017-03-10 14:13:48 UTC
Permalink
Post by David Froble
Don't know about that. Nor is this "blocking" something I'd heard about
until I started doing some weendoze work.
On weendoze, at least in the past, TCP/IP could block other operations
if the current operation was "blocking". Now, that's sort of like the
UPS driver's truck is in your driveway delivering a package, and the
FedEx driver also has a package, but he has to wait in the street until
the UPS driver is finished with the delivery and leaves.
That's concurrency, not blocking versus non-blocking. Blocking just
means the UPS driver can't start his next delivery until you answer the
door and take your package off his hands -- $QIOW rather than $QIO.
Richard Maher
2017-03-10 15:23:30 UTC
Permalink
Post by IanD
So the sender and the receiver are not in direct contact /sharing the
same medium of contact, with one another at the same point in time?
This starts to go down the path of blocking / non-blocking I/O
perhaps?
I was recently told by someone who does web related programming and
they were having all sorts of excited times over languages with
non-blocking I/O abilities
You are correct. They finally realized that threads don't scale well and
asynch i/o scales better. A la mode de VMS.
David Froble
2017-03-09 17:11:51 UTC
Permalink
Post by IanD
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port and
read comments it's as though everything fell in a hole and was not
just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread
claims that ActiveMQ is available.
Arne
Since I'm starting out learning everything I have come across has said Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It seems in my preliminary looking around, Rabbit has more frequently updated python support as well
Whether true or not, I've run across people saying they have found Rabbit to be more stable which they attribute to Erlang's OTP design principals
But hey, I'm just starting out and consequently I'm swayed by the masses at the moment which favour Rabbit
Why I'm looking to investigate message queueing is to see if it can be used to take things forward where I work
Information is being distributed using ftp at present to the tune of 10,000's of files per day from different sources which are basically event information. The events themselves are not large chunks of data, perhaps a few k to less than 5mb in size. The data is processed, then bundled and shipped downstream to be consumed elsewhere. It's bundled because ftp'ing all the individual files all over again would be too taxing - maybe.
It seems to me that message queueing would be a good solution for this type of information distribution
Am I right in believing message queueing via something like Rabbit would allow us to scale up ? There is talk of a lot more transactions happening and I'm somewhat nervous that ftp may not take us to where we want to go
What are called message systems can be very valuable. Well, in my opinion,
async message systems anyway.

Basically, any task can decide it has some data it needs to forward to another
task, and instead of a direct connection and sync transmission, the data is sent
to a messaging product which will store the data for whatever resource it is
intended for. Resources can be identified in various ways, but a name is usually
a good selection for identifying the resource. Then, when it's ready for data,
the resource can ask the messaging system to see if it has any "mail". (I use
"mail" loosely here; traditional mail, while itself a messaging system, is not
part of what I'm describing.)
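
As a toy illustration of that named-mailbox idea (purely a sketch, in-process
only, nothing like a real messaging product): messages are held per named
resource until the resource asks for its "mail".

from collections import defaultdict, deque

class MessageStore(object):
    """Holds messages for named resources until they ask for them."""
    def __init__(self):
        self._boxes = defaultdict(deque)

    def send(self, resource, data):
        # The sender needs no connection to the receiver; it just names it.
        self._boxes[resource].append(data)

    def check_mail(self, resource):
        # The receiver collects whatever has accumulated, whenever it is ready.
        box = self._boxes[resource]
        while box:
            yield box.popleft()

store = MessageStore()
store.send('billing', {'event': 42})
store.send('billing', {'event': 43})
for msg in store.check_mail('billing'):
    print(msg)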

Such systems can serve a single computer, or be network aware.

Do note that not all "messaging" software is created equal. In addition to
being local or network aware, there can be other differences, type and size of
data supported, historical logging, and such. Choose the wrong one and you
might be worse off than choosing none at all.

I've been using a "bespoke" (a word that needs to end up the same place that
"legacy" needs to end up) messaging system for over 30 years. It allows the
pieces of an application to be tied together without having intimate (carnal)
knowledge of other pieces.
Arne Vajhøj
2017-03-10 00:21:45 UTC
Permalink
Post by IanD
Why I'm looking to investigate message queueing is to see if it can
be used to take things forward where I work
Information is being distributed using ftp at present to the tune of
10,000's of files per day from different sources which are basically
event information. The events themselves are not large chunks of
data, perhaps a few k to less than 5mb in size. The data is
processed, then bundled and shipped downstream to be consumed
elsewhere. It's bundled because ftp'ing all the individual files all
over again would be too taxing - maybe.
It seems to me that message queueing would be a good solution for
this type of information distribution
Am I right in believing message queueing via something like Rabbit
would allow us to scale up ? There is talk of a lot more
transactions happening and I'm somewhat nervous that ftp may not take
us to where we want to go
Message queue performance depends a lot on the context:
* message size
* durable vs non-durable messages
* transactional vs non-transactional messages

ActiveMQ's own page claims, using a modest system:
* 20000 messages/second non-durable
* 2000 messages/second durable

http://activemq.apache.org/performance.html

RabbitMQ in a rather powerful config (32 node
cluster with each cluster member 8 VCPU and 30 GB RAM)
did 2 million messages/second non-durable.

https://content.pivotal.io/blog/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine

I think you can make it work performance wise.

Is the 5 MB the max size of single messages or the max size
of a file bundle with multiple messages?

If the former, and the hardware used is very modest, then maybe
put up with FTP and just use the message queue for metadata and
a reference to the file.

As a rule of thumb - don't stuff data in a message queue that
you would not stuff in a database on the same hardware.

Arne
Richard Maher
2017-03-10 00:36:36 UTC
Permalink
Post by Arne Vajhøj
* message size
* durable vs non-durable messages
* transactional vs non-transactional messages
* 20000 messages/second non-durable
* 2000 messages/second durable
http://activemq.apache.org/performance.html
RabbitMQ in a rather powerful config (32 node
cluster with each cluster member 8 VCPU and 30 GB RAM)
did 2 million messages/second non-durable.
https://content.pivotal.io/blog/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine
If you're willing to defer delivery then what do you care how fast it
is? The queue is for guaranteed delivery is it not? Store And Forward?
Catering for all those network issues Dirk dreams of?
Arne Vajhøj
2017-03-10 00:44:01 UTC
Permalink
Post by Richard Maher
Post by Arne Vajhøj
* message size
* durable vs non-durable messages
* transactional vs non-transactional messages
* 20000 messages/second non-durable
* 2000 messages/second durable
http://activemq.apache.org/performance.html
RabbitMQ in a rather powerful config (32 node
cluster with each cluster member 8 VCPU and 30 GB RAM)
did 2 million messages/second non-durable.
https://content.pivotal.io/blog/rabbitmq-hits-one-million-messages-per-second-on-google-compute-engine
If you're willing to defer delivery then what do you care how fast it
is? The queue is for guaranteed delivery is it not? Store And Forward?
Catering for all those network issues Dirk dreams of?
A durable queue or topic can guarantee delivery.

So if the receiver gets behind then it is usually not a big issue - messages
will just queue up and be read when the receiver catches up.

But the max performance still limits the sender.

Arne
Arne Vajhøj
2017-03-09 22:53:20 UTC
Permalink
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port
and read comments it's as though everything fell in a hole and
was not just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims
that ActiveMQ is available.
Since I'm starting out learning everything I have come across has
said Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It is possible that RabbitMQ is more widely used than ActiveMQ.

But I will say that ActiveMQ is so widely used that it is an
acceptable option.
Post by IanD
It seems in my preliminary looking around, Rabbit has more frequently
updated python support as well
Again - very possible.

But ActiveMQ does have Python support.
Post by IanD
Whether true or not, I've run across people saying they have found
Rabbit to be more stable which they attribute to Erlang's OTP design
principals
ActiveMQ is pretty stable.

Overall I think you should give ActiveMQ a try. Its status
in the VSI roadmap may make up for other aspects.

Arne
IanD
2017-03-10 03:39:31 UTC
Permalink
Post by Arne Vajhøj
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port
and read comments it's as though everything fell in a hole and
was not just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims
that ActiveMQ is available.
Since I'm starting out learning everything I have come across has
said Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It is possible that RabbitMQ is more widely used than ActiveMQ.
But I will say that ActiveMQ is so widely used that it is an
acceptable option.
Post by IanD
It seems in my preliminary looking around, Rabbit has more frequently
updated python support as well
Again - very possible.
But ActiveMQ does have Python support.
Post by IanD
Whether true or not, I've run across people saying they have found
Rabbit to be more stable which they attribute to Erlang's OTP design
principals
ActiveMQ is pretty stable.
Overall I think you should give ActiveMQ a try. Its status
in VSI roadmap may make up for other aspects.
Arne
Thanks

And for more background details...

At this stage I am just looking at possible ideas as I started to look at RabbitMQ

Basically, we have an application that takes events from different hardware and they get bundled on some upstream system(s) that then collate them and ftp them onto the VMS system as bundled event file(s)

There are approximately 200K files in an 8-hour period, ranging in size from a few K to about 5 MB at the upper end

The number of actual events would be about 2 million (but they are bundled into files that are ftp'd to us)

So at around peak, we would be receiving via ftp about 10 - 12 files/sec.

Nothing really large, but large enough, as the information basically needs to be unpacked from the ftp'd file and processed.
Once the processing is done, the events are bundled again into files with additional information and ftp'd to other downstream systems for customer consumption (display only)

There is talk about a 400% increase in events, so I naturally started to wonder if the current ftp method is really up to the task, and therefore started to wonder about message queuing as an alternative - in particular, the idea that message queuing's ability to broadcast may allow some parallel processing work to be done on the flowing data versus a strict waterfall approach (this is just an idea, I have not looked into the feasibility exactly)
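
Back-of-the-envelope, using the numbers above and the ActiveMQ figures quoted
earlier in the thread, the rates look comfortable even after such an increase
(a quick sketch; averages only, peaks will be higher):

files_per_shift = 200000.0          # ~200K files in an 8-hour period
events_per_shift = 2000000.0        # ~2 million events in the same period
seconds = 8 * 3600.0

print(files_per_shift / seconds)          # ~7 files/sec on average (10-12 at peak)
print(events_per_shift / seconds)         # ~69 events/sec on average
print(5 * events_per_shift / seconds)     # ~347 events/sec after a 400% increase (5x)

# ActiveMQ's own page (quoted earlier in the thread) claims ~2000 durable
# messages/sec on a modest system, so even the projected event rate leaves headroom.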

In a previous role, we used a webserver (WASD) that handled this type of task well

We do not have a relational DB and everything is flat file based with near zero chance of moving to a relational DB. They would rather spend the money on decommissioning than go to the expense of moving to a relational DB (in fact, this is the long term plan - story of my life in regards to VMS it seems - everyone wanting to decommission...)
Arne Vajhøj
2017-03-10 03:50:49 UTC
Permalink
Post by IanD
And for more background details...
At this stage I am just looking at possible ideas as I started to look at RabbitMQ
Basically, we have an application that takes events from different
hardware and they get bundled on some upstream system(s) that then
collate them and ftp them onto the VMS system as bundled event
file(s)
There are approximately 200K files in an 8 hour period, ranging in
size from a few k to about 5M on the upper size of things
The number of actual events would be about 2 million (but they are
bundled into files that are ftp's to us)
So at around peak, we would be receiving via ftp about 10 - 12
files/sec.
Nothing really large but large enough as the information basically
needs to be unpacked (from the ftp file and processed). Once the
processing is done, they are bundled again into files with additional
information and ftp'd to other downstream systems for customer
consumption (display only)
There is talk about a 400% increase in events so I naturally started
to wonder if the current ftp method was really up to the task and
therefore started to wonder about message queuing as an alternative -
in particular the idea of message queuing having the ability to
broadcast which may allow some parallel processing work to be done on
the flowing data versus a strict waterfall approach (this is just an
idea, i have not looked into the feasibility exactly)
Well - if you want durable messages, then you will need to do
disk IO.

But you will avoid the FTP process creation overhead by going
message queue.

And by going message queue you will also avoid all the hassle
of making sure you don't start on files that are not completely transferred yet.

And yes - it should also make it easier to consume events in parallel
by multiple processes/threads.
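
A tiny stdlib sketch of that fan-out (queue.Queue standing in for a broker
queue, with several competing consumers draining it in parallel):

import queue
import threading

events = queue.Queue()

def worker(name):
    while True:
        item = events.get()
        if item is None:               # shutdown signal
            break
        print('%s unpacked and processed %s' % (name, item))
        events.task_done()

# Several consumers draining the same queue in parallel.
threads = [threading.Thread(target=worker, args=('worker-%d' % i,)) for i in range(4)]
for t in threads:
    t.start()

for n in range(20):
    events.put('event %d' % n)

events.join()                          # wait until everything has been processed
for t in threads:
    events.put(None)                   # then tell each worker to stop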

Arne
Richard Maher
2017-03-10 03:53:03 UTC
Permalink
Post by IanD
And for more background details...
At this stage I am just looking at possible ideas as I started to look at RabbitMQ
Basically, we have an application that takes events from different
hardware and they get bundled on some upstream system(s) that then
collate them and ftp them onto the VMS system as bundled event
file(s)
There are approximately 200K files in an 8 hour period, ranging in
size from a few k to about 5M on the upper size of things
The number of actual events would be about 2 million (but they are
bundled into files that are ftp's to us)
So at around peak, we would be receiving via ftp about 10 - 12
files/sec.
Nothing really large but large enough as the information basically
needs to be unpacked (from the ftp file and processed). Once the
processing is done, they are bundled again into files with
additional information and ftp'd to other downstream systems for
customer consumption (display only)
There is talk about a 400% increase in events so I naturally started
to wonder if the current ftp method was really up to the task and
therefore started to wonder about message queuing as an alternative
- in particular the idea of message queuing having the ability to
broadcast which may allow some parallel processing work to be done
on the flowing data versus a strict waterfall approach (this is just
an idea, i have not looked into the feasibility exactly)
FWIW when I was writing software to process mobile calls at Telecom
Cellular (NZ) and Trade Matching (LCH/ICCH) a successful strategy was to
log the raw data to disk as quickly as possible in an auto-extendable
sequential file. Simply get raw data from switch or trade and persist it
to disk. The only additional processing it did was to acquire a lock
before the write and update the Lock Value Block with the latest
sequential transaction number written. That way the value-added reader
process would not read past EOF and have to open/close the file and
re-read all the processed rows in front.

This algorithm allowed the writers to press on handling real-time alarms
and the back-end/down-stream processor could catch up when possible. We
also had the switches dump files of calls rather than call-by-call
processing.
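
A stripped-down sketch of that pattern in Python (purely illustrative: a plain
log file plus a lock-protected counter standing in for the auto-extendable
sequential file and the VMS lock value block):

import threading

LOG = 'events.log'
log_lock = threading.Lock()
last_seq = 0                       # stands in for the lock value block

def persist(raw_record):
    """Writer: append the raw data, then publish the latest committed sequence number."""
    global last_seq
    with log_lock:
        with open(LOG, 'a') as f:
            f.write(raw_record + '\n')
        last_seq += 1

def process_new(last_done):
    """Reader: process only up to the published sequence number, never past EOF."""
    with log_lock:
        limit = last_seq           # snapshot of how far the writer has committed
    with open(LOG) as f:
        for seq, line in enumerate(f, start=1):
            if seq <= last_done:
                continue           # already handled in an earlier pass
            if seq > limit:
                break              # written after our snapshot; catch up next time
            print('value-added processing of %s' % line.strip())
    return limit

persist('raw call record 1')
persist('raw call record 2')
done = process_new(0)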

I'm not saying I know your environment and requirements better than you
do; I'm just saying that patching up a failed design with insulating tape
and fencing wire will only get you so far.
David Froble
2017-03-10 04:49:57 UTC
Permalink
Post by IanD
Post by Arne Vajhøj
Post by IanD
Post by Arne Vajhøj
Post by IanD
In my place, I'd desperately love to investigate RabbitMQ as I
believe it could be of use but when I look at the Erlang port
and read comments it's as though everything fell in a hole and
was not just hidden but was buried :-(
Any reason why you particular want RabbitMQ.
I am asking because the roadmap that started this thread claims
that ActiveMQ is available.
Since I'm starting out learning everything I have come across has
said Rabbit is the most popular
Here is one such source, there's plenty of others
https://stackshare.io/stackups/rabbitmq-vs-activemq-vs-zeromq
It is possible that RabbitMQ is more widely used than ActiveMQ.
But I will say that ActiveMQ is so widely used that it is an
acceptable option.
Post by IanD
It seems in my preliminary looking around, Rabbit has more frequently
updated python support as well
Again - very possible.
But ActiveMQ does have Python support.
Post by IanD
Whether true or not, I've run across people saying they have found
Rabbit to be more stable which they attribute to Erlang's OTP design
principals
ActiveMQ is pretty stable.
Overall I think you should give ActiveMQ a try. Its status
in VSI roadmap may make up for other aspects.
Arne
Thanks
And for more background details...
At this stage I am just looking at possible ideas as I started to look at RabbitMQ
Basically, we have an application that takes events from different hardware and they get bundled on some upstream system(s) that then collate them and ftp them onto the VMS system as bundled event file(s)
There are approximately 200K files in an 8 hour period, ranging in size from a few k to about 5M on the upper size of things
The number of actual events would be about 2 million (but they are bundled into files that are ftp's to us)
So at around peak, we would be receiving via ftp about 10 - 12 files/sec.
Nothing really large but large enough as the information basically needs to be unpacked (from the ftp file and processed).
Once the processing is done, they are bundled again into files with additional information and ftp'd to other downstream systems for customer consumption (display only)
There is talk about a 400% increase in events so I naturally started to wonder if the current ftp method was really up to the task and therefore started to wonder about message queuing as an alternative - in particular the idea of message queuing having the ability to broadcast which may allow some parallel processing work to be done on the flowing data versus a strict waterfall approach (this is just an idea, i have not looked into the feasibility exactly)
In a previous role, we used a webserver (WASD) that handled this type of task well
We do not have a relational DB; everything is flat-file based, with near zero chance of moving to a relational DB. They would rather spend the money on decommissioning than go to the expense of moving to a relational DB (in fact, decommissioning is the long term plan - story of my life with regard to VMS, it seems - everyone wanting to decommission...)
Well Ian, you give scant details. So I would ask some questions.

Is there any reason to bundle the data? Sounds like you must then unbundle it
at some point.

If the sender(s) and receiver(s) are not always active, then an async message
system could be a solution. Volume would be a consideration there.

However, if they are always available (which would also mean that if one was
down, sync communications would be down), then perhaps a sync communications
system might be appropriate. Listener sockets and clients can do that. You
can also keep up a permanent link.

More versatile would be async communications. Using automated messages would
allow better automation of the task(s). You could process each individual
packet of data as it arrived.

Note that such a procedure isn't something for a system manager to implement.
Software architects and engineers seem to be called for. That of course costs
money, and you have indicated in the past that even if you could offer these
people the best system in the world, they still would not want it.

Note, an RDBMS would not necessarily be of more benefit in what you describe.
Text files and such can store data just as well as anything else. It's in
retrieval that an RDBMS shines. For temporary data, the flat files are probably
a better solution. Actually, with messages, no files are required. "A" sends
data to "B" in a message, "B" processes the data, then sends data to "C" in
another message. It may be that at some point some data may be stored for
historical purposes. An RDBMS would be good for that. But there are other methods.
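
As a toy illustration of that "A" to "B" to "C" flow with messages and
no intermediate files, here is a sketch using nothing but Python's
standard queue and threading modules; the stage names and the
transformation are made up purely to show the pattern.

import queue
import threading

a_to_b = queue.Queue()
b_to_c = queue.Queue()

def stage_a():
    for i in range(5):
        a_to_b.put("raw event %d" % i)   # "A" sends data to "B" in a message
    a_to_b.put(None)                     # end-of-stream marker

def stage_b():
    while True:
        msg = a_to_b.get()
        if msg is None:
            break
        b_to_c.put(msg.upper())          # "B" processes and forwards to "C"
    b_to_c.put(None)

def stage_c():
    while True:
        msg = b_to_c.get()
        if msg is None:
            break
        print("delivered:", msg)         # "C" is the downstream consumer

threads = [threading.Thread(target=f) for f in (stage_a, stage_b, stage_c)]
for t in threads:
    t.start()
for t in threads:
    t.join()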
Arne Vajhøj
2017-03-10 03:55:43 UTC
Permalink
Post by Arne Vajhøj
Post by IanD
It seems in my preliminary looking around, Rabbit has more frequently
updated python support as well
Again - very possible.
But ActiveMQ does have Python support.
I just checked my "snippet storage".

I actually did some hello world level
ActiveMQ Python code 10 years ago.

import time
import sys

import stomp

# connect to the broker's STOMP port and drop a test message on the queue
con = stomp.Connection('localhost', 61613)
con.send('/queue/arne', 'Dette er en test')
con.disconnect()

and:

import threading
import sys

import stomp

class MyListener(object):
    def __init__(self, semarg):
        self.sem = semarg
        self.sem.acquire()               # take the semaphore; main will block on it below
    def receive(self, msg):
        self.val = msg.split('\n\n')[1]  # the body follows the blank line in the frame
        self.sem.release()               # let main continue once a message has arrived

con = stomp.Connection('localhost', 61613)
sem = threading.Semaphore(1)
mylist = MyListener(sem)
con.addlistener(mylist)
con.start()
con.subscribe('/queue/arne')
sem.acquire()                            # blocks until the listener releases the semaphore
con.unsubscribe('/queue/arne')
con.disconnect()
print mylist.val

Arne
Dirk Munk
2017-03-07 00:19:30 UTC
Permalink
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we posted the December 2016 roadmap I mentioned that we had not yet fully digested the effects of the Alpha release. We now have and you will see some projects have been moved out as we expected they likely would.
The next State of the Port will be the first week in April.
Nice...

A few questions:


1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course, but
I still haven't read anywhere whether you will support DECnet Phase V / OSI
over IPv6.

2. PTP is scheduled for 2018-Q1. How are you going to do that, which
kind of adapter?

3. For x86-64 the maximum number of cores is listed as 64+. Assuming it
is 64, then for maximum performance you can use a system with four
Xeon Gold 6130s, or a system with two Xeon Platinum 8180s. Not
very impressive in x86-64 land :-) . I suppose you will aim for more
cores than 64?
David Froble
2017-03-07 02:08:31 UTC
Permalink
Post by Dirk Munk
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet fully
digested the effects of the Alpha release. We now have and you will
see some projects have been moved out as we expected they likely would.
The next State of the Port will be the first week in April.
Nice...
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course, but
I still haven't read anywhere if you will support DECnet Phase v / OSI
over IPv6.
Wait for the movie ....

I believe it was Q1 2017, and I've been waiting for it before I attempt any more
work with sockets. Guess summer will really be a vacation ....

:-(
Post by Dirk Munk
2. PTP is scheduled for 2018-Q1. How are you going to do that, which
kind of adapter?
Wait for the movie ....
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming it
is 64, then for maximum performance you can use a system with four times
the Xeon Gold 6130, or a system with twice the Xeon Platinum 8180. Not
very impressive in x86-64 land :-) . I suppose you will aim for more
cores then 64?
First, please mention even one application, or group of applications, that might
need more than 64 cores ....

I do believe that it's been mentioned before that for most uses anything past 8
cores with VMS isn't real helpful?
Dirk Munk
2017-03-07 12:43:00 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
Nice...
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course,
but I still haven't read anywhere if you will support DECnet Phase v /
OSI over IPv6.
Wait for the movie ....
No, I asked Sue the same question months ago. When you look at the
TCPIP 10.5 presentations, only DECnet Phase IV tunnels over IP are
mentioned, not even Phase V over IPv4.
Post by David Froble
I believe it was Q1 2017, and I've been waiting for it before I attempt
any more work with sockets. Guess summer will really be a vacation ....
:-(
Post by Dirk Munk
2. PTP is scheduled for 2018-Q1. How are you going to do that, which
kind of adapter?
Wait for the movie ....
Technically it is quite a challenge to let PTP set the system time.....
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
I'm sure, but high-end XEONs have more than 8 cores. I don't think Intel
is going to reduce the number of cores on XEONs because VMS doesn't need
more than 8.
 
If VMS can't *effectively* scale over more than 8 cores, then it should
be a tablet OS, not a server OS.
Jan-Erik Soderholm
2017-03-07 12:56:26 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
Nice...
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course,
but I still haven't read anywhere if you will support DECnet Phase v /
OSI over IPv6.
Wait for the movie ....
No, I've asked Sue the same question months ago. When you look on the TCPIP
10.5 presentations, only DECnet Phase IV tunnels over IP are mentioned, not
even Phase V over IPv4.
Post by David Froble
I believe it was Q1 2017, and I've been waiting for it before I attempt
any more work with sockets. Guess summer will really be a vacation ....
:-(
Post by Dirk Munk
2. PTP is scheduled for 2018-Q1. How are you going to do that, which
kind of adapter?
Wait for the movie ....
Technically it is quite a challenge to let PTP set the system time.....
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
I'm sure, but high-end XEONs have more then 8 cores. I don't think Intel is
going to reduce the number of cores on XEONs because VMS doesn't need more
then 8.
If VMS can't *effectively* scale over more then 8 cores, then it should be
tablet OS, not a server OS.
Rubbish. The majority of the VMS market is probably happy with 2-4 cores.
And the high-core-count XEON servers are usually used to host VMs, not
to run a single 64 (or whatever) core OS instance.

Note also that HP is looking at building XEON servers with "locked down"
processors where maybe 2 or 4 cores are enabled. The thought seems to be
that the market for high-number-cores servers for VMS is rather small,
and that the major part of the VMS market prefers fewer cores with lower
license costs.

If your workload is happy with 2-4 cores, why should you pay licenses
for 8-16 cores? I'm not talking about VMS licenses here; they seem
to be "socket" based in the future.
 
Anyway, that "64" number is just a figure on paper.
David Froble
2017-03-07 14:02:42 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we
posted the December 2016 roadmap I mentioned that we had not yet
fully digested the effects of the Alpha release. We now have and you
will see some projects have been moved out as we expected they likely
would.
The next State of the Port will be the first week in April.
Nice...
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course,
but I still haven't read anywhere if you will support DECnet Phase v /
OSI over IPv6.
Wait for the movie ....
No, I've asked Sue the same question months ago. When you look on the
TCPIP 10.5 presentations, only DECnet Phase IV tunnels over IP are
mentioned, not even Phase V over IPv4.
Post by David Froble
I believe it was Q1 2017, and I've been waiting for it before I attempt
any more work with sockets. Guess summer will really be a vacation ....
:-(
Post by Dirk Munk
2. PTP is scheduled for 2018-Q1. How are you going to do that, which
kind of adapter?
Wait for the movie ....
Technically it is quite a challenge to let PTP set the system time.....
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
I'm sure, but high-end XEONs have more then 8 cores. I don't think Intel
is going to reduce the number of cores on XEONs because VMS doesn't need
more then 8.
If VMS can't *effectively* scale over more then 8 cores, then it should
be tablet OS, not a server OS.
At this time the largest usage of any of our customers is 300 users, plus
background jobs, services, and such. We're now planning a new system which will
have maybe 500 users plus the background jobs and such. I don't think a tablet
will do the job. Nor do I think a tablet OS and VMS should be mentioned in the
same sentence.

Not my job, so I'm not sure, but I think this is on 2 cores ....
Michael Moroney
2017-03-07 16:17:41 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course,
but I still haven't read anywhere if you will support DECnet Phase v /
OSI over IPv6.
Wait for the movie ....
No, I've asked Sue the same question months ago. When you look on the
TCPIP 10.5 presentations, only DECnet Phase IV tunnels over IP are
mentioned, not even Phase V over IPv4.
Is there a demand for that? Who is forward-looking enough to need IPv6
yet is still using DECnet V?

What about 6to4 tunnels?

Do keep in mind that VSI is resource (people) limited, its people have to
be working on things very important to VSI's future (VMS on x86) and/or
stuff that actually makes money for VSI.
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
I'm sure, but high-end XEONs have more then 8 cores. I don't think Intel
is going to reduce the number of cores on XEONs because VMS doesn't need
more then 8.
This is very, very workload dependent. CPU crunchers that don't have
much I/O will do well, but frequently VMS gets spinlock contention as
multiple cores contend for something (frequently IPL 8 stuff) and you
see lots of MP Synch time. Maybe the dedicated-core lock manager stuff
should be enabled by default, and other stuff converted to do something
similar, if possible. HP's TCPIP apparently also has that.

In testing, I've run number crunchers on 64 core BL890s that have done well
using all cores. I've even run multithread processes that accumulated a
year of CPU time (as seen with $ SHOW SYSTEM) after running for a couple
weeks.
Post by Dirk Munk
If VMS can't *effectively* scale over more then 8 cores, then it should
be tablet OS, not a server OS.
I wonder how well Windows/Linux really does, excluding the case where the
multicore base machine is a VM host.
 
Another option would be a "cluster in a box": a VM host with a zillion
cores, with multiple VMS guests in a cluster.
Dirk Munk
2017-03-07 17:45:10 UTC
Permalink
Post by Michael Moroney
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
1. VSI TCPIP v10.5 is scheduled for Q3-2017. It has IPv6 of course,
but I still haven't read anywhere if you will support DECnet Phase v /
OSI over IPv6.
Wait for the movie ....
No, I've asked Sue the same question months ago. When you look on the
TCPIP 10.5 presentations, only DECnet Phase IV tunnels over IP are
mentioned, not even Phase V over IPv4.
Is there a demand for that? Who is forward-looking enough to need IPv6
yet is still using DECnet V?
What about 6to4 tunnels?
Do keep in mind that VSI is resource (people) limited, its people have to
be working on things very important to VSI's future (VMS on x86) and/or
stuff that actually makes money for VSI.
Michael,

This is what I know:

* The MultiNet SPD states that DECnet Phase V is supported. It uses the
PWIP driver (as with HPE IP).

* I don't know if the PWIP driver can use IPv6, the SPD isn't clear
about that.

* There is an RFC for DECnet Phase V / OSI over IPv6, RFC2126. It is
exactly 20 years old this month.

* This RFC isn't in the list of RFC's supported by Multinet, however
RFC1006 and RFC1859 are not mentioned either, and these RFC's are about
DECnet Phase V / OSI over IPv4, so they must be supported.

* I don't know if VSI has tested if it works over IPv6, please tell us.

* Suppose a VMS systems has to be used in an IPv6 only network, and it
uses DECnet Phase V / OSI over IP, then what?

* I don't know how many companies follow the Facebook example, and
switch off IPv4 on their internal network.
Post by Michael Moroney
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
I'm sure, but high-end XEONs have more then 8 cores. I don't think Intel
is going to reduce the number of cores on XEONs because VMS doesn't need
more then 8.
This is very, very workload dependent. CPU crunchers that don't have
much I/O will do well but frequently VMS gets spinlock contention as
multiple cores contend for something (frequently IPL 8 stuff) and you
see lots of MP Synch time. Maybe the dedicated core lock manager stuff
should be enabled by default, and other stuff converted to do something
similar, if possible. HP's TCPIP apparently also has that.
In testing, I've run number crunchers on 64 core BL890s that have done well
using all cores. I've even run multithread processes that accumulated a
year of CPU time (as seen with $ SHOW SYSTEM) after running for a couple
weeks.
That sounds good!
Post by Michael Moroney
Post by Dirk Munk
If VMS can't *effectively* scale over more then 8 cores, then it should
be tablet OS, not a server OS.
I wonder how well Windows/Linux really does, excluding the case where the
multicore base machine is a VM host.
Good point, why don't you test it on the biggest HPE x86-64 box? Or
perhaps ask HPE if they tested it?

That will give you some figures to compare to OpenVMS scaling.
Post by Michael Moroney
Also an option will be a "cluster in a box" perhaps, a VM host with a zillion
cores, with multiple VMS guests in a cluster.
Michael Moroney
2017-03-07 19:37:48 UTC
Permalink
Post by Dirk Munk
Michael,
* The MultiNet SPD states that DECnet Phase V is supported. It uses the
PWIP driver (as with HPE IP).
* I don't know if the PWIP driver can use IPv6, the SPD isn't clear
about that.
* There is an RFC for DECnet Phase V / OSI over IPv6, RFC2126. It is
exactly 20 years old this month.
I did look the last time this came up; it does appear some IPv6 work
was done on DECnet V, but it was incomplete/pulled.
Post by Dirk Munk
* This RFC isn't in the list of RFC's supported by Multinet, however
RFC1006 and RFC1859 are not mentioned either, and these RFC's are about
DECnet Phase V / OSI over IPv4, so they must be supported.
* I don't know if VSI has tested if it works over IPv6, please tell us.
No such testing has been done.

Do you know of a VMS customer/user that actually wants/needs this?
Dirk Munk
2017-03-07 23:33:03 UTC
Permalink
Post by Michael Moroney
Post by Dirk Munk
Michael,
* The MultiNet SPD states that DECnet Phase V is supported. It uses
the PWIP driver (as with HPE IP).
* I don't know if the PWIP driver can use IPv6, the SPD isn't clear
about that.
* There is an RFC for DECnet Phase V / OSI over IPv6, RFC2126. It is
exactly 20 years old this month.
I did look the last time this came up, it does appear some IPv6 work
was done to DECnet V, but was incomplete/pulled.
Most likely when the work on IPv6 itself was pulled also.
Post by Michael Moroney
Post by Dirk Munk
* This RFC isn't in the list of RFC's supported by Multinet, however
RFC1006 and RFC1859 are not mentioned either, and these RFC's are about
DECnet Phase V / OSI over IPv4, so they must be supported.
* I don't know if VSI has tested if it works over IPv6, please tell >> us.
No such testing has been done.
Do you know of a VMS customer/user that actually wants/needs this?
No I don't. But I can't see into the future. The transition from IPv4 to
IPv6 is happening over the coming years, so the need for this may arise
next year.

I fully understand your position, let me be very clear about that.

But look at it from another point of view.

You have and support DECnet Phase V / OSI, and part of that package is
DECnet over IP. I wouldn't be surprised if many users of DECnet Phase V
have changed to DECnet over IP, because of the IP-only networks many
companies have these days.
 
Now IP is changing from IPv4 to IPv6. If you do not offer DECnet over
IPv6, you will effectively be telling your customers that you're giving
up on DECnet Phase V. I don't know what effect that will have on groups
of your customers.

Anyway, VSI needs to be clear on this; it will have to take a
position. The SPDs of TCPIP v10.5 and DECnet Phase V / OSI will have
to reflect this; it must be clear whether it is or will be supported.
Simon Clubley
2017-03-07 20:14:27 UTC
Permalink
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming it
is 64, then for maximum performance you can use a system with four times
the Xeon Gold 6130, or a system with twice the Xeon Platinum 8180. Not
very impressive in x86-64 land :-) . I suppose you will aim for more
cores then 64?
First, please mention even one application, or group of applications, that might
need more than 64 cores ....
Well, it would allow you to compile LLVM (or current versions of gcc)
in a reasonable amount of time...

A more serious answer would be that some long duration algorithms
are inherently parallel algorithms so the more cores the better.

Whether any of those applications run on VMS is another matter however.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
David Froble
2017-03-07 20:50:28 UTC
Permalink
Post by Simon Clubley
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming it
is 64, then for maximum performance you can use a system with four times
the Xeon Gold 6130, or a system with twice the Xeon Platinum 8180. Not
very impressive in x86-64 land :-) . I suppose you will aim for more
cores then 64?
First, please mention even one application, or group of applications, that might
need more than 64 cores ....
Well, it would allow you to compile LLVM (or current versions of gcc)
in a reasonable amount of time...
A more serious answer would be that some long duration algorithms
are inherently parallel algorithms so the more cores the better.
Whether any of those applications run on VMS is another matter however.
Simon.
Ayep!

And David Mathog protested rather vigorously in the past about the performance
of VMS in situations where computing was important, and I/O to disk was not so
important.

Perhaps VMS is not so good at every application ....

For those things that need what VMS provides, it can be very good.
Arne Vajhøj
2017-03-08 00:01:04 UTC
Permalink
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming
it is 64, then for maximum performance you can use a system with four
times the Xeon Gold 6130, or a system with twice the Xeon Platinum
8180. Not very impressive in x86-64 land :-) . I suppose you will aim
for more cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
There are probably not so many.
 
But some come to mind:
* high end databases (IBM did a TPC-C result with 192 cores)
* high performance OLTP applications that can not be scaled horizontally
(Azul sold/sell a Java appliance with up to 864 cores)
* simulation number crunching
 
Commercially the market for >64 cores must be very small.
 
But what is the extra cost of supporting "up to 256" or "up to 1024"
instead of "up to 64"?
Post by David Froble
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
That must depend much more on the application than on the OS.

Arne
Kerry Main
2017-03-08 04:14:28 UTC
Permalink
-----Original Message-----
Arne Vajhøj via Info-vax
Sent: March 7, 2017 7:01 PM
Subject: Re: [Info-vax] March 2017 Roadmap
Post by David Froble
Post by Dirk Munk
3. For x86-64 the maximum number of cores is listed as 64+. Assuming it
is 64, then for maximum performance you can use a system with four times
the Xeon Gold 6130, or a system with twice the Xeon Platinum 8180. Not
very impressive in x86-64 land :-) . I suppose you will aim for more
cores then 64?
First, please mention even one application, or group of applications,
that might need more than 64 cores ....
There are probably not so many.
Correct - while not a completely 1-to-1 comparison, likely 90+% of
Windows/Linux VMs on VMware run with 8 vCPUs or less.
* high end databases (IBM did a TPC-C result with 192 cores)
* high performance OLTP applications that can not be scaled
horizontally (Azul sold/sell a Java appliance with up to 864 cores)
* simulation number crunching
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support "up
to 256" or "up to 1024"?
OpenVMS Engineering usually like to test and verify before putting
support statements together.

It's similar to the 96 node cluster support .. most agree larger
numbers would work, but from what I have heard, they just have never
had a business justification to put together a 150 node cluster and do
a cluster verification test.
Post by David Froble
I do believe that it's been mentioned before that for most uses
anything past 8 cores with VMS isn't real helpful?
ROTFL .. I guess those Customers still running Alpha GS1280's with 32
cpus don't count?
That must depend much more on the application than on the OS.
Arne
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.

While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are NUMA
based.

OpenVMS has a long history and experience with scaling up with RAD's
and NUMA dating back to early Alpha designs:

https://www.hpe.com/h20195/V2/GetPDF.aspx/4AA4-0531ENW.pdf
http://h41379.www4.hpe.com/openvms/journal/v16/rad.pdf

Fwiw, I also remember more than a few years ago an internal benchmark
comparing Tru64 UNIX vs. OpenVMS on the same HW (think it was big 32
cpu GS1280's?). Tru64 was hands down the fastest UNIX at the time
(even Sun folks would admit that, before arguing what poor ISV support
it had). Anyway, OpenVMS numbers were +/- 10% of the Tru64 UNIX
numbers.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
David Froble
2017-03-08 05:13:23 UTC
Permalink
Post by Kerry Main
-----Original Message-----
Post by David Froble
I do believe that it's been mentioned before that for most uses
anything past 8 cores with VMS isn't real helpful?
ROTFL .. I guess those Customers still running Alpha GS1280's with 32
cpus don't count?
Give me a bit of a break Kerry. The GS1280 is a very impressive system, with
some things still perhaps not matched in the newest systems today.

Perhaps there are reasons some customers are still using them?

But for lesser systems, you might find my comment a bit more accurate.
Kerry Main
2017-03-09 01:27:39 UTC
Permalink
-----Original Message-----
David Froble via Info-vax
Sent: March 8, 2017 12:13 AM
Subject: Re: [Info-vax] March 2017 Roadmap
Post by Kerry Main
-----Original Message-----
Post by David Froble
I do believe that it's been mentioned before that for most uses
anything past 8 cores with VMS isn't real helpful?
ROTFL .. I guess those Customers still running Alpha GS1280's with 32
cpus don't count?
Give me a bit of a break Kerry. The GS1280 is a very impressive
system, with some things still perhaps not matched in the newest
systems today.
Perhaps there are reasons some customers are still using them?
It was only the latest I4 blades that are finally able to exceed the
GS1280 performance levels. That's how far ahead the GS1280 was in
terms of performance.
But for lesser systems, you might find my comment a bit more
accurate.
It's not accurate because, like most platforms today, most OpenVMS
Alpha/IA64 Customers do not need more than 8 CPUs. This is also true
for those using clustering, where the load can be distributed evenly
across all available nodes. Of course, there are likely 20% that are
performance sensitive.

Your comment "Isn't really helpful" implies some sort of limitations
in OpenVMS.

Most Alpha / IA64 OpenVMS Customers will not be upgrading to X86-64
for performance reasons, but rather for more current and reduced
support costs, standardization of server HW etc. Their existing Alpha
/ IA64 servers typically handle their existing loads with no issues.


Regards,

Kerry Main
Kerry dot main at starkgaming dot com
Arne Vajhøj
2017-03-09 00:35:39 UTC
Permalink
Post by Kerry Main
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support "up
to
Post by Arne Vajhøj
256" or "up to 1024"?
OpenVMS Engineering usually like to test and verify before putting
support statements together.
Its similar to the 96 node cluster support .. most agree larger
numbers would work, but from what I have heard, they just have never
had a business justification to put together a 150 node cluster and do
a cluster verification test.
Fair enough.

But then they can do a test.
 
A 4s80c160t x86-64 server is not that expensive.
 
Or rent an EC2 instance at AWS. An x1.32xlarge instance
(128 VCPU, 1953 GB memory, 4 TB SSD) costs $13.34 per hour.
Post by Kerry Main
Post by Arne Vajhøj
That must depend much more on the application than on the OS.
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.
While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are NUMA
based.
????

Most real world applications have hundreds of schedulable threads today.
 
A plain desktop Windows can manage about a thousand before
the overhead becomes significant.
 
It is true that most applications have a limit on how many cores they
can use, but that is usually because they are not CPU bound.

Arne
Kerry Main
2017-03-09 01:53:55 UTC
Permalink
-----Original Message-----
Arne Vajhøj via Info-vax
Sent: March 8, 2017 7:36 PM
Subject: Re: [Info-vax] March 2017 Roadmap
Post by Kerry Main
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support "up to 256" or "up to 1024"?
OpenVMS Engineering usually like to test and verify before putting
support statements together.
Its similar to the 96 node cluster support .. most agree larger
numbers would work, but from what I have heard, they just have never
had a business justification to put together a 150 node cluster and do
a cluster verification test.
Fair enough.
But then they do a test.
A 4s80c160t x86-64 server is not that expensive.
Or rent an EC2 instance at AWS. An x1.32xlarge instance
(128 VCPU, 1953 GB memory, 4 TB SSD) cost $13.34 per hour.
First, VSI needs to get an x86-64 OpenVMS system booted.

😊
Post by Kerry Main
Post by Arne Vajhøj
That must depend much more on the application than on the OS.
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.
While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are NUMA based.
????
Most real world applications has hundreds of schedulable threads today.
A plain desktop Windows can managed about a thousand before the
overhead becomes significant.
That’s just plain wrong. Large scale multi-threading requires special programming in order to avoid deadlocks and race conditions.

Reference MS's recommendations:
https://msdn.microsoft.com/en-us/library/1c9txz50(v=vs.110).aspx
It is true that most applications has a limit on how many cores they can
use, but that is usually because they are not CPU bound.
Arne
Arne Vajhøj
2017-03-09 02:19:03 UTC
Permalink
Post by Kerry Main
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support "up to 256" or "up to 1024"?
OpenVMS Engineering usually like to test and verify before putting
support statements together.
Its similar to the 96 node cluster support .. most agree larger
numbers would work, but from what I have heard, they just have never
had a business justification to put together a 150 node cluster and do
a cluster verification test.
Fair enough.
But then they do a test.
A 4s80c160t x86-64 server is not that expensive.
Or rent an EC2 instance at AWS. An x1.32xlarge instance
(128 VCPU, 1953 GB memory, 4 TB SSD) cost $13.34 per hour.
First, VSI needs to get a X86-64 OpenVMS system booted.
😊
Yep.

:-)

Arne
Arne Vajhøj
2017-03-09 02:35:07 UTC
Permalink
Post by Kerry Main
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
That must depend much more on the application than on the OS.
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.
While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are NUMA based.
????
Most real world applications has hundreds of schedulable threads today.
A plain desktop Windows can managed about a thousand before the
overhead becomes significant.
That’s just plain wrong. Large scale multi-threading requires special
programming in order to avoid deadlocks and race conditions.
https://msdn.microsoft.com/en-us/library/1c9txz50(v=vs.110).aspx
Most multi-threaded programming is done using various frameworks
that handle most of the thread synchronization leaving only
trivial precautions to be made by the application developer.

Very rarely is all the thread synchronization done manually
by the application programmer.

So the .NET Monitor class that your link references (or
its Java equivalents in the Object class) are not seen much
in the wild.
 
They are relatively frequently used in 1st and 2nd year
CSc work.

Which of course reveals how specialized the skill set is:
you need to hire someone with 2+ years of IT education
from within the last 20 years. There must be a double digit
number of millions of people with those skills.

The link you gave actually does reference some of the simpler
stuff: TPL and PLINQ. They could also have mentioned
async/await and the basic ThreadPool class. All of these
ensure that the application developer does not need to
worry about Monitor methods at all.
 
And for the Java folks it is ThreadPool, the Fork/Join framework,
and parallel streams.

Stuff like this may have been a big problem 25 years ago
in C with pthreads. But today it is rather trivial.
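
For what it is worth, the Python flavour of that "simpler stuff" is
about the same amount of code. A small sketch using the standard
concurrent.futures thread pool, where the pool itself is the only
synchronization the application code ever sees; the worker function and
the sizes are made up for illustration.

import time
from concurrent.futures import ThreadPoolExecutor

def do_something_heavy(item):
    time.sleep(0.1)          # stand-in for I/O-bound work
    return item * 2

work = list(range(1000))

with ThreadPoolExecutor(max_workers=250) as pool:   # the framework owns the threads
    results = list(pool.map(do_something_heavy, work))

print(len(results), results[:5])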

Arne
Arne Vajhøj
2017-03-09 03:01:46 UTC
Permalink
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
That must depend much more on the application than on the OS.
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.
While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are
NUMA
Post by Kerry Main
based.
????
Most real world applications has hundreds of schedulable threads today.
A plain desktop Windows can managed about a thousand before the
overhead becomes significant.
That’s just plain wrong. Large scale multi-threading requires
special
programming in order to avoid deadlocks and race conditions.
https://msdn.microsoft.com/en-us/library/1c9txz50(v=vs.110).aspx
Most multi-threaded programming is done using various frameworks
that handle most of the thread synchronization leaving only
trivial precautions to be made by the application developer.
Very rarely is all the thread synchronization done manually
by the application programmer.
So the .NET Monitor class that your link references (or
its Java equivalents in the Object class) are not seen much
in the wild.
They are relative frequently used in 1st and 2nd year
CSc work.
you need to hire someone with 2+ years of IT education
from within the last 20 years. There must be a double digit
number of millions of people with those skills.
The link you gave actually do reference some of the simpler
stuff: TPL and PLINQ. They could also have mentioned
async/await and the basic ThreadPool class. Which all
ensure that the application developer do not need to
worry about Monitor methods at all.
And for the Java folk it is ThreadPool, Fork Join framework,
and parallel streams.
Stuff like this may have been a big problem 25 years ago
in C with pthreads. But today it is rather trivial.
Let me illustrate with a little Java example:

import java.util.ArrayList;
import java.util.List;

public class ThreadFun {
    public static class SharedState {
        public void doSomething() {
        }
    }
    public static class SomeWork {
        private SharedState ss;
        public SomeWork(SharedState ss) {
            this.ss = ss;
        }
        public void doSomethingHeavy() {
            try {
                synchronized(ss) { // synchronize access between threads
                    ss.doSomething();
                }
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
    public static void main(String[] args) {
        SharedState ss = new SharedState();
        List<SomeWork> work = new ArrayList<>();
        for(int i = 0; i < 1000; i++) {
            work.add(new SomeWork(ss));
        }
        // serial
        long t1 = System.currentTimeMillis();
        work.stream().forEach(w -> w.doSomethingHeavy());
        long t2 = System.currentTimeMillis();
        System.out.printf("Serial : %d\n", t2 - t1);
        // parallel
        System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "250"); // use 250 threads
        long t3 = System.currentTimeMillis();
        work.stream().parallel().forEach(w -> w.doSomethingHeavy()); // parallelize
        long t4 = System.currentTimeMillis();
        System.out.printf("Parallel : %d\n", t4 - t3);
    }
}

Output:

Serial : 100055
Parallel : 420

250 threads. Synchronization of access to shared state. Dirt simple.

Arne
Jan-Erik Soderholm
2017-03-09 09:39:06 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
Post by Kerry Main
Post by Arne Vajhøj
That must depend much more on the application than on the OS.
I would say both the OS and App have an impact on scalability. Not
sure about the latest versions, but as I recall, MS stated MS Exchange
2012 would not scale over 12 cores.
While less of an issue than in the past, when going over 16+ cores,
process / OS scheduling, and application design can be a big deal on
NUMA style architectures. Btw, most X86-64 platforms today are
NUMA
Post by Kerry Main
based.
????
Most real world applications has hundreds of schedulable threads today.
A plain desktop Windows can managed about a thousand before the
overhead becomes significant.
That’s just plain wrong. Large scale multi-threading requires
special
programming in order to avoid deadlocks and race conditions.
https://msdn.microsoft.com/en-us/library/1c9txz50(v=vs.110).aspx
Most multi-threaded programming is done using various frameworks
that handle most of the thread synchronization leaving only
trivial precautions to be made by the application developer.
Very rarely is all the thread synchronization done manually
by the application programmer.
So the .NET Monitor class that your link references (or
its Java equivalents in the Object class) are not seen much
in the wild.
They are relative frequently used in 1st and 2nd year
CSc work.
you need to hire someone with 2+ years of IT education
from within the last 20 years. There must be a double digit
number of millions of people with those skills.
The link you gave actually do reference some of the simpler
stuff: TPL and PLINQ. They could also have mentioned
async/await and the basic ThreadPool class. Which all
ensure that the application developer do not need to
worry about Monitor methods at all.
And for the Java folk it is ThreadPool, Fork Join framework,
and parallel streams.
Stuff like this may have been a big problem 25 years ago
in C with pthreads. But today it is rather trivial.
import java.util.ArrayList;
import java.util.List;
public class ThreadFun {
public static class SharedState {
public void doSomething() {
}
}
public static class SomeWork {
private SharedState ss;
public SomeWork(SharedState ss) {
this.ss = ss;
}
public void doSomethingHeavy() {
try {
synchronized(ss) { // synchronize access between threads
ss.doSomething();
}
Thread.sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
public static void main(String[] args) {
SharedState ss = new SharedState();
List<SomeWork> work = new ArrayList<>();
for(int i = 0; i < 1000; i++) {
work.add(new SomeWork(ss));
}
// serial
long t1 = System.currentTimeMillis();
work.stream().forEach(w -> w.doSomethingHeavy());
long t2 = System.currentTimeMillis();
System.out.printf("Serial : %d\n", t2 - t1);
// parallel
System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism",
"250"); // use 250 threads
long t3 = System.currentTimeMillis();
work.stream().parallel().forEach(w -> w.doSomethingHeavy()); //
parallelize
long t4 = System.currentTimeMillis();
System.out.printf("Parallel : %d\n", t4 - t3);
}
}
Serial : 100055
Parallel : 420
250 threads. Synchronization of access to shared state. Dirt simple.
Arne
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) a VMS system.
 
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. A number of hardware related issues come
into the picture: handling of caches and so on.
Arne Vajhøj
2017-03-10 00:24:21 UTC
Permalink
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
 
But I don't think they are significantly different between 64, 128, 256 etc.

Arne
Jan-Erik Soderholm
2017-03-10 00:45:52 UTC
Permalink
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
Arne
OK.

What is the largest common number of cores using a single OS instance
in a non-VM environment in use today? These 100+ core XEON servers,
are they expected to run a standard business environment with a single
OS instance? Or will they more probably be used as VM hosts?
Arne Vajhøj
2017-03-10 00:54:31 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
OK.
What is the largest common number of cores using a single OS-instance
in a non-VM environment in use today? These 100+ cores XEON servers,
are they expected to run a standard business environment with a single
OS instance? Or will they more probably be used as VM hosts?
By far most of them are used to run multiple VMs.
 
I did post some examples of exceptions a dozen or more
emails ago.
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
* high end databases (IBM did a TPC-C result with 192 cores)
* high performance OLTP applications that can not be scaled
horizontally (Azul sold/sell a Java appliance with up to 864 cores)
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
* simulation number crunching
Arne
Jan-Erik Soderholm
2017-03-10 12:17:00 UTC
Permalink
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of
threads that is the issue. It is not any (or much) different from
handling 1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of
(physical!) CPUs or cores that is. There comes a number of
hardware related issues into the picture. Handling of caches and
so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
OK.
What is the largest common number of cores using a single OS-instance
in a non-VM environment in use today? These 100+ cores XEON servers,
are they expected to run a standard business environment with a
single OS instance? Or will they more probably be used as VM hosts?
By far most of them are used to run multiple VM's.
I did post some examples of exceptions a dozens or more emails ago.
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
But some comes to mind: * high end databases (IBM did a TPC-C result
with 192 cores)
Just a test, not a real world business environment.
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
* high performance OLTP applications that can not be scaled
horizontally (Azul sold/sell a Java appliance with up to 864 cores)
How many were sold? (Don't need to answer...)
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
* simulation number crunching
Yes, that is big in the VMS market, I've heard...

I think my question still stands unanswered.
Post by Arne Vajhøj
Arne
Richard Maher
2017-03-10 13:03:15 UTC
Permalink
Post by Jan-Erik Soderholm
How many was sold? (Don't need to answer...)
Post by Arne Vajhøj
* simulation number crunching
Yes, that is big in the VMS market, I've heard...
I think my question still stands unanswered.
Agreed. I think the answer is easily obtainable but I'm too lazy. Can
someone simply look up the Azure or AWS or Google Cloud options for
server configuration? Not a definitive answer, but a pretty good
indicator of the sweet spot for customer demand would be top-of-the-range
server minus 1. (Don't know how VMS will cope with solid state TB drives,
but that's a separate issue.)
David Froble
2017-03-10 13:59:57 UTC
Permalink
Post by Richard Maher
Post by Jan-Erik Soderholm
How many was sold? (Don't need to answer...)
Post by Arne Vajhøj
* simulation number crunching
Yes, that is big in the VMS market, I've heard...
I think my question still stands unanswered.
Agreed. I think the answer is easily obtainable but I'm too lazy. Can
someone simply look-up the Azure or AWS or Google Cloud options for
server configuration? Not a definitive answer but a pretty good
indicator of the sweet spot for customer demand with be Top of the range
server minus 1. (Don't know how VMS will cope with solid state TB drives
but that's a separate issue.)
Right now VMS copes very well with a cache. Fit your database into memory, and
things get much better. We've tested, and when you disable the cache, you can
watch the entire system slow down significantly. About then the users are
looking to mate up a tree, a rope, and the person who disabled the cache.
 
I'd figure that an SSD would just bring the difference between disk and cache a
bit closer.
Dirk Munk
2017-03-10 09:52:22 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
Arne
OK.
What is the largest common number of cores using a single OS-instance
in a non-VM environment in use today? These 100+ cores XEON servers,
The next generation of XEON CPUs will have 28 cores, so a four socket
x86 system will have > 100 cores... A four socket x86 system really
isn't something special, I'm sure you will agree.
Post by Jan-Erik Soderholm
are they expected to run a standard business environment with a single
OS instance? Or will they more probably be used as VM hosts?
Windows can handle > 600 cores, Linux 4096. Those are architectural
limits for today's versions.

In my view the theoretical maximum number of cores in an OS should never
be reached. If you could increase the maximum number of cores to 65536
without any adverse effect on the performance of VMS, then why not? That
number will never be reached in the foreseeable future, so VMS will not
have to be redesigned, as it would if a lower maximum number of cores were
actually reached.
Jan-Erik Soderholm
2017-03-10 12:29:49 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
Arne
OK.
What is the largest common number of cores using a single OS-instance
in a non-VM environment in use today? These 100+ cores XEON servers,
The next generation of XEON CPUs will have 28 cores, so a four socket x86
system will have > 100 cores... A four socket x86 system really isn't
something special, I'm sure you will agree.
Which wasn't what I asked. I do not care how many cores you can put
into some future box; it is mostly marketing hype anyway.
 
You are just chasing numbers with no real world relevance.
Post by Jan-Erik Soderholm
are they expected to run a standard business environment with a single
OS instance? Or will they more probably be used as VM hosts?
Windows can handle > 600 cores, Linux 4096. Those are architectural limits
for today's versions.
I don't care about "architectural limits", I care about what is actualy
used out there. Does anyone run a generall business server using Windows
with > 600 cores or using Linux using 4096 cores? Not counting VM boxes...
In my view the theoretical maximum number of cores in an OS should never be
reached. If you could increase the maximum number of cores to 65536 without
any adverse effect on the performance of VMS, then why not. That number
will never be reached in the foreseeable future, so VMS will not have to be
redesigned if a lower maximum number of cores would actually be reached.
Dirk Munk
2017-03-10 12:55:12 UTC
Permalink
Post by Jan-Erik Soderholm
Post by Jan-Erik Soderholm
Post by Arne Vajhøj
Post by Jan-Erik Soderholm
As far as I understand, it is not handling a large number of threads
that is the issue. It is not any (or much) different from handling
1000s of processes in (say) an VMS system.
It is the scheduling of these threads on a large number of (physical!)
CPUs or cores that is. There comes a number of hardware related issues
into the picture. Handling of caches and so on.
There are definitely some issues.
But I don't think they are significant different between 64, 128, 256 etc..
Arne
OK.
What is the largest common number of cores using a single OS-instance
in a non-VM environment in use today? These 100+ cores XEON servers,
The next generation of XEON CPUs will have 28 cores, so a four socket x86
system will have > 100 cores... A four socket x86 system really isn't
something special, I'm sure you will agree.
Which wasn't what I ask. I do not care how many cores you can put
into a some future box, it is mostly marketing hype anyway.
How do you mean 'future box' and 'marketing hype'? Wake up!

The Xeon Platinum 8180 is due next month or so; it has 28 cores, or 56
threads with hyperthreading enabled. So a simple DL580 would get 112 cores,
or 224 threads with hyperthreading enabled.
Post by Jan-Erik Soderholm
You are just chasing numbers with no real world relevance.
No, you fail to recognize that CPU's are getting more and more cores,
and that an OS has to deal with that. With the present 64 core limit,
VMS can't even run on a two socket server in a couple of years!

I have used DL580's in the real world, and not with VM's.
Post by Jan-Erik Soderholm
Post by Jan-Erik Soderholm
are they expected to run a standard business environment with a single
OS instance? Or will they more probably be used as VM hosts?
Windows can handle > 600 cores, Linux 4096. Those are architectural limits
for today's versions.
I don't care about "architectural limits", I care about what is actualy
used out there. Does anyone run a generall business server using Windows
with > 600 cores or using Linux using 4096 cores? Not counting VM boxes...
There are Linux boxes using 2048 cores.
Post by Jan-Erik Soderholm
In my view the theoretical maximum number of cores in an OS should never be
reached. If you could increase the maximum number of cores to 65536 without
any adverse effect on the performance of VMS, then why not. That number
will never be reached in the foreseeable future, so VMS will not have to be
redesigned if a lower maximum number of cores would actually be reached.
Jan-Erik Soderholm
2017-03-10 13:28:39 UTC
Permalink
With the present 64 core limit, VMS can't
even run on a two socket server in a couple of years!
I'm confident that there will be two socket servers with
64 (or fewer) cores for a very long time to come.
 
The absolute high-end servers will never be the only servers.
 
If there were *only* servers with *more* than 64 cores
available, yes, we would have an issue, but that will never
happen, where "never" is "in at least 10 years" or so.
Dirk Munk
2017-03-10 15:31:33 UTC
Permalink
Post by Jan-Erik Soderholm
With the present 64 core limit, VMS can't
even run on a two socket server in a couple of years!
I'm confident that there will be two socket servers with
64 (or less) cores for a very long time to come.
The absolute high-end servers will never be the only servers.
absolute high-end servers? You can buy the DL580 on Amazon !!!
Post by Jan-Erik Soderholm
If there would *only* be servers with *more* then 64 cores
available, yes we would have an issue, but that will never
happen, where "never" is "in at least 10 years" or so.
Yes, a great point of view for consolidation of old VMS applications, but no
good for anyone who is considering new software development on VMS.
Stephen Hoffman
2017-03-09 13:52:48 UTC
Permalink
Post by Kerry Main
That’s just plain wrong. Large scale multi-threading requires special
programming in order to avoid deadlocks and race conditions.
I'm running ~3500 threads on the local macOS box right now, and it's
nearly quiescent. If I spool up some more activity or a build, I can
get double or triple that number of threads active and might even spool
up the fans on the box. As for newer programming, having support for
something akin to C blocks and libdispatch — or Rust, which
specifically targets concurrency — would be handy on OpenVMS, but I'm
not expecting those sorts of updates to arrive any time soon.
Concurrent programming on OpenVMS is a bit of a slog, with either
pthreads or KP threads or such for multithreading, or with home-grown
threading, or with home-grown multi-processing as is probably most
common with existing applications.
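For readers who haven't used it, here is a minimal pthreads sketch in C of
the kind of code involved -- illustrative only, nothing here comes from a
VSI or HoffmanLabs source, and the worker count and the "work" are invented:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NWORKERS 4

static void *worker(void *arg)
{
    long id = (long)(intptr_t)arg;      /* which worker am I */
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    long i;

    for (i = 0; i < NWORKERS; i++)      /* spin up the workers */
        pthread_create(&tid[i], NULL, worker, (void *)(intptr_t)i);

    for (i = 0; i < NWORKERS; i++)      /* wait for them all */
        pthread_join(tid[i], NULL);

    return 0;
}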
--
Pure Personal Opinion | HoffmanLabs LLC
Michael Moroney
2017-03-08 16:10:17 UTC
Permalink
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places; more than 64 CPUs means you can no longer use a
single 64-bit quadword for CPU bitfields.
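To make that concrete, a rough C sketch -- illustrative only, not VMS
source, and the type and macro names are invented:

#include <stdint.h>

/* Today: one 64-bit quadword, one bit per CPU, so 64 is a hard ceiling. */
typedef uint64_t cpu_mask_t;
#define CPU_SET64(m, c)  ((m) |= (uint64_t)1 << (c))
#define CPU_TST64(m, c)  (((m) >> (c)) & 1)

/* Past 64: an array of quadwords, and an index/modulo pair on every
   operation that used to be a single shift-and-test. */
#define MAX_CPUS 1024
typedef struct { uint64_t bits[(MAX_CPUS + 63) / 64]; } cpu_set_big_t;

static void cpu_set_big(cpu_set_big_t *s, unsigned c)
{
    s->bits[c / 64] |= (uint64_t)1 << (c % 64);
}

static int cpu_tst_big(const cpu_set_big_t *s, unsigned c)
{
    return (int)((s->bits[c / 64] >> (c % 64)) & 1);
}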
Post by Arne Vajhøj
Post by David Froble
I do believe that it's been mentioned before that for most uses anything
past 8 cores with VMS isn't real helpful?
That must depend much more on the application than on the OS.
Yes. The number cruncher I mentioned did quite well, keeping
the 64-processor system running at about 6350% in user mode.

If the application does lots of I/O, locking, etc., the processors
will start fighting over things like the IPL 8 spinlock.
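A toy user-mode illustration of that kind of serialization -- ordinary
pthreads code, not VMS internals, with arbitrary thread and iteration
counts: when every thread has to take the same lock, adding cores mostly
adds queueing; when each thread works on private data, it does not.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    1000000L

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter;
static long private_counter[NTHREADS];  /* real per-CPU data would also be
                                           padded to a cache line to avoid
                                           false sharing; skipped here */

static void *shared_worker(void *arg)
{
    long i;
    (void)arg;
    for (i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);      /* every thread queues here */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *private_worker(void *arg)
{
    long id = (long)(intptr_t)arg;
    long i;
    for (i = 0; i < ITERS; i++)
        private_counter[id]++;          /* no cross-thread traffic */
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    long i;

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, shared_worker, (void *)(intptr_t)i);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, private_worker, (void *)(intptr_t)i);
    for (i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    printf("shared=%ld private[0]=%ld\n", shared_counter, private_counter[0]);
    return 0;
}

Timing the two phases shows the shared-lock phase getting slower, not
faster, as the thread count grows; the private phase scales.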
Arne Vajhøj
2017-03-09 00:25:15 UTC
Permalink
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.

Better before first release than 5 years after.

Arne
Dirk Munk
2017-03-09 08:45:55 UTC
Permalink
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more than 64 cores, two high-end x86 CPUs.
David Froble
2017-03-09 17:41:55 UTC
Permalink
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.

Perhaps much depends upon "much of the SMP support depends on the fact that
there are 64 bits in a natively addressable quadword". Perhaps emphasis on the
word "much".

Could another data structure be used? Of course. But at what extra cost, if
any? Why not a byte mask instead of a bit mask? It's very unclear to me just
how much additional overhead might be encountered with a more complex data
structure.

The bottom line is, I don't know, and I'm guessing most who are asking for such
also don't know. If there is no appreciable cost, other than the initial
implementation, sure, allow 65536 cores. You'll never see it. I'm sure VSI
would never test, or support, such.

From my perspective, multiple cores for VMS can interact closely with each
other. Such synchronization is not without cost, sometimes significant cost, and
the curve as the number of cores increases is not linear.

For an application or system that is going to assign a core some task, which can
be completely contained to that core (or cores), perhaps there is much less
overhead. I'm sure there are such tasks. I for one haven't seen them on VMS.
But, I don't get out much ....

This isn't just about "bragging rights" ....
Dirk Munk
2017-03-09 19:37:50 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and Windows > 600.

Of course these are theoretical values, but nevertheless it is far
more than the present 64 cores of VMS.

It is not about bragging; x86 systems with more than 64 cores are
nothing special.

Your suggestion for an architectural maximum of 65536 cores is a good
one. It doesn't matter that such a system is not practical, but at least
you will know that whatever happens in the future, you will not run
into problems because you have reached the maximum number of cores.

It's like IPv6: the address space is so enormous that it will
never, in practice, be exhausted.
Richard Maher
2017-03-09 21:30:03 UTC
Permalink
I couldn't care less if VMS' maximum was 64 cores as long as it could
scale OUT in the cloud as a cheap commodity OS and support REDIS cache.

The cloud would spin up as many instances as required of the desired
state to meet the load.
David Froble
2017-03-09 22:52:17 UTC
Permalink
Post by Richard Maher
I couldn't care less if VMS' maximum was 64 cores as long as it could
scale OUT in the cloud as a cheap commodity OS and support REDIS cache.
The cloud would spin up as many instances as required of the desired
state to meet the load.
As long as the application(s) could use such an environment.

I see we've put away the potty language for a while. Should I take book on how
long that will last?

:-)
Richard Maher
2017-03-10 00:49:15 UTC
Permalink
Post by David Froble
Post by Richard Maher
I couldn't care less if VMS' maximum was 64 cores as long as it could
scale OUT in the cloud as a cheap commodity OS and support REDIS cache.
The cloud would spin up as many instances as required of the desired
state to meet the load.
As long as the application(s) could use such an environment.
I see we've put away the potty language for a while. Should I take book
on how long that will last?
:-)
I watched the first episode of QI the other day (Just not the same
without Stephen Fry) and they had an interesting set of heat maps of the
States from Twitter. For example, the states that say "Darn" and "Gosh"
the most and those where "Asshole" is never used, and it occurred to me
how easily I could be misunderstood in such a seemingly prudish place.

It's not only banjo-plucking, red-neck trailer trash that swears.
Although I am a little bit bogan.

I contend my profanity is almost always contextualized, adds colour and
emphasis, and my posts would be less without it. Also, something I recall
from Educating Rita is that only poor people seem to have a problem
with swearing.

Curiously enough, what has got me banned from various groups is not
swearing but being articulate enough to turn a mirror on the poisonous
cliques that frequent these places. Just try to point out that anything
other than monolithic leftist group-think is not tolerated and you'll
get a warning for violating the all-governing CoC. Tell them that they're
more interested in land rights for gay whales than they are in technology
and you're banned :-(
David Froble
2017-03-09 22:50:25 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse effects on VMS. I don't know that there would
be any, but it could be a concern. Me, I'll accept the 64 core limit over a
higher limit with adverse effects on the OS.

First, it's got to work ....
Dirk Munk
2017-03-09 23:41:41 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86 Superdome
can have 384 cores. I would expect VMS to be able to run on those
systems, maybe not immediately, but surely later on.

I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the OS
to handle more cores.

If more than 64 cores would have a very adverse effect on VMS that
cannot be fixed, then x86-64 VMS is dead in the water. It's as simple as
that, I'm afraid.
David Froble
2017-03-10 01:44:24 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86 Superdome
can have 384 cores. I would expect VMS to be able to run on those
systems, maybe not immediately, but surely later on.
I don't care how many cores you can stuff into a box. It really doesn't matter.
The only thing that matters is that a configuration is usable and delivers
decent performance.
Post by Dirk Munk
I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the OS
to handle more cores.
You're just stuck on numbers. You're not addressing "practical". It's an ego
trip. Go ahead, get a bunch of cores, keep them running crunching numbers, they
will probably keep you warm all winter, if you can afford the electric bill.

Where's Richard when his colorful language is needed?
Post by Dirk Munk
If more then 64 cores would have a very adverse affect on VMS that can
not be fixed, then x86-64 VMS is dead in the water. It's as simple as
that I'm afraid.
It's as simple as .....

No, I'm not going to type that, though I got pretty close ....
Dirk Munk
2017-03-10 09:28:14 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86
Superdome can have 384 cores. I would expect VMS to be able to run on
those systems, maybe not immediately, but surely later on.
I don't care how many cores you can stuff into a box. It really doesn't
matter. The only thing that matters is that a configuration is usable
and delivers decent performance.
Post by Dirk Munk
I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the
OS to handle more cores.
You're just stuck on numbers. You're not addressing "practical". It's
an ego trip. Go ahead, get a bunch of cores, keep them running
crunching numbers, they will probably keep you warm all winter, if you
can afford the electric bill.
Where's Richard when his colorful language is needed?
Post by Dirk Munk
If more then 64 cores would have a very adverse affect on VMS that can
not be fixed, then x86-64 VMS is dead in the water. It's as simple as
that I'm afraid.
It's as simple as .....
No, I'm not going to type that, though I got pretty close ....
Your problem is that you are so eager to get VMS running on x86 that
you completely overlook that people will see it as a new operating
system, and will judge it by its specifications.

If they see that it cannot scale beyond 64 cores, then I can tell you
what they will think: that the design of OpenVMS for x86-64
must be seriously flawed if it can't scale over 64 cores, when the
number of cores in high-end Xeon CPUs is increasing with every generation.
The next generation will have 28 cores; by the time OpenVMS x86-64 hits
the market it may be over 32 cores. That means that even a two socket
server would have too many cores!

The only way computers can get more power these days is by increasing
the number of cores. An operating system that can't handle a large
number of cores is obsolete.

VMS was designed in a time when only single core CPUs existed. So the
limit of 32 and later 64 cores is very understandable. If OpenVMS hadn't
been neglected for 10 or 15 years, I'm sure the number of cores would
have been raised by now.

On an Itanium Superdome VMS can use 32 cores, Linux 128. If HP had taken
VMS seriously, VMS would have been capable of using 128 cores as well by
now.

Unless you think VSI should follow in HP's footsteps by not increasing
the core limit sufficiently to accommodate modern servers, stop saying
that 64 cores is enough. It makes no sense.
David Froble
2017-03-10 14:19:04 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86
Superdome can have 384 cores. I would expect VMS to be able to run on
those systems, maybe not immediately, but surely later on.
I don't care how many cores you can stuff into a box. It really doesn't
matter. The only thing that matters is that a configuration is usable
and delivers decent performance.
Post by Dirk Munk
I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the
OS to handle more cores.
You're just stuck on numbers. You're not addressing "practical". It's
an ego trip. Go ahead, get a bunch of cores, keep them running
crunching numbers, they will probably keep you warm all winter, if you
can afford the electric bill.
Where's Richard when his colorful language is needed?
Post by Dirk Munk
If more then 64 cores would have a very adverse affect on VMS that can
not be fixed, then x86-64 VMS is dead in the water. It's as simple as
that I'm afraid.
It's as simple as .....
No, I'm not going to type that, though I got pretty close ....
Your problem is that you are so eager to get VMS running on x86, that
you completely overlook that people will see it as a new operating
system, and will judge it from the specifications.
Yep, I was right. You're on some ego trip, "mine is bigger than yours", type of
thing.

Yes, I am "eager" for VMS on x86, not that I'm all that happy with x86, but it's
the biggest game in town. However, that has nothing to do with my
considerations for the usefulness of a large number of cores.

I do believe that I've already written that if 65536 cores could be used,
WITHOUT CAUSING PROBLEMS, fine, go ahead, implement it.
Post by Dirk Munk
If they see that it can not scale over 64 cores, then I can tell you
what they will think. And that is that the design of OpenVMS for x86-64
must be seriously flawed if it can't scale over 64 cores, when the
number of cores in high-end Xeon CPUs is increasing in every generation.
The next generation will have 28 cores, at the time OpenVMS x86-64 hits
the market it may be over 32 cores. That means that even a two socket
server would have too many cores!
The only way computers can get more power these days is by increasing
the number of cores. An operating system that can't handle a large
number of cores is obsolete.
VMS was designed in a time when only single core CPUs existed. So the
limit of 32 and later 64 cores is very understandable. If OpenVMS hadn't
been neglected for 10 or 15 years, I'm sure the number of cores would
have been raised by now.
On an Itanium Superdome VMS can use 32 cores, Linux 128. If HP had taken
VMS seriously, VMS would have been capable of using 128 cores as well by
now.
Unless you think VSI should follow in HP's footsteps by not increasing
the core limit sufficiently to accommodate modern servers, stop saying
that 64 cores is enough. It makes no sense.
What doesn't make sense is you.

For real world tasks, you know, what people actually use computers for, most of
the time in an SMP environment there is contention between cores for resources.
The more contention, the more things grind to a halt. You refuse to address
this real world consideration.

The only way I can think of to avoid the contention is for each core to have
its own resources, and then you don't have a single computer, you have a bunch
of computers in a single box.
Dirk Munk
2017-03-10 15:17:45 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86
Superdome can have 384 cores. I would expect VMS to be able to run on
those systems, maybe not immediately, but surely later on.
I don't care how many cores you can stuff into a box. It really doesn't
matter. The only thing that matters is that a configuration is usable
and delivers decent performance.
Post by Dirk Munk
I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the
OS to handle more cores.
You're just stuck on numbers. You're not addressing "practical". It's
an ego trip. Go ahead, get a bunch of cores, keep them running
crunching numbers, they will probably keep you warm all winter, if you
can afford the electric bill.
Where's Richard when his colorful language is needed?
Post by Dirk Munk
If more then 64 cores would have a very adverse affect on VMS that can
not be fixed, then x86-64 VMS is dead in the water. It's as simple as
that I'm afraid.
It's as simple as .....
No, I'm not going to type that, though I got pretty close ....
Your problem is that you are so eager to get VMS running on x86, that
you completely overlook that people will see it as a new operating
system, and will judge it from the specifications.
Yep, I was right. You're on some ego trip, "mine is bigger than yours",
type of thing.
Rubbish. If I were to buy a simple DL580 with four CPUs and 112 cores in
two years' time, and I couldn't use VMS on it because it supports no more
than 64 cores, then it is bye-bye VMS. I'm not talking about high-end
supercomputers, just plain simple x86 servers.
Post by David Froble
Yes, I am "eager" for VMS on x86, not that I'm all that happy with x86,
but it's the biggest game in town. However, that has nothing to do with
my considerations for the usefulness of a large number of cores.
I do believe that I've already written that if 65536 cores could be
used, WITHOUT CAUSING PROBLEMS, fine, go ahead, implement it.
Fine, and I'm saying that when VMS for x86-64 is introduced and it
supports no more than 64 cores, without a reasonable prospect of
supporting more, then VMS is dead. In fact, VSI could stop the
development right now.

This is not about your VMS application, this is about VMS becoming a
viable operating system for new developments as well, and with a 64-core
maximum no one in his right mind is going to invest in applications and
knowledge for VMS.
Post by David Froble
Post by Dirk Munk
If they see that it can not scale over 64 cores, then I can tell you
what they will think. And that is that the design of OpenVMS for
x86-64 must be seriously flawed if it can't scale over 64 cores, when
the number of cores in high-end Xeon CPUs is increasing in every
generation. The next generation will have 28 cores, at the time
OpenVMS x86-64 hits the market it may be over 32 cores. That means
that even a two socket server would have too many cores!
The only way computers can get more power these days is by increasing
the number of cores. An operating system that can't handle a large
number of cores is obsolete.
VMS was designed in a time when only single core CPUs existed. So the
limit of 32 and later 64 cores is very understandable. If OpenVMS
hadn't been neglected for 10 or 15 years, I'm sure the number of cores
would have been raised by now.
On an Itanium Superdome VMS can use 32 cores, Linux 128. If HP had
taken VMS seriously, VMS would have been capable of using 128 cores as
well by now.
Unless you think VSI should follow in HP's footsteps by not increasing
the core limit sufficiently to accommodate modern servers, stop saying
that 64 cores is enough. It makes no sense.
What doesn't make sense is you.
For real world tasks, you know, what people actually use computers for,
most of the time in an SMP environment there is contention between cores
for resources. The more contention, the more things grind to a halt.
You refuse to address this real world consideration.
No, that is a matter for VSI. If there are Linux systems with 2048
cores, then VMS surely must be able to have more than 64.

If that is not possible, then VMS is only interesting for old
applications that need to be transferred to new hardware. That is
consolidation, not new applications.
Post by David Froble
The only way I can think of to avoid the contention is for each core to
have it's own resources, and then you don't have a single computer, you
have bunch of computers in a single box.
Again, I leave that problem to VSI. If VMS is capable of isolating
single cores for specific jobs, then perhaps that may contribute to
solving those problems very efficiently.
David Froble
2017-03-10 15:54:31 UTC
Permalink
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
Arne
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword".
Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
For an application or system that is going to assign a core some task,
which can be completely contained to that core (or cores), perhaps there
is much less overhead. I'm sure there are such tasks. I for one
haven't seen them on VMS. But, I don't get out much ....
This isn't just about "bragging rights" ....
It seems that Linux supports 4096 cores, and windows > 600.
Of course these are theoretical values, but never the less it is far
more then the present 64 cores of VMS.
It is not about bragging, x86 systems with more then 64 cores are
nothing special.
Your suggestion for an architecture maximum of 65536 cores is a good
one. It doesn't matter that such system is not practical, but at least
you will know that what ever happens in the future, you will not run
into problems because you reach the maximum number of cores.
It's like with IPv6, the address space is so enormous that it is
impossible that it will ever be exhausted.
You avoided the topic of adverse affect on VMS. I don't know that there
would be any, but it could be a concern. Me, I'll accept the 64 core
limit over a higher limit and adverse affects on the OS.
First, it's got to work ....
At this moment, a 4 socket Proliant can have 88 cores. An x86
Superdome can have 384 cores. I would expect VMS to be able to run on
those systems, maybe not immediately, but surely later on.
I don't care how many cores you can stuff into a box. It really doesn't
matter. The only thing that matters is that a configuration is usable
and delivers decent performance.
Post by Dirk Munk
I think it would be very unwise to start out with an x86-64 VMS for a
maximum of 64 cores, and then later on redesign the very core of the
OS to handle more cores.
You're just stuck on numbers. You're not addressing "practical". It's
an ego trip. Go ahead, get a bunch of cores, keep them running
crunching numbers, they will probably keep you warm all winter, if you
can afford the electric bill.
Where's Richard when his colorful language is needed?
Post by Dirk Munk
If more then 64 cores would have a very adverse affect on VMS that can
not be fixed, then x86-64 VMS is dead in the water. It's as simple as
that I'm afraid.
It's as simple as .....
No, I'm not going to type that, though I got pretty close ....
Your problem is that you are so eager to get VMS running on x86, that
you completely overlook that people will see it as a new operating
system, and will judge it from the specifications.
Yep, I was right. You're on some ego trip, "mine is bigger than yours",
type of thing.
Rubbish, if I were to buy a simple DL580 with four CPUs and 112 cores in
two years time, and I can't use VMS on it because it supports no more
then 64 cores, then it is bye bye VMS. I'm not talking about high-end
super computers, just plain simple x86 servers.
Post by David Froble
Yes, I am "eager" for VMS on x86, not that I'm all that happy with x86,
but it's the biggest game in town. However, that has nothing to do with
my considerations for the usefulness of a large number of cores.
I do believe that I've already written that if 65536 cores could be
used, WITHOUT CAUSING PROBLEMS, fine, go ahead, implement it.
Fine, and I'm saying that when VMS for x86-64 is introduced and it
supports no more then 64 cores without a reasonable prospect of
supporting more, then VMS is dead. In fact VSI could stop the
development right now.
This is not about your VMS application, this is about VMS becoming a
viable operating system for new developments also, and with 64 cores
max. no one in his right mind is going to invest in applications and
knowledge for VMS.
Post by David Froble
Post by Dirk Munk
If they see that it can not scale over 64 cores, then I can tell you
what they will think. And that is that the design of OpenVMS for
x86-64 must be seriously flawed if it can't scale over 64 cores, when
the number of cores in high-end Xeon CPUs is increasing in every
generation. The next generation will have 28 cores, at the time
OpenVMS x86-64 hits the market it may be over 32 cores. That means
that even a two socket server would have too many cores!
The only way computers can get more power these days is by increasing
the number of cores. An operating system that can't handle a large
number of cores is obsolete.
VMS was designed in a time when only single core CPUs existed. So the
limit of 32 and later 64 cores is very understandable. If OpenVMS
hadn't been neglected for 10 or 15 years, I'm sure the number of cores
would have been raised by now.
On an Itanium Superdome VMS can use 32 cores, Linux 128. If HP had
taken VMS seriously, VMS would have been capable of using 128 cores as
well by now.
Unless you think VSI should follow in HP's footsteps by not increasing
the core limit sufficiently to accommodate modern servers, stop saying
that 64 cores is enough. It makes no sense.
What doesn't make sense is you.
For real world tasks, you know, what people actually use computers for,
most of the time in an SMP environment there is contention between cores
for resources. The more contention, the more things grind to a halt.
You refuse to address this real world consideration.
No, that is a matter for VSI.
Oh, really? They have some magic wand that allows them to do things, just
because you say they must?
Post by Dirk Munk
If there are Linux systems with 2048
cores, then VMS surely must be able to have more then 64.
Ok, stop right there. I challenge you to find any Linux system actually using
2048 cores for anything other than compute-bound jobs, running VMs, and such.
You know, like actually accessing disks and other devices. Having contention
between cores for resources. Being able to lock a resource for individual use.

Now, until you can do that, you're just a bunch of hot air.
Post by Dirk Munk
If that is not possible, then VMS is only interesting for old
applications that need to be transferred to new hardware. That is
consolidation, no new applications.
Your saying it doesn't make it so ....
Post by Dirk Munk
Post by David Froble
The only way I can think of to avoid the contention is for each core to
have it's own resources, and then you don't have a single computer, you
have bunch of computers in a single box.
Again, I leave that problem to VSI. If VMS is capable of isolating
single cores for specific jobs, then perhaps that may contribute to
solving those problems very efficiently.
VMS already does that, but only for a few cores.

You're real big on demanding things, but rather short on explaining just how
they can be done, huh?

Arne Vajhøj
2017-03-09 23:02:13 UTC
Permalink
Post by David Froble
Post by Dirk Munk
Post by Arne Vajhøj
Post by Michael Moroney
Post by Arne Vajhøj
Commercially the market for >64 cores must be very small.
But what is the extra cost of instead of "up to 64" to support
"up to 256" or "up to 1024"?
There is a hurdle going beyond 64 CPUs since much of the SMP
support depends on the fact that there are 64 bits in a
natively addressable quadword. Bitmaps of CPUs are used in
many places, more than 64 CPUs means you can no longer use a
64 bit quadword for CPU bitfields..
Then change the definition to use something longer.
Better before first release than 5 years after.
I agree. I assume VSI hopes to attract new customers as well in the
future. In that case it will be very difficult to sell OpenVMS as a
server OS if it can't handle more then 64 cores, two high-end x86 CPUs.
Well, I'm not sure that I agree.
Perhaps much depends upon "much of the SMP support depends on the fact
that there are 64 bits in a natively addressable quadword". Perhaps
emphasis on the word "much".
Could another data structure be used? Of course. But at what extra
cost, if any? Why not a byte mask instead of a bit mask? It's very
unclear to me just how much additional overhead might be encountered
with a more complex data structure.
It is a data structure.

Going from a simple type to an array will probably require a little bit
more code. But I cannot see that becoming a significant issue.
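For a sense of how small that change is (hypothetical names, nothing from
the VMS sources), the bit test goes from one shift on a scalar quadword to
an index plus a shift on an array element:

#include <stdint.h>

#define MAX_CPUS 256

/* before: is CPU c set in a single quadword mask? */
static int cpu_in_mask64(uint64_t mask, unsigned c)
{
    return (int)((mask >> c) & 1);
}

/* after: is CPU c set in an array of quadwords? */
static int cpu_in_maskN(const uint64_t mask[(MAX_CPUS + 63) / 64], unsigned c)
{
    return (int)((mask[c / 64] >> (c % 64)) & 1);
}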
Post by David Froble
The bottom line is, I don't know, and I'm guessing most who are asking
for such also don't know. If there is no appreciable cost, other than
the initial implementation, sure, allow 65536 cores. You'll never see
it. I'm sure VSI would never test, or support, such.
65536 cores would require at least 65536 bits = 8192 bytes.
Post by David Froble
From my perspective, multiple cores for VMS can interact closely with
each other. Such sycronization is not without cost, sometimes
significant cost, and the curve as number of cores increase is not linear.
True.

But since other systems with a high number of cores are becoming
available, new applications are usually being designed to be able
to utilize many cores.

Arne
Michael Moroney
2017-03-09 23:59:25 UTC
Permalink
Post by David Froble
For an application or system that is going to assign a core some task, which can
be completely contained to that core (or cores), perhaps there is much less
overhead. I'm sure there are such tasks. I for one haven't seen them on VMS.
But, I don't get out much ....
Got a VMS V8.4 or later system with lots of idle cores?

SYSMAN> USE ACTIVE
SYSMAN> PARAM SET LCKMGR_MODE 4 ! if you have at least 4 cores
SYSMAN> WRITE ACTIVE

Running HP's TCPIP V5.7 or later?

$ @SYS$MANAGER:TCPIP$DEFINE_COMMANDS
$ sysconfig -r INET PPE_ENABLE=1

Does $ SHOW SYSTEM show something funny now?


The first enables the VMS Dedicated Lock Manager.
The second enables the Packet Processing Engine.

Both of them take over an entire core, not available for other processes.
For each of these, you'll see a process, in CUR state at priority 63
sucking up 100% of a core.

(to Dann Corbit: You may want to try both of these, they may help with
your issue if you have VMS V8.4 or later with TCPIP 5.7+ and spare cores)
David Froble
2017-03-10 01:52:44 UTC
Permalink
Post by Michael Moroney
Post by David Froble
For an application or system that is going to assign a core some task, which can
be completely contained to that core (or cores), perhaps there is much less
overhead. I'm sure there are such tasks. I for one haven't seen them on VMS.
But, I don't get out much ....
Got a VMS V8.4 or later system with lots of idle cores?
SYSMAN> USE ACTIVE
SYSMAN> PARAM SET LCKMGR_MODE 4 ! if you have at least 4 cores
SYSMAN> WRITE ACTIVE
Running HP's TCPIP V5.7 or later?
$ sysconfig -r INET PPE_ENABLE=1
Does $ SHOW SYSTEM show something funny now?
Yeah Michael, I'm aware of those nice capabilities. But once a core is
dedicated, from some perspectives, can you still say it is running VMS? I
think not. It is for sure not running (though maybe helping) any user jobs.
Post by Michael Moroney
The first enables the VMS Dedicated Lock Manager.
The second enables the Packet Processing Engine.
Can more than 1 core be assigned to the packet processing?
Post by Michael Moroney
Both of them take over an entire core, not available for other processes.
For each of these, you'll see a process, in CUR state at priority 63
sucking up 100% of a core.
But, what about the other 4091 cores Dirk so likes on Linux?
Post by Michael Moroney
(to Dann Corbit: You may want to try both of these, they may help with
your issue if you have VMS V8.4 or later with TCPIP 5.7+ and spare cores)
Dann's problem, I believe, has nothing to do with the DLM.
IanD
2017-03-09 11:19:41 UTC
Permalink
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we posted the December 2016 roadmap I mentioned that we had not yet fully digested the effects of the Alpha release. We now have and you will see some projects have been moved out as we expected they likely would.
The next State of the Port will be the first week in April.
Time for a learning break?

http://www-hpc.cea.fr/SummerSchools/SummerSchools2017-CS.ham

I thought the topic on code transformation and analysis using clang/llvm might be of interest to some...
gérard Calliet
2017-03-09 12:38:10 UTC
Permalink
Post by IanD
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we posted the December 2016 roadmap I mentioned that we had not yet fully digested the effects of the Alpha release. We now have and you will see some projects have been moved out as we expected they likely would.
The next State of the Port will be the first week in April.
Time for a learning break?
http://www-hpc.cea.fr/SummerSchools/SummerSchools2017-CS.ham
I thought the topic on code transformation and analysis using clang/llvm might be of interest to some...
Good. But the link is broken. The site answers (translated from French):
Access not possible or refused.
Service unavailable or still being completed.
Thank you for your understanding.
Stephen Hoffman
2017-03-09 17:24:43 UTC
Permalink
Post by gérard Calliet
Post by IanD
http://www-hpc.cea.fr/SummerSchools/SummerSchools2017-CS.ham
I thought the topic on code transformation and analysis using
clang/llvm might be of interest to some...
Good. But the link is broken
If you've not already located the cited page...

http://www-hpc.cea.fr/SummerSchools/SummerSchools2017-CS.htm

That found via Google, searching for:

inurl:/SummerSchools/SummerSchools2017-CS

There's a whole lot of (other) material on llvm available via the llvm
web site for those interested in the topic, including the kaleidoscope
tutorial:

http://llvm.org/docs/tutorial/

and all sorts of llvm-based stuff elsewhere, including (for instance):

http://terralang.org
http://eli.thegreenplace.net/2014/05/01/modern-source-to-source-transformation-with-clang-and-libtooling/

https://www.rust-lang.org/en-US/
etc...
--
Pure Personal Opinion | HoffmanLabs LLC
IanD
2017-03-09 11:20:23 UTC
Permalink
Post by c***@gmail.com
The quarterly update to the roadmap is now on our website. When we posted the December 2016 roadmap I mentioned that we had not yet fully digested the effects of the Alpha release. We now have and you will see some projects have been moved out as we expected they likely would.
The next State of the Port will be the first week in April.
So looking forward to April now :-)