Discussion:
What is the oldest computer that could be used today for real work?
Jason Evans
2021-09-05 09:55:25 UTC
I know this is an odd question, so let me explain what I'm thinking.

First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?

For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?

These are the kinds of things that I would like to hear about on this group.

Jason
Grant Taylor
2021-09-05 16:28:17 UTC
Post by Jason Evans
First of all, what is "real work"? Let's say that you're a
Linux/Unix/BSD system administrator who spends 90% of his day on the
command line. What is the oldest computer that he could get by with
to do his job?
The problem is the remaining 10%. (I'm re-using your numbers.)

IMM/RSA/iLO/LOM/iDRAC/etc. consoles are inherently GUI and are
invaluable when recovering systems during outages.

Don't forget that email clients /almost/ *need* to be GUI to display
more than simple text, e.g. attachments. -- We can't forget the venerable
PowerPoint slides that we need to look at before the next meeting.

I would be remiss if I didn't mention video chat, especially with work
from home, which has been quite common for the last ~18 months. Not to
mention conference rooms for geographically dispersed team meetings.

I don't know about you, but I would have a problem justifying my
employment if I didn't participate in that 10%. And I'm quite sure that
CLI /only/ is not sufficient to do so.

I leave you with ...

Link - Terminal forever | CommitStrip
- https://www.commitstrip.com/en/2016/12/22/terminal-forever/
--
Grant. . . .
unix || die
Andreas Kohlbach
2021-09-05 17:11:40 UTC
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
You can hook up the C64 to the internet today and, for example, browse the
web. Sure, this is text based (almost as if you were using a
text-based browser like lynx on a modern PC). It might need additional
modern hardware (WiFi cards exist for the C64). It could also be done
without additional hardware if there is some kind of BBS which "sources"
access to the internet, provided you have a modem for the C64.
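
As for the original dumb-terminal idea, on the modern PC side it is mostly
a matter of putting a login shell on the serial line. A minimal sketch in
Python (assuming a USB-serial adapter at /dev/ttyUSB0 and the third-party
pyserial package; in practice you would simply point agetty at the port):

# Bridge a serial port to a local shell so an old machine on the other
# end of the cable can act as a dumb terminal for a modern Linux box.
# The device path, baud rate and pyserial are assumptions for the sketch.
import os, pty, select, serial       # pyserial provides the serial module

PORT = "/dev/ttyUSB0"
BAUD = 2400                          # slow enough for an 8-bit machine

ser = serial.Serial(PORT, BAUD, timeout=0)

pid, fd = pty.fork()
if pid == 0:
    os.execvp("/bin/sh", ["sh"])     # child: the shell the old box will drive

while True:                          # parent: shuttle bytes both ways
    readable, _, _ = select.select([ser.fileno(), fd], [], [])
    if ser.fileno() in readable:
        data = ser.read(1024)        # keystrokes coming from the terminal
        if data:
            os.write(fd, data)
    if fd in readable:
        try:
            data = os.read(fd, 1024) # output from the shell
        except OSError:
            break                    # shell exited
        ser.write(data)

The C64 end then only needs a terminal program talking to its serial port.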

You can probably go back to 1977 and use an Apple 2 to do some simple word
processing or spreadsheets.
--
Andreas
J. Clarke
2021-09-06 04:06:43 UTC
On Sun, 05 Sep 2021 22:54:18 -0400, Michael Trew
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
These are the kinds of things that I would like to hear about on this group.
Jason
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can still
do some basic functions like some kind of word processing. I have boxes
and boxes of 8" floppies as well.
https://en.wikipedia.org/wiki/IBM_System/23_Datamaster
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.

The real work in this case is running the Unix system and you already
have a computer if you do that.
Jason Evans
2021-09-06 06:30:46 UTC
Post by J. Clarke
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.
Linux via ham radio RTTY would be stupid and awesome, lol.
Ahem A Rivet's Shot
2021-09-06 08:14:41 UTC
On Mon, 6 Sep 2021 06:30:46 -0000 (UTC)
Post by Jason Evans
Post by J. Clarke
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.
Linux via ham radio RTTY would be stupid and awesome, lol.
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
radio) was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to run them
under messy dos - I never saw the CP/M version but 64K is awfully tight for
TCP/IP.

It was awesome and far from stupid.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
gareth evans
2021-09-06 10:55:48 UTC
Post by Ahem A Rivet's Shot
On Mon, 6 Sep 2021 06:30:46 -0000 (UTC)
Post by Jason Evans
Post by J. Clarke
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.
Linux via ham radio RTTY would be stupid and awesome, lol.
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
radio) was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to run them
under messy dos - I never saw the CP/M version but 64K is awfully tight for
TCP/IP.
It was awesome and far from stupid.
One needs to be careful about terminology.

RTTY in Ham Radio terms means ITA No2, 5-unit start-stop stuff with
the awkward Figure Shift and Letter Shift keys.

Packet Radio was something else, I know not what, but certainly
8-bit character transmissions.
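
To give a flavour of why the shifts are awkward: the code has only five
bits, so the same value means a letter or a figure depending on which shift
was sent last. A toy Python sketch of that mechanism (the 5-bit values
below are invented for illustration, not the real ITA No2 assignments):

# Toy LTRS/FIGS shifting, ITA2-style.  Code values are made up.
LTRS_SHIFT = 0b11111
FIGS_SHIFT = 0b11011
LETTERS = {'A': 0b00011, 'B': 0b11001, 'C': 0b01110}   # tiny invented subset
FIGURES = {'1': 0b00011, '2': 0b11001, '3': 0b01110}   # same codes, other shift

def encode(text):
    out, mode = [], None
    for ch in text.upper():
        if ch in LETTERS:
            if mode != 'LTRS':
                out.append(LTRS_SHIFT)   # switch character set first
                mode = 'LTRS'
            out.append(LETTERS[ch])
        elif ch in FIGURES:
            if mode != 'FIGS':
                out.append(FIGS_SHIFT)
                mode = 'FIGS'
            out.append(FIGURES[ch])
    return out

print(encode("ABC123"))   # two shift codes appear in the output

A single dropped shift character garbles everything after it until the next
shift, which is a big part of why operators found it so awkward.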

Gareth G4SDW
Jason Evans
2021-09-06 11:12:24 UTC
Post by J. Clarke
You can run Unix from a teletype. Not something anyone in their right
mind wants to do these days but you can do it.
Post by gareth evans
RTTY in Ham Radio terms means ITA No2, 5-unit start-stop stuff with the
awkward Figure Shift and Letter Shift keys.
Packet Radio was something else, I know not what, but certainly 8-bit
character transmissions.
Gareth G4SDW
When J Clarke mentioned teletype, I immediately thought of radioteletype
i.e. RTTY and that's why I mentioned it.

Jason KI4GMX
Jason Evans
2021-09-06 11:16:37 UTC
Post by Ahem A Rivet's Shot
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet
radio) was it not. OK it was not Linux (that was still in the future)
but it did come with email, usenet, ftp and a multi-tasking kernel to
run them under messy dos - I never saw the CP/M version but 64K is
awfully tight for TCP/IP.
It was awesome and far from stupid.
I meant "stupid" only in terms of the amount of time and effort it would take
to use an old radioteletype machine as a Linux console.
It does sound very awesome, though!
Grant Taylor
2021-09-06 17:44:57 UTC
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to
run them under messy dos - I never saw the CP/M version but 64K is
awfully tight for TCP/IP.
Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
services that provided similar function to the proper services?

Many BBSs, both radio and non-radio, of the time provided similar
functionality without using /Internet/ standard protocols for doing so.
It was awesome and far from stupid.
~chuckle~
--
Grant. . . .
unix || die
Ahem A Rivet's Shot
2021-09-06 19:07:53 UTC
On Mon, 6 Sep 2021 11:44:57 -0600
Post by Grant Taylor
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to
run them under messy dos - I never saw the CP/M version but 64K is
awfully tight for TCP/IP.
Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
services that provided similar function to the proper services?
It was the real thing, there was a pretty good TCP/IP stack in there
and a multi-tasking kernel, the applications were pluggable at build time
but most settled on a variant of Elm for email backed by an SMTP server,
Tin for USENET (NNRP) and I forget where the usual ftp client originated.
When Demon Internet first started offering dial up connections with a
static IP address KA9Q was the standard offering for messy dos.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Sn!pe
2021-09-06 20:05:28 UTC
Post by Ahem A Rivet's Shot
On Mon, 6 Sep 2021 11:44:57 -0600
Post by Grant Taylor
Erm KA9Q was originally TCP/IP over souped up RTTY (aka packet radio)
was it not. OK it was not Linux (that was still in the future) but
it did come with email, usenet, ftp and a multi-tasking kernel to
run them under messy dos - I never saw the CP/M version but 64K is
awfully tight for TCP/IP.
Was it Usenet (UUCP / NNTP) or FTP proper? Or was it other non-standard
services that provided similar function to the proper services?
It was the real thing, there was a pretty good TCP/IP stack in there
and a multi-tasking kernel, the applications were pluggable at build time
but most settled on a variant of Elm for email backed by an SMTP server,
Tin for USENET (NNRP) and I forget where the usual ftp client originated.
When Demon Internet first started offering dial up connections with a
static IP address KA9Q was the standard offering for messy dos.
It worked very well; I began by using KA9Q too, in 1994 with Demon.
--
^Ï^


My pet rock Gordon just is.
Grant Taylor
2021-09-06 20:54:48 UTC
Post by Ahem A Rivet's Shot
It was the real thing, there was a pretty good TCP/IP stack in there
and a multi-tasking kernel, the applications were pluggable at build
time but most settled on a variant of Elm for email backed by an
SMTP server, Tin for USENET (NNRP) and I forget where the usual ftp
client originated. When Demon Internet first started offering dial
up connections with a static IP address KA9Q was the standard offering
for messy dos.
Interesting.

Thank you for confirming.

TIL :-)
--
Grant. . . .
unix || die
Grant Taylor
2021-09-06 17:42:28 UTC
Post by Jason Evans
Linux via ham radio RTTY would be stupid and awesome, lol.
It's not ham radio RTTY, but it is darned close.

Link - Curious Marc tweets from a TTY.
- https://twitter.com/curious_marc/status/1253216773370867717
--
Grant. . . .
unix || die
Peter Flass
2021-09-06 17:56:09 UTC
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
These are the kinds of things that I would like to hear about on this group.
Jason
Post by Michael Trew
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can still
do some basic functions like some kind of word processing. I have boxes
and boxes of 8" floppies as well.
https://en.wikipedia.org/wiki/IBM_System/23_Datamaster
This is kind of a bizarre question. Any computer could be used for "real
work" today. They did their thing years ago, and could still do the same
kinds of things today: statistics, engineering calculations, payroll,
inventory, etc. I was going to say the IBM 1130, but just realized they all
could. Obviously no internet, and things like graphics and relational
databases that require gobs of memory would be out.
--
Pete
Robin Vowels
2021-09-07 03:27:50 UTC
Post by Peter Flass
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
These are the kinds of things that I would like to hear about on this group.
Jason
Post by Michael Trew
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can still
do some basic functions like some kind of word processing. I have boxes
and boxes of 8" floppies as well.
https://en.wikipedia.org/wiki/IBM_System/23_Datamaster
This is kind of a bizarre question. Any computer could be used for "real
work" today. They did their thing years ago, and could still do the same
kinds of things today: statistics, engineering calculations, payroll,
inventory, etc. I was going to say the IBM 1130, but just realized they all
could. Obviously no internet, and things like graphics and relational
databases that require gobs of memory would be out.
Early computers had "gobs of memory" via endless numbers of
punch cards, endless lengths of paper tape and/or magnetic tape.
Some even had graphics. Yesterday I came across a subroutine for
DEUCE, written in 1955, that rotated the display by 90 degrees.
On that same computer was an animated version of a mouse
finding its way around a maze; also of "hickory dickory dock"
with sound effects; and noughts and crosses [tic-tac-toe].
Andreas Kohlbach
2021-09-06 20:11:59 UTC
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
These are the kinds of things that I would like to hear about on this group.
Jason
Post by Michael Trew
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can
still do some basic functions like some kind of word processing. I
have boxes and boxes of 8" floppies as well.
https://en.wikipedia.org/wiki/IBM_System/23_Datamaster
Read about this long ago. It is considered by many to be the "first
PC". Reminds me of the Apple Lisa, which went in the right direction but was
too expensive. So the less capable but more successful Macintosh
was released.
--
Andreas
Michael Trew
2021-09-07 15:34:05 UTC
Post by Andreas Kohlbach
Post by Jason Evans
I know this is an odd question, so let me explain what I'm thinking.
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
the oldest computer that he could get by with to do his job?
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
These are the kinds of things that I would like to hear about on this group.
Jason
Post by Michael Trew
I have an IBM Datamaster/System 23 in my basement that is functional
with its original dot matrix printer. I'd have to imagine it can
still do some basic functions like some kind of word processing. I
have boxes and boxes of 8" floppies as well.
https://en.wikipedia.org/wiki/IBM_System/23_Datamaster
Read about this long ago. It is considered by many to be the "first
PC". Reminds me of the Apple Lisa, which went in the right direction but was
too expensive. So the less capable but more successful Macintosh
was released.
I have most all of the manuals as well. It was used as a database in a
radio station in the early 80's. I never actually sat down and figured
out how it works, but it does boot when you flip the switch; numbers
come up on the green screen.
Dennis Boone
2021-09-07 16:14:30 UTC
Post by Michael Trew
I have most all of the manuals as well. It was used as a database in a
radio station in the early 80's. I never actually sat down and figured
out how it works, but it does boot when you flip the switch; numbers
come up on the green screen.
User programming is done in BASIC. It's a curious implementation, with
a lot of fairly powerful stuff for business applications, statement
labels, some sort-of cursor editing of statements. Most IBM supplied
utilities are not in BASIC; I think it may have been possible for some
ecosystem developers to get the tooling to do assembler or maybe
compiled development.

There were two types of base machine: the all-in-one type, and a floor
standing one with separate screen and keyboard. Peripherals included
several printers, a twinax-based network for interconnecting stations,
and a hard disk unit that could be shared across the network. 8085
processor, paged address space so that the machine can (and does) have
well over 64k of RAM, and quite a bit of ROM too.
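
For what it's worth, the paging is ordinary bank switching: a 16-bit CPU
address plus a bank register select a spot in a larger physical memory. A
generic Python sketch of the idea (the window size and addresses here are
illustrative, not the actual System/23 memory map):

# Generic bank-switching sketch; numbers are assumptions, not Datamaster's.
BANK_WINDOW = 0x4000                   # a 16 KiB switchable window
WINDOW_BASE = 0x8000                   # window visible at 0x8000..0xBFFF

physical_ram = bytearray(256 * 1024)   # 256 KiB physical, well past 64K
bank_register = 0

def set_bank(n):
    global bank_register
    bank_register = n

def read(cpu_addr):
    """Translate a 16-bit CPU address into the larger physical space."""
    if WINDOW_BASE <= cpu_addr < WINDOW_BASE + BANK_WINDOW:
        return physical_ram[bank_register * BANK_WINDOW + (cpu_addr - WINDOW_BASE)]
    return physical_ram[cpu_addr]      # low memory maps straight through

set_bank(5)
physical_ram[5 * BANK_WINDOW] = 0x42   # put a byte in bank 5
print(hex(read(0x8000)))               # the CPU sees it at 0x8000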

IBM sold various software for them, including a menu-driven application
development system that wrote BASIC applications. There was at least
a small third party ecosystem.

De
John Goerzen
2021-09-06 03:39:29 UTC
Post by Jason Evans
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
... that fits...
Post by Jason Evans
the oldest computer that he could get by with to do his job?
So I'm going to be "that guy" that says "it depends on what you mean by
computer."

So I have a DEC vt510 that I do still use. It has a serial connection to a
Raspberry Pi, from which I can ssh wherever. I actually enjoy using it as a
"focus mode" break. It was sold as an ANSI terminal. Is it a computer? Well,
it has an 8080 in it IIRC. I do actually use it for doing work on my job from
time to time too.

I also have a Linux box, more modern, that is a Micro PC I used to do backups.
It doesn't permit ssh or such for security reasons. My only way into it is via
serial console or local console. So the vt510 can hook up to that and it is
then doing actual work too. I also have older terminals.

What about older general-purpose machines? I've seen plenty of DOS still
kicking around. Various industrial machinery still uses DOS machines as
controllers or programmers. A lot of the time they are running on more modern
hardware, but also a lot of the time they wouldn't NEED to be; that's just what is
out there. So that takes us back firmly into the 80s.

The DEC PDP-10 was introduced in 1966 and was famously used by CompuServe up
until at least 2007, 41 years later.

Here's an article from 2008 about how Seattle still uses DEC VAXes (released
1977):
https://www.seattletimes.com/seattle-news/education/dinosaur-computer-stalls-seattle-schools-plans/

Here's an article from just last year about how Kansas is still using a
mainframe from 1977 to manage unemployment claims:
https://www.kctv5.com/coronavirus/kansas-department-of-labor-mainframe-is-from-1977/article_40459370-7ac4-11ea-b4db-df529463a7d4.html

No word on what precise type of mainframe that is.
https://www.dol.ks.gov/documents/20121/85583/KDOL+Modernization+Timeline.pdf/d186de09-851b-d996-d235-ad6fb9286fcb?version=1.0&t=1620335465573
gives a clue that it may be some sort of IBM something.
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
Post by Jason Evans
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
The display resolution may be tricky, but an old IBM PC certainly would.

- John
J. Clarke
2021-09-07 05:03:59 UTC
On Mon, 6 Sep 2021 03:39:29 -0000 (UTC), John Goerzen
Post by John Goerzen
Post by Jason Evans
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
... that fits...
Post by Jason Evans
the oldest computer that he could get by with to do his job?
So I'm going to be "that guy" that says "it depends on what you mean by
computer."
So I have a DEC vt510 that I do still use. It has a serial connection to a
Raspberry Pi, from which I can ssh wherever. I actually enjoy using it as a
"focus mode" break. It was sold as an ANSI terminal. Is it a computer? Well,
it has an 8080 in it IIRC. I do actually use it for doing work on my job from
time to time too.
I also have a Linux box, more modern, that is a Micro PC I used to do backups.
It doesn't permit ssh or such for security reasons. My only way into it is via
serial console or local console. So the vt510 can hook up to that and it is
then doing actual work too. I also have older terminals.
What about older general-purpose machines? I've seen plenty of DOS still
kicking around. Various industrial machinery still uses DOS machines as
controllers or programmers. A lot of the time they are running on more modern
hardware, but also a lot of the time they wouldn't NEED to be; that's just what is
out there. So that takes us back firmly into the 80s.
The DEC PDP-10 was introduced in 1966 and was famously used by CompuServe up
until at least 2007, 41 years later.
Here's an article from 2008 about how Seattle still uses DEC VAXes (released
https://www.seattletimes.com/seattle-news/education/dinosaur-computer-stalls-seattle-schools-plans/
Here's an article from just last year about how Kansas is still using a
https://www.kctv5.com/coronavirus/kansas-department-of-labor-mainframe-is-from-1977/article_40459370-7ac4-11ea-b4db-df529463a7d4.html
No word on what precise type of mainframe that is.
https://www.dol.ks.gov/documents/20121/85583/KDOL+Modernization+Timeline.pdf/d186de09-851b-d996-d235-ad6fb9286fcb?version=1.0&t=1620335465573
gives a clue that it may be some sort of IBM something.
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
Post by John Goerzen
Post by Jason Evans
For example: Could you rig up a serial connection from a modern PC to a C64
to get a command prompt on the C64 to use as the interface for the command
line? Sure, it would demote the C64 to a "dumb terminal" but could it work?
The display resolution may be tricky, but an old IBM PC certainly would.
- John
John Goerzen
2021-09-07 13:06:16 UTC
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."

Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.

I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.

... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's OBM OS/390 mainframe."

I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.

- John
J. Clarke
2021-09-08 04:00:04 UTC
On Tue, 7 Sep 2021 13:06:16 -0000 (UTC), John Goerzen
Post by John Goerzen
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."
Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Post by John Goerzen
I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.
... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's OBM OS/390 mainframe."
I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.
From the 360 on, application-level backwards compatibility has been
maintained. I occasionally encounter code today that has dated
comments from the '70s. The OS is tuned for the specific hardware and
new features are provided, but application programmers don't generally
deal with that.

We just transferred our entire system to new hardware; it was done over a
weekend. That's a system that manages a Fortune 100 financial
services company.

A common misconception among people who don't work with mainframes is
that the mainframe you have today is the same as the one that was
installed in the mid '60s. They don't understand that the modern
mainframe is just that, with numerous cores, vast quantities of RAM,
and very high clock speeds that can be sustained under any workload.
Ahem A Rivet's Shot
2021-09-08 07:10:36 UTC
On Wed, 08 Sep 2021 00:00:04 -0400
Post by J. Clarke
We just transferred our entire system to new hardware; it was done over a
weekend. That's a system that manages a Fortune 100 financial
services company.
In some circles they just throw new hardware into the racks and tell
the virtual swarm coordinator that runs their systems where to find the new
hardware or tell the coordinator to stop using an obsolete machine so they
can pull it. The systems never notice.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Thomas Koenig
2021-09-09 13:08:35 UTC
Post by J. Clarke
On Tue, 7 Sep 2021 13:06:16 -0000 (UTC), John Goerzen
Post by John Goerzen
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."
Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? Bitsavers claims that

"The Model 145 has a variable-length CPU cycle time. Cycle times of
202.5, 247.5, 292.5, and 315 nanoseconds are implemented. The time
required for the CPU to perform operations is made up of combinations of
these cycles. The CPU fetches instructions from processor storage a
doubleword at a time, while data accesses, both fetches and stores, are
made on a word basis. Eight instruction bytes or four data bytes can be
fetched by the CPU in 540 nanoseconds."

Variable-length CPU cycle time sounds strange, but the clock ran at
somewhere between 3.2 and 5 MHz. Not sure what sort of Pi you
have, but even a 700 MHz ARMv6 should be able to run rings around
that old machine in emulation with a factor of more than 100 in
CPU cycle time.
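
A quick back-of-the-envelope in Python with the figures above (the
cycles-per-instruction numbers are assumed parameters, not measurements):

# Sanity check of the clock-ratio argument; 315 ns is the slowest quoted
# 145 cycle time, 700 MHz is the ARMv6 Pi mentioned above.
m145_cycle_ns = 315
m145_clock_mhz = 1000.0 / m145_cycle_ns      # about 3.2 MHz
pi_clock_mhz = 700.0

clock_ratio = pi_clock_mhz / m145_clock_mhz
print(f"raw clock ratio: ~{clock_ratio:.0f}x")   # roughly 220x

# If the 145 averaged N of its own cycles per 370 instruction, the
# emulator only loses once it burns more than about clock_ratio * N
# host cycles per emulated instruction.  N is an assumption here.
for n in (1, 5, 10):
    print(f"145 at {n} cycles/insn: break-even at "
          f"~{clock_ratio * n:.0f} host cycles per emulated instruction")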
John Levine
2021-09-10 01:35:34 UTC
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.

These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
J. Clarke
2021-09-10 03:52:21 UTC
On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
Post by John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
No. The pi is painfully slow running the emulator.
Post by John Levine
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
Its I/O is a USB port, Gigabit Ethernet, and wifi. It can also implement I/O
with a USART, but that's very limited bandwidth.
Thomas Koenig
2021-09-10 15:57:39 UTC
Post by J. Clarke
On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
No. The pi is painfully slow running the emulator.
Slower than the original? How many cycles per S/360 instruction does
it take? If it was really slower than the original, it would have
to be more than 100 cycles per instruction. I find that hard to
believe.
Anne & Lynn Wheeler
2021-09-10 16:49:57 UTC
Post by Thomas Koenig
Slower than the original? How many cycles per S/360 instruction does
it take? If it was really slower than the original, it would have
to be more than 100 cycles per instruction. I find that hard to
believe.
Endicott conned me into helping do ECPS for the 138/148. I was told that
the low/mid 370s, 115-148, averaged ten native instructions per emulated 370
instruction (i.e. the 80kips 370/115 had an 800kips engine, the 120kips
370/125 had a 1.2mips engine, etc) and the 138/148 had 6kbytes of available
microcode storage. I was to identify the 6kbytes of highest-executed kernel
instructions for moving to microcode (on a roughly byte-for-byte basis). Old
archived post with the analysis
http://www.garlic.com/~lynn/94.html#21

6kbytes of kernel instruction pathlength accounted for 79.55% of kernel
execution time ... dropped directly into microcode it would run 10 times
faster. Also implemented for later 4331/4341.
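
The selection itself is conceptually just a greedy pick of the hottest
kernel code until the microcode budget is full, something like the sketch
below (the sample routines and numbers are invented for illustration; the
real 1975 figures are in the garlic.com post above):

# Greedy ECPS-style selection: take the code with the most kernel time
# per byte until the 6 KB microcode budget is used up.  Data is made up.
samples = [
    # (routine, size in bytes, fraction of kernel time)
    ("dispatch",      900, 0.22),
    ("page_fault",   1400, 0.19),
    ("svc_entry",     700, 0.15),
    ("io_interrupt", 1800, 0.14),
    ("free_storage", 1200, 0.10),
    ("timer",         600, 0.04),
]

BUDGET = 6 * 1024
chosen, used, covered = [], 0, 0.0
for name, size, frac in sorted(samples, key=lambda s: s[2] / s[1], reverse=True):
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += frac

print(chosen)
print(f"{used} bytes of {BUDGET}, ~{covered:.0%} of kernel time covered")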

In the early 80s I got permission to give howto ECPS presentations at
local user group (silicon valley) monthly baybunch meetings and would
get lots of questions from the Amdahl people.

They would say that IBM had started doing lots of trivial microcode
implementations for the 3033 which would be required for MVS to
run. Amdahl eventually responded with "macrocode" ... effectively
370-like instruction set that ran in microcode mode ... where Amdahl
could implement the 3033 microcode changes much more easily with much less
effort.

Note the low/mid range 370s had vertical instruction microcode
processors (i.e. programming like cisc/risc processors). High-end 370s
had horizontal microcode, with performance typically expressed as the avg.
number of machine cycles per 370 instruction. The 370/165 ran 2.1 machine
cycles per 370 instruction. That was optimized for the 370/168 to 1.6
machine cycles per 370 instruction. The 3033 started out as 168-3 logic
remapped to 20% faster chips and the microcode was further optimized to one
machine cycle per 370 instruction. It was claimed that all those 3033
microcode tweaks ran at the same speed as 370 code (and in some cases slower).

Amdahl was then using macrocode to implement hypervisor support ...
subset of virtual machines support w/o needing vm370 ... which took IBM
several years to respond with PR/SM and LPAR in native horizontal
microcode (well after 3081 and well into 3090 product life).

some years later after retiring from IBM ... I was doing some stuff with
http://www.funsoft.com/

and their experience with emulating 370 (avg. ten native instructions per
emulated 370 instruction) was about the same as the low&mid range 370s ...
although they had some other tweaks that could dynamically translate
high-use instruction paths directly into native code on-the-fly (getting a
10:1 improvement).

I believe hercules is somewhat similar
https://en.wikipedia.org/wiki/Hercules_(emulator)
--
virtualization experience starting Jan1968, online at home since Mar1970
Andreas Kohlbach
2021-09-10 16:51:24 UTC
Post by Thomas Koenig
Post by J. Clarke
On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
No. The pi is painfully slow running the emulator.
Slower than the original? How many cycles per S/360 instruction does
it take? If it was really slower than the original, it would have
to be more than 100 cycles per instruction. I find that hard to
believe.
Having the issue here with an (aged) AMD PC. For years now (software
bloat over the years, I assume) it has no longer been able to emulate a
Commodore 64 at full speed (around 1 MHz for the 6510 CPU), while the host
is supposed to run at 780 MHz (according to /proc/cpuinfo here in Linux), so
780 times faster. Years ago it was able to emulate the C64 at its max
speed, while I could run other tasks on the host at the same time.
--
Andreas
Thomas Koenig
2021-09-11 07:34:58 UTC
Post by Andreas Kohlbach
Post by Thomas Koenig
Post by J. Clarke
On Fri, 10 Sep 2021 01:35:34 -0000 (UTC), John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
No. The pi is painfully slow running the emulator.
Slower than the original? How many cycles per S/360 instruction does
it take? If it was really slower than the original, it would have
to be more than 100 cycles per instruction. I find that hard to
believe.
Having the issue here with an (aged) AMD PC. For years now (software
bloat over the years, I assume) it has no longer been able to emulate a
Commodore 64 at full speed (around 1 MHz for the 6510 CPU), while the host
is supposed to run at 780 MHz (according to /proc/cpuinfo here in Linux), so
780 times faster.
That is a bit different. An emulator for a whole C-64 including
graphics and sound has to do much more work than an emulator for a
370/145 which did computation and I/O.
Post by Andreas Kohlbach
Years ago it was able to emulate the C64 at its max
speed, while I could run other tasks on the host at the same time.
"Other tasks" including running a browser?

I agree that modern software, also for Linux, has become incredibly
bloated. IIRC, the first Linux I ran at home was Slackware
0.99-something on a 486 with 4 MB running a simple window manager
and xterm. It wasn't as nice as the HP workstations I used
at the university, but it ran well enough.

Now... not a chance of getting things going with that setup.
Andreas Kohlbach
2021-09-11 19:14:56 UTC
Post by Thomas Koenig
Post by Andreas Kohlbach
Having the issue here with an (aged) AMD PC. For years now (software
bloat over the years, I assume) it has no longer been able to emulate a
Commodore 64 at full speed (around 1 MHz for the 6510 CPU), while the host
is supposed to run at 780 MHz (according to /proc/cpuinfo here in Linux), so
780 times faster.
That is a bit different. An emulator for a whole C-64 including
graphics and sound has to do much more work than an emulator for a
370/145 which did computation and I/O.
I thought a complete machine was emulated in the other case too.

Sure, it depends on the amount of hardware the target machine has, and on
whether a dedicated emulator for a specific machine is used, which often
simulates parts of the hardware instead of truly emulating it, which speeds
things up.

I prefer the "all-purpose" emulator MAME, which attempts to emulate all
parts of the hardware, although it is slower.

Because there are many emulators for the Raspberry Pi, I assume they can
emulate faster than my aging machine. Also, some retro mini-cabs run on an
ARM processor with specs similar to a Pi's.
Post by Thomas Koenig
Post by Andreas Kohlbach
Years ago it was able to emulate the C64 at its max
speed, while I could run other tasks on the host at the same time.
"Other tasks" including running a browser?
The desktop manager (Linux here) for example. The laptop here is also
"abused" as a web server and other servers running at the same time.
--
Andreas
Anne & Lynn Wheeler
2021-09-10 03:56:21 UTC
Post by John Levine
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
raspberry Pi 4 specs and benchmarks (2 yrs ago)
https://magpi.raspberrypi.org/articles/raspberry-pi-4-specs-benchmarks

SoC: Broadcom BCM2711B0 quad-core A72 (ARMv8-A) 64-bit @ 1.5GHz
GPU: Broadcom VideoCore VI
Networking: 2.4GHz and 5GHz 802.11b/g/n/ac wireless LAN
RAM: 1GB, 2GB, or 4GB LPDDR4 SDRAM
Bluetooth: Bluetooth 5.0, Bluetooth Low Energy (BLE)
GPIO: 40-pin GPIO header, populated
Storage: microSD
Ports: 2x micro-HDMI 2.0, 3.5mm analogue audio-video jack, 2x USB 2.0,
2x USB 3.0, Gigabit Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)
Dimensions: 88mm x 58mm x 19.5mm, 46g

linpack mips 925MIPS, 748MIPS, 2037MIPS
memory bandwidth (1MB blocks r&w) 4129MB/sec, 4427MB/sec
USB storage thruput (megabytes/sec r&w) 353mbytes/sec, 323mbytes/sec

more details
https://en.wikipedia.org/wiki/Raspberry_Pi
best picks Pi microSD cards (32gbytes)
https://www.tomshardware.com/best-picks/raspberry-pi-microsd-cards

===

by comparison, 145 would be .3MIPS and 512kbyte memory,

2314 capacity 29mbytes ... need 34 2314s/gbyte or 340 2314s for
10gbytes
https://www.ibm.com/ibm/history/exhibits/storage/storage_2314.html
2314 disk rate 312kbytes/sec ... ignoring channel program overhead, disk
access, etc, assuming that all four 145 channels would continuously be doing
disk i/o transfer at a sustained 312kbytes/sec ... that is a theoretical
1.2mbytes/sec
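
For anyone redoing the arithmetic, the same figures in a few lines of
Python (decimal units, as in the text above):

# Arithmetic behind the 2314 comparison; figures are the ones quoted above.
cap_2314_mb = 29.0                     # 2314 capacity
drives_per_gb = 1000.0 / cap_2314_mb
print(f"2314s per GB: ~{drives_per_gb:.0f}, per 10 GB: ~{10 * drives_per_gb:.0f}")

rate_2314_kb_s = 312.0                 # per-drive transfer rate
channels = 4                           # channels on a 370/145
peak_mb_s = channels * rate_2314_kb_s / 1000.0
print(f"all {channels} channels streaming: ~{peak_mb_s:.1f} MB/sec theoretical")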

trivia: after transferring to San Jose Research (bldg28), I got roped
into playing disk engineer part time (across the street in
bldg14&15). The 3830 controller for 3330 & 3350 disk drives was replaced
with 3880 controller for 3380 disk drives. While 3880 had special
hardware data path for handling 3380 3mbyte/sec transfer ... it had a
microprocessor that was significantly slower than 3830 for everything
else ... which drastically drove up channel busy overhead ... especially
for the channel program chatter latency between processor and
controller.

The 3090 folks had configured number of channels, assuming the 3880
would be similar to 3830 but handling 3mbyte data transfer ... when they
found out how bad the 3880 channel busy really was ... they realized
they would have to drastically increase the number of channels. The
channel number increase required an extra (very expensive) TCM (there
were jokes that the 3090 office was going to charge the 3880 office for
the increase in 3090 manufacturing cost). Eventually marketing respun
big increase in number of channels (to handle the half-duplex chatter
channel busy overhead) as how great all the 3090 channels were.

Other trivia: in 1980, IBM STL (lab) was bursting at the seams and they
were moving 300 people from the IMS DBMS development group to an
offsite bldg with dataprocessing back to the STL datacenter. The group had
tried "remote" 3270 terminal support and found the human factors totally
unacceptable. I get con'ed into doing channel-extender support so they
can put local channel connected 3270 controllers at the offsite bldg
(with no perceived difference in human factors offsite and in STL).

The hardware vendor tries to get IBM to release my support, but there
were some people in POK playing with some serial stuff who get it
vetoed (they were worried that if it was in the market, it would be harder
to justify releasing their stuff). Then in 1988, I'm asked to help LLNL
standardize some serial stuff they are playing with ... which quickly
becomes the fibre channel standard (including some stuff I had done in
1980), initially 1gbit (100mbyte) full-duplex (2gbit, aka 200mbyte,
aggregate)

In 1990, the POK people get their stuff released with ES/9000 as ESCON
(when it is already obsolete, around 17mbyte aggregate). Later some of
the POK people start playing with the fibre channel standard and define a
heavyweight protocol that drastically cuts the native throughput, which
is finally released as FICON.

The latest published benchmark I can find is "peak I/O" for the z196, which
used 104 FICON (running over 104 fibre channel) to get 2M IOPS. About
the same time there was a fibre channel announced for the E5-2600 blade
claiming over a million IOPS (two such fibre channel getting higher
native throughput than 104 FICON running over 104 fibre channel).
--
virtualization experience starting Jan1968, online at home since Mar1970
J. Clarke
2021-09-10 05:17:11 UTC
On Thu, 09 Sep 2021 17:56:21 -1000, Anne & Lynn Wheeler
Post by Anne & Lynn Wheeler
Post by John Levine
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
raspberry Pi 4 specs and benchmarks (2 yrs ago)
https://magpi.raspberrypi.org/articles/raspberry-pi-4-specs-benchmarks
GPU: Broadcom VideoCore VI
Networking: 2.4GHz and 5GHz 802.11b/g/n/ac wireless LAN
RAM: 1GB, 2GB, or 4GB LPDDR4 SDRAM
Bluetooth: Bluetooth 5.0, Bluetooth Low Energy (BLE)
GPIO: 40-pin GPIO header, populated
Storage: microSD
Ports: 2x micro-HDMI 2.0, 3.5mm analogue audio-video jack, 2x USB 2.0,
2x USB 3.0, Gigabit Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)
Dimensions: 88mm x 58mm x 19.5mm, 46g
linpack mips 925MIPS, 748MIPS, 2037MIPS
memory bandwidth (1MB blocks r&w) 4129MB/sec, 4427MB/sec
USB storage thruput (megabytes/sec r&w) 353mbytes/sec, 323mbytes/sec
more details
https://en.wikipedia.org/wiki/Raspberry_Pi
best picks Pi microSD cards (32gbytes)
https://www.tomshardware.com/best-picks/raspberry-pi-microsd-cards
===
by comparison, 145 would be .3MIPS and 512kbyte memory,
2314 capacity 29mbytes ... need 34 2314s/gbyte or 340 2314s for
10gbytes
https://www.ibm.com/ibm/history/exhibits/storage/storage_2314.html
2314 disk rate 312kbytes/sec ... ignoring channel program overhead, disk
access, etc, assuming that all four 145 channels would continuously be doing
disk i/o transfer at a sustained 312kbytes/sec ... that is a theoretical
1.2mbytes/sec
trivia: after transferring to San Jose Research (bldg28), I got roped
into playing disk engineer part time (across the street in
bldg14&15). The 3830 controller for 3330 & 3350 disk drives was replaced
with 3880 controller for 3380 disk drives. While 3880 had special
hardware data path for handling 3380 3mbyte/sec transfer ... it had a
microprocessor that was significantly slower than 3830 for everything
else ... which drastically drove up channel busy overhead ... especially
for the channel program chatter latency between processor and
controller.
The 3090 folks had configured number of channels, assuming the 3880
would be similar to 3830 but handling 3mbyte data transfer ... when they
found out how bad the 3880 channel busy really was ... they realized
they would have to drastically increase the number of channels. The
channel number increase required an extra (very expensive) TCM (there
were jokes that the 3090 office was going to charge the 3880 office for
the increase in 3090 manufacturing cost). Eventually marketing respun
big increase in number of channels (to handle the half-duplex chatter
channel busy overhead) as how great all the 3090 channels were.
Other trivia: in 1980, IBM STL (lab) was bursting at the seams and they
were moving 300 people from the IMS DBMS development group to an
offsite bldg with dataprocessing back to the STL datacenter. The group had
tried "remote" 3270 terminal support and found the human factors totally
unacceptable. I get con'ed into doing channel-extender support so they
can put local channel connected 3270 controllers at the offsite bldg
(with no perceived difference in human factors offsite and in STL).
The hardware vendor tries to get IBM to release my support, but there
were some people in POK playing with some serial stuff who get it
vetoed (they were worried that if it was in the market, it would be harder
to justify releasing their stuff). Then in 1988, I'm asked to help LLNL
standardize some serial stuff they are playing with ... which quickly
becomes the fibre channel standard (including some stuff I had done in
1980), initially 1gbit (100mbyte) full-duplex (2gbit, aka 200mbyte,
aggregate)
In 1990, the POK people get their stuff released with ES/9000 as ESCON
(when it is already obsolete, around 17mbyte aggregate). Later some of
the POK people start playing with the fibre channel standard and define a
heavyweight protocol that drastically cuts the native throughput, which
is finally released as FICON.
The latest published benchmark I can find is "peak I/O" for the z196, which
used 104 FICON (running over 104 fibre channel) to get 2M IOPS. About
the same time there was a fibre channel announced for the E5-2600 blade
claiming over a million IOPS (two such fibre channel getting higher
native throughput than 104 FICON running over 104 fibre channel).
I suppose I could compile Linpack under Z/OS on the pi and see what
it actually does. I'm not that ambitious though. Native on the pi
doesn't count.
Thomas Koenig
2021-09-10 15:54:10 UTC
Post by John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).
Post by John Levine
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
A single USB2 port can do around 53 MB/s theoretical maximum, a factor
of approximately 10 vs. the 370/145. I didn't look up the speed
of the Pi's SSD.
Scott Lurndal
2021-09-10 20:45:40 UTC
Post by Thomas Koenig
Post by John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).
But it is unlikely that a single drive was dense enough to drive
anywhere near that rate. Regardless of the channel speed, the
drive is limited by how fast it can get the data off the platter.

Burroughs disk channels had similar transfer rates and supported
multiple independently seeking drives on a single channel (up
to 16) to use the available bandwidth. The I/O controller on
the B4900 was rated at 8MB/sec across 32 channels.

USB3.0 on a raspberry pi crushes that by orders of magnitude.
Post by Thomas Koenig
Post by John Levine
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
A single USB2 port can do around 53 MB/s theoretical maximum, a factor
of approximately 10 vs. the 370/145. I didn't look up the speed
of the Pi's SSD.
The fastest NVME SSDs can read three GByte/second and write one Gbyte/second.

The fastest USB SSDs are limited to 600MByte/sec, but few can reach that speed.

As NVME simply requires a PCI express port, which is available on many raspberry
pi boards, the max I/O speed for a pi is the speed of a single PCI Express Gen 3
(1GByte/s) or Gen 4 (2Gbytes/s) lane depending on the pi. Might even see
Gen 5 in the next couple of years (4Gbytes/sec) in future Pi processors.
John Levine
2021-09-10 22:03:48 UTC
Post by Scott Lurndal
Post by Thomas Koenig
The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).
But it is unlikely that a single drive was dense enough to drive
anywhere near that rate. Regardless of the channel speed, the
drive is limited by how fast it can get the data off the platter.
That's why there were four channels. Each channel can have a
disk transfer going. An IBM web page said a 2314 could transfer
312K bytes per second, so the fastest burst speed would be four
times that, say 1.2M bytes/sec.

I'd think later on people would be more likely to use 3330 or 3340
disks, which were 800K bytes/sec so say total data rate of 3.2MB/sec.
Post by Scott Lurndal
USB3.0 on a raspberry pi crushes that by orders of magnitude.
Yeah, I would think so. No seek time on your SSD either.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
maus
2021-09-11 06:42:00 UTC
Post by Scott Lurndal
Post by Thomas Koenig
Post by John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).
But it is unlikely that a single drive was dense enough to drive
anywhere near that rate. Regardless of the channel speed, the
drive is limited by how fast it can get the data off the platter.
The fastest NVME SSDs can read three GByte/second and write one Gbyte/second.
The fastest USB SSDs are limited to 600MByte/sec, but few can reach that speed.
As NVME simply requires a PCI express port, which is available on many raspberry
pi boards, the max I/O speed for a pi is the speed of a single PCI Express Gen 3
(1GByte/s) or Gen 4 (2Gbytes/s) lane depending on the pi. Might even see
Gen 5 in the next couple of years (4Gbytes/sec) in future Pi processors.
I have several Pi's, and only with the last one have I made what I think is
a grievous error: I installed heat sinks and turned it on. After a few
minutes I noticed a searing pain where my hand was leaning on one of
the heat sinks.

***@mail.com
J. Clarke
2021-09-11 12:55:54 UTC
Post by maus
Post by Scott Lurndal
Post by Thomas Koenig
Post by John Levine
Post by J. Clarke
A brand new Z can emulate the 370/145. So can my Raspberry Pi if I
don't expect any performance.
Surely, a higher performance than the original? ...
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
The "Functional characteristics" document from 1972 from Bitsavers
gives a maximum rate per channel of 1.85 MB per second with a
word buffer installed, plus somewhat lower figures for four channels
for a total of 5.29 MB/s (which would be optimum).
But it is unlikely that a single drive was dense enough to drive
anywhere near that rate. Regardless of the channel speed, the
drive is limited by how fast it can get the data off the platter.
The fastest NVME SSDs can read three GByte/second and write one Gbyte/second.
The fastest USB SSDs are limited to 600MByte/sec, but few can reach that speed.
As NVME simply requires a PCI express port, which is available on many raspberry
pi boards, the max I/O speed for a pi is the speed of a single PCI Express Gen 3
(1GByte/s) or Gen 4 (2Gbytes/s) lane depending on the pi. Might even see
Gen 5 in the next couple of years (4Gbytes/sec) in future Pi processors.
I have several Pi's, and only with the last one have I made what I think is
a grievous error: I installed heat sinks and turned it on. After a few
minutes I noticed a searing pain where my hand was leaning on one of
the heat sinks.
I think this is mostly moot. A maxed out 360 would have under a gig
of DASD. On a pi 4 with 4 or 8 gig of RAM there's enough to buffer
the entire system, so our emulated 360, DASD and all, would be running
mostly RAM resident.

I think we forget how _immense_ the capacity of even a good _watch_ is
by '60s standards.
Andy Burns
2021-09-11 19:02:28 UTC
Post by Scott Lurndal
As NVME simply requires a PCI express port, which is available on many raspberry
pi boards
The latest pi compute module has PCIe and various breakout boards make
it available as a normal X1 slot, but other than that, I thought to get
access to PCIe on any other pi required de-soldering the USB chip?
Scott Lurndal
2021-09-11 19:42:40 UTC
Post by Andy Burns
Post by Scott Lurndal
As NVME simply requires a PCI express port, which is available on many raspberry
pi boards
The latest pi compute module has PCIe and various breakout boards make
it available as a normal X1 slot, but other than that, I thought to get
access to PCIe on any other pi required de-soldering the USB chip?
Hence "many" instead of "all".
Andy Burns
2021-09-12 10:45:01 UTC
Post by Scott Lurndal
Post by Andy Burns
The latest pi compute module has PCIe and various breakout boards make
it available as a normal X1 slot, but other than that, I thought to get
access to PCIe on any other pi required de-soldering the USB chip?
Hence "many" instead of "all".
It'd be "a few" in my book.
J. Clarke
2021-09-12 16:24:23 UTC
Post by Andy Burns
Post by Scott Lurndal
Post by Andy Burns
The latest pi compute module has PCIe and various breakout boards make
it available as a normal X1 slot, but other than that, I thought to get
access to PCIe on any other pi required de-soldering the USB chip?
Hence "many" instead of "all".
It'd be "a few" in my book.
I think "hardly any" comes closer.
Peter Flass
2021-09-08 18:50:37 UTC
Permalink
Post by John Goerzen
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."
Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.
I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.
... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's OBM OS/390 mainframe."
I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.
- John
You can still run programs compiled on a 360 on the latest “z” box.
--
Pete
John Goerzen
2021-09-08 22:46:00 UTC
Permalink
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Dan Espen
2021-09-09 00:28:27 UTC
Permalink
Post by John Goerzen
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in its various flavors is the only survivor of that era.

Before then, IBM kept introducing new incompatible models, each one
programmed in its own assembly language. The promise of S/360 was that
you would never again have to throw out your massive investment in
software.

Object code will still run.
--
Dan Espen
John Levine
2021-09-09 01:59:49 UTC
Permalink
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
I thought the Unisys Clearpath machines still run Univac 1100 code from the 1960s.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
J. Clarke
2021-09-09 02:24:19 UTC
Permalink
Post by John Levine
Post by Dan Espen
Post by John Goerzen
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
I thought the Unisys Clearpath machines still run Unival 1100 code from the 1960s.
Unisys Clearpath is an emulator running on Intel. IBM implements the
Z in purpose-made hardware.
Grant Taylor
2021-09-09 05:14:42 UTC
Permalink
Post by J. Clarke
Unisys Clearpath is an emulator running on Intel. IBM implements
the Z in purpose-made hardware.
IBM implements it in microcode. Which is as much software as it is
hardware.
--
Grant. . . .
unix || die
J. Clarke
2021-09-09 05:59:37 UTC
Permalink
On Wed, 8 Sep 2021 23:14:42 -0600, Grant Taylor
Post by Grant Taylor
Post by J. Clarke
Unisys Clearpath is an emulator running on Intel. IBM implements
the Z in purpose-made hardware.
IBM implements it in microcode. Which is as much software as it is
hardware.
They did once. Do they still, now that they have, from a 360-architecture
viewpoint, vast quantities of silicon real estate to play with?
Grant Taylor
2021-09-09 17:53:50 UTC
Permalink
Post by J. Clarke
They did once. Do they still when they have from a 360 architecture
viewpoint vast quantities of silicon real estate to play with?
Absolutely.

If anything they do even more in microcode now than they used to.

The microcode has somewhat become an abstraction layer. The processor
underneath can do whatever it wants and rely on the microcode to be the
abstraction boundary.

There have been multiple episodes of the Terminal Talk podcast that talk
about microcode, millicode, and other very low-level code that falls into
the broader category of firmware.
--
Grant. . . .
unix || die
Scott Lurndal
2021-09-09 18:27:06 UTC
Permalink
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
Peter Flass
2021-09-09 19:23:06 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. I'd be happy to be corrected.
--
Pete
Scott Lurndal
2021-09-09 21:10:28 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. Id be happy to be corrected.
There was a step change between the B5500 and the B6500; after that they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).
Dan Espen
2021-09-10 01:02:24 UTC
Permalink
Post by Scott Lurndal
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. Id be happy to be corrected.
There was a step change between the B5500 and the B6500; after than they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).
I see a date of 1969 for the B6500.
That gives the title back to S/360.

I had to support a project moving Unisys code to z-Arch.
We had persistent performance issues, the mainframe just couldn't deal
with loading lots of small programs while the app was running.
I see Unisys is naturally reentrant. That probably had a lot to do with
the problems we were having.
--
Dan Espen
Scott Lurndal
2021-09-10 03:19:55 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. Id be happy to be corrected.
There was a step change between the B5500 and the B6500; after than they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).
I see a date of 1969 for the B6500.
That gives the title back to S/360.
I wouldn't be surprised to find that B5000 applications
ran on the B6500 - Burroughs was good about backwards
compatibility - as shown by the B3500 line which ran
the original binaries through end of life (last system
powered off in 2010 so far as I'm aware - 45 year run).

Someone pointed to this, which talks about the
Pasadena plant and the development of the B5000
line. I hadn't realized that Cliff Berry had
any connection to Burroughs when I was working
there; he's one of my school's most famous alumni.

http://www.digm.com/UNITE/2019/2019-Origins-Burroughs-Algol.pdf
Post by Dan Espen
I had to support a project moving Unisys code to z-Arch.
We had persistent performance issues, the mainframe just couldn't deal
with loading lots of small programs while the app was running.
The Burroughs systems were all designed to be very easy
to use and to program.
Post by Dan Espen
I see Unisys is naturally reentrant. That probably had a lot to do with
the problems we were having.
Yes, it was quite advanced for the day. The capability model
that Burroughs invented with the large systems line is being
investigated for new processor architectures today, see for example CHERI.
Peter Flass
2021-09-11 01:40:59 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. Id be happy to be corrected.
There was a step change between the B5500 and the B6500; after than they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).
I see a date of 1969 for the B6500.
That gives the title back to S/360.
I had to support a project moving Unisys code to z-Arch.
We had persistent performance issues, the mainframe just couldn't deal
with loading lots of small programs while the app was running.
I see Unisys is naturally reentrant. That probably had a lot to do with
the problems we were having.
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise. There were lots of tuning
techniques you could use to optimize the “lots of small programs,” too, but
they weren’t automatic. That is what CICS is really good at. I thought
UNIVAC TIP would be a dog compared to CICS, because it did just run lots
of small programs.
--
Pete
Dan Espen
2021-09-11 02:15:17 UTC
Permalink
Post by Peter Flass
Post by Dan Espen
Post by Scott Lurndal
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest “z” box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
? I thought that the 5500’s successor systems weren’t object-compatible
with it. I don’t know about the degree of compatibility between the 6000s,
7000s, and 8000s. Id be happy to be corrected.
There was a step change between the B5500 and the B6500; after than they
were binary compatible (e-mode in the early 1980s added support for larger
memory, but still ran old codefiles).
I see a date of 1969 for the B6500.
That gives the title back to S/360.
I had to support a project moving Unisys code to z-Arch.
We had persistent performance issues, the mainframe just couldn't deal
with loading lots of small programs while the app was running.
I see Unisys is naturally reentrant. That probably had a lot to do with
the problems we were having.
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise. There were lots of tuning
techniques you could use to optimize the “lots of small programs,” too, but
they weren’t automatic. That is what CICS is really good at. I thought
UNIVAC TIP would be a dog compared to CICS, because it did just run lots
of small programs.
This was a C project and the Unisys code invoked lots of mains.
The IBM LE code to establish reentrancy (mainly building the WSA)
was a major player in the slowness. I'm guessing that Unisys
had more efficient ways of establishing reentrancy.
--
Dan Espen
Thomas Koenig
2021-09-11 07:36:50 UTC
Permalink
Post by Peter Flass
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise.
Fortran was not reentrant (at least not by default), it used the
standard OS/360 linkage convention.
John Levine
2021-09-12 00:02:22 UTC
Permalink
Post by Thomas Koenig
Post by Peter Flass
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise.
Fortran was not reentrant (at least not by default), it used the
standard OS/360 linkage convention.
You could write reentrant code that used the standard linkage scheme, but
it was fiddly and used more conventions about handling the dynamic storage areas.

Fortran and Cobol were never reentrant. PL/I could be if you used the
REENTRANT option in your source code. The PL/I programmers' guides
have examples of calling reentrant and non-reentrant assembler code.

My impression is that most reentrant code was written in assembler and preloaded
at IPL time to be used as shared libraries.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2021-09-13 18:39:52 UTC
Permalink
Post by Thomas Koenig
Post by Peter Flass
You could have reentrant programs on S/360, too, but they had to be coded
as reentrant. I believe all HLLs would generate reentrant code, unless you
deliberately wrote them to be otherwise.
Fortran was not reentrant (at least not by default), it used the
standard OS/360 linkage convention.
Okay, I guess just PL/I and assembler then. I’m not sure CICS supported
FORTRAN. The linkage convention isn't the problem; FORTRAN and COBOL used
only static storage for data, rather than automatic per-task data. For PL/I
you just had to not modify STATIC storage (AUTOMATIC is the default), or at
least be careful about how you modified it.
--
Pete
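The static-versus-automatic distinction is easy to show in C terms. This is only a sketch by analogy, not S/360 code: the C static buffer stands in for FORTRAN/COBOL static data, and the caller-supplied buffer stands in for PL/I AUTOMATIC (per-invocation) storage.

/* Why static data breaks reentrancy: two callers sharing one copy
 * of the state step on each other; per-invocation (automatic)
 * storage gives every caller its own copy.
 */
#include <stdio.h>

/* Not reentrant: one shared buffer, like FORTRAN/COBOL static data. */
static char shared_buf[16];

const char *format_static(int n)
{
    sprintf(shared_buf, "%04d", n);
    return shared_buf;            /* a second call overwrites this */
}

/* Reentrant: caller supplies the storage, analogous to AUTOMATIC. */
const char *format_auto(char *buf, size_t len, int n)
{
    snprintf(buf, len, "%04d", n);
    return buf;                   /* each caller has its own copy  */
}

int main(void)
{
    const char *a = format_static(1);
    const char *b = format_static(2);   /* clobbers a's result */
    printf("static: %s %s\n", a, b);    /* prints "0002 0002"  */

    char b1[16], b2[16];
    printf("auto:   %s %s\n",
           format_auto(b1, sizeof b1, 1),
           format_auto(b2, sizeof b2, 2));   /* "0001 0002" */
    return 0;
}

The same clash happens when two tasks run the first routine concurrently, which is why code built around static data can't be shared re-entrantly under something like CICS.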
Dan Espen
2021-09-09 19:44:35 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest z box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.
--
Dan Espen
Scott Lurndal
2021-09-09 21:14:45 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest z box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.
Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
in the 1980s.

http://youtu.be/rNBtjEBYFPk

The family really started with the B5000

http://youtu.be/K3q5n1mR9iM

which was quickly superseded by the B5500:

http://youtu.be/KswWJ6zvBUs
Peter Flass
2021-09-11 01:40:58 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest z box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.
Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
in the 1980s.
http://youtu.be/rNBtjEBYFPk
White shirts and ties, sideburns, and cigarettes in the office.
Post by Scott Lurndal
The family really started with the B5000
http://youtu.be/K3q5n1mR9iM
http://youtu.be/KswWJ6zvBUs
--
Pete
Thomas Koenig
2021-09-11 11:36:40 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest z box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.
Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
in the 1980s.
http://youtu.be/rNBtjEBYFPk
White shirts and ties, sideburns, and cigarettes in the office.
I have a similar video from the early 1970s about a building I came
to work in a few decades earlier. Same style.

Unfortunately, I do not think I can share it; there are probably
all sorts of legal obstacles, such as the personality rights of
the people who are shown working in it.
Scott Lurndal
2021-09-11 15:26:05 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by John Goerzen
You can still run programs compiled on a 360 on the latest z box.
I gotta say - that's darn impressive. I'm not aware of anything else that
maintains compatibility that long; am I missing anything?
Nope. S/360 in it's various flavors is the only survivor of that era.
Actually, that's not precisely true. The Burroughs B5500 still lives on
as the Unisys Clearpath systems, and still supports object files from
the 1960s.
hmm, I actually have been in contact with some of those systems but had
no idea they went back as far as 64.
Here's a video from 1968 on the B6500. I worked at that plant in Pasadena
in the 1980s.
http://youtu.be/rNBtjEBYFPk
White shirts and ties, sideburns, and cigarettes in the office.
It wasn't until 1986 that smoking in the Pasadena Burroughs plant
was fully banned. I moved into an office in 1985 and I had to scrub
every surface to remove the nicotine stains and odor.
Branimir Maksimovic
2021-09-18 03:57:06 UTC
Permalink
Post by Peter Flass
Post by John Goerzen
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."
Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.
I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.
... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's OBM OS/390 mainframe."
I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.
- John
You can still run programs compiled on a 360 on the latest “z” box.
Why?
--
bmaxa now listens rock.mp3
John Levine
2021-09-18 04:07:50 UTC
Permalink
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Ahem A Rivet's Shot
2021-09-18 06:27:48 UTC
Permalink
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Grant Taylor
2021-09-18 06:49:00 UTC
Permalink
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?

If it's working just fine and is exhibiting no symptoms, why mess with it?
--
Grant. . . .
unix || die
J. Clarke
2021-09-18 09:47:56 UTC
Permalink
On Sat, 18 Sep 2021 00:49:00 -0600, Grant Taylor
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
However, sometimes there are binaries with no source. The source was on a
tape or card deck that got archived and can no longer be found.
Peter Flass
2021-09-18 18:02:05 UTC
Permalink
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600, Grant Taylor
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
However sometimes there are binaries with no source. Source was on a
tape or card deck that got archived and can no longer be found.
Or at least messed up. I tried to rebuild the PL/I(F) compiler from source.
Several modules had minor problems, missing or extra END statements, but
one had a major problem: a large chunk of the program was missing. I spent
several days working from a disassembly to reconstruct the original.

IBM had a big fire at PID in Mechanicsburg, and a lot of sources went
missing. Digital Research lost the source to PL/I-86. The source for PL/C
has not (yet) been found, although the executable still works fine.
--
Pete
J. Clarke
2021-09-18 20:07:26 UTC
Permalink
On Sat, 18 Sep 2021 11:02:05 -0700, Peter Flass
Post by Peter Flass
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600, Grant Taylor
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
However sometimes there are binaries with no source. Source was on a
tape or card deck that got archived and can no longer be found.
Or at least messed up. I tried to rebuild the PL/I(F) compiler from source.
Several modules had minor problems, missing or extra END statements, but
one had a major problem: a large chunk of the program was missing. I spent
several days working from a disassembly to reconstruct the original.
IBM had a big fire at PID in Mechanicsburg, and a lot of sources went
missing. Digital Research lost the source to PL/I-86. The source for PL/C
hs not (yet) been found, although the executable still works fine.
And I remember the time that NASTRAN got dropped down the stairwell at
a PPOE. Three floors with cards flying merrily the whole way. I
_think_ they were all found (I had to be somewhere and didn't get to
participate in the search).
Ahem A Rivet's Shot
2021-09-18 14:52:10 UTC
Permalink
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah, I get it: you might be depending on an old undocumented
compiler bug, or you might fall foul of a new one, so why risk the new shiny
compiler that might get 10% better performance and might break the
application?
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Peter Flass
2021-09-18 18:02:06 UTC
Permalink
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
I ran into this trying to recompile some code that was written for PL/I(F)
with the Enterprise compiler. Several constructs were rejected. These were
gray areas where the documentation didn't definitively allow or disallow
the code. After a while it wasn't worth it to me to make a lot of
changes to fit the new compiler.
--
Pete
Dan Espen
2021-09-18 21:46:11 UTC
Permalink
Post by Peter Flass
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
I ran into this trying to recompile some code that was written for PL/I(F)
with the Enterprise compiler. Several constructs were rejected. This was in
gray areas where the documentation didn’t definitively allow or not allow
the code. After a while it became not worth it to me to make a lot of
changes to fit the new compiler.
Similar here: large amounts of PL/I, and a bit of it broke with
Enterprise PL/I. Strange how a new compiler can suddenly make
uninitialized variables start causing problems.

We had more than that, though, including some new compiler bugs.
--
Dan Espen
Dan Cross
2021-09-22 19:00:53 UTC
Permalink
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.

It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.

- Dan C.
J. Clarke
2021-09-22 21:54:39 UTC
Permalink
On Wed, 22 Sep 2021 19:00:53 -0000 (UTC),
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
Nothing "presumed" about it.
Dan Espen
2021-09-23 01:15:07 UTC
Permalink
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
--
Dan Espen
J. Clarke
2021-09-23 02:34:41 UTC
Permalink
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the new shiny
compiler that might get 10% better performance and might break the
application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
I wish our management understood that. I spend half my time
recovering from "updates".
Charlie Gibbs
2021-09-23 02:51:21 UTC
Permalink
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/

Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).

Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
--
/~\ Charlie Gibbs | Life is perverse.
\ / <***@kltpzyxm.invalid> | It can be beautiful -
X I'm really at ac.dekanfrus | but it won't.
/ \ if you read it the right way. | -- Lily Tomlin
J. Clarke
2021-09-23 03:28:19 UTC
Permalink
On Thu, 23 Sep 2021 02:51:21 GMT, Charlie Gibbs
Post by Charlie Gibbs
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/
Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
Push updates should be a criminal offense.
Scott Lurndal
2021-09-23 14:00:08 UTC
Permalink
Post by J. Clarke
On Thu, 23 Sep 2021 02:51:21 GMT, Charlie Gibbs
Post by Charlie Gibbs
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
Push updates should be a criminal offense.
Running critical infrastructure on windows should be a criminal offense.
Peter Flass
2021-09-23 14:32:04 UTC
Permalink
Post by Charlie Gibbs
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/
Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
I hate to say that’s what they get for using windows, but…
--
Pete
Dan Espen
2021-09-23 14:52:30 UTC
Permalink
Post by Peter Flass
Post by Charlie Gibbs
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/
Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
I hate to say that’s what they get for using windows, but…
All these stories about companies paying ransoms.
Seldom do they place the blame directly on Windows.

To be fair, poor backup and recovery probably plays a role too.
--
Dan Espen
Peter Flass
2021-09-23 17:01:37 UTC
Permalink
Post by Dan Espen
Post by Peter Flass
Post by Charlie Gibbs
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/
Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
I hate to say that’s what they get for using windows, but…
All these stories about companies paying ransoms.
Seldom do they place the blame directly on Windows.
Too be fair, poor backup and recovery probably plays a role too.
I was going to say that windows is just a bigger target than Linux, but
Linux is used extensively in servers and mission-critical situations, and
you seldom hear about a successful attack targeting Linux. You’re right
about poor backup software and procedures.

Someone recently posted here about ransomware that just lay low and
corrupted backups for a while before it struck, but don’t good systems
checksum the backups and verify a good one? Duplicity does that, and also
does a test restore periodically. I have had occasion to restore some files
a few times, and I’m grateful to have it, although I also do some of my own
backups.

I did prefer the system I had previously, whose name I have forgotten, that
had a better UI, but they dropped support for individual users in favor of
corporate licenses.
--
Pete
Scott Lurndal
2021-09-23 17:05:56 UTC
Permalink
Post by Peter Flass
Post by Dan Espen
Too be fair, poor backup and recovery probably plays a role too.
I was going to say that windows is just a bigger target than Linux, but
Linux is used extensively in servers and mission-critical situations, and
you seldom hear about a successful attack targeting Linux. You’re right
about poor backup software and procedures.
Fundamentally, it devolves to Microsoft's choice to use HTML
in email and to allow executable content in mail, to forgo
any form of user security, et cetera.

Simple text is far safer, and forcing someone to manually cut & paste
URLs from a text mail (where the URL is unobfuscatable) to
a sandboxed browser would have been a far more secure paradigm.
Dan Espen
2021-09-23 17:12:17 UTC
Permalink
Post by Peter Flass
Post by Dan Espen
Post by Peter Flass
Post by Charlie Gibbs
Post by Dan Espen
Post by Dan Cross
Post by J. Clarke
On Sat, 18 Sep 2021 00:49:00 -0600
Post by Grant Taylor
Post by Ahem A Rivet's Shot
I suppose the real question is why not recompile them to take
advantage of the newer hardware. I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
Why recompile something just for the sake of recompiling it?
If it's working just fine and is exhibiting no symptoms, why mess with it?
Yeah I get it, you might be depending on an old undocumented
compiler bug or you might fall foul of a new one so why risk the
new shiny compiler that might get 10% better performance and might
break the application.
By that logic, one should never upgrade anything if it can
be avoided. An operating system upgrade in particular would
be terribly fraught.
s/would be/is/
Especially if you have no mechanism for parallel testing
prior to the cutover.
Post by Dan Espen
Post by Dan Cross
It strikes me how much process we've built predicated on the
presumed difficulty of testing and qualifying software for
production use.
You only update software when the benefit justifies the cost.
Sadly, people now update software whenever the vendor tells them to
(or does it behind their back).
Last year I heard that a number of 911 sites went down (i.e. no
dial tone) for at least half an hour thanks to a buggy Windows
update that was pushed out to them.
I hate to say that’s what they get for using windows, but…
All these stories about companies paying ransoms.
Seldom do they place the blame directly on Windows.
Too be fair, poor backup and recovery probably plays a role too.
I was going to say that windows is just a bigger target than Linux, but
Linux is used extensively in servers and mission-critical situations, and
you seldom hear about a successful attack targeting Linux. You’re right
about poor backup software and procedures.
Someone recently posted here about ransomware that just lay low and
corrupted backups for a while before it struck, but don’t good systems
checksum the backups and verify a good one? Duplicity does that, and also
does a test restore periodically. I have had occasion to restore some files
a few times, and I’m grateful to have it, although I also do some of my own
backups.
I'm not and never have been a professional system admin.
It seems to me the system doing backups should only be connected to the
disk farm. That would make corrupting backups an unlikely event.
Post by Peter Flass
I did prefer the system I had previously, whose name I have forgotten, that
had a better UI, but they dropped support for individual users in favor of
corporate licenses.
Backup systems? For my home system, cron-driven rsync with periodic
changes of the backup USB sticks. Couldn't be much simpler.
Rsync just creates another copy; getting at the backup is just a matter
of copying.
--
Dan Espen
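On the "verify a good one" point a few posts up, here is a minimal sketch in C of the kind of check a backup job could run after the copy: compare each file byte for byte against its backup. The file names are made up for the example; a real setup would more likely checksum each side, but the idea is the same.

/* Compare a file against its backup copy byte by byte.
 * Returns 1 if identical, 0 if they differ or can't be read.
 */
#include <stdio.h>

static int files_identical(const char *a, const char *b)
{
    FILE *fa = fopen(a, "rb");
    FILE *fb = fopen(b, "rb");
    int ca, cb, same = 1;

    if (!fa || !fb)
        same = 0;
    else {
        do {
            ca = getc(fa);
            cb = getc(fb);
            if (ca != cb) { same = 0; break; }
        } while (ca != EOF);
    }
    if (fa) fclose(fa);
    if (fb) fclose(fb);
    return same;
}

int main(void)
{
    /* Hypothetical paths: an original file and its rsync'ed copy. */
    const char *orig   = "/home/dan/notes.txt";
    const char *backup = "/mnt/usbstick/notes.txt";

    if (files_identical(orig, backup))
        printf("backup verified: %s\n", backup);
    else
        printf("MISMATCH (or unreadable): %s\n", backup);
    return 0;
}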
Thomas Koenig
2021-09-18 09:23:36 UTC
Permalink
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
Recompile?

You mean re-assemble?
Post by Ahem A Rivet's Shot
I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
AFAIK, people still use commercial software like Microsoft Windows.
One can presume that Microsoft has the source, but most users
certainly don't (and for the user, this amounts to the same thing).
Peter Flass
2021-09-18 18:02:05 UTC
Permalink
Post by Thomas Koenig
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
Recompile?
You mean re-assemble?
Post by Ahem A Rivet's Shot
I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
There was a lot of this during conversions from 1401 to S/360. People
tended to patch the 1401 object decks rather than change the source and
recompile. Even if the source hadn't been lost, it likely didn't reflect the
running program.
Post by Thomas Koenig
AFAIK, people still use commercial software like Microsoft Windows.
One can presume that Microsoft has the source, but most users
certainly don't (and for the user, this amounts to the same thing).
--
Pete
Dan Espen
2021-09-18 21:50:46 UTC
Permalink
Post by Peter Flass
Post by Thomas Koenig
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
Recompile?
You mean re-assemble?
Post by Ahem A Rivet's Shot
I know during Y2K work that a lot of
instances of lost source code came to light, are people still running
binaries for which there is no source ?
There was a lot of this during conversions from 1401 to S/360. People
tended to patch the 1401 object decks rather than change the source and
recompile. If the source hadn’t been lost, it likely didn’t reflect the
running program.
During my long career I ran into very few instances of missing source.
Only one comes to mind.

As I remember, operations would not accept an object deck with patches.
At the same time as they accepted the new object deck, they
secured the source code and listing.
--
Dan Espen
John Levine
2021-09-18 18:20:22 UTC
Permalink
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
A lot of 360 software was written in assembler. I gather a fair amount still is.
For some they still have the source, some they don't, but even if they do, it's assembler.

The newer hardware has bigger addresses and some more instructions but they don't run any faster.
If you look at the zSeries Principles of Operation you can see the many hacks they invented to
let old 24-bit address 360 code work with more modern 31- and 64-bit code.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
J. Clarke
2021-09-18 20:13:47 UTC
Permalink
On Sat, 18 Sep 2021 18:20:22 -0000 (UTC), John Levine
Post by John Levine
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
A lot of 360 software was written in assembler. I gather a fair amount still is.
For some they still have the source, some they don't, but even if they do, it's assembler.
The newer hardware has bigger addresses and some more instructions but they don't run any faster.
If you look at the zSeries principles of operation you can see the many hacks they invented to
let old 24 bit addresss 360 code work with more modern 31 and 64 bit code.
Something that ran adequately on a machine with a 10 MHz clock will
generally run so much more than adequately on a machine with a 5 GHz
clock that there's not much incentive to optimize anyway.
Thomas Koenig
2021-09-18 22:30:22 UTC
Permalink
Post by J. Clarke
On Sat, 18 Sep 2021 18:20:22 -0000 (UTC), John Levine
Post by John Levine
Post by Ahem A Rivet's Shot
On Sat, 18 Sep 2021 04:07:50 -0000 (UTC)
Post by John Levine
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they are still useful? Is this a trick question?
I suppose the real question is why not recompile them to take
advantage of the newer hardware.
A lot of 360 software was written in assembler. I gather a fair amount still is.
For some they still have the source, some they don't, but even if they do, it's assembler.
The newer hardware has bigger addresses and some more instructions but they don't run any faster.
If you look at the zSeries principles of operation you can see the many hacks they invented to
let old 24 bit addresss 360 code work with more modern 31 and 64 bit code.
Something that ran adequately on a machine with a 10 MHz clock will
generally run so much more than adequately on a machine with a 5 GHz
clock that there's not much incentive to optimize anyway.
There are a couple of things that could go wrong, though, especially
if the problem sizes have grown, as they tend to do.

Tradeoffs between disk speed and memory made in the 1980s may not
work as well when the relative performances of CPU and discs have
diverged as much as they did, and there is a factor of 10^n more
data to process, and all of a sudden you find there is this
n^2 algorithm hidden somewhere...
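To put numbers on the hidden n^2 problem: a small C sketch, with the record counts chosen purely for illustration (the only thing taken from the post above is the idea of data growing by a few powers of ten).

/* How a 1980s-sized job grows when the data grows a thousand-fold:
 * a linear pass scales by 1e3, an n^2 step by 1e6.
 * Record counts are purely illustrative.  Build with: cc growth.c -lm
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double n_old = 1.0e5;          /* records "then" (illustrative) */
    const double n_new = 1.0e8;          /* records "now": 1000x more     */
    const double growth = n_new / n_old;

    printf("linear work grows by:  %.0fx\n", growth);
    printf("n log n work grows by: %.0fx\n",
           growth * log(n_new) / log(n_old));
    printf("n^2 work grows by:     %.0fx\n", growth * growth);
    return 0;
}

A thousand-fold growth in data is a thousand-fold growth in a linear pass, about 1,600-fold for n log n, and a million-fold for the hidden n^2 step.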
John Levine
2021-09-19 01:55:22 UTC
Permalink
Post by Thomas Koenig
You can still run programs compiled on a 360 on the latest “z” box.
There are a couple of things that could go wrong, though, especially
if the problem sizes have grown, as they tend to do.
Tradeoffs between disk speed and memory made in the 1980s may not
work as well when the relative performances of CPU and discs have
diverged as much as they did, and there is a factor of 10^n more
data to process, and all of a sudden you find there is this
n^2 algorithm hidden somewhere...
Nobody is claiming we still run *all* of the code written in the 1960s.

I gather there is some code where for financial reasons it has to produce
results the same as what it has produced for the past forty years, even
though the programmer who wrote it has retired or died, even though
the results may depend on funky details of the 360's ill-designed floating
point, or of shift-and-round-decimal instructions where for some reason
it uses a rounding digit of 6 rather than the normal 5.

It can be worth a lot to keep running the actual code rather than trying
to reverse engineer it and hope you got all the warts right for every case.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
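For anyone who hasn't met shift-and-round-decimal (SRP): as I read the Principles of Operation, on a right shift it adds the rounding digit to the first digit shifted off and propagates the carry into what remains, so a rounding digit of 6 rounds up on a dropped 4 where the usual 5 would not. A small C sketch of just that rounding rule, done on a plain non-negative integer rather than packed decimal (the 6 is only the example digit from the post above):

/* The S/360 SRP rounding rule for a right shift, on a plain integer:
 * add the rounding digit to the first digit shifted off and carry
 * into the digits that remain.
 */
#include <stdio.h>

long srp_right(long value, int shift_digits, int rounding_digit)
{
    long divisor = 1;
    for (int i = 0; i < shift_digits; i++)
        divisor *= 10;

    long kept    = value / divisor;
    long dropped = (value % divisor) / (divisor / 10); /* leftmost dropped digit */

    if (dropped + rounding_digit >= 10)   /* carry propagates into the result */
        kept += 1;
    return kept;
}

int main(void)
{
    /* 1234 shifted right one digit: the dropped digit is 4. */
    printf("rounding digit 5: %ld\n", srp_right(1234, 1, 5)); /* 123 */
    printf("rounding digit 6: %ld\n", srp_right(1234, 1, 6)); /* 124 */
    return 0;
}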
J. Clarke
2021-09-19 02:08:38 UTC
Permalink
On Sun, 19 Sep 2021 01:55:22 -0000 (UTC), John Levine
Post by John Levine
Post by Thomas Koenig
Post by Peter Flass
You can still run programs compiled on a 360 on the latest “z” box.
There are a couple of things that could go wrong, though, especially
if the problem sizes have grown, as they tend to do.
Tradeoffs between disk speed and memory made in the 1980s may not
work as well when the relative performances of CPU and discs have
diverged as much as they did, and there is a factor of 10^n more
data to process, and all of a sudden you find there is this
n^2 algorithm hidden somewhere...
Nobody is claiming we still run *all* of the code written in the 1960s.
I gather there is some code where for financial reasons it has to produce
results the same as what it has produced for the past forty years, even
though the programmer who wrote it has retired or died, even though
the results may depend on funky details of the 360's ill-designed floating
point, or of shift-and-round-decimal instructions where for some reason
it uses a rounding digit of 6 rather than the normal 5.
It can be worth a lot to keep running the actual code rather than trying
to reverse engineer it and hope you got all the warts right for every case.
That's something that I live. If there's a mismatch we don't let it
slide; we learn the reason why. When you have assets under management
that look like the National Debt, a little tiny mistake can be a huge
lawsuit.
Bob Eager
2021-09-19 06:22:00 UTC
Permalink
Post by John Levine
I gather there is some code where for financial reasons it has to
produce results the same as what it has produced for the past forty
years, even though the programmer who wrote it has retired or died, even
though the results may depend on funky details of the 360's ill-designed
floating point, or of shift-and-round-decimal instructions where for
some reason it uses a rounding digit of 6 rather than the normal 5.
Do you have a reference to anything on that rounding decision? It's
actually relevant to something I'm working on...
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
John Levine
2021-09-20 03:24:32 UTC
Permalink
Post by Bob Eager
Post by John Levine
I gather there is some code where for financial reasons it has to
produce results the same as what it has produced for the past forty
years, even though the programmer who wrote it has retired or died, even
though the results may depend on funky details of the 360's ill-designed
floating point, or of shift-and-round-decimal instructions where for
some reason it uses a rounding digit of 6 rather than the normal 5.
Do you have a reference to anything on that rounding decision? It's
actually relevant to something I'm working on...
Sorry, it's a real instruction but a hypothetical example.

The closest I got to this was back in the 1980s when I was working on a modelling package called Javelin
and I had to write the functions that computed bond prices and yields. The securities association published
a pamphlet with the algorithms and examples, and needless to say my code wasn't done until it got all the
examples exactly correct. Some of the calculations were rather odd, like the ones that decreed that a year
has 360 days.
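For anyone who hasn't met the 360-day year: it is a day-count convention in which every month is treated as 30 days, so accrued-interest fractions come out as tidy ratios. Below is a sketch of one common variant (US 30/360); the adjustment rules differ between conventions, and this is not claimed to be the exact rule from the pamphlet mentioned above.

#include <stdio.h>

/* One common variant of the "360-day year" day-count convention
   (US 30/360): every month is treated as 30 days.  Sketch of the
   general idea only. */
static int days_30_360(int y1, int m1, int d1, int y2, int m2, int d2)
{
    if (d1 == 31)             d1 = 30;
    if (d2 == 31 && d1 == 30) d2 = 30;
    return 360 * (y2 - y1) + 30 * (m2 - m1) + (d2 - d1);
}

int main(void)
{
    /* Jan 15 to Jul 15 counts as exactly half a 360-day year,
       regardless of the actual calendar; the accrual fraction
       is then days / 360. */
    printf("%d\n", days_30_360(2021, 1, 15, 2021, 7, 15));   /* 180 */
    return 0;
}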
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2021-09-18 18:02:04 UTC
Permalink
Post by Peter Flass
Post by John Goerzen
Post by J. Clarke
Post by John Goerzen
https://governor.kansas.gov/wp-content/uploads/2021/04/GintherTaxCouncilKDOT.pdf
hints that it may be an IBM System 370/Model 145.
I'd be very surprised if it actually was. When did IBM end
maintenance on those?
I have no more information, other than that link claims "The Kansas UI System
runs on a Mainframe that was installed in 1977."
Is it possible the hardware was upgraded to something that can emulate the
370/145, and that difference was lost on a non-technical author? Sure.
I have known other places to run mainframes an absurdly long time. I've seen it
in universities and, of course, there's the famous CompuServe PDP-10 story -
though presumably they had more technical know-how to keep their PDP-10s alive.
You are right; it does seem farfetched.
... so I did some more digging, and found
https://ldh.la.gov/assets/medicaid/mmis/docs/IVVRProcurementLibrary/Section3RelevantCorporateExperienceCorporateFinancialCondition.doc
which claims that the "legacy UI system applications run on the Kansas
Department of Administration's IBM OS/390 mainframe."
I know little of IBM's mainframe lineup, but
https://en.wikipedia.org/wiki/IBM_System/390 claims that the System/390 has some
level of compatibility with the S/370.
- John
You can still run programs compiled on a 360 on the latest “z” box.
Why?
Because they still work, and do what you need them to do. If it works,
leave it alone.
--
Pete
Scott Lurndal
2021-09-07 15:08:04 UTC
Permalink
Post by John Goerzen
Post by Jason Evans
First of all, what is "real work"? Let's say that you're a Linux/Unix/BSD
system administrator who spends 90% of his day on the command line. What is
The DEC PDP-10 was introduced in 1966 and was famously used by CompuServe up
until at least 2007, 41 years later.
Here's an article from 2008 about how Seattle still uses DEC VAXes (released
https://www.seattletimes.com/seattle-news/education/dinosaur-computer-stalls-seattle-schools-plans/
Here's an article from just last year about how Kansas is still using a
https://www.kctv5.com/coronavirus/kansas-department-of-labor-mainframe-is-from-1977/article_40459370-7ac4-11ea-b4db-df529463a7d4.html
Burroughs medium systems, introduced in 1965, were still running the city of
Santa Ana until 2010 (that system was donated to the Living Computer Museum,
who ran it for a couple of years thereafter).

I've a Burroughs/Unisys T27 block-mode terminal hooked up to a medium systems simulator that
still runs today.