Discussion:
IBM System/3 FORTRAN for engineering/science work?
undefined Hancock-4
2021-06-16 18:59:12 UTC
Permalink
Even the low-end models of IBM's System/360 proved too expensive for small business, so in 1969 IBM introduced the budget priced System/3. Notable was the tiny 96 column punched card.

According to the manual (on Bitsavers), the IBM System/3 did support a Fortran compiler. However, I think the hardware did not have floating point and was only oriented toward business processing. My _guess_ is that the System/3 would run Fortran programs rather slowly and Fortran or sci/eng work was rarely done.

Would anyone have any experience or know of System/3 sites that used Fortran? If so, how did it work out for them? The impression I got was the vast majority of S/3 sites used RPG II, which was developed for it.

I'm pretty sure the System/3 supported BASIC. While BASIC wasn't as good as Fortran, it could handle some number crunching work. That was certainly adequate for some users.

My guess is that extremely few customers bought a System/3 to do sci/eng work--there were too many other better choices available at the time. But it's certainly possible that some sites, while doing primarily business work, might have an eng/sci application here and there and may have run them, albeit slowly. Heck, they have the machine on site already, so use it.

IBM ads 1970-71 (Hard to believe this was 50 years ago):
https://archive.org/details/Nations-Business-1970-01/page/n17/mode/2up

https://archive.org/details/Nations-Business-1971-01/page/n29/mode/2up

https://archive.org/details/Nations-Business-1971-02/page/n5/mode/2up

https://archive.org/details/Nations-Business-1971-03/page/n13/mode/2up


(I don't know about the later S/34, S/36, and S/38 running Fortran or sci/eng applications. As more powerful machines, they may have had more capability, so it may not have been an issue.)
J. Clarke
2021-06-16 20:08:16 UTC
Permalink
On Wed, 16 Jun 2021 11:59:12 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Even the low-end models of IBM's System/360 proved too expensive for small business, so in 1969 IBM introduced the budget priced System/3. Notable was the tiny 96 column punched card.
According to the manual (on Bitsavers), the IBM System/3 did support a Fortran compiler. However, I think the hardware did not have floating point and was only oriented toward business processing. My _guess_ is that the System/3 would run Fortran programs rather slowly and Fortran or sci/eng work was rarely done.
Would anyone have any experience or know of System/3 sites that used Fortran? If so, how did it work out for them? The impression I got was the vast majority of S/3 sites used RPG II, which was developed for it.
I'm pretty sure the System/3 supported BASIC. While BASIC wasn't as good as Fortran, it could handle some number crunching work. That was certainly adequate for some users.
My guess is that extremely few customers bought a System/3 to do sci/eng work--there were too many other better choices available at the time. But it's certainly possible that some sites, while doing primarily business work, might have an eng/sci application here and there and may have run them, albeit slowly. Heck, they have the machine on site already, so use it.
https://archive.org/details/Nations-Business-1970-01/page/n17/mode/2up
https://archive.org/details/Nations-Business-1971-01/page/n29/mode/2up
https://archive.org/details/Nations-Business-1971-02/page/n5/mode/2up
https://archive.org/details/Nations-Business-1971-03/page/n13/mode/2up
(I don't know about the later S/34, S/36, and S/38 running Fortran or sci/eng applications. As more powerful machines, they may have had more capability, so it may not have been an issue.)
I don't know if it was true for System/3, but at least some AS/400
sites just used packaged applications. I have a working AS/400
downstairs (or at least it was working 20 years ago--I haven't tried
to run it in a long time--something about lugging the terminal down a
flight of narrow stairs) and to my annoyance it did not have RPG or
BASIC or any other programming language installed.
undefined Hancock-4
2021-06-26 19:40:22 UTC
Permalink
Post by J. Clarke
On Wed, 16 Jun 2021 11:59:12 -0700 (PDT), undefined Hancock-4
Even the low-end models of IBM's System/360 proved too expensive for small business, so in 1969 IBM introduced the budget priced System/3. Notable was the tiny 96 column punched card.
According to the manual (on Bitsavers), the IBM System/3 did support a Fortran compiler. However, I think the hardware did not have floating point and was only oriented toward business processing. My _guess_ is that the System/3 would run Fortran programs rather slowly and Fortran or sci/eng work was rarely done.
Would anyone have any experience or know of System/3 sites that used Fortran? If so, how did it work out for them? The impression I got was the vast majority of S/3 sites used RPG II, which was developed for it.
I'm pretty sure the System/3 supported BASIC. While BASIC wasn't as good as Fortran, it could handle some number crunching work. That was certainly adequate for some users.
My guess is that extremely few customers bought a System/3 to do sci/eng work--there were too many other better choices available at the time. But it's certainly possible that some sites, while doing primarily business work, might have an eng/sci application here and there and may have run them, albeit slowly. Heck, they have the machine on site already, so use it.
https://archive.org/details/Nations-Business-1970-01/page/n17/mode/2up
https://archive.org/details/Nations-Business-1971-01/page/n29/mode/2up
https://archive.org/details/Nations-Business-1971-02/page/n5/mode/2up
https://archive.org/details/Nations-Business-1971-03/page/n13/mode/2up
(I don't know about the later S/34, S/36, and S/38 running Fortran or sci/eng applications. As more powerful machines, they may have had more capability, so it may not have been an issue.)
I don't know if it was true for System/3, but at least some AS/400
sites just used packaged applications. I have a working AS/400
downstairs (or at least it was working 20 years ago--I haven't tried
to run it in a long time--something about lugging the terminal down a
flight of narrow stairs) and to my annoyance it did not have RPG or
BASIC or any other programming language installed.
While the AS/400 evolved from the S/3 midrange product line, it was a very different machine. There were other machines in between like the S/36 series.

Note, unlike the original S/3 and the S/360 et al, which had a defined architecture and instruction set (to this day), the AS/400 used something vague called Licensed Internal Code. One model could vary from the next.

The instruction set of the original System/3 is available on bitsavers. Not sure how many people bothered to program that machine in assembler since it was intended for smaller shops and easy to use. My impression was that most S/3 sites used RPG II or even canned routines. But the IBM literature said Fortran was available. How much it was used or how well it ran I don't know. Sometimes IBM would offer something that in reality didn't run very well on a given machine and got little use.
Grant Taylor
2021-06-26 21:14:54 UTC
Permalink
Post by undefined Hancock-4
Note, unlike the original S/3 and the S/360 et al, which had a
defined architecture and instruction set (to this day), the AS/400
used something vague called Licensed Internal Code. One model could
vary from the next.
I'm quite sure that the same concept, if not the same thing, was used on
many different IBM systems. I've heard tell of some S/360s implementing
instructions via microcode, a.k.a. Licensed Internal Code ~> LIC, that
other S/360s had in hardware.

I know that my P/390-E requires LIC to run. I've heard tell that
ES/9000s (a line of mainframes) required LIC to run.

I hear discussion of LIC on modern z/Series systems.

The LIC functions as a line / family specific abstraction layer that
allows the underlying hardware to fulfil the API (if you will allow me
the analogy) that the LIC provided to OS & other software running on the
system.
--
Grant. . . .
unix || die
John Levine
2021-06-27 01:27:04 UTC
Permalink
Post by Grant Taylor
Post by undefined Hancock-4
Note, unlike the original S/3 and the S/360 et al, which had a
defined architecture and instruction set (to this day), the AS/400
used something vague called Licensed Internal Code. One model could
vary from the next.
I'm quite sure that the same concept, if not the same thing, was used on
many different IBM systems. I've heard tell of some S/360s implementing
instructions via microcode, a.k.a. Licensed Internal Code ~> LIC, that
other S/360s had in hardware.
Sort of. The s/3, s/34, and s/36 were small commercial machines with conventional architectures.

The s/38 had a virtual instruction set which was translated to machine code the first time a program ran.
The as/400 and later i series were upward compatible with that virtual instruction set, despite having
very different underlying hardware.

The various models of s/360 all had the same instruction set visible to the programmer (give or take
the decimal and floating point features which were optional on small models), again with very different
hardware. The 360/30 was a heavily microcoded 8 bit machine, the /65 was a microcoded 32 bit machine,
the /75 was hard wired. On the microcoded models there were options to emulate 1400 and 709x machines.
Other than the emulators I don't think there was any licensed microcode.

In later years they did have licensed microcode accelerators and a variety of licensing games like
a Linux application processor which was a regular processor minus an instruction or two but didn't
count as a CPU for software licenses.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2021-06-29 00:28:45 UTC
Permalink
Post by Grant Taylor
Post by undefined Hancock-4
Note, unlike the original S/3 and the S/360 et al, which had a
defined architecture and instruction set (to this day), the AS/400
used something vague called Licensed Internal Code. One model could
vary from the next.
I'm quite sure that the same concept, if not the same thing, was used on
many different IBM systems. I've heard tell of some S/360s implementing
instructions via microcode, a.k.a. Licensed Internal Code ~> LIC, that
other S/360s had in hardware.
I know that my P/390-E requires LIC to run. I've heard tell that
ES/9000s (a line of mainframes) required LIC to run.
I hear discussion of LIC on modern z/Series systems.
The LIC functions as a line / family specific abstraction layer that
allows the underlying hardware to fulfil the API (if you will allow me
the analogy) that the LIC provided to OS & other software running on the
system.
But, if I remember correctly, the “architecture specification” for System i
is at about the level of P-code, and is completely interpreted on every model.
--
Pete
John Levine
2021-06-29 22:47:24 UTC
Permalink
Post by Grant Taylor
The LIC functions as a line / family specific abstraction layer that
allows the underlying hardware to fulfil the API (if you will allow me
the analogy) that the LIC provided to OS & other software running on the
system.
But, if I remember correctly, the “architecture specification” for System i
is at about the level of P-code, and is completely interpreted on every model.
It's not interpreted, it's compiled to machine code the first time a program
is run, but they go to great effort to make that invisible to the programmer.
I gather you can debug at the architecture instruction level.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
undefined Hancock-4
2021-06-29 19:55:25 UTC
Permalink
Post by Grant Taylor
I'm quite sure that the same concept, if not the same thing, was used on
many different IBM systems. I've heard tell of some S/360s implementing
instructions via microcode, a.k.a. Licensed Internal Code ~> LIC, that
other S/360s had in hardware.
From what I saw on the AS/400, I don't think "LIC" was comparable to microcode.

I think microcode was at a very low level. As far as I know, an application programmer never touched it,
if it was even accessible. Application programmers stuck with assembler. I don't think even specialty
developers touched it, since the microcode differed from model to model, which would create incompatibilities.

In contrast, I remember fiddling with settings on the AS/400 and getting to see the LIC that was generated
from my compilation. Accordingly, while LIC was still low level, I think it was at a higher level than S/360
microcode and not comparable.
Peter Flass
2021-06-29 00:28:43 UTC
Permalink
Post by undefined Hancock-4
Post by J. Clarke
On Wed, 16 Jun 2021 11:59:12 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Even the low-end models of IBM's System/360 proved too expensive for
small business, so in 1969 IBM introduced the budget priced System/3.
Notable was the tiny 96 column punched card.
According to the manual (on Bitsavers), the IBM System/3 did support a
Fortran compiler. However, I think the hardware did not have floating
point and was only oriented toward business processing. My _guess_ is
that the System/3 would run Fortran programs rather slowly and Fortran
or sci/eng work was rarely done.
Would anyone have any experience or know of System/3 sites that used
Fortran? If so, how did it work out for them? The impression I got was
the vast majority of S/3 sites used RPG II, which was developed for it.
I'm pretty sure the System/3 supported BASIC. While BASIC wasn't as
good as Fortran, it could handle some number crunching work. That was
certainly adequate for some users.
My guess is that extremely few customers bought a System/3 to do
sci/eng work--there were too many other better choices available at the
time. But it's certainly possible that some sites, while doing
primarily business work, might have an eng/sci application here and
there and may have run them, albeit slowly. Heck, they have the machine
on site already, so use it.
https://archive.org/details/Nations-Business-1970-01/page/n17/mode/2up
https://archive.org/details/Nations-Business-1971-01/page/n29/mode/2up
https://archive.org/details/Nations-Business-1971-02/page/n5/mode/2up
https://archive.org/details/Nations-Business-1971-03/page/n13/mode/2up
(I don't know about the later S/34, S/36, and S/38 running Fortran or
sci/eng applications. As more powerful machines, they may have had more
capability, so it may not have been an issue.)
I don't know if it was true for System/3, but at least some AS/400
sites just used packaged applications. I have a working AS/400
downstairs (or at least it was working 20 years ago--I haven't tried
to run it in a long time--something about lugging the terminal down a
flight of narrow stairs) and to my annoyance it did not have RPG or
BASIC or any other programming language installed.
While the AS/400 evolved from the S/3 midrange product line, it was a
very different machine. There were other machines in between like the S/36 series.
Note, unlike the original S/3 and the S/360 et al, which had a defined
architecture and instruction set (to this day), the AS/400 used something
vague called Licensed Internal Code. One model could vary from the next.
The instruction set of the original System/3 is available on bitsavers.
Not sure how many people bothered to program that machine in assembler
since it was intended for smaller shops and easy to use. My impression
was that most S/3 sites used RPG II or even canned routines. But the IBM
literature said Fortran was available. How much it was used or how well
it ran I don't know. Sometimes IBM would offer something that in reality
didn't run very well on a given machine and got little use.
A lot of times it was to meet a customer (government) specification that
required, e.g., FORTRAN, even though the customer probably never used it.
--
Pete
undefined Hancock-4
2021-06-29 20:04:18 UTC
Permalink
Post by undefined Hancock-4
The instruction set of the original System/3 is available on bitsavers.
Not sure how many people bothered to program that machine in assembler
since it was intended for smaller shops and easy to use. My impression
was that most S/3 sites used RPG II or even canned routines. But the IBM
literature said Fortran was available. How much it was used or how well
it ran I don't know. Sometimes IBM would offer something that in reality
didn't run very well on a given machine and got little use.
A lot of times to meet a customer (government) specification that
required, e.g. FORTRAN, even though the customer probably never used it.
Yes, govt specs mandated a lot of unnecessary stuff*.

Sometimes a customer, or perhaps a consultant, would get the 'bright idea' of
using a non standard language to do some funky task. For example, Fortran in
an otherwise business environment (or COBOL in an eng/sci site).

I've seen lots of people try to run Fortran on their S/360 only to discover it
didn't have the universal instruction set and they had to find a machine that
did (often that turned out to be our site).

Anyway, people would have these exotic funky programs that usually proved
to be far too abstract and poorly written to be practical and accomplish anything.

For whatever reason, high management would be enamored with these funky
proposals and insist on trying them out, even if their IT staff studied it and
advised against it.

My guess is that the S/3 got Fortran for that reason in addition to govt specs.


* One frustration was COBOL on the IBM mainframe. IBM had a lot of extensions
that while officially non standard, were basically standard since everyone used them
(like COMP-3). But some govt specs required 'standard' ANSI COBOL and no IBM
extensions. That meant programmers had to get up to speed on unfamiliar stuff
and often the code ran slower.

I know of one site that for years stuck with 'standard' COBOL and finally bit the
bullet and used the IBM extensions that everyone else used. Productivity was
substantially improved.
Charlie Gibbs
2021-06-29 21:33:24 UTC
Permalink
Post by undefined Hancock-4
A lot of times to meet a customer (government) specification that
required, e.g. FORTRAN, even though the customer probably never used it.
Yes, govt specs mandated a lot of unnecessary stuff*.
The Univac 9300 had a totally useless COBOL compiler for that reason.

Politics made for strange hardware situations too. At one government
office, someone was taking visitors on a tour and proudly proclaimed,
"...and here is our computer room," and threw the door open on a room
that was totally empty. So they got a computer to fill the room,
where it sat unused until the shop I was working at picked it up
for a song.
Post by undefined Hancock-4
* One frustration was COBOL on the IBM mainframe. IBM had a lot of
extensions that while officially non standard, were basically standard
since everyone used them (like COMP-3). But some govt specs required
'standard' ANSI COBOL and no IBM extensions. That meant programmers
had to get up to speed on unfamiliar stuff and often the code ran
slower.
I know of one site that for years stuck with 'standard' COBOL and
finally bit the bullet and used the IBM extensions that everyone
else used. Productivity was substantially improved.
Mind you, you had to know when to use the extensions. I was once
called in to figure out why a COBOL program was running so slowly.
It did a lot of subscripting, and the genius who wrote it declared
all subscripts as COMP-3. Almost wore out the CVB instruction. :-)
Changing all subscripts to COMP-4 knocked 30% off the execution time.
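
The same trade-off is easy to sketch in PL/I, the thread's other big
language (a minimal invented fragment, not the program in question): a
packed-decimal subscript forces a decimal-to-binary conversion (the CVB
path) on every array reference, while a binary subscript is used directly.

   SUBDEMO: PROC OPTIONS(MAIN);
      DCL TABLE(1000) FIXED DECIMAL(7,2) INIT((1000)1); /* packed data   */
      DCL TOTAL       FIXED DECIMAL(15,2) INIT(0);
      DCL I_PACKED    FIXED DECIMAL(7);   /* packed subscript: converted */
      DCL I_BINARY    FIXED BINARY(31);   /* binary subscript: used as-is*/

      DO I_PACKED = 1 TO 1000;            /* hidden pack/convert work on */
         TOTAL = TOTAL + TABLE(I_PACKED); /* every single reference      */
      END;

      DO I_BINARY = 1 TO 1000;            /* same loop, no conversions   */
         TOTAL = TOTAL + TABLE(I_BINARY);
      END;
      PUT SKIP LIST(TOTAL);
   END SUBDEMO;
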
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
Peter Flass
2021-06-30 18:44:30 UTC
Permalink
Post by Charlie Gibbs
Post by undefined Hancock-4
A lot of times to meet a customer (government) specification that
required, e.g. FORTRAN, even though the customer probably never used it.
Yes, govt specs mandated a lot of unnecessary stuff*.
The Univac 9300 had a totally useless COBOL compiler for that reason.
Politics made for strange hardware situations too. At one government
office, someone was taking visitors on a tour and proudly proclaimed,
"...and here is our computer room," and threw the door open on a room
that was totally empty. So they got a computer to fill the room,
where it sat unused until the shop I was working at picked it up
for a song.
Post by undefined Hancock-4
* One frustration was COBOL on the IBM mainframe. IBM had a lot of
extensions that while officially non standard, were basically standard
since everyone used them (like COMP-3). But some govt specs required
'standard' ANSI COBOL and no IBM extensions. That meant programmers
had to get up to speed on unfamiliar stuff and often the code ran
slower.
I know of one site that for years stuck with 'standard' COBOL and
finally bit the bullet and used the IBM extensions that everyone
else used. Productivity was substantially improved.
Mind you, you had to know when to use the extensions. I was once
called in to figure out why a COBOL program was running so slowly.
It did a lot of subscripting, and the genius who wrote it declared
all subscripts as COMP-3.
At least they didn’t code (or default) them USAGE IS DISPLAY, which I saw
from time to time.

Post by Charlie Gibbs
Almost wore out the CVB instruction. :-)
Changing all subscripts to COMP-4 knocked 30% off the execution time.
--
Pete
undefined Hancock-4
2021-07-01 18:40:13 UTC
Permalink
Post by Charlie Gibbs
Mind you, you had to know when to use the extensions. I was once
called in to figure out why a COBOL program was running so slowly.
It did a lot of subscripting, and the genius who wrote it declared
all subscripts as COMP-3. Almost wore out the CVB instruction. :-)
Changing all subscripts to COMP-4 knocked 30% off the execution time.
OMG, that CVB (and the inverse) were killers. I had exclusive use of the
machine over the weekend and experimented with a program. COMP-3
was terrible as a subscript. (As someone noted, DISPLAY was even worse).

The opposite was true, too. Binary was not good for business processing
since that slow conversion was necessary for all I/O.
Peter Flass
2021-06-30 18:44:29 UTC
Permalink
Post by undefined Hancock-4
Post by undefined Hancock-4
The instruction set of the original System/3 is available on bitsavers.
Not sure how many people bothered to program that machine in assembler
since it was intended for smaller shops and easy to use. My impression
was that most S/3 sites used RPG II or even canned routines. But the IBM
literature said Fortran was available. How much it was used or how well
it ran I don't know. Sometimes IBM would offer something that in reality
didn't run very well on a given machine and got little use.
A lot of times to meet a customer (government) specification that
required, e.g. FORTRAN, even though the customer probably never used it.
Yes, govt specs mandated a lot of unnecessary stuff*.
Sometimes a customer, or perhaps a consultant, would get the 'bright idea' of
using a non standard language to do some funky task. For example, Fortran in
an otherwise business environment (or COBOL in an eng/sci site).
We were a COBOL shop, but there was one funky stat program in FORTRAN to
analyze employee data or something. I think it was at least a tray of
cards. I don’t know who wrote it, or when, or even if it was ever run while
I was there, but it was the Director’s pet.
Post by undefined Hancock-4
I've seen lots of people try to run Fortran on their S/360 only to discover it
didn't have the universal instruction set and they had to find a machine that
did (often that turned out to be our site).
I’m surprised someone didn’t write FP emulation routines and distribute
them thru SHARE or something. I know this was later done for the Intel 386.
Post by undefined Hancock-4
Anyway, people would have these exotic funky programs that usually proved
to be far too abstract and poorly written to be practical and accomplish anything.
For whatever reason, high management would be enamored with these funky
proposals and insist on trying them out, even if their IT staff studied it and
advised against it.
I wrote a bunch of PL/I programs, but never convinced anyone to do any
production stuff in it.
Post by undefined Hancock-4
My guess is that that S/3 got Fortran for that reason in addition to govt specs.
--
Pete
Rich Alderson
2021-06-30 19:56:40 UTC
Permalink
Post by Peter Flass
I wrote a bunch of PL/I programs, but never convinced anyone to do any
production stuff in it.
Prior to moving into systems programming, I was a programmer/analyst in the
Financial Systems group for the University of Chicago Computation Center. The
accounting system was a suite of COBOL programs purchased from a commercial
vendor which ran on SVS (= OS/VS2 v1) on an Amdahl 470.

Our standard was to make mods for localization in COBOL, but to write any
external utilities in PL/I using the optimizing compiler. That's what got me
in the door: I had been writing PL/1 for 10 years already when I got the job.

(NB: When I first started programming, the language was called "PL/1" in the
IBM manuals; by the time I started using it professionally, years before the
UChicago job, they had gone to the "PL/I" version of the name.)
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Bob Eager
2021-06-30 22:15:09 UTC
Permalink
Post by Rich Alderson
(NB: When I first started programming, the language was called "PL/1" in the
IBM manuals; by the time I started using it professionally, years
before the UChicago job, they had gone to the "PL/I" version of the
name.)
One of my colleagues did his Ph.D. in the 1960s. He had also worked for
IBM. Part of his research was the construction of a general purpose macro
language and processor.

As a piss take of IBM, he called it ML/I. (Macro Language 1)

http://www.ml1.org.uk
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Thomas Koenig
2021-07-01 05:12:05 UTC
Permalink
Post by Bob Eager
One of my colleagues did his Ph.D. in the 1960s. He had also worked for
IBM. Part of his research was the construction of a general purpose macro
language and processor.
As a piss take of IBM, he called it ML/I. (Macro Language 1)
http://www.ml1.org.uk
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).

Or are there others? Joke languages like Intercal need not apply :-)
Bob Eager
2021-07-01 06:28:25 UTC
Permalink
Post by Thomas Koenig
Post by Bob Eager
One of my colleagues did his Ph.D. in the 1960s. He had also worked for
IBM. Part of his research was the construction of a general purpose
macro language and processor.
As a piss take of IBM, he called it ML/I. (Macro Language 1)
http://www.ml1.org.uk
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
Or are there others? Joke languages like Intercal need not apply :-)
There really aren't many others. I learned it in 1971, and still use it.
But there is not much call for general purpose macro processors these
days. A few years ago I used it to process some files where others in a
Usenet group had all failed.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Ahem A Rivet's Shot
2021-07-01 12:28:12 UTC
Permalink
On Thu, 1 Jul 2021 05:12:05 -0000 (UTC)
Post by Thomas Koenig
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
XSLT and sendmail.cf are both less readable IMHO - XSLT control
flow is essentially come-from with wildcards.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
John Levine
2021-07-03 01:19:39 UTC
Permalink
Post by Ahem A Rivet's Shot
On Thu, 1 Jul 2021 05:12:05 -0000 (UTC)
Post by Thomas Koenig
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
XSLT and sendmail.cf are both less readable IMHO - XSLT control
flow is essentially come-from with wildcards.
Um, sendmail.cf uses m4, which is why it's opaque.

Back in the day Trac (the macro language, not the bug tracker) was somewhat popular
but it's basically dead now.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Ahem A Rivet's Shot
2021-07-03 09:14:23 UTC
Permalink
On Sat, 3 Jul 2021 01:19:39 -0000 (UTC)
Post by John Levine
Post by Ahem A Rivet's Shot
On Thu, 1 Jul 2021 05:12:05 -0000 (UTC)
Post by Thomas Koenig
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
XSLT and sendmail.cf are both less readable IMHO - XSLT control
flow is essentially come-from with wildcards.
Um, sendmail.cf uses m4, which is why it's opaque.
Er no, there's an m4 generator for sendmail.cf these days to
*simplify* it.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Niklas Karlsson
2021-07-03 10:43:09 UTC
Permalink
Post by Ahem A Rivet's Shot
On Sat, 3 Jul 2021 01:19:39 -0000 (UTC)
Post by John Levine
Post by Ahem A Rivet's Shot
On Thu, 1 Jul 2021 05:12:05 -0000 (UTC)
Post by Thomas Koenig
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
XSLT and sendmail.cf are both less readable IMHO - XSLT control
flow is essentially come-from with wildcards.
Um, sendmail.cf uses m4, which is why it's opaque.
Er no, there's an m4 generator for sendmail.cf these days to
*simplify* it,
Right - sendmail.mc.

Niklas
--
For a time, I wrote data analysis code in C on VMS. I drank a lot of
tequila during that time.
-- Mark 'Kamikaze' Hughes in asr
Peter Flass
2021-07-01 18:25:39 UTC
Permalink
Post by Thomas Koenig
Post by Bob Eager
One of my colleagues did his Ph.D. in the 1960s. He had also worked for
IBM. Part of his research was the construction of a general purpose macro
language and processor.
As a piss take of IBM, he called it ML/I. (Macro Language 1)
http://www.ml1.org.uk
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
Or are there others? Joke languages like Intercal need not apply :-)
IBM’s PL/I preprocessor is fairly general-purpose. The macros are written
in an interpreted subset of PL/I.
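
For flavor, a minimal invented sketch (not lifted from IBM's manuals):
preprocessor statements carry a % prefix, execute at compile time, and
rewrite the source text that the compiler proper then sees.

   %DCL N FIXED;
   %N = 8;                       /* compile-time variable                */
   DCL GRID(N, N) FLOAT;         /* compiler sees: DCL GRID(8, 8) FLOAT; */

   %DCL TRACE CHAR;
   %TRACE = 'YES';
   %IF TRACE = 'YES' %THEN %DO;  /* conditional inclusion of source text */
   PUT SKIP LIST('ENTERING UPDATE');
   %END;
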
--
Pete
undefined Hancock-4
2021-07-01 18:34:00 UTC
Permalink
Post by Peter Flass
IBM’s PL/I preprocessor is fairly general-purpose. The macros are written
in an interpreted subset of PL/I.
How popular was PL/I in its heyday? I knew of a few programmers who really loved
it and pushed it, but it never seemed to catch on that much.

How popular is it today? The people who liked it at my site have retired and it's faded away.

IBM devoted a lot of scarce resources it didn't have to developing it as part of S/360.
Peter Flass
2021-07-01 22:56:46 UTC
Permalink
Post by undefined Hancock-4
Post by Peter Flass
IBM’s PL/I preprocessor is fairly general-purpose. The macros are written
in an interpreted subset of PL/I.
How popular was PL/I in its heyday? I knew of a few programmers who really loved
it and pushed it, but it never seemed to catch on that much.
It came along a little later than the S/360 COBOL compiler, it required
more resources, and some programmers were already familiar with COBOL from
earlier machines, so it was never as popular. I think it did somewhat
better in Europe than in North America. In its heyday it was supported by
quite a few mainframe manufacturers.
.
Post by undefined Hancock-4
How popular is it today? The people who liked it at my site have retired
and it's faded away.
I think it’s mostly “legacy” today. There are at least three compilers for
personal computers. I’d like to see it get more use, it’s a powerful
language, even in comparison with more recent languages.
Post by undefined Hancock-4
IBM devoted a lot of scarce resources it didn't have to developing it as part of S/360.
Yes, one system/one language for all purposes.
--
Pete
Robin Vowels
2021-07-02 03:57:13 UTC
Permalink
Post by Peter Flass
Post by Peter Flass
IBM’s PL/I preprocessor is fairly general-purpose. The macros are written
in an interpreted subset of PL/I.
How popular was PL/I in its heyday? I knew of a few programmers who really loved
it and pushed it, but it never seemed to catch on that much.
It came along a little later than the S/360 COBOL compiler, it required
more resources,
.
I do not have information on the resources required by IBM COBOL,
but PL/I-F ran in as little as 64K. I think that IBM PL/I-DOS
required a 32K machine.
DR PL/I ran in 64K byte PC.
.
Compared to FORTRAN in 1966, PL/I ran rings around it.
PL/I had superior syntax, superior error handling, and superior
debugging facilities.
Yet some FORTRAN users persisted in running programs that
crashed without any indication of where they crashed;
multiple re-runs would be needed to track down the cause of an error.
With PL/I-F the cause of an error and the statement number where it
occurred were printed out. Furthermore, the values of variables
could also be printed.
.
When it came to production runs, PL/I had the ability to continue after
encountering bad data, carrying on to completion, without the need for
a call in the early hours to patch the program.
.
The ability to catch integer overflow (whether decimal or binary),
division by zero, floating-point overflow, subscript errors, and character
string errors is an important part of the language.
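
For anyone who never used the language, a minimal sketch of that
machinery (data and messages invented): ON-units establish the handlers,
and a non-local GO TO resumes the read loop instead of letting the job
abend.

   ROBUST: PROC OPTIONS(MAIN);
      DCL AMOUNT FIXED DECIMAL(9,2);
      DCL EOF    BIT(1) INIT('0'B);

      ON ENDFILE(SYSIN) EOF = '1'B;
      ON CONVERSION BEGIN;              /* bad numeric field in the input */
         PUT SKIP LIST('BAD RECORD SKIPPED');
         GO TO NEXT;                    /* carry on to completion         */
      END;

   NEXT:
      DO WHILE(EOF = '0'B);
         GET LIST(AMOUNT);              /* raises CONVERSION on bad data  */
         IF EOF = '0'B THEN PUT SKIP LIST(AMOUNT);
      END;
   END ROBUST;
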
.
Post by Peter Flass
and some programmers were already familiar with COBOL from
earlier machines, so it was never as popular. I think it did somewhat
better in Europe than in North America. In its heyday it was supported by
quite a few mainframe manufacturers.
.
How popular is it today? The people who liked it at my site have retired
and it's faded away.
I think it’s mostly “legacy” today. There are at least three compilers for
personal computers. I’d like to see it get more use, it’s a powerful
language, even in comparison with more recent languages.
.
...
Charlie Gibbs
2021-07-02 23:28:41 UTC
Permalink
Post by Robin Vowels
I do not have information on the resources required by IBM COBOL,
but PL/I-F ran in as little as 64K. I think that IBM PL/I-DOS
required a 32K machine.
DR PL/I ran in 64K byte PC.
Compared to FORTRAN in 1966, PL/I ran rings around it.
FSVO "ran rings". My only contact with PL/I was at university,
where everything ran under MTS. Admittedly, we were more
concerned with compile time rather than execution time, but
when it came to compilations PL/I was a pig, eating up more
CPU time and memory (and funny money in our student accounts)
than any other language processor (although Assembler G came
close). Fortran G compiled like lightning. Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
Post by Robin Vowels
Post by Peter Flass
How popular is it today? The people who liked it at my site
have retired and it's faded away.
I think it’s mostly “legacy” today. There are at least three
compilers for personal computers. I’d like to see it get more
use, it’s a powerful language, even in comparison with more
recent languages.
Apparently the Insurance Corporation of British Columbia
(set up by the incoming NDP government in the 1970s to take
over all motor vehicle insurance from the private sector)
had all their brand-new software written in PL/I on a
"cost is no object" basis. Of course, being the provincial
government, it was subject to the usual 100% overruns...
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
Robin Vowels
2021-07-03 01:26:14 UTC
Permalink
Post by Robin Vowels
I do not have information on the resources required by IBM COBOL,
but PL/I-F ran in as little as 64K. I think that IBM PL/I-DOS
required a 32K machine.
DR PL/I ran in 64K byte PC.
Compared to FORTRAN in 1966, PL/I ran rings around it.
FSVO "ran rings". My only contact with PL/I was at university,
where everything ran under MTS. Admittedly, we were more
concerned with compile time rather than execution time, but
when it came to compilations PL/I was a pig, eating up more
CPU time and memory (and funny money in our student accounts)
than any other language processor (although Assembler G came
close). Fortran G compiled like lightning.
.
What??!! IBM's FORTRAN G compiled at about the same speed as their PL/I-F
compiler.
FORTRAN H took about twice as long as FORTRAN G, and in addition
required 4 times as much memory as PL/I-F.
.
Link time for PL/I-F took slightly longer than FORTRAN G.
.
WATFOR and PL/C were both high-speed compilers & were
available for S/360 -- and much faster than the corresponding
IBM products.
.
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
Post by Robin Vowels
Post by Peter Flass
How popular is it today? The people who liked it at my site
have retired and it's faded away.
I think it’s mostly “legacy” today. There are at least three
compilers for personal computers. I’d like to see it get more
use, it’s a powerful language, even in comparison with more
recent languages.
Apparently the Insurance Corporation of British Columbia
(set up by the incoming NDP government in the 1970s to take
over all motor vehicle insurance from the private sector)
had all their brand-new software written in PL/I on a
"cost is no object" basis. Of course, being the provincial
government, it was subject to the usual 100% overruns...
Anne & Lynn Wheeler
2021-07-05 23:39:18 UTC
Permalink
Post by Robin Vowels
What??!! IBM's FORTRAN G compiled at about the same speed as their PL/I-F
compiler.
FORTRAN H took about twice as long as FORTRAN G, and in addition
required 4 times as much memory as PL/I-F.
.
Link time for PL/I-F took slightly longer than FORTRAN G.
.
WATFOR and PL/C were both high-speed compilers & were
available for S/360 -- and much faster than the corresponding
IBM products.
I took two semester hr intro to fortran/computers, univ. ran 709
tape->tape with tapes moved between 709 and 1401 that ran as front-end
for unit record (card->tape, tape->printer/punch). Student fortran jobs
(30-60 cards) ran in under a second of elapsed time.

Univ. was sold 360/67 originally for tss/360, but never quite came to
production fruition and so ran as 360/65 with os/360. within year of
taking intro class, I was hired fulltime responsible for the ibm
mainframe.

Initially student jobs ran well over a minute three step fortran G,
compile, link-edit, go/execute. I then installed HASP which cut elapsed
time about in half (nearly all job step overhead and file
open/close). First os/360 sysgen I did was release 9.5. I then started
tearing apart stage2 sysgen and reorganizing the cards to optimize
placement of files and PDS members for arm seek and PDS directory
multi-track search which improved student jobs by nearly another 2/3rds
... 12.9sec elapsed ... approx. 4.3sec/job-step.

It wasn't until installed single step, batch WATFOR monitor that student
jobs ran faster than 709. WATFOR ran approx. 20,000 card(statements) per
min (333/sec) on 360/65. Typically a card tray (2000+ cards, 30-60
student jobs) would be batched in single step ... 4.3sec/single-step +
2000/333 secs ... 10.3sec elapsed for batching 30-60 student jobs
... around .2secs/job. WATFOR would have still been slower than 709
(i.e. 4.5sec/job) if it wasn't for batching multiple jobs per
single-step execution (and w/o my careful sysgen and hasp, it would be
more like 20secs if running a single student job per invocation).

One of the guys up at IBM palo alto science center (that had done much
of the work for the 370/145 APL microcode assist ... got APL throughput
up close to 370/168) ... did a lot of optimization work on Fortran H
that originally was available inside IBM as Fortran Q ... and eventually
released to customers as Fortran HX.
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2021-07-04 01:40:35 UTC
Permalink
Post by Charlie Gibbs
Post by Robin Vowels
I do not have information on the resources required by IBM COBOL,
but PL/I-F ran in as little as 64K. I think that IBM PL/I-DOS
required a 32K machine.
DR PL/I ran in 64K byte PC.
Compared to FORTRAN in 1966, PL/I ran rings around it.
FSVO "ran rings". My only contact with PL/I was at university,
where everything ran under MTS. Admittedly, we were more
concerned with compile time rather than execution time, but
when it came to compilations PL/I was a pig, eating up more
CPU time and memory (and funny money in our student accounts)
than any other language processor (although Assembler G came
close).
I think FORTRAN G was the compiler that did everything via subroutine
calls. PL/I runtimes could be terrible if you used the wrong features.
Worse than COBOL with its DISPLAY/COMP-3 vs. binary. I’ve got to figure out
a way to make CONTROLLED storage more efficient.
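
(For context, a minimal invented sketch of the feature: a CONTROLLED
variable has no storage until it is explicitly allocated, and repeated
ALLOCATEs stack generations, so every use involves run-time bookkeeping
that is hard to make cheap.)

   CTLDEMO: PROC OPTIONS(MAIN);
      DCL BUF CHAR(16) CONTROLLED; /* no storage until ALLOCATE          */
      ALLOCATE BUF;
      BUF = 'OUTER';
      ALLOCATE BUF;                /* second generation stacks on top    */
      BUF = 'INNER';               /* references name the newest one     */
      PUT SKIP LIST(BUF);          /* prints INNER                       */
      FREE BUF;                    /* pop the stack                      */
      PUT SKIP LIST(BUF);          /* prints OUTER                       */
      FREE BUF;
   END CTLDEMO;
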

Post by Charlie Gibbs
Fortran G compiled like lightning. Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
Post by Robin Vowels
Post by Peter Flass
How popular is it today? The people who liked it at my site
have retired and its faded away.
I think it’s mostly “legacy” today. There are at least three
compilers for personal computers. I’d like to see it get more
use, it’s a powerful language, even in comparison with more
recent languages.
Apparently the Insurance Corporation of British Columbia
(set up by the incoming NDP government in the 1970s to take
over all motor vehicle insurance from the private sector)
had all their brand-new software written in PL/I on a
"cost is no object" basis. Of course, being the provincial
government, it was subject to the usual 100% overruns...
--
Pete
Quadibloc
2021-07-08 21:51:49 UTC
Permalink
Post by Charlie Gibbs
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
Although it's unfortunate that the PL/I compiler was not efficient,
you have just given here the best argument for PL/I's existence.

PL/I was a lot easier to learn for programmers used to FORTRAN
than COBOL was, yet it could also do the things that COBOL and
RPG could do, but FORTRAN could not.

So if IBM had only implemented a decent PL/I compiler, that
language could have fulfilled the plans IBM had for it.

John Savard
Peter Flass
2021-07-08 22:06:06 UTC
Permalink
Post by Quadibloc
Post by Charlie Gibbs
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
Although it's unfortunate that the PL/I compiler was not efficient,
you have just given here the best argument for PL/I's existence.
PL/I was a lot easier to learn for programmers used to FORTRAN
than COBOL was, yet it could also do the things that COBOL and
RPG could do, but FORTRAN could not.
So if IBM had only implemented a decent PL/I compiler, that
language could have fulfilled the plans IBM had for it.
They apparently still haven’t, although hardware has now grown powerful
enough that no one notices.

Here I am struggling to get the Iron Spring compiler to generate reasonably
efficient code. I wanted to look at IBM’s compiler to see how they handled
this particular situation. I looked at their generated “assembler” and just
about choked on my morning coffee.
--
Pete
Quadibloc
2021-07-09 00:19:47 UTC
Permalink
Post by Peter Flass
Here I am struggling to get the Iron Spring compiler to generate reasonably
efficient code. I wanted to look at IBM’s compiler to see how they handled
this particular situation. I looked at their generated “assembler” and just
about choked on my morning coffee.
In any case, thank you for providing this valuable tool for Linux and OS/2
users.

John Savard
Robin Vowels
2021-07-10 14:32:59 UTC
Permalink
Post by Quadibloc
Post by Charlie Gibbs
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
.
Post by Quadibloc
Although it's unfortunate that the PL/I compiler was not efficient,
.
Some have claimed this, and have compared run times for various
languages including FORTRAN and PL/I. A. B. Tucker ("Programming
Languages") is one.
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler.
The PL/I programs ran slower than they should have.
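
Concretely, the option goes on the PROCEDURE statement (a minimal
invented sketch): under the default, ORDER, variable values must be kept
current in storage so an ON-unit could inspect them after an interrupt at
any point, which blocks much optimisation; REORDER releases that
guarantee.

   AXPY: PROC(A, B, N) REORDER;  /* permit full optimisation of the loop */
      DCL (A(*), B(*)) FLOAT BINARY;
      DCL (N, I) FIXED BINARY(31);
      DO I = 1 TO N;
         A(I) = A(I) + B(I);     /* under ORDER, A, B, and I would have  */
      END;                       /* to stay valid in storage, statement  */
   END AXPY;                     /* by statement, for possible ON-units  */
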
.
I investigated claims by D. J. Kewley, who compared run times of FORTRAN,
Pascal, and PL/I programs. The PL/I version did not use COMPLEX data types,
whereas the FORTRAN version did. Modifying the PL/I version to use
COMPLEX and REORDER gave a 47% increase in execution speed.
.
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts.
.
Post by Quadibloc
you have just given here the best argument for PL/I's existence.
PL/I was a lot easier to learn for programmers used to FORTRAN
than COBOL was, yet it could also do the things that COBOL and
RPG could do, but FORTRAN could not.
So if IBM had only implemented a decent PL/I compiler, that
language could have fulfilled the plans IBM had for it.
Robin Vowels
2021-07-10 14:41:14 UTC
Permalink
Post by Robin Vowels
Post by Quadibloc
Post by Charlie Gibbs
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
.
Post by Quadibloc
Although it's unfortunate that the PL/I compiler was not efficient,
.
Some have claimed this, and have compared run times for various
languages including FORTRAN and PL/I. A. B. Tucker ("Programming
Languages") is one.
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler.
The PL/I programs ran slower than they should have.
.
I investigated claims by D. J. Kewley, who compared run times of FORTRAN,
Pascal, and PL/I programs. The PL/I version did not use COMPLEX data types,
whereas the FORTRAN version did. Modifying the PL/I version to use
COMPLEX and REORDER gave a 47% increase in execution speed.
.
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts.
.
R. H. Prins criticized IBM's Enterprise PL/I compiler for
optimising poorly. He subsequently retracted that criticism when
he discovered that he had been compiling the program with OPT(0) --
that is, with no optimisation !
Dan Espen
2021-07-10 17:30:24 UTC
Permalink
Post by Robin Vowels
Post by Robin Vowels
Post by Quadibloc
Post by Charlie Gibbs
Forget about COBOL -
to the CS weenies, uttering its name was an even worse profanity
than GOTO. There wasn't even a COBOL compiler on the system.
(If you were desperate, you could submit a compile request for
overnight batch, where the operators would run it under OS/360
emulation.) Needless to say, RPG did not exist. Period.
.
Post by Quadibloc
Although it's unfortunate that the PL/I compiler was not efficient,
.
Some have claimed this, and have compared run times for various
languages including FORTRAN and PL/I. A. B. Tucker ("Programming
Languages") is one.
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler.
The PL/I programs ran slower than they should have.
.
I investigated claims by D. J. Kewley, who compared run times of FORTRAN,
Pascal, and PL/I programs. The PL/I version did not use COMPLEX data types,
whereas the FORTRAN version did. Modifying the PL/I version to use
COMPLEX and REORDER gave a 47% increase in execution speed.
.
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts.
.
R. H. Prins criticized IBM's Enterprise PL/I compiler for
optimising poorly. He subsequently retracted that criticism when
he discovered that he had been compiling the program with OPT(0) --
that is, with no optimisation !
The last place I worked had a lot of PL/I, had very firm customer
commitments for CPU use and actively worked with IBM to keep the PL/I
compiler efficient. I'm not aware of any issues with mainframe PL/I
efficiency being lacking.
--
Dan Espen
Quadibloc
2021-07-10 20:29:36 UTC
Permalink
Post by Dan Espen
The last place I worked had a lot of PL/I, had very firm customer
commitments for CPU use and actively worked with IBM to keep the PL/I
compiler efficient. I'm not aware of any issues with mainframe PL/I
efficiency being lacking.
Although I think the post here was talking about the time it took to
compile, not the efficiency of the generated code.

John Savard
Dan Espen
2021-07-10 21:13:45 UTC
Permalink
Post by Quadibloc
Post by Dan Espen
The last place I worked had a lot of PL/I, had very firm customer
commitments for CPU use and actively worked with IBM to keep the PL/I
compiler efficient. I'm not aware of any issues with mainframe PL/I
efficiency being lacking.
Although I think the post here was talking about the time it took to
compile, not the efficiency of the generated code.
Yeah, I couldn't really tell what was meant.

I remember we made quite a stink about a new compiler version where the
compiler slowed down a lot. At first IBM said it's not important, the
compiler needs more time to generate better code. Soon after, they
fixed the problem.

Other than that, I don't remember the PL/I compiler being any slower to
compile than the other languages we used: HLASM and C.
--
Dan Espen
Robin Vowels
2021-07-11 05:32:58 UTC
Permalink
Post by Quadibloc
Post by Dan Espen
The last place I worked had a lot of PL/I, had very firm customer
commitments for CPU use and actively worked with IBM to keep the PL/I
compiler efficient. I'm not aware of any issues with mainframe PL/I
efficiency being lacking.
.
Post by Quadibloc
Although I think the post here was talking about the time it took to
compile, not the efficiency of the generated code.
.
I do not have to hand comparative listings of FORTRAN and PL/I
compilations and executions. OTOMH they were comparable.
However, PL/I-F ran well in our 128k S/360. (It needed only 64K)
What didn't run was IBM FORTRAN-H, which needed 256K.
.
What also didn't run was FORTRAN-E, which was so full of bugs
that we had to withdraw it, and eventually we had to drop the entire
OS (Release 13) and go back to Release 11. The next OS that
we installed was Release 15 (or was it 16?), by which time
1100 bugs had been corrected !!
.
When we did have more core, we found that IBM's FORTRAN-H was
a very slow compiler, because it was an optimising compiler. That's
understandable. Of the FORTRAN offerings, FORTRAN-G was the one we used
most. Then we installed WATFOR, which also was possible with the
extra core.
Peter Flass
2021-07-11 19:46:29 UTC
Permalink
Post by Robin Vowels
Post by Quadibloc
Post by Dan Espen
The last place I worked had a lot of PL/I, had very firm customer
commitments for CPU use and actively worked with IBM to keep the PL/I
compiler efficient. I'm not aware of any issues with mainframe PL/I
efficiency being lacking.
.
Post by Quadibloc
Although I think the post here was talking about the time it took to
compile, not the efficiency of the generated code.
.
I do not have to hand comparative listings of FORTRAN and PL/I
compilations and executions. OTOMH they were comparable.
However, PL/I-F ran well in our 128k S/360. (It needed only 64K)
What didn't run was IBM FORTRAN-H, which needed 256K.
.
What also didn't run was FORTRAN-E, that was so full of bugs
that we had to withdraw it, and eventually we had to drop the entire
OS (Release 13) and go back to Release 11. The next OS that
we installed was Release 15 (or was it 16?), by which time
1100 bugs had been corrected !!
.
When we did have more core, we found that IBM's FORTRAN-H was
a very slow compiler, because it was an optimising compiler. That's
understandable. FORTRAN-G was the one we used most of the FORTRAN
offerings. Then we installed WATFOR, which also was possible with the
extra core.
May have been “15-16”. IIRC one release was so late they bundled it with
the next release and shipped them together.
--
Pete
Anne & Lynn Wheeler
2021-07-11 23:58:39 UTC
Permalink
Post by Robin Vowels
What also didn't run was FORTRAN-E, that was so full of bugs
that we had to withdraw it, and eventually we had to drop the entire
OS (Release 13) and go back to Release 11. The next OS that
we installed was Release 15 (or was it 16?), by which time
1100 bugs had been corrected !!
os/360 release 13 was 1st (2nd?) with MVT ... we had skipped from
release 11 directly to 14 ... then to combined release 15/16.

boeing huntsville had got a duplex 360/67 for tss/360 to drive 2250
graphic CAD work ... tss/360 never came to production fruition ... so
they tried running os/360 MVT (as two separate 360/65s) ... but as
mentioned here
http://www.garlic.com/~lynn/2011d.html#73
the primary justification for moving all 370s to virtual memory was that
MVT storage management was so bad ... that regions typically had to be
four times larger than actually used (typical 1mbyte 370/165 was
nominally limited to four concurrent regions) ... going to virtual
memory (16mbyte virtual address space) could increase the number of
regions by factor of four with little or no paging.

MVT storage management disaster increased the longer jobs ran ... and
long running 2250 graphic CAD work was nearly impossible. So boeing
huntsville added some virtual memory support to MVT (starting with
release 13) ... didn't do any paging ... the amount of virtual memory
was same as real storage ... but it addressed the enormous MVT storage
fragmentation problem for longer running jobs ... being able to reorg
addressable memory to create contiguous storage locations (aka part of
the same reason motivating moving all 370s to virtual memory).

first os/360 sysgen I did after being hired fulltime at the univ was
release 9.5 (mentioned upthread). Then for release 11, I tore apart
stage2 sysgen deck and reorganized all the cards to optimize placement
of files and pds members for optimized arm seek (and PDS directory
multi-track search) ... and started doing as much of the new system
build as possible in the production job stream (instead of the
stand-alone starter system). Most notable thing I remember about release
15/16 is that when formatting a disk, you could specify the cylinder
location for the vtoc (instead of always cyl0, requiring a seek to
vtoc/cyl0 for any file lookup)
... allowing placement of vtoc in the middle of files on both sides
... cutting avg arm seek distance for high activity vtoc operations.

Then before I graduate, Boeing hires me fulltime into a small group in
the CFO office to help with formation of Boeing Computer Services (move
all dataprocessing into an independent business unit to better monetize the
investment, including offering services to non-Boeing entities). I thought
the Renton datacenter was possibly the largest in the world ... something like
$200M-$300M in 360s (majority fortran engineering/scientific work)
... when I was hired, 360/65s were arriving faster than they could be
installed, boxes constantly being staged in the hallways around the
machine room.

Machine room did have one 360/75 that did some classified work. When
classified work was running there was black rope around the perimeter of
the system complex with guards stationed at corners ... and heavy black
velvet cloth draped over the front console lights and the 1403 printer
front window and rear stacker.
--
virtualization experience starting Jan1968, online at home since Mar1970
undefined Hancock-4
2021-07-13 22:03:52 UTC
Permalink
Post by Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2011d.html#73
the primary justification for moving all 370s to virtual memory was that
MVT storage management was so bad ... that regions typically had to be
four times larger than actually used (typical 1mbyte 370/165 was
nominally limited to four concurrent regions) ... going to virtual
memory (16mbyte virtual address space) could increase the number of
regions by factor of four with little or no paging.
I don't have my S/360 history handy, but wasn't a foundation of these
problems the focus on Future System instead of upgrading System/360?
If I recall correctly, they suddenly realized FS wasn't gonna work and S/360
was obsolete, so they rushed out S/370 with some improvements, but not
a lot. First was the S/3x5 series, then later the improved S/3x8 series,
which I think was a lot better.

An old boss told me virtual storage was not a big deal. It offered a little
improvement, but came with a cost. First, the operating system was a lot
more complex. Second, it was very easy to go too far, creating paging
problems, or even thrashing, where the system would grind to a halt.

It seemed to me that in S/370 days demand for service exceeded the computer
horsepower available to provide it. Online processing was a resource hog in
CPU, memory, and disk alike.

We could handle five terminals on our S360-40 under MTCS. But 500 terminals
on our 158 was another story, basically it was too much. We'd upgrade, but as
soon as we upgraded we added another application.
Anne & Lynn Wheeler
2021-07-14 04:19:29 UTC
Permalink
Post by undefined Hancock-4
I don't have my S/360 history handy, but wasn't a foundation of these
problems the focus on Future System instead of upgrading System/360?
If I recall correctly, they suddenly realized FS wasn't gonna work and S/360
was obsolete, so they rushed out S/370 with some improvements, but not
a lot. First was the S/3x5 series, then later the improved S/3x8 series,
which I think was a lot better.
motivation for moving the 370/165 (and all other 370s) to virtual memory took hold
before FS took over the company. Initial OS/VS2 release 1 was SVS ...
little different from running MVT in a CP67 16mbyte virtual machine
... except MVT built the virtual address table and was able to do page
faults (but not efficiently or optimized, since they expected little or
no page faults) ... biggest issue was fitting CP67 CCWTRANS into
EXCP/SVC0 channel program processing ... which created a copy of the
passed channel program replacing CCW virtual addresses with real.

part of discussion from archived post (from bit.listserv.ibm-man, could
also be found in the google newsgroup archives)
http://www.garlic.com/~lynn/2011d.html#73

Of course, the estimates for OS/VS were based on a misperception. The
Kingston estimate for OS/VS2 Release 1 (SVS) had an estimate for the
work needed for Release 2 (MVS), but it was couched as release 1 cost
plus a delta - in other words, the same cost as release 1 plus some
more. Since the Kingston resources were being redeployed to FS, that
meant that there weren't going to be enough people to do both. Since MVS
was supposed to be the glide path for FS (which would be OS/VS2 Release
3), this was unwelcome news. xxxxx and yyyyy modified the plan to reuse
some of the SVS resources plus people transitioning from OS/360. Bob
Evans did his part by cutting a year off the MVS development schedule.

... snip ...

I continued to work on 360/370 all through the FS period ... even
periodically ridiculing what they were doing, which wasn't exactly a
career enhancing activity (lot of stuff was never really worked
out ... just pie-in-the-sky waving hands).

long winded discussion of FS ... including when it finally imploded,
quick&dirty 3033 and 3081 efforts were kicked off in parallel
http://www.jfsowa.com/computer/memo125.htm

Note that 370 virtual memory architecture had a whole bunch of features
... however the 165 hardware people complained that if they had to
implement all the architecture features ... it would delay virtual
memory announce by six months ... as a result features were dropped to
get 370/165 virtual memory back on schedule ... and other platforms that
had already implemented the full architecture had to remove the dropped
features and any software developed that used the dropped features had
to be rewritten.

Future System disaster in the 70s ... from Ferguson & Morris,
"Computer Wars: The Post-IBM World", Time Books, 1993 .... reference
to the "Future System" project 1st half of the 70s:

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat

... snip ...

Major motivation for Future System was clone controllers ... an
objective was to make the interfaces so complex that the clone makers
wouldn't be able to follow ... some folklore was that the 37x5
telecommunication box was one of the few that still tried to meet the FS
objective (with VTAM & NCP).

Note, during FS, internal politics were shutting down 370 projects
... and the lack of new 370 products during the FS period is credited
with giving the clone processor makers (like Amdahl) their market
foothold (effort to thwart clone controllers enabled clone processor
makers).

clone trivia: At the univ. I had taken a two-semester-hour intro to
fortran/computers ... then within a year, univ. hires me fulltime to be
responsible for the ibm mainframe system (360/67, but ran as 360/65 with
os/360). Then three people from the science center came out in jan1968 and
installed cp67 and univ. let me play with it on weekends. Original
support was for 2741 & 1052 terminals, with automagic
terminal type identification. Univ. had tty 33s and a 35 ... so I added
tty/ascii terminal support and extended automagic terminal type support
to tty/ascii. I then wanted to have a single dialin phone number for
all terminals ... single "hunt group"
https://en.wikipedia.org/wiki/Line_hunting

didn't quite work ... IBM took a shortcut in the terminal controller: while
it was possible to change the terminal type of a port interface with the SAD
CCW, it couldn't change the line speed (i.e. it only worked for leased lines).
This helped prompt the univ. to do a clone controller project ... build a
channel interface board for an Interdata/3 programmed to emulate an ibm
telecommunication controller ... with the addition of being able to do
dynamic line speed determination. This was then updated with an
Interdata/4 for the channel interface and clusters of Interdata/3s
handling the ports. Interdata starts selling it as a clone controller
and four of us get written up (as responsible for some part of) the IBM
clone controller business. Perkin-Elmer buys Interdata and continues to sell
the box under their own logo.
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2021-07-14 04:50:15 UTC
Permalink
Post by undefined Hancock-4
It seemed to me in S/370 days demand for service exceeded computer
horsepower to provide it. Online processing was a resource hog in
both CPU, memory, and disk.
We could handle five terminals on our S360-40 under MTCS. But 500
terminals on our 158 was another story, basically it was too much.
We'd upgrade, but as soon as we upgraded we added another application.
OS/360 systems had huge setup/teardown overhead ... including each file
open/close was enormous overhead (I discuss this in more detail in the
FORTG->WATFOR post in this thread). It was why CICS was so popular ... it
tried to acquire all its resources at startup (including large blocks of
storage where it ran its own storage allocation algorithms)
... including doing all file opens at startup ... and then trying its
best to make minimum use of OS/360 ... mostly excp for doing actual file
i/o.

mid-70s, I also started to pontificate that while systems were getting
faster ... disk performance wasn't increasing as fast ... as well as
there being huge amount of bloat. In the early 80s, I was writing that
relative system disk performance had declined by an order of magnitude
from early 360 days to the early 80s (i.e. disks got 3-5 faster while
rest of system got 40-50 times faster). Disk executives took exception
and assigned the division performance group to refute my claims ... they
came back after a few weeks and basically said I had slightly
understated the problem. They then respun the analysis for a (IBM user
group) SHARE presentation on how to configure disks to improve system
throughput.... 16Aug1984, Share 63, B874. old posts with some
intro/summary
http://www.garlic.com/~lynn/2002i.html#18
http://www.garlic.com/~lynn/2006f.html#3
http://www.garlic.com/~lynn/2006o.html#68

I had compared a 360/67 with 768kbytes memory, 2314 disks and 2301
paging device ... to 3081 with 32mbytes, 3380 disks and 2305 paging
devices ... even though processor and memory increased 30-50 times, the
number of users only increased four times (which was about the change in
the number of random accesses from 2314 to 3380). If disks had kept up with
the rest of the system, the number of users should have increased by a
factor of 30-50 ... instead of only four times.
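
(The arithmetic behind that order-of-magnitude claim, as a quick sketch
in Python; 45 and 4 are illustrative mid-range picks from the 40-50x and
3-5x ranges above:)

  cpu_gain, disk_gain = 45, 4      # mid-range picks, purely illustrative
  print(f"disks fell to {disk_gain / cpu_gain:.0%} of their relative "
        f"early-360 performance -- about an order of magnitude")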

trivia: previous post in thread mentions univ. hires me fulltime. along
the way the Univ. library got an ONR (Office of Naval Research) grant to do an
online catalog ... some of the money went to getting an IBM 2321 datacell. The
effort was also selected to be a betatest site for the original CICS product ...
which was added to my tasks. Took a while, but I tracked down a problem where CICS
didn't document that it had hardcoded some BDAM file options, and it turned
out the library had created BDAM files with different options ... and
CICS would fail on startup not able to open the files.

lots of CICS history, gone 404 but lives on at wayback machine
http://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
and
http://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
--
virtualization experience starting Jan1968, online at home since Mar1970
undefined Hancock-4
2021-07-16 18:33:26 UTC
Permalink
Post by Anne & Lynn Wheeler
mid-70s, I also started to pontificate that while systems were getting
faster ... disk performance wasn't increasing as fast ... as well as
there being a huge amount of bloat. In the early 80s, I was writing that
relative system disk performance had declined by an order of magnitude
from early 360 days to the early 80s (i.e. disks got 3-5 times faster while
rest of system got 40-50 times faster). Disk executives took exception
and assigned the division performance group to refute my claims ... they
came back after a few weeks and basically said I had slightly
understated the problem. They then respun the analysis for a (IBM user
group) SHARE presentation on how to configure disks to improve system
throughput.... 16Aug1984, Share 63, B874. old posts with some
intro/summary
Not sure if this is comparable since it's the Univac 90/30 (sort of S/370 equivalent),
but I had the machine to myself on a weekend and ran some tests of multi-programming
performance. Our machine had the equivalent of 3330 drives. They were top loaders
and we could look down through the glass and watch the head action.

One thing was obvious--having multiple programs use the same drive was
a performance disaster. The disk drive ran like an overloaded washing machine,
shaking like crazy and the heads went in and out. Run time was bad.

But even running two programs simultaneously using two separate drives
was slow. Indeed, it'd be faster (wall clock) to run the two programs
sequentially rather than simultaneously. I'm not sure why that was, perhaps
the disk channel was shared among the drives, perhaps the CPU or
supervisor couldn't handle multi-programming very well (even though
supposedly the OS could handle up to five regions and we had plenty
of memory).

(We had tape drives that seemed pretty fast, but we did nothing on tape
except backups).

I know others here who used the Univac 90/30 were very positive on them,
and I knew of other sites that loved them. But my observation wasn't as
positive. As mentioned, I have no idea of the price/performance, other
than Univac was supposedly a lot cheaper than the equivalent powered
IBM mainframe, especially a new one. So, perhaps as a low end machine,
equivalent to a S/370-125 (not virtual) it could do the job economically.

We were speaking of optimization. The output of the 90/30's COBOL
compiler listing included the generated assembler, and it was definitely not
optimized; I don't believe the compiler even offered that option. (I don't know about
other Univacs).

My impression was that because the 90/30 was a byte oriented machine,
like S/360, it didn't really fit in Univac's product line of word oriented machines.
For instance, a lot of Univac's programmers attempted to do stuff not allowed
on byte machines, like having spaces in a numeric field and attempting to load
an ISAM file by inserts, not a copy.

Here's the system brochure:
http://bitsavers.org/pdf/univac/series_90/Univac_90_30_System_Brochure_Mar74.pdf

Bitsavers has a few manuals:
http://bitsavers.org/pdf/univac/series_90/
Charlie Gibbs
2021-07-16 22:40:54 UTC
Permalink
Post by undefined Hancock-4
Post by Anne & Lynn Wheeler
mid-70s, I also started to pontificate that while systems were getting
faster ... disk performance wasn't increasing as fast ... as well as
there being a huge amount of bloat. In the early 80s, I was writing that
relative system disk performance had declined by an order of magnitude
from early 360 days to the early 80s (i.e. disks got 3-5 times faster while
rest of system got 40-50 times faster). Disk executives took exception
and assigned the division performance group to refute my claims ... they
came back after a few weeks and basically said I had slightly
understated the problem. They then respun the analysis for a (IBM user
group) SHARE presentation on how to configure disks to improve system
throughput.... 16Aug1984, Share 63, B874. old posts with some
intro/summary
Not sure if this is comparable since it's the Univac 90/30 (sort of S/370 equivalent),
IIRC the non-privileged instruction set was bit-for-bit identical with that
of the 360/50. The 370 instructions came later, with the System 80 line.
Post by undefined Hancock-4
but I had the machine to myself on a weekend and ran some tests of
multi-programming performance. Our machine had the equivalent of
3330 drives. They were top loaders and we could look down through
the glass and watch the head action.
Ah, you had the 8430s - lucky guy. Most sites I worked on originally
had 8416s (28MB/spindle), which looked much like the 8430, complete
with the glass top. These were later replaced by the 8418 - twice the
capacity, but an opaque lid meant you couldn't watch the heads thrash
anymore. (I did, however, find a display on the front panel which
showed the length of the current seek, and could identify problems
that way.)
Post by undefined Hancock-4
One thing was obvious--having multiple programs use the same drive was
a performance disaster. The disk drive ran like an overloaded washing
machine, shaking like crazy and the heads went in and out. Run time
was bad.
But even running two programs simultaneously using two separate drives
was slow. Indeed, it'd be faster (wall clock) to run the two programs
sequentially rather than simultaneously. I'm not sure why that was,
perhaps the disk channel was shared among the drives, perhaps the CPU
or supervisor couldn't handle multi-programming very well (even though
supposedly the OS could handle up to five regions and we had plenty
of memory).
The supervisor was pretty disk-bound. It spent a lot of time rolling
transient code into and out of memory, and many system utilities had
lots of overlays. This is no doubt a consequence of trying to squeeze
too much programming into too little memory. Salesmen were constantly
low-balling memory requirements, and customers who scrimped and got
128K of memory soon wound up shelling out the bucks to go to 192K.

I found that if the jobs weren't too disk-bound (a bit of clever design
helped here), three concurrent jobs was the point of diminishing returns.

You also had to watch out for CPU-bound jobs (e.g. sorts). OS/3 did not
time-slice jobs of equal priority, so if one of your jobs was CPU-bound
(or went into a loop), other running jobs of equal or lower priority
came to a screeching halt. A later release of OS/3 added a sysgen option
to change the default priority to something other than the lowest possible
value; that way you could run most jobs at default priority, while
specifying a lower priority for CPU-bound steps.
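
(A toy model in Python of that behaviour -- run-to-completion among
equal priorities versus round-robin time-slicing; the job names and
work units are invented:)

  def run(jobs, timeslice=None):
      # jobs: {name: work units}; returns {name: finish time}
      t, names, work, finish = 0, list(jobs), dict(jobs), {}
      while names:
          name = names.pop(0)
          step = work[name] if timeslice is None else min(timeslice, work[name])
          work[name] -= step
          t += step
          if work[name] == 0:
              finish[name] = t
          else:
              names.append(name)       # goes to the back of the queue
      return finish

  print(run({"SORT": 100, "EDIT": 1, "PRINT": 1}))
  # {'SORT': 100, 'EDIT': 101, 'PRINT': 102} -- everything waits on SORT
  print(run({"SORT": 100, "EDIT": 1, "PRINT": 1}, timeslice=1))
  # {'EDIT': 2, 'PRINT': 3, 'SORT': 102} -- short jobs get through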
Post by undefined Hancock-4
(We had tape drives that seemed pretty fast, but we did nothing on tape
except backups).
I know others here who used the Univac 90/30 were very positive on them,
and I knew of other sites that loved them. But my observation wasn't as
positive. As mentioned, I have no idea of the price/performance, other
than Univac was supposedly a lot cheaper than the equivalent powered
IBM mainframe, especially a new one. So, perhaps as a low end machine,
equivalent to a S/370-125 (not virtual) it could do the job economically.
They must have done something right - they sold a lot of them.
Post by undefined Hancock-4
We were speaking of optimization. The output of the 90/30's COBOL
compiler listing included the generated assembler, and it was definitely not
optimized; I don't believe the compiler even offered that option. (I don't know about
other Univacs).
My impression was that because the 90/30 was a byte oriented machine,
like S/360, it didn't really fit in Univac's product line of word oriented machines.
It didn't try; it was a totally different product line.
Post by undefined Hancock-4
For instance, a lot of Univac's programmers attempted to do stuff
not allowed on byte machines, like having spaces in a numeric field
and attempting to load an ISAM file by inserts, not a copy.
Ouch. I remember watching a machine doing that. It was running so
painfully slowly that I found it faster to cancel the job and re-write
the program.
Post by undefined Hancock-4
http://bitsavers.org/pdf/univac/series_90/Univac_90_30_System_Brochure_Mar74.pdf
http://bitsavers.org/pdf/univac/series_90/
I'm just finishing up scanning the wall of OS/3 (90/30 and System 80)
manuals that I was issued when I worked at Univac. I'll be uploading
over 300 documents (3.5GB) to Bitsavers Real Soon Now.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
Robin Vowels
2021-07-17 06:13:03 UTC
Permalink
Post by undefined Hancock-4
Post by Anne & Lynn Wheeler
mid-70s, I also started to pontificate that while systems were getting
faster ... disk performance wasn't increasing as fast ... as well as
there being a huge amount of bloat. In the early 80s, I was writing that
relative system disk performance had declined by an order of magnitude
from early 360 days to the early 80s (i.e. disks got 3-5 times faster while
rest of system got 40-50 times faster). Disk executives took exception
and assigned the division performance group to refute my claims ... they
came back after a few weeks and basically said I had slightly
understated the problem. They then respun the analysis for a (IBM user
group) SHARE presentation on how to configure disks to improve system
throughput.... 16Aug1984, Share 63, B874. old posts with some
intro/summary
Not sure if this is comparable since it's the Univac 90/30 (sort of S/370 equivalent),
but I had the machine to myself on a weekend and ran some tests of multi-programming
performance. Our machine had the equivalent of 3330 drives. They were top loaders
and we could look down through the glass and watch the head action.
One thing was obvious--having multiple programs use the same drive was
a performance disaster. The disk drive ran like an overloaded washing machine,
shaking like crazy and the heads went in and out. Run time was bad.
But even running two programs simultaneously using two separate drives
was slow. Indeed, it'd be faster (wall clock) to run the two programs
sequentially rather than simultaneously. I'm not sure why that was, perhaps
the disk channel was shared among the drives, perhaps the CPU or
supervisor couldn't handle multi-programming very well (even though
supposedly the OS could handle up to five regions and we had plenty
of memory).
.
A lot depended on the blocking factor.
The ICL System 4 thrashed the disc drives because there was no blocking
of 80-byte records.
The drives began failing after about 5 years of that punishment.
Records were subsequently de-blanked and blocked to 386(?) bytes
with considerable improvement in performance, but full-track
blocking would have been even better.
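(A back-of-envelope sketch, in Python, of what blocking buys; the record
count is invented for the example:)

  records, lrecl = 100_000, 80           # assumed file of card images
  for blksize in (80, 400, 3200):        # unblocked, 5-up, 40-up
      per_block = blksize // lrecl       # logical records per block
      ios = -(-records // per_block)     # ceiling division
      print(f"BLKSIZE={blksize:5d}: {ios:7,d} physical I/Os")

Each physical I/O moves one block, so the transfer count falls by the
blocking factor -- from 100,000 unblocked down to 2,500 here.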
.
Post by undefined Hancock-4
(We had tape drives that seemed pretty fast, but we did nothing on tape
except backups).
I know others here who used the Univac 90/30 were very positive on them,
and I knew of other sites that loved them. But my observation wasn't as
positive. As mentioned, I have no idea of the price/performance, other
than Univac was supposedly a lot cheaper than the equivalent powered
IBM mainframe, especially a new one. So, perhaps as a low end machine,
equivalent to a S/370-125 (not virtual) it could do the job economically.
We were speaking of optimization. The output of the 90/30's COBOL
compiler listing included the generated assembler, and it was definitely not
optimized; I don't believe the compiler even offered that option. (I don't know about
other Univacs).
My impression was that because the 90/30 was a byte oriented machine,
like S/360, it didn't really fit in Univac's product line of word oriented machines.
For instance, a lot of Univac's programmers attempted to do stuff not allowed
on byte machines, like having spaces in a numeric field and attempting to load
an ISAM file by inserts, not a copy.
http://bitsavers.org/pdf/univac/series_90/Univac_90_30_System_Brochure_Mar74.pdf
http://bitsavers.org/pdf/univac/series_90/
Charlie Gibbs
2021-07-17 19:12:27 UTC
Permalink
Post by Robin Vowels
A lot depended on the blocking factor.
The ICL System 4 thrashed the disc drives because there was no blocking
of 80-byte records.
The drives began failing after about 5 years of that punishment.
Records were subsequently de-blanked and blocked to 386(?) bytes
with considerable improvement in performance, but full-track
blocking would have been even better.
Assuming you have enough memory for those big buffers...
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
Robin Vowels
2021-07-18 04:33:59 UTC
Permalink
Post by Charlie Gibbs
Post by Robin Vowels
A lot depended on the blocking factor.
The ICL System 4 thrashed the disc drives because there was no blocking
of 80-byte records.
The drives began failing after about 5 years of that punishment.
Records were subsequently de-blanked and blocked to 386(?) bytes
with considerable improvement in performance, but full-track
blocking would have been even better.
.
Post by Charlie Gibbs
Assuming you have enough memory for those big buffers...
.
Plenty of memory for that.
You can't get any work out of thrashing disc drives.
Charlie Gibbs
2021-07-18 15:20:20 UTC
Permalink
Post by Robin Vowels
Post by Charlie Gibbs
Post by Robin Vowels
A lot depended on the blocking factor.
The ICL System 4 thrashed the disc drives because there was no blocking
of 80-byte records.
The drives began failing after about 5 years of that punishment.
Records were subsequently de-blanked and blocked to 386(?) bytes
with considerable improvement in performance, but full-track
blocking would have been even better.
Assuming you have enough memory for those big buffers...
Plenty of memory for that.
You can't get any work out of thrashing disc drives.
True, but one shop I worked in went to the other extreme:
full-track blocking for every file. One system consisted
of a program that created half a dozen work files (even
though some were redundant). Six output files - plus an
input file - with a track size of 10K meant that 70K of
the machine's 192K went to buffers alone. It certainly
didn't make anything else run faster since there was no
memory left to run more.
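
(The arithmetic, as a tiny Python sketch using the figures above:)

  track_size = 10_000          # "track size of 10K"
  files = 1 + 6                # one input plus six output files
  memory = 192_000             # the machine's 192K
  for buffers in (1, 2):       # single vs. double buffering
      used = files * track_size * buffers
      print(f"{buffers} buffer(s)/file: {used:,} bytes "
            f"({used / memory:.0%} of memory)")

Double-buffering those seven files would have eaten nearly three
quarters of the machine.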

And rather than just sorting the work files and printing
reports as successive steps, another shop standard was to
kick off separate jobs to sort and print each one. Now
_there_ was thrashing for you - just moved to the job
scheduler. I wrote one such system for them, and found
that the entire job would take about 20 minutes just
to schedule, let alone run. I said to hell with their
standards, wrote it in a sane fashion, and got test runs
through in a minute or two. They were furious, of course,
but I pointed out that they were perfectly free to change it
back to their time-consuming way once I was finished testing.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
undefined Hancock-4
2021-07-20 19:05:46 UTC
Permalink
full-track blocking for every file. One system consisted
of a program that created half a dozen work files (even
though some were redundant). Six output files - plus an
input file - with a track size of 10K meant that 70K of
the machine's 192K went to buffers alone. It certainly
didn't make anything else run faster since there was no
memory left to run more.
Yes, on older machines too high a blocking factor caused problems as well.
They used to warn us not to max out on blocking factors. Memory was not
unlimited, as you say.

In the old days we calculated optimum blocksizes based on our logical
record length and the device we were writing to. Disk drives had different
track lengths.

I once experimented with an S/360 tape drive. I found a block size of
about 5,000 characters was the max; anything beyond that would
generate I/O errors and waste time with retries.

Later machines could handle maximum blocks. Indeed, in later years
we coded BLKSIZE=0 in our JCL which would maximize blocksize
for whatever device we were creating a file on. As data centers got
bigger and had a mix of devices, the dynamic automatic blocking
made it easier for us, since we didn't have to worry about the device
type.

Amazing, in the early days we were intimately familiar with the
characteristics of say the 2311 vs. the 2314 or various tape drives.
Our coding depended on it to optimize performance and memory.
But now, devices evolve and we don't even know what we're writing to.
The system handles it all automatically, with automatic file allocation.

Sometimes we had older jobs with hardcoded low blocksizes. They
ran poorly. Upgrading them was an easy way to improve performance.

In a few cases, such as files being sent out or specialty devices,
blocksize and other issues still mattered. JCL still allows you to set
specifics if desired.
Thomas Koenig
2021-07-21 05:29:38 UTC
Permalink
Post by undefined Hancock-4
In the old days we calculated optimum blocksizes based on our logical
record length and the device we were writing to. Disk drives had different
lengths.
I remember the recommendation for FB80 files switching from a
blocksize of 3200 bytes to a blocksize of 3120 bytes when the
computer center switched generations of mainframes and therefore
disk type, as well.

30 years ago I could probably have told you the disk drive
types this was optimum for. Not any more :-)
Charlie Gibbs
2021-07-21 17:14:19 UTC
Permalink
Post by Thomas Koenig
Post by undefined Hancock-4
In the old days we calculated optimum blocksizes based on our logical
record length and the device we were writing to. Disk drives had different
lengths.
I remember the recommendation for FB80 files switching from a
blocksize of 3200 bytes to a blocksize of 3120 bytes when the
computer center switched generations of mainframes and therefore
disk type, as well.
30 years ago I could probably have told you the disk drive
types this was optimum for. Not any more :-)
I wrote a utility that would take a record size and generate
a table of optimum blocking factors for a 2311 or 2314 drive.
My memory has faded somewhat too, but I suspect you were
working with neither of them. :-) I seem to recall that
1680 was a good block size for 80-byte records on the 2314.
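For the curious, a sketch of the calculation such a utility performed
(Python; the 2314 constants are from memory -- roughly 7,294 usable bytes
per track and about 101 bytes of count/gap overhead per unkeyed block --
so treat them as assumptions and check the drive's reference card):

  TRACK, GAP, LRECL = 7294, 101, 80   # 2314 figures, from memory

  def blocks_per_track(blksize):
      # last block on a track pays no inter-block gap:
      # (n - 1) * (GAP + blksize) + blksize <= TRACK
      if blksize > TRACK:
          return 0
      return 1 + (TRACK - blksize) // (GAP + blksize)

  for factor in (1, 21, 39, 44):      # blocking factors to compare
      blk = factor * LRECL
      n = blocks_per_track(blk)
      print(f"BLKSIZE={blk:5d}: {n:2d} blocks/track, "
            f"{n * factor:3d} records/track")

On those figures, unblocked 80-byte records get only 40 records/track,
1680 gets 84, and half-track 3520 gets 88 -- while 3120 (good on some
other drive, per the post upthread) gets just 78 on a 2314.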
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
undefined Hancock-4
2021-07-23 19:19:33 UTC
Permalink
Post by Charlie Gibbs
I wrote a utility that would take a record size and generate
a table of optimum blocking factors for a 2311 or 2314 drive.
My memory has faded somewhat too, but I suspect you were
working with neither of them. :-) I seem to recall that
1680 was a good block size for 80-byte records on the 2314.
Another challenge was allocating space for VSAM files, which could
be trickier than sequential or even older ISAM files. I don't think
the parameters of the IDCAMS program, like control interval
sizes, were automated; it was mostly guesswork.

Indeed, it seemed to me many programmers didn't care for
VSAM, preferring to use databases (e.g. DB2, IMS, etc).
But VSAM had a lot of advantages, including performance
since it had a lot less overhead than a formal database system.

Our 90/30 had ISAM.
Charlie Gibbs
2021-07-24 21:35:17 UTC
Permalink
Post by undefined Hancock-4
Post by Charlie Gibbs
I wrote a utility that would take a record size and generate
a table of optimum blocking factors for a 2311 or 2314 drive.
My memory has faded somewhat too, but I suspect you were
working with neither of them. :-) I seem to recall that
1680 was a good block size for 80-byte records on the 2314.
Another challenge was allocating space for VSAM files, which could
be trickier than sequential or even older ISAM files. I don't think
the parameters of the IDCAMS program, like control interval
sizes, were automated; it was mostly guesswork.
Indeed, it seemed to me many programmers didn't care for
VSAM, preferring to use databases (e.g. DB2, IMS, etc).
But VSAM had a lot of advantages, including performance
since it had a lot less overhead than a formal database system.
Our 90/30 had ISAM.
Did you get to the point where MIRAM files were introduced?
They were much nicer, and allowed indexing on up to 5 keys.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
undefined Hancock-4
2021-07-26 20:13:11 UTC
Permalink
Post by Charlie Gibbs
Post by undefined Hancock-4
Our 90/30 had ISAM.
Did you get to the point where MIRAM files were introduced?
They were much nicer, and allowed indexing on up to 5 keys.
No, that was after my time.

On our IBM mainframe, we made use of VSAM alternate indexes
to allow for multiple non-unique keys (e.g. searching by name, DOB, etc).
Very efficient and worked well.
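
(The idea is easy to sketch -- a secondary, possibly non-unique key
mapping to the primary keys of the base records. Python dicts stand in
for the KSDS and the alternate index; the records are invented:)

  from collections import defaultdict

  base = {                         # primary key -> record (the "KSDS")
      "0001": {"name": "SMITH", "dob": "1950-01-01"},
      "0002": {"name": "JONES", "dob": "1950-01-01"},
      "0003": {"name": "SMITH", "dob": "1962-07-15"},
  }
  aix_name = defaultdict(list)     # alternate index on name (non-unique)
  for key, rec in base.items():
      aix_name[rec["name"]].append(key)

  print([base[k] for k in aix_name["SMITH"]])   # lookup via the index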

As mentioned, VSAM was kind of disparaged, everyone pushed for
a modern system like SQL. But our systems programmers and database
administrators, who monitored performance, noted that our VSAM system was
very efficient while they had problems with some SQL systems. Even on
new powerful machines, a sloppy SQL query could flog the machine.

Geez, we have one VSAM system that is now 45 years old, and it will probably reach 50.
Peter Flass
2021-07-21 18:08:31 UTC
Permalink
Post by undefined Hancock-4
full-track blocking for every file. One system consisted
of a program that created half a dozen work files (even
though some were redundant). Six output files - plus an
input file - with a track size of 10K meant that 70K of
the machine's 192K went to buffers alone. It certainly
didn't make anything else run faster since there was no
memory left to run more.
Yes, on older machines too high a blocking factor caused problems, too.
They used to warn us not max out on blocking factors. Memory was not
unlimited, as you say.
When designing a system we considered the program with the most files and
figured how much memory would be required for double-buffering. We’d
usually start looking at half-track blocking, then look at single-buffering
on some datasets if we couldn’t do double. Systems design was fun in the
olden days.
Post by undefined Hancock-4
In the old days we calculated optimum blocksizes based on our logical
record length and the device we were writing to. Disk drives had different
lengths.
Switching from 2311 to 2314 meant recalculating everything. There were a
lot of programs to do this. I wrote one, and only this year found out I
could have gotten one from SHARE.
Post by undefined Hancock-4
I once experimented with a S/360 tape drive. I found a block size of
about 5,000 characters was the max, anything beyond that would
generate I/O errors and waste time with retries.
At first I misread this as disk. I don’t recall a lot of problems with
tape, but maybe we didn’t use huge block sizes.
Post by undefined Hancock-4
Later machines could handle maximum blocks. Indeed, in later years
we coded BLKSIZE=0 in our JCL which would maximize blocksize
for whatever device we were creating a file on. As data centers got
bigger and had a mix of devices, the dynamic automatic blocking
made it easier for us, since we didn't have to worry about the device
type.
Blksize=0 only came along much later, maybe as part of SMS on MVS/XA or
maybe /ESA. It was better than sliced bread, but wouldn’t have worked on
machines with limited storage where we counted every byte.
Post by undefined Hancock-4
Amazing, in the early days we were intimately familiar with the
characteristics of say the 2311 vs. the 2314 or various tape drives.
Our coding depended on it to optimize performance and memory.
But now, devices evolve and we don't even know what we're writing to.
The system handles it all automatically, with automatic file allocation.
All the fun is gone :-(
Post by undefined Hancock-4
Sometimes we had older jobs with hardcoded low blocksizes. They
ran poorly. Upgrading them was an easy way to improve performance.
In a few cases, such as files being sent out or specialty devices,
blocksize and other issues still mattered. JCL still allows you to set
specifics if desired.
--
Pete
undefined Hancock-4
2021-07-23 19:16:13 UTC
Permalink
When designing a system we considered the program with the most files and
figured how much memory would be required for double-buffering. We’d
usually start looking at half-track blocking, then look at single-buffering
on some datasets if we couldn’t do double. Systems design was fun in the
olden days.
On the smaller S/360 and equivalent competitors (e.g. RCA Spectra),
physical size was limited but files could be big. Careful design
was required, as you describe, to fit everything in without flogging
the machine.

In my opinion, sometimes data centers would attempt to do too much
on their machines of the time--loading too much data volume or program
complexity onto a machine that simply wasn't up to handling it. Programmers
would pull all sorts of tricks to squeeze it all in, and be on standby if something
blew up--which happened often.

And it seemed that no sooner had they got a bigger machine than they loaded
another new bulky application on it, which continued the old problems.

Indeed, I think these problems continued until roughly 1990 (depending on a
site) when hardware finally got cheap enough that they could buy adequately
sized and powered machines (enough memory, I/O channels, disk and space,
and CPU speed).
Post by undefined Hancock-4
In the old days we calculated optimum blocksizes based on our logical
record length and the device we were writing to. Disk drives had different
lengths.
Switching from 2311 to 2314 meant recalculating everything. There were a
lot of programs to do this. I wrote one, and only this year found out I
could have gotten one from SHARE.
Yes, every time the data center upgraded conversions were necessary
to reflect the new hardware, even if it was the same brand of computer.
Post by undefined Hancock-4
I once experimented with a S/360 tape drive. I found a block size of
about 5,000 characters was the max, anything beyond that would
generate I/O errors and waste time with retries.
At first I misread this as disk. I don’t recall a lot of problems with
tape, but maybe we didn’t use huge block sizes.
We were using a cheapo S/360 tape drive. I think the later models
on S/370 with 6250 bpi worked better. And of course the later
cartridges were even better.
Post by undefined Hancock-4
Later machines could handle maximum blocks. Indeed, in later years
we coded BLKSIZE=0 in our JCL which would maximize blocksize
for whatever device we were creating a file on. As data centers got
bigger and had a mix of devices, the dynamic automatic blocking
made it easier for us, since we didn't have to worry about the device
type.
Blksize=0 only came along much later, maybe as part of SMS on MVS/XA or
maybe /ESA. It was better than sliced bread, but wouldn’t have worked on
machines with limited storage where we counted every byte.
SMS had a lot of advantages, especially for the system programmers, but
there were some growing pains. It took a lot of overhead and wouldn't
have been possible on older machines. But overall it worked out well,
including automated backups.

I think SMS introduced compression onto IBM mainframes which could
save a lot of space. PKZIP on steroids. Transparent to application people.

Indeed, I think one function of SMS was to automatically offload
little-used files to slower media, freeing high-speed storage for more
important files. Again, transparent.
Post by undefined Hancock-4
Amazing, in the early days we were intimately familiar with the
characteristics of say the 2311 vs. the 2314 or various tape drives.
Our coding depended on it to optimize performance and memory.
But now, devices evolve and we don't even know what we're writing to.
The system handles it all automatically, with automatic file allocation.
All the fun is gone :-(
There was a sense of pride in knowing how to use the reference cards
and optimize settings to get improved performance and space usage.
Likewise in coding for efficiency.

Indeed, that was fun.

On the flip side, as mentioned above when systems were squeezed to
the hilt, failures were common and a nuisance to fix. Getting a phone
call at 3 a.m. to come in (no home terminals then) was not much fun
and happened too often.

I think at least one data center's programmers got tired of the 3 a.m.
phone calls and pressured management to spend the money to
upgrade the machine to reduce troubles.

At least a few data centers suffered massive resignations when
staff just got tired of problems.
Charlie Gibbs
2021-07-24 21:35:18 UTC
Permalink
Post by undefined Hancock-4
When designing a system we considered the program with the most files
and figured how much memory would be required for double-buffering.
We’d usually start looking at half-track blocking, then look at
single-buffering on some datasets if we couldn’t do double. Systems
design was fun in the olden days.
On the smaller S/360 and equivalent competitors (e.g. RCA Spectra),
physical size was limited but files could be big. Careful design
was required, as you describe, to fit everything in without flogging
the machine.
In my opinion, sometimes data centers would attempt to do too much
on their machines of the time--loading too much data volume or program
complexity onto a machine that simply wasn't up to handling it.
Programmers would pull all sorts of tricks to squeeze it all in,
and be on standby if something blew up--which happened often.
Part of this was due to computer salesmen lowballing hardware
requirements. Once the machine had been sold and installed,
it was usually too late to back out, so the customer sucked it
up and paid for the additional hardware that, had they known,
they would have needed all along.
Post by undefined Hancock-4
And it seemed that no sooner had they got a bigger machine than they
loaded another new bulky application on it, which continued the old
problems.
Parkinson's Law states that work expands so as to fill the time
available for its completion. There are various corollaries,
such as programs expanding to fill available memory, data files
expanding to fill available drives, etc.
Post by undefined Hancock-4
Indeed, I think these problems continued until roughly 1990 (depending
on a site) when hardware finally got cheap enough that they could buy
adequately sized and powered machines (enough memory, I/O channels,
disk and space, and CPU speed).
Alas, there is a difference between "could" and "did". Even in the
last few years we've run into problems when customers scrimped on
disk space: not just with inadequate drives but also with bad
partitioning schemes.
Post by undefined Hancock-4
There was a sense of pride in knowing how to use the reference cards
and optimize settings to get improved performance and space usage.
Likewise in coding for efficiency.
Indeed, that was fun.
On the flip side, as mentioned above when systems were squeezed to
the hilt, failures were common and a nuisance to fix. Getting a phone
call at 3 a.m. to come in (no home terminals then) was not much fun
and happened too often.
BTDTGTS (been there, done that, got the scars)
That's why, for a long time, I didn't have a telephone at home.
It had to be a _real_ emergency before someone would drive over,
bang on the door, wake up the landlady, and have her get me.
Post by undefined Hancock-4
I think at least one data center's programmers got tired of the 3 a.m.
phone calls and pressured management to spend the money to
upgrade the machine to reduce troubles.
At least a few data centers suffered massive resignations when
staff just got tired of problems.
You just had to pray that management was intelligent enough to
take the hint.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
undefined Hancock-4
2021-07-26 20:09:32 UTC
Permalink
Post by Charlie Gibbs
Post by undefined Hancock-4
I think at least one data center's programmers got tired of the 3 a.m.
phone calls and pressured management to spend the money to
upgrade the machine to reduce troubles.
At least a few data centers suffered massive resignations when
staff just got tired of problems.
You just had to pray that management was intelligent enough to
take the hint.
Unfortunately, a lot of data center managements were not intelligent
enough and allowed massive disruptions. For some reason, high
corporate management supported them until it was too late.
A fix usually required very expensive consultants to dig through
everything. At least one large bank was forced to sell out to another
since its customers got too pissed off by the mess, but this was
not unique. Sometimes the trainwreck made the newspaper with
resultant bad publicity.
Robin Vowels
2021-07-21 05:49:04 UTC
Permalink
Post by Robin Vowels
Post by Charlie Gibbs
Post by Robin Vowels
A lot depended on the blocking factor.
The ICL System 4 thrashed the disc drives because there was no blocking
of 80-byte records.
The drives began failing after about 5 years of that punishment.
Records were subsequently de-blanked and blocked to 386(?) bytes
with considerable improvement in performance, but full-track
blocking would have been even better.
Assuming you have enough memory for those big buffers...
Plenty of memory for that.
You can't get any work out of thrashing disc drives.
.
full-track blocking for every file. One system consisted
of a program that created half a dozen work files (even
though some were redundant). Six output files - plus an
input file - with a track size of 10K meant that 70K of
the machine's 192K went to buffers alone. It certainly
didn't make anything else run faster since there was no
memory left to run more.
.
Another useful improvement, in the days of PCP, was to specify
a larger number of buffers. I think that we used about 8 buffers, which
gave a noticeable improvement when reading cards and simultaneously
printing.
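
(As an illustration of what those extra buffers buy, here is a toy
Python analogue: a bounded queue of buffers lets the "reader" run
ahead of the "printer" instead of the two taking strict turns. The
timings and deck size are invented:)

  import queue, threading, time

  BUFFERS = 8                        # cf. "about 8 buffers" above
  q = queue.Queue(maxsize=BUFFERS)   # the buffer pool

  def reader():
      for card in range(20):         # pretend card deck
          time.sleep(0.01)           # card-read time
          q.put(f"card {card}")
      q.put(None)                    # end of deck

  def printer():
      while (item := q.get()) is not None:
          time.sleep(0.01)           # print time, overlapped with reads
          print(item)

  t = threading.Thread(target=reader)
  t.start()
  printer()
  t.join()

More buffers let the faster device run ahead during bursts instead of
waiting on the slower one at every record.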
undefined Hancock-4
2021-07-23 19:01:04 UTC
Permalink
Post by Robin Vowels
Another useful improvement, in the days of PCP, was to specify
a larger number of buffers. I think that we used about 8 buffers, which
gave a noticeable improvement when reading cards and simultaneously
printing.
In the QuickBasic compiler there was a LEN option for file I/O.
Setting this high would greatly improve reading and writing speed,
especially with floppy drives since a bigger block would be written.
I don't think it saved any space (these were sectored drives), but on floppies it
reduced the start/stop action, allowing more continuous writing.

(This was not available on the interpreted QBASIC version).
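
(A rough modern analogue in Python of what a big LEN buys: a larger
user-space buffer turns many tiny device writes into fewer, bigger
ones. The filename and sizes here are made up:)

  data = b"x" * 80                      # one 80-byte "record"
  for bufsize in (80, 8 * 1024, 64 * 1024):
      with open("records.dat", "wb", buffering=bufsize) as f:
          for _ in range(10_000):
              f.write(data)             # the OS sees roughly one write
                                        # per bufsize bytes, not per record

The bigger the buffer, the fewer and more continuous the physical
writes -- the reduced start/stop action described above.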
undefined Hancock-4
2021-07-13 21:55:45 UTC
Permalink
Post by Robin Vowels
When we did have more core, we found that IBM's FORTRAN-H was
a very slow compiler, because it was an optimising compiler. That's
understandable. Of the FORTRAN offerings, FORTRAN-G was the one we used
most. Then we installed WATFOR, which also was made possible by the
extra core.
COBOL compiler optimizing is a tricky business. I ran tests with the optimize option
on and off. Sometimes it did improve performance, but other times not especially.

Certain usages required optimization to be turned off. I don't know why, but the sysprog
had it off for that reason.

The Fortran world is tricky. Unlike the business world where a completed program
may run week after week for years and thus should be optimized, some sci/eng
efforts are research and the program may be run only a few times in production.
More likely it might be recompiled frequently as the researcher discovers different
things and tweaks his program accordingly. (Situations vary, of course).

My compsci prof said WATFOR was a great compiler--compiled fast yet produced
decent code.

The IBM history says the 1401 could be used as a test stand for the 7090, including
Fortran work. But 1401 users told me Fortran took forever to compile on a 1401.
Thomas Koenig
2021-07-14 10:03:42 UTC
Permalink
Post by undefined Hancock-4
The Fortran world is tricky. Unlike the business world where a completed program
may run week after week for years and thus should be optimized, some sci/eng
efforts are research and the program may be run only a few times in production.
That is one end of the spectrum.

The other end is a program burning millions of Euros of CPU
time on a supercomputer cluster. I know one guy who did this
for his PhD thesis.

And the business world gave us something like Teams.
undefined Hancock-4
2021-07-13 22:11:15 UTC
Permalink
Post by Robin Vowels
When we did have more core, we found that IBM's FORTRAN-H was
a very slow compiler, because it was an optimising compiler.
IBM published a "Programmers Guide" alongside its language reference.
However, how many people bothered to read it I don't know, but in my travels over
the years at different sites I didn't see anyone on the application side
check it out.

But the Programmers Guide provided valuable tips for coding and setup
for performance, efficiency, and maintainability. Unlike the language
reference, it was written in a more casual understandable style.

Here are examples for COBOL:
https://www.ibm.com/docs/en/SS6SG3_4.2.0/com.ibm.entcobol.doc_4.2/PGandLR/igy3pg50.pdf

https://www.ibm.com/docs/en/ssw_ibm_i_72/rzase/sc092540.pdf

PL/I
https://www.ibm.com/docs/en/SSUFAU_1.0.0/com.ibm.ent.pl1.zos.doc/pg.pdf


I don't know if IBM issued any equivalent publication for Assembler users.
All I ever saw was the Principles of Operation.
Robin Vowels
2021-07-14 04:42:27 UTC
Permalink
Post by undefined Hancock-4
Post by Robin Vowels
When we did have more core, we found that IBM's FORTRAN-H was
a very slow compiler, because it was an optimising compiler.
.
Post by undefined Hancock-4
IBM published a "Programmers Guide" alongside its language reference.
However, how many people bothered to read it I don't know, but in my travels over
the years at different sites I didn't see anyone on the application side
check it out.
.
Both the LRM and the PG were/are important reference manuals, and
at our site were available on the counter for both consulting staff and users.
Consulting staff had to be knowledgeable about both.
There were enough copies for consultancy staff to have their own.
.
Post by undefined Hancock-4
But the Programmers Guide provided valuable tips for coding and setup
for performance, efficiency, and maintainability. Unlike the language
reference, it was written in a more casual understandable style.
https://www.ibm.com/docs/en/SS6SG3_4.2.0/com.ibm.entcobol.doc_4.2/PGandLR/igy3pg50.pdf
https://www.ibm.com/docs/en/ssw_ibm_i_72/rzase/sc092540.pdf
PL/I
https://www.ibm.com/docs/en/SSUFAU_1.0.0/com.ibm.ent.pl1.zos.doc/pg.pdf
I don't know if IBM issued any equivalent publication for Assembler users.
All I ever saw was the Principles of Operation.
Quadibloc
2021-07-14 07:39:56 UTC
Permalink
Post by undefined Hancock-4
But the Programmers Guide provided valuable tips for coding and setup
for performance, efficiency, and maintainability. Unlike the language
reference, it was written in a more casual understandable style.
What I remembered of the FORTRAN Programmer's Guide is that it
gave technical details of the use of the compiler - things like calling
sequences - which were useful for certain advanced programming
tasks.

I had not noticed that it was an approachable guide to writing better
programs, as you describe these books.

John Savard
undefined Hancock-4
2021-07-20 19:07:23 UTC
Permalink
Post by Quadibloc
Post by undefined Hancock-4
But the Programmers Guide provided valuable tips for coding and setup
for performance, efficiency, and maintainability. Unlike the language
reference, it was written in a more casual understandable style.
What I remembered of the FORTRAN Programmer's Guide is that it
gave technical details of the use of the compiler - things like calling
sequences - which were useful for certain advanced programming
tasks.
I had not noticed that it was an approachable guide to writing better
programs, as you describe these books.
It had a lot of info on the compiling and programming environments,
but that was useful for all programmers, not just advanced ones.

The Guides on bitsavers certainly contain sections on writing better programs
as do the modern ones.
gah4
2021-07-28 05:07:40 UTC
Permalink
Post by Quadibloc
Post by undefined Hancock-4
But the Programmers Guide provided valuable tips for coding and setup
for performance, efficiency, and maintainability. Unlike the language
reference, it was written in a more casual understandable style.
What I remembered of the FORTRAN Programmer's Guide is that it
gave technical details of the use of the compiler - things like calling
sequences - which were useful for certain advanced programming
tasks.
I had not noticed that it was an approachable guide to writing better
programs, as you describe these books.
It seems that instead of the usual hash table, Fortran H uses six balanced
trees, one per name length. One suggestion in the PG is to distribute variable
names over lengths 1 to 6 about equally, for faster compilation.

Writing better programs, but not necessarily more readable.
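
(A sketch of that scheme in Python; dicts stand in for Fortran H's
balanced trees, and the names are invented:)

  from collections import defaultdict

  tables = defaultdict(dict)        # name length (1-6) -> sub-table

  def declare(name, info):
      assert 1 <= len(name) <= 6, "FORTRAN names were 1-6 characters"
      tables[len(name)][name] = info

  def lookup(name):
      return tables[len(name)].get(name)

  for n in ("I", "J", "X1", "DELTA", "RESULT"):
      declare(n, {})
  print({length: sorted(t) for length, t in tables.items()})

Names spread evenly over the six lengths keep each sub-table shallow,
which is why the PG's advice sped up compilation.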

John Levine
2021-07-11 02:23:28 UTC
Permalink
Post by Robin Vowels
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler. ...
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts. ...
PL/I makes it too easy to write bad programs, particularly if you don't know the folklore about what is
efficient and what is not.

One time I wrote a program that used an array of 12 bit fields and the code from PL/I F was stupendously
bad. As I recall, each access converted the value to decimal and back, and no, I didn't
do picture formatting or anything like that. I expect that if I'd made them 16 bit fields and just used
the low 12 bits it would have been OK, but I ran out of time.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Robin Vowels
2021-07-11 04:23:28 UTC
Permalink
Post by John Levine
Post by Robin Vowels
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler. ...
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts. ...
PL/I makes it too easy to write bad programs, particularly if you don't know the folklore about what is
efficient and what is not.
.
I do not believe that to be the case.
The PL/I Programmer's Guide contained/contains information about efficient
constructs.
.
The Jaliks example cited is one that was specifically covered in the
PL/I Programmer's Guide.
.
Post by John Levine
One time I wrote a program that used an array of 12 bit fields and the code from PL/I F was stupendously
bad. As I recall, each access converted the value to decimal and back,
.
The conversion to and from BIT strings is via binary, not decimal.
.
Post by John Levine
and no, I didn't
do picture formatting or anything like that. I expect that if I'd made them 16 bit fields and just used
the low 12 bits it would have been OK, but I ran out of time.
John Levine
2021-07-11 18:46:19 UTC
Permalink
Post by Robin Vowels
Post by John Levine
One time I wrote a program that used an array of 12 bit fields and the code from PL/I F was stupendously
bad. As I recall, each access converted the value to decimal and back,
.
The conversion to and from BIT strings is via binary, not decimal.
Yes, I know, which is why I was rather surprised to see those decimal instructions. I assume that nobody
used bit strings in lengths that weren't powers of 2 so the code was poorly debugged.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2021-07-11 19:46:28 UTC
Permalink
Post by John Levine
Post by Robin Vowels
Unfortunately, he omitted to provide the REORDER keyword
for the PROCEDURE statement in the case of IBM's PL/I compiler. ...
P. J. Jaliks compared COBOL and PL/I. His PL/I version ran slower by
2-3 times because of inappropriate variable declarations and
failure to disable fixed-point interrupts. ...
PL/I makes it too easy to write bad programs, particularly if you don't
know the folklore about what is
efficient and what is not.
One time I wrote a program that used an array of 12 bit fields and the
code from PL/I F was stupendously
bad. As I recall, each access converted the value to decimal and back, and no, I didn't
do picture formatting or anything like that. I expect that if I'd made
them 16 bit fields and just used
the low 12 bits it would have been OK, but I ran out of time.
Doesn’t seem like it should have converted to decimal anyway.

Developing Iron Spring PL/I, one of the first things I did was try to have
bit manipulations not be TOO outrageous, since the compiler itself does a
lot of bit twiddling. Still some stuff needs to be cleaned up there, but
most of the code is decent.
--
Pete
Bob Eager
2021-07-01 20:22:53 UTC
Permalink
Post by Peter Flass
Post by Thomas Koenig
Post by Bob Eager
One of my colleagues did his Ph.D. in the 1960s. He had also worked
for IBM. Part of his research was the construction of a general
purpose macro language and processor.
As a piss take of IBM, he called it ML/I. (Macro Language 1)
http://www.ml1.org.uk
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
Or are there others? Joke languages like Intercal need not apply :-)
IBM’s PL/I preprocessor is fairly general-purpose. The macros are
written in an interpreted subset of PL/I.
Indeed. He mentions that in one of the write-ups. But ML/I does rather
more and is highly portable.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Rich Alderson
2021-07-01 23:00:00 UTC
Permalink
Post by Thomas Koenig
Post by Bob Eager
One of my colleagues did his Ph.D. in the 1960s. He had also worked for
IBM. Part of his research was the construction of a general purpose macro
language and processor.
As a piss take of IBM, he called it ML/I. (Macro Language 1)
http://www.ml1.org.uk
Seems far more readable than m4 (which is the serious programming
language that I would consider most unreadable).
Or are there others? Joke languages like Intercal need not apply :-)
TECO macros, especially the MIT PDP-10 dialect of TECO in which the original
version of EMACS was written. Looks like line noise...

Old .sig:

Rich Alderson Last LOTS Tops-20 Systems Programmer, 1984-1991
Current maintainer, MIT TECO EMACS (v. 170)
last name @ XKL dot COM Customer Interface, XKL LLC
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Robin Vowels
2021-07-01 01:23:19 UTC
Permalink
Post by Rich Alderson
Post by Peter Flass
I wrote a bunch of PL/I programs, but never convinced anyone to do any
production stuff in it.
Prior to moving into systems programming, I was a programmer/analyst in the
Financial Systems group for the University of Chicago Computation Center. The
accounting system was a suite of COBOL programs purchased from a commercial
vendor which ran on SVS (= OS/VS2 v1) on an Amdahl 470.
Our standard was to make mods for localization in COBOL, but to write any
external utilities in PL/I using the optimizing compiler. That's what got me
in the door: I had been writing PL/1 for 10 years already when I got the job.
(NB: When I first started programming, the language was called "PL/1" in the
IBM manuals;
.
No, IBM always called it "PL/I", even from the first manuals for PL/I-F,
with one exception, I think for the AS/400.
.
Before the 360 compilers, it was transiently named FORTRAN V and NPL among others.
.
Post by Rich Alderson
by the time I started using it professionally, years before the
UChicago job, they had gone to the "PL/I" version of the name.)
Robin Vowels
2021-07-01 01:10:27 UTC
Permalink
Post by Peter Flass
Post by undefined Hancock-4
Post by undefined Hancock-4
The instruction set of the original System/3 is available on bitsavers.
Not sure how many people bothered to program that machine in assembler
since it was intended for smaller shops and easy to use. My impression
was that most S/3 sites used RPG II or even canned routines. But the IBM
literature said Fortran was available. How much it was used or how well
it ran I don't know. Sometimes IBM would offer something that in reality
didn't run very well on a given machine and got little use.
A lot of times it was to meet a customer (government) specification that
required, e.g., FORTRAN, even though the customer probably never used it.
Yes, govt specs mandated a lot of unnecessary stuff*.
Sometimes a customer, or perhaps a consultant, would get the 'bright idea' of
using a non-standard language to do some funky task. For example, Fortran in
an otherwise business environment (or COBOL in an eng/sci site).
We were a COBOL shop, but there was one funky stat program in FORTRAN to
analyze employee data or something. I think it was at least a tray of
cards. I don’t know who wrote it, or when, or even if it was ever run while
I was there, but it was the Director’s pet.
Post by undefined Hancock-4
I've seen lots of people try to run Fortran on their S/360 only to discover it
didn't have the universal instruction set and they had to find a machine that
did (often that turned out to be our site).
I’m surprised someone didn’t write FP emulation routines and distribute
them thru SHARE or something. I know this was later done for the Intel 386.
.
Yes. IBM did it for PL/I for OS/2, namely, emulating floating point for the 386
when it did not have the math coprocessor.
Post by Peter Flass
Post by undefined Hancock-4
Anyway, people would have these exotic funky programs that usually proved
to be far too abstract and poorly written to be practical and accomplish anything.
For whatever reason, high management would be enamored with these funky
proposals and insist on trying them out, even if their IT staff studied it and
advised against it.
I wrote a bunch of PL/I programs, but never convinced anyone to do any
production stuff in it.
Post by undefined Hancock-4
My guess is that the S/3 got Fortran for that reason in addition to govt specs.
undefined Hancock-4
2021-07-01 18:36:48 UTC
Permalink
Post by Peter Flass
We were a COBOL shop, but there was one funky stat program in FORTRAN to
analyze employee data or something. I think it was at least a tray of
cards. I don’t know who wrote it, or when, or even if it was ever run while
I was there, but it was the Director’s pet.
That was very common. I think almost every site had a situation like that. "Director's
pet" as you say. Frustrating for the rest of us.

(I used to wonder how a Director or Asst Director, who probably were crackerjack
programmers in their day, could latch on to some dingbat scheme.)
Charlie Gibbs
2021-07-02 23:28:41 UTC
Permalink
Post by undefined Hancock-4
Post by Peter Flass
We were a COBOL shop, but there was one funky stat program in FORTRAN
to analyze employee data or something. I think it was at least a tray
of cards. I don’t know who wrote it, or when, or even if it was ever
run while I was there, but it was the Director’s pet.
That was very common. I think almost every site had a situation like
that. "Director's pet" as you say. Frustrating for the rest of us.
(I used to wonder how a Director or Asst Director, who probably were
crackerjack programmers in their day, could latch on to some dingbat
scheme.)
Not all of them were crackerjack programmers. And hell hath no fury
like a director who sees what you've done to his precious spaghetti code.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
gareth evans
2021-07-03 09:32:15 UTC
Permalink
Post by Charlie Gibbs
Not all of them were crackerjack programmers. And hell hath no fury
like a director who sees what you've done to his precious spaghetti code.
Took a job once where the self-taught "software engineer" was leaving,
leaving behind a mixture of GOTOs and structured code, none of it indented
and all tucked into the LHS margin.

Typically, over 90 uncommented procedures per module.

When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.

My advice: if you have a self-taught boy wonder who wishes to move on,
then double or triple his salary to keep him, for that will be cheaper
for you in the long run!
Peter Flass
2021-07-04 01:40:36 UTC
Permalink
Post by gareth evans
Post by Charlie Gibbs
Not all of them were crackerjack programmers. And hell hath no fury
like a director who sees what you've done to his precious spaghetti code.
Took a job once where the self-taught "software engineer" was leaving,
leaving behind a mixture of GOTOs and structured code, none of it indented
and all tucked into the LHS margin.
Typically, over 90 procedures without comments per module.
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Post by gareth evans
My advice: if you have a self-taught boy wonder who wishes to move on,
then double or triple his salary to keep him, for that will be cheaper
for you in the long run!
We had a guy like that (still see his name on LinkedIn occasionally). He
claimed to be an assembler programmer, but I think all he knew about it was
how to spell it. He worked for months on a fairly simple project, and when
he left we scrapped it and rewrote it in a couple of days.
--
Pete
gareth evans
2021-07-04 08:13:29 UTC
Permalink
Post by Peter Flass
He worked for months on a fairly simple project, and when
he left we scrapped it and rewrote it in a couple of days.
Reminds me of one contract client where I ended up staying
for 4 years, off and on.

Person leaving had spent 6 weeks unsuccessfully trying to
implement a serial port driver on Windows using C. At the
interview I queried this and said that it was a matter of a few minutes
to achieve the same thing in Visual Basic, and I did that
at the interview!

Despite my background in electronics and computing, with a strong
interest in assembler and machine-orientation (qv :-) ) as opposed
to application-orientation, if a client had a project involving
disks, printer, serial port and keyboard, I'd always recommend
Visual Basic to them.

IMHO Microsoft went off the rails after VB6 by changing
VB to be .NET compliant, thereby destroying the ease of use.
undefined Hancock-4
2021-07-07 19:46:08 UTC
Permalink
Post by Peter Flass
Post by gareth evans
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Yes, that was typical procedure when faced with spaghetti.
Of course, if it was 6,000 lines of mush, it took a while and
wasn't easy, but all the more reason.

By the way, 'structured programming' was touted as the way to eliminate
the mess of too many crazy GO TOs. But structured programming could
be poorly written too, with all sorts of excessive crazy calls to subroutines.
Too many layers or overlays is just as bad as GO TOs.
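
(A contrived sketch of that failure mode -- PL/I for the sake of argument,
names invented, untested: every procedure is one line, so the call graph is
as tangled as any GO TO web, just with return jumps added.)

    MAZE: PROCEDURE OPTIONS(MAIN);
       DECLARE TOTAL FIXED BINARY(15) INIT(0);
       CALL STEPA;
       CALL STEPB;
       PUT LIST(TOTAL);               /* prints 2, three hops per addition */
       STEPA:  PROC; CALL STEPA1;       END STEPA;
       STEPB:  PROC; CALL STEPA1;       END STEPB;
       STEPA1: PROC; TOTAL = TOTAL + 1; END STEPA1;
    END MAZE;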

Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
gareth evans
2021-07-07 20:19:28 UTC
Permalink
Post by undefined Hancock-4
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
I think that is symptomatic of someone who might have a year's
experience, but consisting of only one week's real experience
then repeated a further 51 times.
Peter Flass
2021-07-08 22:01:10 UTC
Permalink
Post by gareth evans
Post by undefined Hancock-4
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
I think that is symptomatic of someone who might have a year's
experience, but consisting of only one week's real experience
then repeated a further 51 times.
My first FORTRAN stuff in college was like that. They were all (actual)
one-shots: spend a week writing, run once or twice to get output, and then
toss it, so the names didn’t matter. At my first programming job, everyone
emphasized using long(ish) meaningful variable names.

Of course FORTRAN used, what, six-character names, so it was hard to be too
meaningful.
--
Pete
J. Clarke
2021-07-07 21:06:15 UTC
Permalink
On Wed, 7 Jul 2021 12:46:08 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Post by Peter Flass
Post by gareth evans
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Yes, that was typical procedure when faced with spaghetti.
Of course, if it was 6,000 lines of mush, it took a while and
wasn't easy, but all the more reason.
By the way, 'structured programming' was touted as the way to eliminate
the mess of too many crazy GO TOs. But structured programming could
be poorly written too, with all sorts of excessive crazy calls to subroutines.
Too many layers or overlays is just as bad as GO TOs.
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
And management, not understanding the importance of such things,
insists that "outdated" docs be archived never to be seen again or
tossed, so you have to back into the damned things.
Charlie Gibbs
2021-07-08 05:15:43 UTC
Permalink
Post by undefined Hancock-4
Post by Peter Flass
Post by gareth evans
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Yes, that was typical procedure when faced with spaghetti.
Of course, if it was 6,000 lines of mush, it took a while and
wasn't easy, but all the more reason.
Make that 4,000 lines of mush; once I started to make sense of
what the code was doing, I'd typically throw out 30% of it.
(In extreme cases, 50%.)
Post by undefined Hancock-4
By the way, 'structured programming' was touted as the way to
eliminate the mess of too many crazy GO TOs. But structured
programming could be poorly written too, with all sorts of
excessive crazy calls to subroutines. Too many layers or
overlays is just as bad as GO TOs.
This is something the zealots never realized. They seemed to take
the number of subroutine calls as some sort of figure of merit.
Since a subroutine call is just a GOTO paired with a "come from",
a maze of calls to tiny (sometimes one-line!) subroutines is still
spaghetti code; you're just running double strands.

As I would point out, the undisciplined use of subroutine calls
is nearly as bad as the undisciplined use of GOTOs.

I heard of virtual memory systems thrashing themselves to death
because of the complete lack of locality of code - especially if
overlays were involved.
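
(To make the equivalence concrete, another small, untested PL/I sketch with
invented names: each CALL is a GOTO to the procedure plus a saved return
point, and the RETURN is the matching "come from".)

    PAIR: PROCEDURE OPTIONS(MAIN);
       DECLARE N FIXED BINARY(15) INIT(0);
       CALL BUMP;    /* GOTO BUMP, remembering this spot         */
       CALL BUMP;    /* same target, a different return point    */
       PUT LIST(N);  /* prints 2                                 */
       BUMP: PROC;
          N = N + 1;
          RETURN;    /* GOTO whichever CALL we "came from"       */
       END BUMP;
    END PAIR;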
Post by undefined Hancock-4
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
To be completely fair, it can be a bit hard when you're working with
an assembler that only supports 4-character labels. In my first job
I discovered that it was a bit of a standard to use labels like A100,
A110, etc. (The convention was to keep the labels in sequence.)
I quickly outgrew that - it was easy enough to come up with something
a bit more mnemonic. The breaking point, though, was when I had to
move a large block of code, which would have required renumbering
all those labels. I decided enough was enough.

Years later I came across COBOL programs (and textbooks) which
advocated putting such prefixes in front of labels - a practice
which I steadfastly avoided.
--
/~\ Charlie Gibbs | They don't understand Microsoft
\ / <***@kltpzyxm.invalid> | has stolen their car and parked
X I'm really at ac.dekanfrus | a taxi in their driveway.
/ \ if you read it the right way. | -- Mayayana
Peter Flass
2021-07-08 22:01:11 UTC
Permalink
Post by Charlie Gibbs
Post by undefined Hancock-4
Post by Peter Flass
Post by gareth evans
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Yes, that was typical procedure when faced with spaghetti.
Of course, if it was 6,000 lines of mush, it took a while and
wasn't easy, but all the more reason.
Make that 4,000 lines of mush; once I started to make sense of
what the code was doing, I'd typically throw out 30% of it.
(In extreme cases, 50%.)
Post by undefined Hancock-4
By the way, 'structured programming' was touted as the way to
eliminate the mess of too many crazy GO TOs. But structured
programming could be poorly written too, with all sorts of
excessive crazy calls to subroutines. Too many layers or
overlays is just as bad as GO TOs.
This is something the zealots never realized. They seemed to take
the number of subroutine calls as some sort of figure of merit.
Since a subroutine call is just a GOTO paired with a "come from",
a maze of calls to tiny (sometimes one-line!) subroutines is still
spaghetti code; you're just running double strands.
As I would point out, the undisciplined use of subroutine calls
is nearly as bad as the undisciplined use of GOTOs.
I heard of virtual memory systems thrashing themselves to death
because of the complete lack of locality of code - especially if
overlays were involved.
Post by undefined Hancock-4
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
To be completely fair, it can be a bit hard when you're working with
an assembler that only supports 4-character labels. In my first job
I discovered that it was a bit of a standard to use labels like A100,
A110, etc. (The convention was to keep the labels in sequence.)
I quickly outgrew that - it was easy enough to come up with something
a bit more mnemonic. The breaking point, though, was when I had to
move a large block of code, which would have required renumbering
all those labels. I decided enough was enough.
Years later I came across COBOL programs (and textbooks) which
advocated putting such prefixes in front of labels - a practice
which I steadfastly avoided.
Ah yes, the dreaded 010-READ-INPUT-FILE paragraphs and suchlike.
--
Pete
Dan Espen
2021-07-08 23:39:36 UTC
Permalink
Post by Peter Flass
Post by Charlie Gibbs
Post by undefined Hancock-4
Post by Peter Flass
Post by gareth evans
When a bug surfaced, it could take several hours to unravel the code,
resulting in the complaint by the non-computer-competent MD that the
person who left could fix any fault within a few minutes, but you
can't be much good if it takes you more than a day.
The first thing I’d do if I had to pick up code like that would be to
comment it, at least to mark the procedures, and indent it.
Yes, that was typical procedure when faced with spaghetti.
Of course, if it was 6,000 lines of mush, it took a while and
wasn't easy, but all the more reason.
Make that 4,000 lines of mush; once I started to make sense of
what the code was doing, I'd typically throw out 30% of it.
(In extreme cases, 50%.)
Post by undefined Hancock-4
By the way, 'structured programming' was touted as the way to
eliminate the mess of too many crazy GO TOs. But structured
programming could be poorly written too, with all sorts of
excessive crazy calls to subroutines. Too many layers or
overlays is just as bad as GO TOs.
This is something the zealots never realized. They seemed to take
the number of subroutine calls as some sort of figure of merit.
Since a subroutine call is just a GOTO paired with a "come from",
a maze of calls to tiny (sometimes one-line!) subroutines is still
spaghetti code; you're just running double strands.
As I would point out, the undisciplined use of subroutine calls
is nearly as bad as the undisciplined use of GOTOs.
I heard of virtual memory systems thrashing themselves to death
because of the complete lack of locality of code - especially if
overlays were involved.
Post by undefined Hancock-4
Back in the 1960s and 1970s there was a shortage of programmers.
Management, desperate to automate some function, made use of
unskilled or untalented people, leaving a mess for others to untangle.
Undecipherable field names were another problem. What does K3X mean?
K5X? K4Z?
To be completely fair, it can be a bit hard when you're working with
an assembler that only supports 4-character labels. In my first job
I discovered that it was a bit of a standard to use labels like A100,
A110, etc. (The convention was to keep the labels in sequence.)
I quickly outgrew that - it was easy enough to come up with something
a bit more mnemonic. The breaking point, though, was when I had to
move a large block of code, which would have required renumbering
all those labels. I decided enough was enough.
Years later I came across COBOL programs (and textbooks) which
advocated putting such prefixes in front of labels - a practice
which I steadfastly avoided.
Ah yes, the dreaded 010-READ-INPUT-FILE paragraphs and suchlike.
Great stuff.

I once decided to fix a large C program by changing all the little
function names into the "a100_initialize" pattern. Then, of course,
rearranging the functions into called order.

I expected the owning C programmer to object but he was happy with
the change. He couldn't navigate the code either.
--
Dan Espen
Louis Krupp
2021-06-30 23:58:33 UTC
Permalink
Post by Peter Flass
Post by undefined Hancock-4
Post by J. Clarke
On Wed, 16 Jun 2021 11:59:12 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Even the low-end models of IBM's System/360 proved too expensive for
small business, so in 1969 IBM introduced the budget priced System/3.
Notable was the tiny 96 column punched card.
According to the manual (on Bitsavers), the IBM System/3 did support a
Fortran compiler. However, I think the hardware did not have floating
point and was only oriented toward business processing. My _guess_ is
that the System/3 would run Fortran programs rather slowly and Fortran
or sci/eng work was rarely done.
Would anyone have any experience or knew of System/3 sites that used
Fortran? If so, how did it work out for them? The impression I got was
the vast majority of S/3 sites used RPG II developed for it.
I'm pretty sure the System/3 supported BASIC. While BASIC wasn't as
good as Fortran, it could handle some number crunching work. That was
certainly adequate for some users.
My guess is that extremely few customers bought a System/3 to do
sci/eng work--there were too many other better choices available at the
time. But it's certainly possible that some sites, while doing
primarily business work, might have a eng/sci application here and
there and may have run them, albeit slowly. Heck, they have the machine
on site already, so use it.
https://archive.org/details/Nations-Business-1970-01/page/n17/mode/2up
https://archive.org/details/Nations-Business-1971-01/page/n29/mode/2up
https://archive.org/details/Nations-Business-1971-02/page/n5/mode/2up
https://archive.org/details/Nations-Business-1971-03/page/n13/mode/2up
(I don't know about the later S/34, S/36, and S/38 running Fortran or
sci/eng applications. As more powerful machines, they may have had more
capability, so it may not have been an issue.)
I don't know if it was true for System/3, but at least some AS/400
sites just used packaged applications. I have a working AS/400
downstairs (or at least it was working 20 years ago--I haven't tried
to run it in a long time--something about lugging the terminal down a
flight of narrow stairs) and to my annoyance it did not have RPG or
BASIC or any other programming language installed.
While the AS/400 evolved from the S/3 midrange product line, it was a
very different machine. There were other machines in between like the S/36 series.
Note, unlike the original S/3 and the S/360 et al, which had a defined
architecture and instruction set (to this day), the AS/400 used something
vague called Licensed Internal Code. One model could vary from the next.
The instruction set of the original System/3 is available on bitsavers.
Not sure how many people bothered to program that machine in assembler
since it was intended for smaller shops and easy to use. My impression
was that most S/3 sites used RPG II or even canned routines. But the IBM
literature said Fortran was available. How much it was used or how well
it ran I don't know. Sometimes IBM would offer something that in reality
didn't run very well on a given machine and got little use.
A lot of times it was to meet a customer (government) specification that
required, e.g., FORTRAN, even though the customer probably never used it.
Once upon a time, Harris Corporation made minicomputers designed for
number-crunching. One day, their management decided to get into the data
processing business, so they got a company that wrote academic
administrative software to agree to sell their package only with a
Harris computer. The computer's instruction set wasn't designed for data
processing, and the Harris COBOL compiler generated library calls to do
basically everything -- moves, arithmetic, etc -- except for branches.
The resulting runtime performance was bad.

One of my favorite projects was writing a Harris assembler program to
read and print Burroughs printer backup tapes. As an offline printer,
the machine did well.

And it had the coolest lights on the operator's console.

Louis
Rich Alderson
2021-06-18 18:31:16 UTC
Permalink
Would anyone have any experience or knew of System/3 sites that used Fortran?
If so, how did it work out for them? The impression I got was the vast
majority of S/3 sites used RPG II developed for it.
RPG III. RPG II is the System/360 version.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Dan Espen
2021-06-18 20:00:36 UTC
Permalink
Post by Rich Alderson
Would anyone have any experience or knew of System/3 sites that used Fortran?
If so, how did it work out for them? The impression I got was the vast
majority of S/3 sites used RPG II developed for it.
RPG III. RPG II is the System/360 version.
Not the way I remember it, and Wikipedia agrees with me.
RPG II on S/3, S/34, S/36

RPG III on S/38, AS/400.

RPG IV for OS/400.

I had to rescue a failing RPG project on a Sys/34.
One of the first things we did was order the COBOL compiler.
I don't have any impression of the speed of the processor, just
that it was always fast enough for our application.
--
Dan Espen
undefined Hancock-4
2021-06-26 19:47:45 UTC
Permalink
Post by Dan Espen
Would anyone have any experience or knew of System/3 sites that used Fortran?
If so, how did it work out for them? The impression I got was the vast
majority of S/3 sites used RPG II developed for it.
RPG III. RPG II is the System/360 version.
Not the way I remember it, and Wikipedia agrees with me.
RPG II on S/3, S/34, S/36
RPG III on S/38, AS/400.
RPG IV for OS/400.
Yes.

Then they came out with a Visual RPG.

I don't know what is done these days on the i series machines or whatever has succeeded the AS/400. Indeed, IBM's website is very vague about their product lines, now talking about "solutions". Makes and models are very vague. (Cars are like that too, these days, Toyota offers a Corolla, but then they offer many trim lines that are confusing. Same with Camry).
Post by Dan Espen
I had to rescue a failing RPG project on a Sys/34.
One of the first things we did was order the COBOL compiler.
I don't have any impression of the speed of the processor, just
that it was always fast enough for our application.
I think the COBOL language on the AS/400 was nice. But I'm not sure how well it ran. We had COBOL on an AS/400, but it ran very slowly. It may have been that the machine was just underpowered for what we wanted to do on it; I didn't get into the details.

I was not a big fan of the AS/400 line. But I must admit its users loved it. There used to be a magazine, Midrange Computing, almost a cult following. And I must guess that setting up a new installation cold was probably easier with an AS/400 than with a Z series; probably a lot of stuff was automated, and far less configuration and system programming was needed. Of course, if the site got big enough, then performance may have become an issue--large sites do need some configuration to keep things orderly. (And there was an AS/400 Performance Manual).
Dan Espen
2021-06-26 20:40:23 UTC
Permalink
Post by undefined Hancock-4
Post by Dan Espen
Would anyone have any experience or knew of System/3 sites that used Fortran?
If so, how did it work out for them? The impression I got was the vast
majority of S/3 sites used RPG II developed for it.
RPG III. RPG II is the System/360 version.
Not the way I remember it, and Wikipedia agrees with me.
RPG II on S/3, S/34, S/36
RPG III on S/38, AS/400.
RPG IV for OS/400.
Yes.
Then they came out with a Visual RPG.
I don't know what is done these days on the i series machines or
whatever has succeeded the AS/400. Indeed, IBM's website is very
vague about their product lines, now talking about "solutions". Makes
and models are very vague. (Cars are like that too, these days,
Toyota offers a Corolla, but then they offer many trim lines that are
confusing. Same with Camry).
Post by Dan Espen
I had to rescue a failing RPG project on a Sys/34.
One of the first things we did was order the COBOL compiler.
I don't have any impression of the speed of the processor, just
that it was always fast enough for our application.
I think the COBOL language on the AS/400 was nice. But I'm not sure how well it ran. We had COBOL on an AS/400, but it ran very slowly. It may have been that the machine was just underpowered for what we wanted to do on it; I didn't get into the details.
I was not a big fan of the AS/400 line. But I must admit its users
loved it. There used to be a magazine, Midrange Computing, almost a
cult following. And I must guess that setting up a new installation
cold was probably easier with an AS/400 than with a Z series; probably a
lot of stuff was automated, and far less configuration and system
programming was needed. Of course, if the site got big enough, then
performance may have become an issue--large sites do need some
configuration to keep things orderly. (And there was an AS/400
Performance Manual).
This same company with the S/34 had a pretty large mainframe.
Every month the mainframe would do a manufacturing cost calculation.
This run was well known for taking forever to complete. Sometimes
a day or 2.

I implemented my own costing algorithms on the S/34 and we'd
complete the calculation in 15 or 20 minutes. I think I just had a
better algorithm but I never considered the machine slow.

I was not a fan of the direction the AS/400 (and S/38) went off in.
I liked the S/34 because of its simplicity. The AS/400 retained the
good parts but had too much stuff you'd only find on an AS/400.

I once ported a load of S/34 code to a Wang/VS system. Easy.
If I was starting with an AS/400 it might not have been so easy.

The COBOL compiler was a dream. Never once did I confront a dump.

We put S/34s in 4 plants and a central office.
The existing clerical staff was able to do everything the system needed for
day to day operation. Nothing like a mainframe.
--
Dan Espen
Grant Taylor
2021-06-26 21:20:04 UTC
Permalink
Post by Dan Espen
The existing clerical staff was able to do everything the system
needed for day to day operation. Nothing like a mainframe.
Yep. IMHO AS/400s a.k.a. i/Series were notorious for putting them in
offices, connecting a number of terminals, and mostly forgetting about them.

State Farm had an AS/400 in every single agent's office in the U.S.A.
from the mid '90s through at least 2010. (I don't know about after
that.) They would link back to mainframes in multi (many) state
regional offices via leased line and / or dial up connections. My
understanding is that the '400s were almost set them and forget them.
--
Grant. . . .
unix || die
Kerr-Mudd, John
2021-06-27 10:28:10 UTC
Permalink
On Sat, 26 Jun 2021 15:20:04 -0600
Post by Grant Taylor
Post by Dan Espen
The existing clerical staff was able to do everything the system
needed for day to day operation. Nothing like a mainframe.
Yep. IMHO AS/400s a.k.a. i/Series were notorious for putting them in
offices, connecting a number of terminals, and mostly forgetting about them.
State Farm had an AS/400 in every single agent's office in the U.S.A.
from the mid '90s through at least 2010. (I don't know about after
that.) They would link back to mainframes in multi (many) state
regional offices via leased line and / or dial up connections. My
understanding is that the '400s were almost set them and forget them.
Does anyone recall these computers?
https://en.wikipedia.org/wiki/IBM_8100
--
Bah, and indeed Humbug.
Anne & Lynn Wheeler
2021-06-27 23:40:28 UTC
Permalink
Post by Kerr-Mudd, John
Does anyone recall these computers?
https://en.wikipedia.org/wiki/IBM_8100
The 8100 used the communication group's greatly underpowered line of UC
processors ... also used in the 37x5 boxes (the science center had tried to
get them to use the significantly better Peachtree processor, developed for
the Series/1, in the 37x5 boxes). Later, at one point Evans asked my wife to
audit the 8100 ... a short time later, the 8100 was "decommitted".
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2021-06-29 00:28:46 UTC
Permalink
Post by Kerr-Mudd, John
On Sat, 26 Jun 2021 15:20:04 -0600
Post by Grant Taylor
Post by Dan Espen
The existing clerical staff was able to do everything the system
needed for day to day operation. Nothing like a mainframe.
Yep. IMHO AS/400s a.k.a. i/Series were notorious for putting them in
offices, connecting a number of terminals, and mostly forgetting about them.
State Farm had an AS/400 in every single agent's office in the U.S.A.
from the mid '90s through at least 2010. (I don't know about after
that.) They would link back to mainframes in multi (many) state
regional offices via leased line and / or dial up connections. My
understanding is that the '400s were almost set them and forget them.
Does anyone recall these computers?
https://en.wikipedia.org/wiki/IBM_8100
Vaguely recall reading about it, never worked on it. Sounded like a real
dog to me; I think you had to compile its programs on the mainframe.
--
Pete
Dennis Boone
2021-06-27 21:00:02 UTC
Permalink
Post by Grant Taylor
State Farm had an AS/400 in every single agent's office in the U.S.A.
from the mid '90s through at least 2010. (I don't know about after
that.) They would link back to mainframes in multi (many) state
regional offices via leased line and / or dial up connections. My
understanding is that the '400s were almost set them and forget them.
Last I was in my agent's office (year and a half? covid...), they were
still using 5250 emulation for at least some applications, but I don't
know where it connected.

De
Peter Flass
2021-06-29 00:28:44 UTC
Permalink
Post by undefined Hancock-4
Post by Dan Espen
Would anyone have any experience or knew of System/3 sites that used Fortran?
If so, how did it work out for them? The impression I got was the vast
majority of S/3 sites used RPG II developed for it.
RPG III. RPG II is the System/360 version.
Not the way I remember it, and Wikipedia agrees with me.
RPG II on S/3, S/34, S/36
RPG III on S/38, AS/400.
RPG IV for OS/400.
Yes.
Then they came out with a Visual RPG.
I don't know what is done these days on the i series machines or whatever
has succeeded the AS/400. Indeed, IBM's website is very vague about
their product lines, now talking about "solutions". Makes and models are
very vague. (Cars are like that too, these days, Toyota offers a
Corolla, but then they offer many trim lines that are confusing. Same with Camry).
This is an unfortunate trend that the remaining computer manufacturers
started some years ago. At one point I was looking for some documentation
from UNISYS, and it was like an archeological dig to find it.
--
Pete