Discussion:
Micros as number crunchers
Thomas Koenig
2020-04-18 12:59:47 UTC
[F'up]

Wow.

Seems like somebody actually took the time and effort to do
some assembly version of Linpack routines on a few micros (C64,
BBC Micro, plus a few others) and see how fast they are, both in
their native Basic dialects and hand-coded assembly which used
the native floating point format.

http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf

One thing that's also interesting from this is that the floating
point format of these machines was actually not bad - a 40 bit
word using a 32 bit mantissa actually gives much better roundoff
results than today's 32 bit single precision real variables.

Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
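As a rough illustration of the roundoff point (a minimal C sketch of my own, assuming the 40-bit format carries a 32-bit significand as described above, versus the 24 effective bits of IEEE single):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Spacing of representable numbers just above 1.0 is 2^(1-p) for a
       binary format with a p-bit significand; only the ratio matters here. */
    double ulp_ieee_single = ldexp(1.0, -23); /* 24 significand bits (hidden bit) */
    double ulp_micro_40bit = ldexp(1.0, -31); /* 32 significand bits, 40-bit word */

    printf("IEEE single spacing at 1.0:  %.3e\n", ulp_ieee_single);
    printf("40-bit micro spacing at 1.0: %.3e\n", ulp_micro_40bit);
    printf("ratio: %.0fx, about %.1f extra decimal digits\n",
           ulp_ieee_single / ulp_micro_40bit,
           log10(ulp_ieee_single / ulp_micro_40bit));
    return 0;
}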
Thomas Koenig
2020-04-18 13:02:17 UTC
Post by Thomas Koenig
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Correction: That number was for BASIC. For optimized assembly,
it was around 330 times slower.
Jon Elson
2020-04-22 21:48:25 UTC
Post by Thomas Koenig
Post by Thomas Koenig
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Correction: That number was for BASIC. For optimized assembly,
it was around 330 times slower.
A VAX 11/780 in about 1981 was about $200K, or $250K for a complete system.
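Taking the thread's numbers at face value, the flops-per-buck question is a one-liner; this little C sketch is mine, and the C64 price (roughly $600 at its 1982 launch) is my assumption, not a figure anyone quoted:

#include <stdio.h>

int main(void)
{
    /* Figures from the thread, except the C64 price, which is assumed. */
    double vax_price   = 200000.0; /* VAX 11/780, circa 1981          */
    double c64_price   = 600.0;    /* assumed C64 launch price        */
    double vax_speedup = 330.0;    /* VAX vs. C64 hand-coded assembly */

    /* Relative flops per dollar, C64 normalized to 1.0 */
    printf("VAX flops per dollar relative to the C64: %.2f\n",
           vax_speedup / (vax_price / c64_price));
    return 0;
}

On those assumptions the two machines come out close to even per dollar.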

Jon
J. Clarke
2020-04-18 14:07:51 UTC
On Sat, 18 Apr 2020 12:59:47 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
[F'up]
Wow.
Seems like somebody actually took the time and effort to do
some assembly version of Linpack routines on a few micros (C64,
BBC Micro, plus a few others) and see how fast they are, both in
their native Basic dialects and hand-coded assembly which used
the native floating point format.
http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf
One thing that's also interesting from this is that the floating
point format of these machines was actually not bad - a 40 bit
word using a 32 bit mantissa actually gives much better roundoff
results than today's 32 bit single precision real variables.
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
Quadibloc
2020-04-18 17:30:16 UTC
Post by J. Clarke
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
There _was_ the 486 SX which skipped the hardware floating-point.

John Savard
Gordon Henderson
2020-04-18 18:10:44 UTC
Post by Quadibloc
Post by J. Clarke
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
There _was_ the 486 SX which skipped the hardware floating-point.
SX -> Sux, DX -> Delux, as I recall..

-Gordon
J. Clarke
2020-04-18 18:51:02 UTC
On Sat, 18 Apr 2020 18:10:44 -0000 (UTC), Gordon Henderson
Post by Gordon Henderson
Post by Quadibloc
Post by J. Clarke
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
There _was_ the 486 SX which skipped the hardware floating-point.
SX -> Sux, DX -> Delux, as I recall..
IIRC that was an effort to make lemonade--they had a run that had a
defect in the floating point so they cut whatever they needed to to
turn it off and stamped them "SX". Then it turned out that there was
a market for them so they started making them without the floating
point--IIRC the major market was laptops where every little bit of
power reduction helped.
Thomas Koenig
2020-04-19 10:46:23 UTC
Post by J. Clarke
Post by Thomas Koenig
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
Sure.

A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
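A minimal C sketch of the kind of trouble meant here (my own example, not from any post): naive summation in single precision drifts badly while double precision holds up.

#include <stdio.h>

int main(void)
{
    /* Add 0.1 ten million times: the float total drifts visibly,
       the double total stays close to the exact 1000000. */
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (long i = 0; i < 10000000; i++) {
        fsum += 0.1f;
        dsum += 0.1;
    }
    printf("float sum:  %f\n", fsum);
    printf("double sum: %f\n", dsum);
    return 0;
}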

There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.

Why are eight-bit bytes so common today?
J. Clarke
2020-04-19 12:14:14 UTC
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Post by J. Clarke
Post by Thomas Koenig
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
Just a note but any decent machine today has 64-bit floating point,
and any Intel later than the 386 has 80-bit available.
Sure.
A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
Why are eight-bit bytes so common today?
The major reason is that that is what is expected. With regard to
floating point most processors today with hardware floating point
implement the IEEE 754 format, which defines 32, 64, and 128 bit
formats. Note that Intel's original floating point was 80 bit and
that is still available on their processors.
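A quick C check of that point; the output assumes a typical x86-64 gcc or clang build, where long double is the 80-bit x87 format padded out to 16 bytes:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Storage size and significand width of the three hardware formats. */
    printf("float:       %2zu bytes, %2d significand bits\n",
           sizeof(float), FLT_MANT_DIG);
    printf("double:      %2zu bytes, %2d significand bits\n",
           sizeof(double), DBL_MANT_DIG);
    printf("long double: %2zu bytes, %2d significand bits\n",
           sizeof(long double), LDBL_MANT_DIG); /* 64 bits: the 80-bit x87 format */
    return 0;
}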
Thomas Koenig
2020-04-19 13:04:33 UTC
Post by J. Clarke
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Why are eight-bit bytes so common today?
The major reason is that that is what is expected.
Ok, but why has it become ubiquitous?

In mainframe times, there were lots of different architectures.
36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
and the 32-bit, byte-oriented IBM /360.

Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
a mini).

One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
8080, Z80, 6502, 6800, 68000 etc...

Micros? Based on one-chip microprocessors or RISC designs, all of
which are 32-bit based, as far as I know.

So, we see a convergence towards 8-bit (or even powers of two)
over the years. What drove this?

Was it the Arpanet? Octet-based from the start, as far as I know.
J. Clarke
2020-04-19 14:21:26 UTC
On Sun, 19 Apr 2020 13:04:33 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Post by J. Clarke
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Why are eight-bit bytes so common today?
The major reason is that that is what is expected.
Ok, but why has it become ubiquitous?
In mainframe times, there were lots of different architectures.
36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
and the 32-bit, byte-oriented IBM /360.
Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
a mini).
One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
8080, Z80, 6502, 6800, 68000 etc...
Micros? Based on one-chip microprocessors or RISC designs, all of
which are 32-bit based, as far as I know.
So, we see a convergence towards 8-bit (or even powers of two)
over the years. What drove this?
Was it the Arpanet? Octet-based from the start, as far as I know.
You're looking for a technological reason when I think the real reason
is more related to marketing and sales and accidents of history.

IBM wanted something with an 8-bit bus for the PC because they had a
stock of 8-bit glue chips that they wanted to use up. And whatever
got on the IBM PC was going to be dominant.

ARM had a similar requirement--their first target was a second
processor for a 6502 machine for the BBC. How they became dominant
in cell phones is not clear to me but I suspect their marketing model
is a big part of it--they don't make chips, they sell designs that can
be adjusted to fit customer needs or embedded in system-on-a-chip
designs.

I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits. To be cost competitive, chip manufacturers needed a large
market to amortize the cost of the fab--if they weren't Intel they
were pretty well locked out of the desktop PC market so the next
target was engineering workstations, which typically ran Unix, so they
had to have an architecture for which Unix was easily ported.

Other micros just didn't succeed.
Douglas Miller
2020-04-19 14:50:28 UTC
Post by J. Clarke
...
I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits. To be cost competitive, chip manufacturers needed a large
market to amortize the cost of the fab--if they weren't Intel they
were pretty well locked out of the desktop PC market so the next
target was engineering workstations, which typically ran Unix, so they
had to have an architecture for which Unix was easily ported.
Other micros just didn't succeed.
I don't have the same perspective on history. 8-bit was well established long before the IBM PC. The industry was already coalescing on 8 bits in the early 70s. The reasons certain microprocessors "succeeded" and others did less well are varied, and not always based on "better technology".

Even mainframes with core memory (50s and 60s) were often (always?) using 8-bit-wide memory buses (although +1 for parity was usually necessary). Some just assigned a couple of bits to "punctuation" and so typically had data/address values that were a multiple of 6 bits.

I could be wrong, as I never worked on the architecture, but the PDP-11 may have had an 18-bit address width (originally 16, then 18 or 22 with MMU hardware), but it was still a byte (8-bit) oriented machine. 8, 16 and 32-bit data.
Niklas Karlsson
2020-04-20 11:09:15 UTC
Post by Douglas Miller
Post by J. Clarke
I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits.
...
Post by Douglas Miller
I could be wrong, as I never worked on the architecture, but the
PDP-11 may have had an 18-bit address width (originally 16, then 18 or
22 with MMU hardware), but it was still a byte (8-bit) oriented
machine. 8, 16 and 32-bit data.
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.

Niklas
--
If books were designed by Microsoft, the Anarchist's Cookbook would
explode when you read it.
-- Mark 'Kamikaze' Hughes, asr
Thomas Koenig
2020-04-20 11:15:36 UTC
Post by Niklas Karlsson
Post by Douglas Miller
Post by J. Clarke
I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits.
...
Post by Douglas Miller
I could be wrong, as I never worked on the architecture, but the
PDP-11 may have had an 18-bit address width (originally 16, then 18 or
22 with MMU hardware), but it was still a byte (8-bit) oriented
machine. 8, 16 and 32-bit data.
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.
... which is why octal plays such an important role in Unix and C,
instead of hexadecimal.
John Levine
2020-04-20 14:49:26 UTC
Post by Thomas Koenig
Post by Niklas Karlsson
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.
It hopped to the PDP-11 very early, wasn't any 18-bitisms I could see in 1975.
Post by Thomas Koenig
... which is why octal plays such an important role in Unix and C,
instead of hexadecimal.
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
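To make that concrete, here is a small C sketch of my own showing how the addressing-mode and register fields of a PDP-11 double-operand instruction each occupy exactly one octal digit, using MOV R2,R3 (010203 octal) as the example:

#include <stdio.h>

int main(void)
{
    unsigned int insn = 010203;                 /* MOV R2,R3  */

    unsigned int opcode   = (insn >> 12) & 017; /* 01 = MOV   */
    unsigned int src_mode = (insn >> 9)  & 07;  /* mode 0     */
    unsigned int src_reg  = (insn >> 6)  & 07;  /* register 2 */
    unsigned int dst_mode = (insn >> 3)  & 07;  /* mode 0     */
    unsigned int dst_reg  =  insn        & 07;  /* register 3 */

    printf("%06o: opcode %o, src mode %o reg %o, dst mode %o reg %o\n",
           insn, opcode, src_mode, src_reg, dst_mode, dst_reg);
    return 0;
}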
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Scott Lurndal
2020-04-20 16:01:30 UTC
Post by John Levine
Post by Thomas Koenig
Post by Niklas Karlsson
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.
It hopped to the PDP-11 very early, wasn't any 18-bitisms I could see in 1975.
Post by Thomas Koenig
... which is why octal plays such an important role in Unix and C,
instead of hexadecimal.
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
John Levine
2020-04-21 00:19:32 UTC
Post by Scott Lurndal
Post by John Levine
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
Perhaps but octal was extremely common in documentation and software
for binary computers until the eight bit byte-addressed IBM 360 came
along.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Dan Espen
2020-04-21 01:33:30 UTC
Post by John Levine
Post by Scott Lurndal
Post by John Levine
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
Perhaps but octal was extremely common in documentation and software
for binary computers until the eight bit byte-addressed IBM 360 came
along.
Google says:

The octal numbers are not as common as they used to be. However, Octal
is used when the number of bits in one word is a multiple of 3.

So, it doesn't look like octal would be any good at representing
character strings. You'd need 9 bit bytes.

Never liked octal, it's even worse than hexadecimal.

Dumps on a 1401 were really easy to read in comparison.
--
Dan Espen
Peter Flass
2020-04-21 20:19:27 UTC
Post by Dan Espen
Post by John Levine
Post by Scott Lurndal
Post by John Levine
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
Perhaps but octal was extremely common in documentation and software
for binary computers until the eight bit byte-addressed IBM 360 came
along.
The octal numbers are not as common as they used to be. However, Octal
is used when the number of bits in one word is a multiple of 3.
So, it doesn't look like octal would be any good at representing
character strings. You'd need 9 bit bytes.
Never liked octal, it's even worse than hexadecimal.
I especially hated it because representing 16- or 32-bit numbers always
left an odd high-order bit (or two). It was probably great for representing
18-bit numbers.
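A tiny C illustration of that complaint (my sketch, nothing from the thread): the top octal digit of a 16-bit or 32-bit value only covers one or two bits, while an 18-bit value splits into octal digits exactly.

#include <stdio.h>

int main(void)
{
    printf("16-bit all-ones: %o   (6 octal digits, top digit holds 1 bit)\n",
           0xFFFFu);
    printf("32-bit all-ones: %lo  (11 octal digits, top digit holds 2 bits)\n",
           0xFFFFFFFFul);
    printf("18-bit all-ones: %o   (exactly 6 octal digits of 3 bits each)\n",
           0777777u);
    return 0;
}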
Post by Dan Espen
Dumps on a 1401 were really easy to read in comparison.
I would imagine so.
--
Pete
Ray
2020-04-22 22:31:30 UTC
Post by Scott Lurndal
Post by John Levine
Post by Thomas Koenig
Post by Niklas Karlsson
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.
It hopped to the PDP-11 very early, wasn't any 18-bitisms I could see in 1975.
Post by Thomas Koenig
... which is why octal plays such an important role in Unix and C,
instead of hexadecimal.
Nope. DEC always used octal in their PDP-11 software and
documentation. Since it had 8 registers, three-bit octal digits in
the opcodes were handy.
I suspect that the PDP-11 choice of Octal was a function of the fact
that both PDP-10 and PDP-8 (and -5, et alia) had word bit-lengths congruent
to zero modulo three.
This has just reminded me of a funny thing from the 80's. After I had
spent a few months programming in octal on PDP 11/43's and 11/73's and
thrown all my 8's and 9's out of the window... I found I had a problem
helping my 2 boys with their homework because my decimal skills had
faded a little. Grin

RayH (the other one)
J. Clarke
2020-04-20 23:21:44 UTC
Post by Niklas Karlsson
Post by Douglas Miller
Post by J. Clarke
I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits.
...
Post by Douglas Miller
I could be wrong, as I never worked on the architecture, but the
PDP-11 may have had an 18-bit address width (originally 16, then 18 or
22 with MMU hardware), but it was still a byte (8-bit) oriented
machine. 8, 16 and 32-bit data.
The PDP-11 was indeed byte oriented, but Unix originally started on the
18-bit PDP-7.
It started as hand-coded assembler and was hardly portable. Unix
didn't really become popular until after the C rewrite that made it
portable. And that wasn't on the PDP-7.
Ted Nolan <tednolan>
2020-04-19 17:45:49 UTC
Post by J. Clarke
On Sun, 19 Apr 2020 13:04:33 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Post by J. Clarke
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Why are eight-bit bytes so common today?
The major reason is that that is what is expected.
Ok, but why has it become ubiquitous?
In mainframe times, there were lots of different architectures.
36-bit IBM scientific computers, 36-bit PDP-10, 36-bit Univac (I
just missed that at University), 60-bit CDCs, 48 bit Burroughs, ...
and the 32-bit, byte-oriented IBM /360.
Minicomputers? 18-bit PDP-1, 12-bit PDP-8, 16-bit PDP-11, 32-bit
VAX and 32-bit Eclipse, and 16-bit System/32 (if you can call that
a mini).
One-chip microprocessors? Starting with the 4-bit 4004, 8 bit 8008,
8080, Z80, 6502, 6800, 68000 etc...
Micros? Based on one-chip microprocessors or RISC designs, all of
which are 32-bit based, as far as I know.
So, we see a convergence towards 8-bit (or even powers of two)
over the years. What drove this?
Was it the Arpanet? Octet-based from the start, as far as I know.
You're looking for a technological reason when I think the real reason
is more related to marketing and sales and accidents of history.
IBM wanted something with an 8-bit bus for the PC because they had a
stock of 8-bit glue chips that they wanted to use up. And whatever
got on the IBM PC was going to be dominant.
ARM had a similar requirement--their first target was a second
processor for a 6502 machine for the BBC. How they became dominant
in cell phones is not clear to me but I suspect their marketing model
is a big part of it--they don't make chips, they sell designs that can
be adjusted to fit customer needs or embedded in system-on-a-chip
designs.
I suspect Unix had something to do with it--while Unix was originally
developed on an 18 bit architecture, by the time it had been revised
into a portable form it was pretty much locked into multiples of 8
bits. To be cost competitive, chip manufacturers needed a large
market to amortize the cost of the fab--if they weren't Intel they
were pretty well locked out of the desktop PC market so the next
target was engineering workstations, which typically ran Unix, so they
had to have an architecture for which Unix was easily ported.
Other micros just didn't succeed.
I recall running Unix on the BBN C70, a machine with 10 bit bytes.
I even got BSD vi to compile on and run on it, which surprised me a little.
--
columbiaclosings.com
What's not in Columbia anymore..
Quadibloc
2020-04-21 00:01:59 UTC
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.

My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.

One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.

Therefore, since it was a binary machine, it had a word size of 64 bits, so that
bit addressing would work easily: the last six bits pointed to the bit within a
word. While the System/360 didn't have that feature, they may have thought they
might need to add it later.

When designing the System/360, another thought was its use for business
computations. The IBM 1401 stored numbers as printable character strings. This
wasted two bits out of every six, since you only need four bits to represent a
decimal digit.

The IBM 360 could handle both binary numbers and packed decimal numbers, where
two decimal digits, each four bits long, were in every 8-bit byte. When you
*unpack* a decimal number, you get its printable character form as a string of
EBCDIC digits.

So this meant it could do decimal arithmetic and instead of changing 33% waste
to 50% waste, it eliminated the waste entirely. (Well, not _entirely_, as ten
possible digits don't use all sixteen possibilities of four bits. IBM got around
to dealing with that, through Chen-Ho encoding, and its later variant Densely
Packed Decimal, DPD, but that is another story.)
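For readers who never met packed decimal, a small C sketch of the layout described above, two BCD digits per byte with the sign in the rightmost nibble (0xC for plus, 0xD for minus); this is my simplification, not IBM code:

#include <stdio.h>
#include <string.h>

/* Pack a decimal string such as "-1234" two digits to a byte,
   with the sign in the rightmost nibble (0xC = plus, 0xD = minus). */
static int pack_decimal(const char *s, unsigned char *out, int outlen)
{
    unsigned char sign = 0xC;
    if (*s == '-') { sign = 0xD; s++; } else if (*s == '+') s++;

    int ndigits = (int)strlen(s);
    int bytes   = (ndigits + 2) / 2;   /* digits plus sign nibble, rounded up */
    if (bytes > outlen) return -1;

    memset(out, 0, (size_t)bytes);
    int nib = bytes * 2 - 1;           /* rightmost nibble */
    out[nib / 2] |= sign;
    nib--;
    for (int i = ndigits - 1; i >= 0; i--, nib--) {
        unsigned char d = (unsigned char)(s[i] - '0');
        out[nib / 2] |= (nib % 2) ? d : (unsigned char)(d << 4);
    }
    return bytes;
}

int main(void)
{
    unsigned char buf[8];
    int n = pack_decimal("-1234", buf, (int)sizeof buf);
    for (int i = 0; i < n; i++) printf("%02X ", buf[i]);
    printf("\n");                      /* prints: 01 23 4D */
    return 0;
}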

That was the rationale used for the 8-bit byte at the time the 360 was being
designed. Perhaps because of the STRETCH and bit addressing, a machine with a
48-bit word that could use _both_ 6-bit characters and 8-bit characters was not
considered.

John Savard
Peter Flass
2020-04-21 00:45:06 UTC
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
--
Pete
Peter Flass
2020-04-21 00:54:01 UTC
Post by Peter Flass
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
OTOH it’s nice to have instructions to manipulate bits without masking and
shifting. The PDP-10 had good instructions, but, AFAIK, they were limited
to a single word. The Altos had BITBLT (IIRC). It’s not simple to move
odd-length bit strings where the source and destination begin on different
bits in a word or byte.
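For comparison, here is what the masking-and-shifting version looks like in C; a sketch of my own, restricted to fields that fit in one 64-bit word, so it dodges exactly the cross-word case described above:

#include <stdint.h>
#include <stdio.h>

/* Copy an n-bit field (n < 64) from bit position src_pos of src
   into bit position dst_pos of *dst. */
static void bitfield_move(uint64_t *dst, int dst_pos,
                          uint64_t src, int src_pos, int n)
{
    uint64_t mask  = (1ULL << n) - 1;
    uint64_t field = (src >> src_pos) & mask;

    *dst &= ~(mask << dst_pos);   /* clear the destination field */
    *dst |= field << dst_pos;     /* drop the extracted bits in  */
}

int main(void)
{
    uint64_t dst = 0;
    bitfield_move(&dst, 5, 0xABCDULL, 4, 7);    /* 7 bits from bit 4 to bit 5 */
    printf("%#llx\n", (unsigned long long)dst); /* prints 0x780 */
    return 0;
}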
--
Pete
Scott Lurndal
2020-04-21 14:43:40 UTC
Post by Peter Flass
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
OTOH it’s nice to have instructions to manipulate bits without masking and
shifting. The PDP-10 had good instructions, but, AFAIK, they were limited
to a single word. The Altos had BITBLT (IIRC). It's not simple to move
odd-length bit strings where the source and destination begin on different
bits in a word or byte.
ARMv8 has BFI (Bit Field Insert) and BFM (Bit Field Move) instructions.
Charlie Gibbs
2020-04-21 15:58:58 UTC
Post by Scott Lurndal
Post by Peter Flass
Post by Peter Flass
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the
stage for the PDP-11.
One thing I didn't mention was that before the 360, IBM made a
high-performance scientific computer called the STRETCH. One unusual
feature it had was that some instructions actually addressed memory
by the *bit* rather than by the word or the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
OTOH it’s nice to have instructions to manipulate bits without masking and
shifting. The PDP-10 had good instructions, but, AFAIK, they were limited
to a single word. The Altos had BITBLT (IIRC). It's not simple to move
odd-length bit strings where the source and destination begin on different
bits in a word or byte.
ARMv8 has BFI (Bit Field Insert) and BFM (Bit Field Move) instructions.
One of my favourite mnemonics is the 68020's BFFFO
(Bit Field Find First One).
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Peter Flass
2020-04-21 20:19:29 UTC
Post by Scott Lurndal
Post by Peter Flass
Post by Peter Flass
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
OTOH it’s nice to have instructions to manipulate bits without masking and
shifting. The PDP-10 had good instructions, but, AFAIK, they were limited
to a single word. The Altos had BITBLT (IIRC). It's not simple to move
odd-length bit strings where the source and destination begin on different
bits in a word or byte.
ARMv8 has BFI (Bit Field Insert) and BFM (Bit Field Move) instructions.
Sounds good, I’ll have to check them out.
--
Pete
Questor
2020-04-22 17:39:08 UTC
Post by Peter Flass
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
The iAPX432 did this too. IMHO it adds a lot of complexity for little or no
benefit.
OTOH it's nice to have instructions to manipulate bits without masking and
shifting. The PDP-10 had good instructions, but, AFAIK, they were limited
to a single word.
Yes, and that was never a problem. I never saw an application with a stream of
odd length bytes such that there was a lot of wasted space in each word. Text
was mostly seven-bit ASCII (one bit left over in each word) or occasionally
SIXBIT. Otherwise you were taking and placing fields from/to records. The
PDP-10 was very much a word-oriented machine, and consequently aligning things
to word boundaries is very advantageous. Fields bigger than a single word were
usually just multiple words.
The Altos had BITBLT (IIRC). It's not simple to move
odd-length bit strings where the source and destination begin on different
bits in a word or byte.
Dan Espen
2020-04-21 01:17:54 UTC
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the
stage for the PDP-11.
One thing I didn't mention was that before the 360, IBM made a
high-performance scientific computer called the STRETCH. One unusual
feature it had was that some instructions actually addressed memory by
the *bit* rather than by the word or the byte.
Therefore, since it was a binary machine, it had a word size of 64
bits, so that bit addressing would work easily: the last six bits
pointed to the bit within a word. While the System/360 didn't have
that feature, they may have thought they might need to add it later.
S/360 was pretty good at bit access. You did bit shifting in registers
and sometimes used EX to give a bit count to the shift instruction.
There were lots of bit test/set instructions.

I'm not reading POPs right now but I don't think IBM has added any
storage to storage bit shifting stuff since S/360 days.
Post by Quadibloc
When designing the System/360, another thought was its use for
business computations. The IBM 1401 stored numbers as printable
character strings. This wasted two bits out of every six, since you
only need four bits to represent a decimal digit.
BCD uses 4 bits for digits as it should. I was going to say BCD wasted
a bit for the sign (it uses 2 bits), but I guess we needed plus, minus,
and unsigned.

So, BCD was pretty good at storing and using numbers.
Post by Quadibloc
The IBM 360 could handle both binary numbers and packed decimal
numbers, where two decimal digits, each four bits long, were in every
8-bit byte. When you *unpack* a decimal number, you get its printable
character form as a string of EBCDIC digits.
So this meant it could do decimal arithmetic and instead of changing
33% waste to 50% waste, it eliminated the waste entirely. (Well, not
_entirely_, as ten possible digits don't use all sixteen possibilities
of four bits.
Other waste was the 4 bit sign and the possible unneeded extra 4 bits
for the required odd number of digits.

Still, IBM's packed decimal was highly efficient for its intended use.
Post by Quadibloc
IBM got around to dealing with that, through Chen-Ho encoding, and its
later variant Densely Packed Decimal, DPD, but that is another story.)
Never heard of DPD but looked it up. Never heard of any IBM system
using it.
Post by Quadibloc
That was the rationale used for the 8-bit byte at the time the 360 was
being designed. Perhaps because of the STRETCH and bit addressing, a
machine with a 48-bit word that could use _both_ 6-bit characters and
8-bit characters was not considered.
I worked as a programmer though the IBM 14xx to S/360 era.

I remember lower case support was a big factor in the 6 to 8 bit change.
So was IBMs aborted attempt to support ASCII which was already at 7
bits. So, some increase was already built into S/360.

IBM's lower case support was slow in coming. I do remember places having
129 keypunches with lower case support. The printers could do lower
case but lower case trains were uncommon.

The first IBM CRT, the 2260, was upper case only. So were the first
3270s. I did a mainframe project in 70-71 where we had to use Bunker
Ramo CRTs to get the required lower case support.

I remember a mainframe C/370 project around 1995. When we first started
looking at printed core dumps we couldn't see any lower case in the
character dumps. We had to change an option somewhere in the guts of
MVS to tell it that lower case wasn't unprintable. As we shipped
this stuff to customers we soon learned to ask our customers to make the
same change. So as late as 1995 some number of sites didn't do much
with lower case.

Then there is the infamous a/A key on real 3270s. Turn this to "A" and
any lower case in your data looks like upper case. Why would you want
to see what your data actually looked like.
--
Dan Espen
Quadibloc
2020-04-21 05:34:06 UTC
Post by Dan Espen
Post by Quadibloc
IBM got around to dealing with that, through Chen-Ho encoding, and its
later variant Densely Packed Decimal, DPD, but that is another story.)
Never heard of DPD but looked it up. Never heard of any IBM system
using it.
The current z/Architecture systems from IBM support Decimal Floating Point, and in
IBM's format (as opposed to Intel's) DPD is used.

John Savard
Dan Espen
2020-04-21 12:43:41 UTC
Post by Quadibloc
Post by Dan Espen
Post by Quadibloc
IBM got around to dealing with that, through Chen-Ho encoding, and its
later variant Densely Packed Decimal, DPD, but that is another story.)
Never heard of DPD but looked it up. Never heard of any IBM system
using it.
The current z/Architecture systems from IBM support Decimal Floating Point, and in
IBM's format (as opposed to Intel's) DPD is used.
This article describes it as a compression format:

https://en.wikipedia.org/wiki/Densely_packed_decimal

I knew about Decimal Float, just wasn't aware that DPD was used.
A little more searching leads me here:

https://en.wikipedia.org/wiki/Decimal_floating_point

So there I see a reference to DPD.
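The "densely packed" part is easy to quantify; a quick C sketch of the arithmetic (my numbers, taken from the definitions: three BCD digits occupy 12 bits, while DPD fits three digits into 10):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double needed = log2(1000.0); /* information content of 3 decimal digits */

    printf("3 digits carry %.2f bits of information\n", needed);
    printf("BCD stores them in 12 bits: %.1f%% efficient\n", 100.0 * needed / 12.0);
    printf("DPD stores them in 10 bits: %.1f%% efficient\n", 100.0 * needed / 10.0);
    return 0;
}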

A while back I extended a debugger so that after a failure
the debugger would describe the source and destination fields in their
native format, so packed numbers were displayed as numbers if they were
valid. Similar for binary. When I got to floats,
I looked around and didn't see anything for copying.
Given that I've never used floats, I just ignored the
issue. Seeing this, I'm glad I did.
--
Dan Espen
r***@gmail.com
2020-04-22 02:32:02 UTC
Post by Dan Espen
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the
stage for the PDP-11.
One thing I didn't mention was that before the 360, IBM made a
high-performance scientific computer called the STRETCH. One unusual
feature it had was that some instructions actually addressed memory by
the *bit* rather than by the word or the byte.
Therefore, since it was a binary machine, it had a word size of 64
bits, so that bit addressing would work easily: the last six bits
pointed to the bit within a word. While the System/360 didn't have
that feature, they may have thought they might need to add it later.
S/360 was pretty good at bit access. You did bit shifting in registers
and sometimes used EX to give a bit count to the shift instruction.
There were lots of bit test/set instructions.
I'm not reading POPs right now but I don't think IBM has added any
storage to storage bit shifting stuff since S/360 days.
Post by Quadibloc
When designing the System/360, another thought was its use for
business computations. The IBM 1401 stored numbers as printable
character strings. This wasted two bits out of every six, since you
only need four bits to represent a decimal digit.
BCD uses 4 bits for digits as it should. I was going to say BCD wasted
a bit for the sign (it uses 2 bits), but I guess we needed plus, minus,
and unsigned.
So, BCD was pretty good at storing and using numbers.
Post by Quadibloc
The IBM 360 could handle both binary numbers and packed decimal
numbers, where two decimal digits, each four bits long, were in every
8-bit byte. When you *unpack* a decimal number, you get its printable
character form as a string of EBCDIC digits.
So this meant it could do decimal arithmetic and instead of changing
33% waste to 50% waste, it eliminated the waste entirely. (Well, not
_entirely_, as ten possible digits don't use all sixteen possibilities
of four bits.
Other waste was the 4 bit sign and the possible unneeded extra 4 bits
for the required odd number of digits.
Still, IBM's packed decimal was highly efficient for its intended use.
Post by Quadibloc
IBM got around to dealing with that, through Chen-Ho encoding, and its
later variant Densely Packed Decimal, DPD, but that is another story.)
Never heard of DPD but looked it up. Never heard of any IBM system
using it.
Post by Quadibloc
That was the rationale used for the 8-bit byte at the time the 360 was
being designed. Perhaps because of the STRETCH and bit addressing, a
machine with a 48-bit word that could use _both_ 6-bit characters and
8-bit characters was not considered.
I worked as a programmer though the IBM 14xx to S/360 era.
I remember lower case support was a big factor in the 6 to 8 bit change.
So was IBM's aborted attempt to support ASCII, which was already at 7
bits. So, some increase was already built into S/360.
IBM's lower case support was slow in coming. I do remember places having
129 keypunches with lower case support. The printers could do lower
case but lower case trains were uncommon.
Sites could use any of the available print trains.
Upper and lower case was just one of them, and our site had this chain too.
Post by Dan Espen
The first IBM CRT, the 2260, was upper case only. So were the first
3270s. I did a mainframe project in 70-71 where we had to use Bunker
Ramo CRTs to get the required lower case support.
I remember a mainframe C/370 project around 1995. When we first started
looking at printed core dumps we couldn't see any lower case in the
character dumps. We had to change an option somewhere in the guts of
MVS to tell it that lower case wasn't unprintable. As we shipped
this stuff to customers we soon learned to ask our customers to make the
same change. So as late as 1995 some number of sites didn't do much
with lower case.
Then there is the infamous a/A key on real 3270s. Turn this to "A" and
any lower case in your data looks like upper case. Why would you want
to see what your data actually looked like.
Charlie Gibbs
2020-04-22 05:25:25 UTC
Post by r***@gmail.com
Post by Dan Espen
IBMs lower case support was slow in coming. I do remember places having
129 keypunches with lower case support. The printers could do lower
case but lower case trains were uncommon.
Sites could use any of the available print trains.
Upper and Lower case was just one of them, and our site had this chain too.
When you mounted the TN train to get lower case, your printing speed
went way down, though. Most shops rightfully resented the loss of
productivity from leaving it in all the time.

I worked in Univac shops, but the situation was similar. I don't know
of anyone who used a lower-case print band (if such a thing was even
available, I'd have to dig out the manuals), but one shop wanted to
switch between 48- and 63-character bands to get that extra little
bit of speed. Fortunately, we managed to convince them that the
slight time saving from printing the reports that would look OK with
a 48-character band would be more than offset by the time required
to switch bands when they wanted reports that needed the full
63-character set - even if the operator wasn't out for coffee
when the band change request came up.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Richard Thiebaud
2020-04-21 02:50:10 UTC
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
Therefore, since it was a binary machine, it had a word size of 64 bits, so that
bit addressing would work easily: the last six bits pointed to the bit within a
word. While the System/360 didn't have that feature, they may have thought they
might need to add it later.
When designing the System/360, another thought was its use for business
computations. The IBM 1401 stored numbers as printable character strings. This
wasted two bits out of every six, since you only need four bits to represent a
decimal digit.
The IBM 360 could handle both binary numbers and packed decimal numbers, where
two decimal digits, each four bits long, were in every 8-bit byte. When you
*unpack* a decimal number, you get its printable character form as a string of
EBCDIC digits.
So this meant it could do decimal arithmetic and instead of changing 33% waste
to 50% waste, it eliminated the waste entirely. (Well, not _entirely_, as ten
possible digits don't use all sixteen possibilities of four bits. IBM got around
to dealing with that, through Chen-Ho encoding, and its later variant Densely
Packed Decimal, DPD, but that is another story.)
That was the rationale used for the 8-bit byte at the time the 360 was being
designed. Perhaps because of the STRETCH and bit addressing, a machine with a
48-bit word that could use _both_ 6-bit characters and 8-bit characters was not
considered.
John Savard
The Unisys 2200, with a 36-bit word, used both 6-bit and 9-bit characters.
Scott Lurndal
2020-04-21 14:44:55 UTC
Post by Richard Thiebaud
Post by Quadibloc
Post by Thomas Koenig
Was it the Arpanet? Octet-based from the start, as far as I know.
The octet became ubiquitous long before the Arpanet.
My previous post recounts some of the history - the IBM 360 set the stage for
the PDP-11.
One thing I didn't mention was that before the 360, IBM made a high-performance
scientific computer called the STRETCH. One unusual feature it had was that some
instructions actually addressed memory by the *bit* rather than by the word or
the byte.
Therefore, since it was a binary machine, it had a word size of 64 bits, so that
bit addressing would work easily: the last six bits pointed to the bit within a
word. While the System/360 didn't have that feature, they may have thought they
might need to add it later.
When designing the System/360, another thought was its use for business
computations. The IBM 1401 stored numbers as printable character strings. This
wasted two bits out of every six, since you only need four bits to represent a
decimal digit.
The IBM 360 could handle both binary numbers and packed decimal numbers, where
two decimal digits, each four bits long, were in every 8-bit byte. When you
*unpack* a decimal number, you get its printable character form as a string of
EBCDIC digits.
So this meant it could do decimal arithmetic and instead of changing 33% waste
to 50% waste, it eliminated the waste entirely. (Well, not _entirely_, as ten
possible digits don't use all sixteen possibilities of four bits. IBM got around
to dealing with that, through Chen-Ho encoding, and its later variant Densely
Packed Decimal, DPD, but that is another story.)
That was the rationale used for the 8-bit byte at the time the 360 was being
designed. Perhaps because of the STRETCH and bit addressing, a machine with a
48-bit word that could use _both_ 6-bit characters and 8-bit characters was not
considered.
John Savard
The Unisys 2200, with a 36-bit word, used both 6-bit and 9-bit characters.
And the Unisys A-Series (clearpath libra), with a 48-bit word, uses both 6 and 8 bit
characters.
John Levine
2020-04-20 01:36:53 UTC
Post by Thomas Koenig
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
The 360 design totally botched the floating point, and lost close to a
full digit on each operation compared to a well designed 32 bit format
like the later IEEE one. It's not a great comparison.

For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
r***@gmail.com
2020-04-20 04:51:18 UTC
Post by John Levine
Post by Thomas Koenig
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
The 360 design totally botched the floating point, and lost close to a
full digit on each operation
The initial release could lose significance when the
mantissa of one operand was shifted down.
This was rectified by the hardware upgrade of the
guard digit, which retained the final digit that was
shifted out during pre-normalisation. If it was then
necessary to shift up the mantissa during post-normalisation,
the guard digit was shifted back into the register.

The implementation (hex floating-point) was a means to
obtain good execution times when pre- and post-normalising,
while maintaining at least 21 bits of mantissa.

The only blunder was the way HER (halve floating-point) was
implemented. It merely shifted the mantissa down by 1 place.
It failed to post-normalise, so that if it happened that the
most-significant digit of the mantissa was 1, you were left
with only 20 bits of precision instead of 21.

The advantage of HER was that it took a fraction of the time
that the Divide instruction took (to divide by 2).

This flaw was not corrected until the S/370.
Post by John Levine
compared to a well designed 32 bit format
like the later IEEE one. It's not a great comparison.
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
Well, CDC was not going to complicate things by having a 30-bit float
as well as the 60-bit float. They were running out of spare op-codes.
Giving effectively double precision for all float operations was
a reasonable design choice.
John Levine
2020-04-20 14:54:37 UTC
Post by r***@gmail.com
Post by John Levine
The 360 design totally botched the floating point, and lost close to a
full digit on each operation
The initial release could lose significance when the
mantissa of one operand was shifted down.
This was rectified by the hardware upgrade of the
guard digit, which retained the final digit that was
shifted out during pre-normalisation. If it was then
necessary to shift up the mantissa during post-normalisation,
the guard digit was shifted back into the register.
The implementation (hex floating-point) was a means to
obtain good execution times when pre- and post-normalising,
while maintaining at least 21 bits of mantissa.
The only blunder was the way HER (halve floating-point) was
implemented. It merely shifted the mantissa down by 1 place.
It failed to post-normalise, so that if it happened that the
most-significant digit of the mantissa was 1, you were left
with only 20 bits of precision instead of 21.
No, the blunder was using hex. They botched the analysis of fraction
distribution and assumed it was linear rather than logarithmic, so they
lost an average of two bits of precision per operation and only got
one back from the smaller exponent. Also, they truncated rather than
rounded, which lost another bit. And finally, with binary floating
point one can do the hidden bit trick and not store the high bit of
the fraction which is always 1, but not in hex.
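The hex-normalization loss is easy to tabulate; a small C sketch of my own, assuming the S/360 short format's 24-bit fraction: a normalized fraction's leading hex digit can carry up to three leading zero bits, so effective precision wobbles between 21 and 24 bits, against a constant 24 for an IEEE-style binary format with a hidden bit.

#include <stdio.h>

int main(void)
{
    /* S/360 short float: 24-bit fraction, normalized so the leading hex
       digit is nonzero.  Count the significant bits for each leading digit. */
    for (unsigned d = 1; d <= 15; d++) {
        int lead_zeros = (d < 2) ? 3 : (d < 4) ? 2 : (d < 8) ? 1 : 0;
        printf("leading hex digit %2u: %2d significant bits\n",
               d, 24 - lead_zeros);
    }
    return 0;
}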
Post by r***@gmail.com
The advantage of HER was that it took a fraction of the time
that the Divide instruction took (to divide by 2).
Hex floating point certainly got fast wrong results.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2020-04-21 00:10:39 UTC
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.

If you look at pocket calculators, and old books of mathematical tables, a _lot_
of scientific work was done to 10 digit precision. And before the 60-bit CDC
6600, CDC made a number of popular advanced scientific computers with a 48-bit
word length, which is about enough for a 10 digit number.

This is what led me to think that it *would* be good to have computers designed
around the 6-bit character.

If a computer could have...

36-bit floating point
- a little longer than 32 bits, longer enough that, historically, it was
adequate for many scientific computations

48-bit floating point
- ten digit precision is what scientific calculators and mathematical tables
used a lot, and so this is probably the ideal precision for most scientific
computing

60-bit floating point
- but sometimes you need double precision. However, given that 64-bit double
precision is nearly always more than is needed, chopping off four bits (instead
of going to 72-bit double precision) would likely not hurt things at all.

then it seemed to me that such a computer, unlike the ones we have now, would
offer floating-point formats that are ideally suited to the requirements of
scientific computation.
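To put rough numbers on those word lengths, a small C sketch; the split of one sign bit plus an 8-bit exponent is my assumption, not something from the post, and real machines varied:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assume 1 sign bit and an 8-bit exponent; the rest is significand. */
    int widths[] = { 32, 36, 48, 60, 64 };

    for (int i = 0; i < 5; i++) {
        int p = widths[i] - 9;   /* assumed significand bits */
        printf("%2d-bit word: ~%2d significand bits, ~%4.1f decimal digits\n",
               widths[i], p, p * log10(2.0));
    }
    return 0;
}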

John Savard
r***@gmail.com
2020-04-21 01:19:06 UTC
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough.
Rubbish. It was suitable for pretty well all float calculations.
Post by Quadibloc
Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
If you look at pocket calculators, and old books of mathematical tables, a _lot_
of scientific work was done to 10 digit precision. And before the 60-bit CDC
6600, CDC made a number of popular advanced scientific computers with a 48-bit
word length, which is about enough for a 10 digit number.
This is what led me to think that it *would* be good to have computers designed
around the 6-bit character.
Not really. You are forgetting about lower-case letters,
and the fact that data transmission via phone lines was
in 8 bits.
Post by Quadibloc
If a computer could have...
36-bit floating point
- a little longer than 32 bits, longer enough that, historically, it was
adequate for many scientific computations
48-bit floating point
- ten digit precision is what scientific calculators and mathematical tables
used a lot, and so this is probably the ideal precision for most scientific
computing
60-bit floating point
- but sometimes you need double precision. However, given that 64-bit double
precision is nearly always more than is needed, chopping off four bits (instead
of going to 72-bit double precision) would likely not hurt things at all.
then it seemed to me that such a computer, unlike the ones we have now, would
offer floating-point formats that are ideally suited to the requirements of
scientific computation.
Dan Espen
2020-04-21 01:24:30 UTC
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
--
Dan Espen
J. Clarke
2020-04-21 02:01:08 UTC
Post by Dan Espen
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
Peter Flass
2020-04-21 20:19:28 UTC
Post by J. Clarke
Post by Dan Espen
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
I think all that stuff is available online, somewhere.
--
Pete
J. Clarke
2020-04-21 22:22:12 UTC
On Tue, 21 Apr 2020 13:19:28 -0700, Peter Flass
Post by Peter Flass
Post by J. Clarke
Post by Dan Espen
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
I think all that stuff is available online, somewhere.
I'm sure it is, it's just more effort than I want to go through for a
casual discussion.
John Levine
2020-04-21 21:19:19 UTC
Post by J. Clarke
Post by Dan Espen
Post by Quadibloc
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
Gee, mine is right here on my desk.

The 360 floating point feature always had the 64 bit format, and early
on they added partial support for 128 bit extended floats stored in a
pair of 64 bit registers. There was software to convert 709x Fortran
programs that turned all of the REAL variables to DOUBLE PRECISION
because the 32 bit float was so bad.

Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2020-04-21 22:01:41 UTC
Post by John Levine
Post by J. Clarke
Post by Dan Espen
Post by Quadibloc
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
Gee, mine is right here on my desk.
The 360 floating point feature always had the 64 bit format, and early
on they added partial support for 128 bit extended floats stored in a
pair of 64 bit registers. There was software to convert 709x Fortran
programs that turned all of the REAL variables to DOUBLE PRECISION
because the 32 bit float was so bad.
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
No /30 I ever saw had FP, probably a good reason for that. The CPU had only
an 8-bit path to memory, and, of course, all the FP was microcode.
--
Pete
J. Clarke
2020-04-21 22:20:49 UTC
Permalink
On Tue, 21 Apr 2020 21:19:19 -0000 (UTC), John Levine
Post by John Levine
Post by J. Clarke
Post by Dan Espen
Post by Quadibloc
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
Alas my green card and assembler text are in my desk at work, which
will not be accessible to me until this blasted virus is done.
Gee, mine is right here on my desk.
Lucky you. We got told "don't come in tomorrow" and had to grab what
we could remember to grab.
Post by John Levine
The 360 floating point feature always had the 64 bit format, and early
on they added partial support for 128 bit extended floats stored in a
pair of 64 bit registers. There was software to convert 709x Fortran
programs that turned all of the REAL variables to DOUBLE PRECISION
because the 32 bit float was so bad.
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Divides tend to take a lot of cycles.
Scott Lurndal
2020-04-21 23:55:16 UTC
Permalink
Post by J. Clarke
On Tue, 21 Apr 2020 21:19:19 -0000 (UTC), John Levine
Post by John Levine
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Divides tend to take a lot of cycles.
Some divides. Others, like by a power of two, are simple right shift operations.

On Burroughs Medium systems, division by a power of ten was a simple BCD move,
likewise multiplication.
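A minimal C sketch of the power-of-two case described above: dividing an
unsigned integer by 2^k is just a k-bit right shift, which is why such
divides are cheap. The function name here is invented for illustration.

#include <stdio.h>

/* Dividing an unsigned integer by a power of two is just a right shift;
   a modern compiler emits the same shift for the '/' form as well. */
unsigned div_by_pow2(unsigned x, unsigned k)
{
    return x >> k;               /* same result as x / (1u << k) */
}

int main(void)
{
    printf("%u\n", div_by_pow2(1000u, 3));  /* 1000 / 8 = 125 */
    printf("%u\n", 1000u / 8u);             /* compiled to a shift too */
    return 0;
}

The BCD analogue is the same idea one digit (four bits) at a time, which
is why a power-of-ten divide can reduce to a digit move.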
Charlie Gibbs
2020-04-22 05:22:27 UTC
Permalink
Post by John Levine
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Did the 360/30 even have floating point instructions? Or were they
emulated in software?

In the Univac world, if you were really cheap you could order your
9200 or 9300 (their answer to the 360/20) without MP, DP, or ED
instructions. If you needed the functionality, you'd link a
software library that emulated the instructions, and precede
each such instruction with BAL 15,MPDP or BAL 15,EDIT as appropriate.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Quadibloc
2020-04-22 10:28:45 UTC
Permalink
Post by Charlie Gibbs
Post by John Levine
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Did the 360/30 even have floating point instructions? Or were they
emulated in software?
Good question. IBM System/360 Model 30 Functional Characteristics should contain
an answer...

it states that the standard set, the commercial set, the scientific set, and the
universal set of instructions were all available for the machine. So if you
paid for floating-point on your machine, IBM would do something to ensure that
floating-point instructions in your programs got executed.

Since the Model 30 was microprogrammed, and IBM made its computers
microprogrammable specifically to allow the same instruction set to be used on
the small models as well as the large, I would _presume_ that IBM would use
microcode, rather than trap routines, to do the floating-point instructions, but
maybe the Model 30 might have had very limited microcode capacity.

I also have a copy of Microprogramming: Principles and Practices by Samir S.
Husson handy. It describes the microcode format of the Model 25, which was
derived from the Model 30, in a brief note, but otherwise does not help to
resolve the question.

John Savard
John Levine
2020-04-22 16:44:56 UTC
Permalink
Post by Charlie Gibbs
Post by John Levine
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Did the 360/30 even have floating point instructions? Or were they
emulated in software?
The whole computer was emulated in microcode. When it did disk
operations the microcode was so busy being the channel that the CPU
pretty much stopped. If you ordered the floating point feature, it
had floating point instructions. If you ordered the 1401 emulator, it
had 1401 instructions and a few poorly documented 360 instructions to
manage the emulation.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Dan Espen
2020-04-22 23:29:07 UTC
Permalink
Post by John Levine
Post by Charlie Gibbs
Post by John Levine
Fun fact: on a 360/30 a long floating divide took about 2ms, and I
don't mean 2us.
Did the 360/30 even have floating point instructions? Or were they
emulated in software?
The whole computer was emulated in microcode. When it did disk
operations the microcode was so busy being the channel that the CPU
pretty much stopped. If you ordered the floating point feature, it
had floating point instructions. If you ordered the 1401 emulator, it
had 1401 instructions and a few poorly documented 360 instructions to
manage the emulation.
I found the documentation for those instructions in one of those big
CE looseleafs.
I ended up using one of those instructions in an application program
that was reading 1401 tape data in load mode. The instruction needed
256 byte alignment on a data area to work correctly. That was the
instruction that blew up when the customer installed floating point.
The floating point save area moved the program origin away from the
256 byte aligned area I needed.

I figured out how to get alignment without worrying about program origin.
--
Dan Espen
Peter Flass
2020-04-21 20:19:26 UTC
Permalink
Post by Dan Espen
Post by Quadibloc
Post by John Levine
For that matter, if 36 bits was enough why did the CDC supercomputers have
60 bit words?
36 bits was enough some of the time, but IBM's 32 bits on the 360 was almost
never enough. Whether or not a better-designed 32-bit float would be good enough
to bring back some of the applications lost is not clear to me.
I never, ever had to use a float on a mainframe but I believe you could
use 64 bit floats from the beginning.
I never used it either. The first time I worked with IBM floating point was
some years ago when I wrote a routine to convert some data from IBM to
Intel - don’t recall now what or why, but it worked (once) and was never
looked at again.
--
Pete
Questor
2020-04-20 19:28:23 UTC
Permalink
Post by Thomas Koenig
Why are eight-bit bytes so common today?
I think that powers of two factors into it. Once you start building a machine
with binary elements, it seems expedient to continue with that model as you
aggregate them into larger and larger structures. I suspect it simplifies some
of the logic needed.
Douglas Miller
2020-04-20 21:09:34 UTC
Permalink
Post by J. Clarke
On Sun, 19 Apr 2020 10:46:23 -0000 (UTC), Thomas Koenig
Post by Thomas Koenig
Why are eight-bit bytes so common today?
I think that powers of two factors into it. Once you start building a machine
with binary elements, it seems expedient to continue with that model as you
aggregate them into larger and larger structures. I suspect it simplifies some
of the logic needed.
More likely, it is related to the size of a character - typically the smallest unit of data a computer deals with. Once 7-bit ASCII became established - and possibly even before that - also 8-bit EBCDIC - the 8-bit byte became a convenient unit of data. It's really just what makes sense, based on how things evolved. There have been a lot of different ways of representing characters, but these days everything revolves around bytes. Once you settle on 8-bit characters, everything else falls into place. Back when a company designed and built every component in a computer system, you could make more-arbitrary decisions about such things. But these days, everything needs to interconnect with pre-existing components.
Quadibloc
2020-04-20 23:52:58 UTC
Permalink
Post by Thomas Koenig
A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
Why are eight-bit bytes so common today?
For a few more hours, due to a domain name issue that should be cleared up soon,
my web site is unavailable. Otherwise, I would point you to my web site, which
has a section talking about ways to make it practical for a computer to have 36-
bit floating-point numbers, just for the reasons you mention.

But to answer your question:

Before April 1964, and the IBM System/360, computers generally stored text in
the form of 6-bit characters.

This was fine - as long as you were content to have text that was upper-case
only.

Now, there was a typesetting system built around the PDP-8, with 6-bit characters in a 12-bit word; you could always have a document like this:

\MY NAME IS \JOHN.

where the backslash is an escape character making the letter it precedes appear
in upper-case instead of lower-case.

And the IBM 360, while it had an 8-bit byte, had keypunches that were upper-case
only, and lower-case was a relatively uncommon extra-cost option on their video
terminals; however, the 2741 printing terminal, based on the Selectric
typewriter, offered lower-case routinely. However, the 360 was mostly oriented
around punched-card batch.

For that matter, the Apple II with an 8-bit byte had an upper-case only
keyboard, and there was a word-processor for it that let you use an escape
character if you wanted mixed case.

So byte size isn't everything. But the 8-bit byte does make lower-case text a
lot easier to process. That is _one_ important reason why, when the hugely
popular (for other reasons) IBM System/360 came out, everybody started looking
upon computers with 6-bit characters as something old-fashioned.

IBM, when it designed the 360, wanted a very flexible machine, suitable to a
wide variety of applications. The intention was that microcode would allow big
and small computers to have the same instruction set, and this would serve both
business computers sending out bills with their computers, and universities
doing scientific research with them.

Because 32 bits was smaller than 36 bits - and IBM made things worse by using a
hexadecimal exponent, and by truncating instead of rounding floating-point
calculations - people doing scientific calculations simply switched to using
double precision for everything.

The IBM 360 was very popular. SDS, later purchased by Xerox, made a machine
called the Sigma which, although not compatible with the 360, used similar data
formats, and which was designed to perform the same types of calculations. (In
some ways, it was a simpler design aimed at allowing 360 workloads to be handled
with 7090-era technology.) And there was the Spectra series of computers from
RCA, which were partly 360 compatible.

But the effects of the 360 were felt everywhere. The PDP-4 had an 18 bit word, but minicomputers from other companies, like the HP 211x series or the Honeywell 316, went to a 16 bit word. And DEC, which had minis with 12-bit and 18-bit words with the same basic design as the HP 211x and Honeywell 316 - single-word memory-reference instructions were achieved by allowing them to refer to locations on "page zero" and the *current* page, with indirect addressing used to get at anywhere else - decided it needed something modern too.

And so DEC came up with the *wildly successful* PDP-11. It was a minicomputer
with a 16-bit word. But unlike the minicomputers I've just mentioned, it had a
modern architecture. The only indirect addressing was register indirect
addressing. Memory wasn't divided into pages.

The PDP-11 transformed the world of computing. It solidified the dominance of
the 8-bit byte. It also made the "little-endian" byte order a common choice.

John Savard
Peter Flass
2020-04-21 00:45:04 UTC
Permalink
Post by Quadibloc
Post by Thomas Koenig
A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
Why are eight-bit bytes so common today?
For a few more hours, due to a domain name issue that should be cleared up soon,
my web site is unavailable. Otherwise, I would point you to my web site, which
has a section talking about ways to make it practical for a computer to have 36-
bit floating-point numbers, just for the reasons you mention.
Before April 1964, and the IBM System/360, computers generally stored text in
the form of 6-bit characters.
This was fine - as long as you were content to have text that was upper-case
only.
Now, there was a typesetting system built around the PDP-8, with 6-bit
\MY NAME IS \JOHN.
where the backslash is an escape character making the letter it precedes appear
in upper-case instead of lower-case.
And the IBM 360, while it had an 8-bit byte, had keypunches that were upper-case
only, and lower-case was a relatively uncommon extra-cost option on their video
terminals; however, the 2741 printing terminal, based on the Selectric
typewriter, offered lower-case routinely. However, the 360 was mostly oriented
around punched-card batch.
For that matter, the Apple II with an 8-bit byte had an upper-case only
keyboard, and there was a word-processor for it that let you use an escape
character if you wanted mixed case.
No one will ever want two cases.;-)
--
Pete
Quadibloc
2020-04-21 00:48:06 UTC
Permalink
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...

John Savard
Dan Espen
2020-04-21 01:38:07 UTC
Permalink
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.

I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
--
Dan Espen
Charlie Gibbs
2020-04-21 15:58:59 UTC
Permalink
Post by Dan Espen
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.
I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
There was a bit of a mystique in those days. Even well into the
'80s, many people could not bring themselves to use lower case,
perhaps believing that it was somehow not legitimate.
(I confess, I found the thought of writing assembly language
in lower case to be somewhat blasphemous, and those Algol
people looked as if they weren't taking things seriously.)
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Jorgen Grahn
2020-04-21 20:05:39 UTC
Permalink
Post by Charlie Gibbs
Post by Dan Espen
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.
I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
There was a bit of a mystique in those days. Even well into the
'80s, many people could not bring themselves to use lower case,
perhaps believing that it was somehow not legitimate.
(I confess, I found the thought of writing assembly language
in lower case to be somewhat blasphemous, and those Algol
people looked as if they weren't taking things seriously.)
I must have had a brief period like that, but with Unix and the Amiga
I bought the whole lowercase concept. It still looks elegant,
readable and modern to me.

BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Peter Flass
2020-04-21 20:23:37 UTC
Permalink
Post by Jorgen Grahn
Post by Charlie Gibbs
Post by Dan Espen
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.
I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
There was a bit of a mystique in those days. Even well into the
'80s, many people could not bring themselves to use lower case,
perhaps believing that it was somehow not legitimate.
(I confess, I found the thought of writing assembly language
in lower case to be somewhat blasphemous, and those Algol
people looked as if they weren't taking things seriously.)
I must have had a brief period like that, but with Unix and the Amiga
I bought the whole lowercase concept. It still looks elegant,
readable and modern to me.
BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?
Nope, I’m an old mainframe guy and I find it odd. I just got used to
looking at a hex dump with upper-case A-F. Besides, I think the upper-case
is more readable here, since the letters are larger and more distinct vs.
lower-case.
--
Pete
Dan Espen
2020-04-21 20:47:41 UTC
Permalink
Post by Peter Flass
Post by Jorgen Grahn
Post by Charlie Gibbs
Post by Dan Espen
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.
I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
There was a bit of a mystique in those days. Even well into the
'80s, many people could not bring themselves to use lower case,
perhaps believing that it was somehow not legitimate.
(I confess, I found the thought of writing assembly language
in lower case to be somewhat blasphemous, and those Algol
people looked as if they weren't taking things seriously.)
I must have had a brief period like that, but with Unix and the Amiga
I bought the whole lowercase concept. It still looks elegant,
readable and modern to me.
BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?
Nope, I’m an old mainframe guy and I find it odd. I just got used to
looking at a hex dump with upper-case A-F. Besides, I think the upper-case
is more readable here, since the letters are larger and more distinct vs.
lower-case.
Yep, mainframe thing. It was A-F for decades.
Using lower case just doesn't look right.
--
Dan Espen
Jorgen Grahn
2020-04-21 21:03:06 UTC
Permalink
...
Post by Dan Espen
Post by Peter Flass
Post by Jorgen Grahn
BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?
Nope, I’m an old mainframe guy and I find it odd. I just got used to
looking at a hex dump with upper-case A-F. Besides, I think the upper-case
is more readable here, since the letters are larger and more distinct vs.
lower-case.
Yep, mainframe thing. It was A-F for decades.
Using lower case just doesn't look right.
However, my coworkers were much too young for mainframes, so uppercase
is popular somewhere else too.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
JimP
2020-04-22 15:30:39 UTC
Permalink
Post by Jorgen Grahn
Post by Charlie Gibbs
Post by Dan Espen
Post by Quadibloc
Post by Peter Flass
No one will ever want two cases.;-)
Of course, if we all spoke Georgian...
Back in my 1401 days I thought computers
should NEVER support lower case.
I DIDN'T SEE ANY PROBLEM IN HUMANS ADAPTING TO AN ALL UPPER CASE WORLD.
There was a bit of a mystique in those days. Even well into the
'80s, many people could not bring themselves to use lower case,
perhaps believing that it was somehow not legitimate.
(I confess, I found the thought of writing assembly language
in lower case to be somewhat blasphemous, and those Algol
people looked as if they weren't taking things seriously.)
I must have had a brief period like that, but with Unix and the Amiga
I bought the whole lowercase concept. It still looks elegant,
readable and modern to me.
BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?
/Jorgen
I've seen lower case hexadecimal in Windows. Must have been the people
wanting it.
--
Jim
Quadibloc
2020-04-22 22:50:34 UTC
Permalink
Post by JimP
I've seen lower case hexadecimal in Windows. Must have been the people
wanting it.
IBM mainframes used to always use upper-case hexadecimal. On the other hand, the
convention in C is to use lower-case hexadecimal. I would expect Windows to follow
C in doing what is popular nowadays.
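For what it's worth, the C convention mentioned here is visible directly in
printf's conversion specifiers: %x prints the a-f digits in lower case and
%X in upper case. A trivial illustration, using the value from upthread:

#include <stdio.h>

int main(void)
{
    unsigned v = 0x0f1e;
    printf("0x%04x\n", v);   /* prints 0x0f1e - the C/Unix lower-case style */
    printf("0x%04X\n", v);   /* prints 0x0F1E - the mainframe-style form */
    return 0;
}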

John Savard
Quadibloc
2020-04-22 22:55:20 UTC
Permalink
Post by Jorgen Grahn
BTW, I had a strange experience at a previous (but recent) workplace.
I had to print some data in hex, and naturally chose lower case hex,
e.g. 0x0f1e. The others however found that odd: they wanted me to
change it to 0x0F1E. I thought it was a joke at first, but they
insisted; they really found the lowercase form odd. A Windows thing,
perhaps?
As others have noted, Windows isn't the culprit.

The current standard for hexadecimal, with the letters from A through F as the
additional digits, originated with the IBM System/360, and it consistently used
only the upper-case letters, both on the computer and in documentation.

On the other hand, much later, Unix and C used lower-case hexadecimal, and
Windows is in the modern camp with those.

Before the System/360 came along, a few other computers used hexadecimal instead of octal, but they often used a *different* set of letters for the extra digits.

Here are the examples

Bendix G-15, SWAC u v w x y z
Monrobot XI S T U V W X
Datamatic D-1000 b c d e f g
Elbit 100 B C D E F G
LGP-30 f g j k q w
ILLIAC k s n j f l
Pacific Data Systems 1020 L C A S M D

that I've listed on my web page at
http://www.quadibloc.com/comp/cp02.htm

John Savard
Douglas Miller
2020-04-22 23:27:02 UTC
Permalink
Post by Quadibloc
...
Before the System/360 came along, a few other computers used hexadecimal instead of octal, but they often used a *different* set of letters for the extra digits.
Here are the examples
Bendix G-15, SWAC u v w x y z
Monrobot XI S T U V W X
Datamatic D-1000 b c d e f g
Elbit 100 B C D E F G
LGP-30 f g j k q w
ILLIAC k s n j f l
Pacific Data Systems 1020 L C A S M D
that I've listed on my web page at
http://www.quadibloc.com/comp/cp02.htm
John Savard
Curious choices for the hexadecimal digits beyond '9'. I assume these machines did not use ASCII, so did their choice of hex digits have to do with the binary values of the chosen letters? Just seems more like accidental/incidental decoding than a logical, intentional, choice.
r***@gmail.com
2020-04-21 01:11:30 UTC
Permalink
Post by Quadibloc
Post by Thomas Koenig
A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
Why are eight-bit bytes so common today?
For a few more hours, due to a domain name issue that should be cleared up soon,
my web site is unavailable. Otherwise, I would point you to my web site, which
has a section talking about ways to make it practical for a computer to have 36-
bit floating-point numbers, just for the reasons you mention.
Before April 1964, and the IBM System/360, computers generally stored text in
the form of 6-bit characters.
This was fine - as long as you were content to have text that was upper-case
only.
\MY NAME IS \JOHN.
where the backslash is an escape character making the letter it precedes appear
in upper-case instead of lower-case.
And the IBM 360, while it had an 8-bit byte, had keypunches that were upper-case
only, and lower-case was a relatively uncommon extra-cost option on their video
terminals; however, the 2741 printing terminal, based on the Selectric
typewriter, offered lower-case routinely. However, the 360 was mostly oriented
around punched-card batch.
For that matter, the Apple II with an 8-bit byte had an upper-case only
keyboard, and there was a word-processor for it that let you use an escape
character if you wanted mixed case.
So byte size isn't everything. But the 8-bit byte does make lower-case text a
lot easier to process. That is _one_ important reason why, when the hugely
popular (for other reasons) IBM System/360 came out, everybody started looking
upon computers with 6-bit characters as something old-fashioned.
IBM, when it designed the 360, wanted a very flexible machine, suitable to a
wide variety of applications. The intention was that microcode would allow big
and small computers to have the same instruction set, and this would serve both
business computers sending out bills with their computers, and universities
doing scientific research with them.
Because 32 bits was smaller than 36 bits - and IBM made things worse by using a
hexadecimal exponent, and by truncating instead of rounding floating-point
calculations - people doing scientific calculations simply switched to using
double precision for everything.
No they didn't. Double-precision was expensive, both in terms of
execution time and in storage.

If all the variables were scalars, storage wasn't an issue, but
once arrays were used, their size was often the limiting factor.

Memory was very expensive.
Post by Quadibloc
The IBM 360 was very popular. SDS, later purchased by Xerox, made a machine
called the Sigma which, although not compatible with the 360, used similar data
formats, and which was designed to perform the same types of calculations. (In
some ways, it was a simpler design aimed at allowing 360 workloads to be handled
with 7090-era technology.) And there was the Spectra series of computers from
RCA, which were partly 360 compatible.
The Spectra was compatible with the S/360 as far as user programs
were concerned. It was possible to port binary load modules between
the machines. Where the Spectra differed was in the provision of
extra register sets and four processor states that allowed
fast switching between those states.
Post by Quadibloc
But the effects of the 360 were felt everywhere. The PDP-4 had an 18 bit word, but minicomputers from other companies, like the HP 211x series or the Honeywell 316, went to a 16 bit word. And DEC, which had minis with 12-bit and 18-bit words with the same basic design as the HP 211x and Honeywell 316 - single-word memory-reference instructions were achieved by allowing them to refer to locations on "page zero" and the *current* page, with indirect addressing used to get at anywhere else - decided it needed something modern too.
And so DEC came up with the *wildly successful* PDP-11. It was a minicomputer
with a 16-bit word. But unlike the minicomputers I've just mentioned, it had a
modern architecture. The only indirect addressing was register indirect
addressing. Memory wasn't divided into pages.
The PDP-11 transformed the world of computing. It solidified the dominance of
the 8-bit byte. It also made the "little-endian" byte order a common choice.
Quadibloc
2020-04-21 05:38:22 UTC
Permalink
Post by Quadibloc
For a few more hours, due to a domain name issue that should be cleared up soon,
my web site is unavailable. Otherwise, I would point you to my web site, which
has a section talking about ways to make it practical for a computer to have 36-
bit floating-point numbers, just for the reasons you mention.
My web site is back online, and the section I am referring to is:

http://quadibloc.com/arch/perint.htm

My web site *used* to be at *www.quadibloc.com* so the renewal has moved it
slightly. Anyone who has bookmarked my site will have to change his or her
bookmarks.

John Savard
Andy Walker
2020-04-21 11:35:41 UTC
Permalink
Post by Quadibloc
Before April 1964, and the IBM System/360, computers generally stored
text in the form of 6-bit characters.
This was fine - as long as you were content to have text that was
upper-case only.
Those of us who weren't wedded to IBM and cards often used
paper tape. Our Flexowriters were 7-track, but that was 6-bit plus
parity, and the six bits included upper-case and lower-case characters.
Some others [I forget which, but certainly space and erase, and some
selection from full stop, comma, backspace, underline, CR/LF] were
also in both cases. That left over 50 characters available per case,
or 100+ in total, easily enough for upper case and lower case letters,
digits and the commonest other symbols. That's somewhat better for
computing than modern pseudo-keyboards on 'phones and tablets, with
their separate displays for u-c letters, l-c letters, and two lots of
symbols, as long as you don't want the emojis.

Because blank tape had the wrong parity, the UC character was
used for "runout"; sensible programmers put a few inches of runout
every few lines to make splicing/editing easier.
--
Andy Walker,
Nottingham.
Quadibloc
2020-04-21 17:47:38 UTC
Permalink
Post by Andy Walker
Our Flexowriters were 7-track, but that was 6-bit plus
parity, and the six bits included upper-case and lower-case characters.
For certain values of "includes".

What 6-bit Flexowriter codes actually included were shift codes. So the letters
only appeared once in the 6-bit code, but there were characters to shift the
machine into upper-case and lower-case.

This affected the special characters as well as the letters.

John Savard
Andy Walker
2020-04-21 19:54:06 UTC
Permalink
Post by Quadibloc
Post by Andy Walker
Our Flexowriters were 7-track, but that was 6-bit plus
parity, and the six bits included upper-case and lower-case characters.
For certain values of "includes".
What 6-bit Flexowriter codes actually included were shift codes. So
the letters only appeared once in the 6-bit code, but there were
characters to shift the machine into upper-case and lower-case.
If that's not what I said, it's at least an approximation to
what I meant to say! The real point is that within a 6-bit code it
was perfectly possible to write programs that used both UC and LC
letters, digits, punctuation and a decent [tho' idiosyncratic] range
of other symbols.
--
Andy Walker,
Nottingham.
Dan Espen
2020-04-21 21:02:19 UTC
Permalink
Post by Andy Walker
Post by Quadibloc
Post by Andy Walker
Our Flexowriters were 7-track, but that was 6-bit plus
parity, and the six bits included upper-case and lower-case characters.
For certain values of "includes".
What 6-bit Flexowriter codes actually included were shift codes. So
the letters only appeared once in the 6-bit code, but there were
characters to shift the machine into upper-case and lower-case.
If that's not what I said, it's at least an approximation to
what I meant to say! The real point is that within a 6-bit code it
was perfectly possible to write programs that used both UC and LC
letters, digits, punctuation and a decent [tho' idiosyncratic] range
of other symbols.
52 of the 64 bit patterns only leaves 12 symbols.
One of those symbols has to be space, NULL and FF are going to cause
problems. So, we're down to 9 special characters, assuming we don't
need to use the binary control characters, ESC, STX, EOT, backspace,
etc.

So:

1. Period
2. Comma
3. Dash
4. Open Paren
5. Close Paren
6. Dollar
7. Percent
8. Asterisk
9. Underscore

Of course I'm guessing and trying to see how bad it is...
Looks pretty bad to me, not enough symbols to write C but COBOL
looks feasible. No ">" but COBOL doesn't need it.
--
Dan Espen
Douglas Miller
2020-04-21 21:54:50 UTC
Permalink
Post by Dan Espen
...
52 of the 64 bit patterns only leaves 12 symbols.
One of those symbols has to be space, NULL and FF are going to cause
problems. So, we're down to 9 special characters, assuming we don't
need to use the binary control characters, ESC, STX, EOT, backspace,
etc.
1. Period
2. Comma
3. Dash
4. Open Paren
5. Close Paren
6. Dollar
7. Percent
8. Asterisk
9. Underscore
Of course I'm guessing and trying to see how bad it is...
Looks pretty bad to me, not enough symbols to write C but COBOL
looks feasible. No ">" but COBOL doesn't need it.
--
Dan Espen
Your math is based on a misconception. There are only 26 alphabetic symbols. There are codes to "shift up" and "shift down" (or some equivalent) so UC and LC do not both take up codes. In fact, the special characters and digits are also predicated by shifting, so you have closer to 128 symbols possible, total.
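A rough C sketch of that shift-code idea, just to make the arithmetic
concrete: with two of the 64 codes reserved as case shifts, each remaining
code can stand for two printable symbols, giving close to 128 in total.
The code assignments below are invented for illustration and are not the
real Flexowriter chart.

#include <stdio.h>

enum { SHIFT_UP = 62, SHIFT_DOWN = 63 };   /* invented shift codes */

/* Decode a stream of 6-bit codes into mixed-case text, tracking the
   current shift state. */
static void decode(const unsigned char *codes, int n)
{
    int upper = 0;
    for (int i = 0; i < n; i++) {
        unsigned c = codes[i] & 0x3f;      /* 6 bits per character */
        if (c == SHIFT_UP)   { upper = 1; continue; }
        if (c == SHIFT_DOWN) { upper = 0; continue; }
        if (c < 26)          putchar((upper ? 'A' : 'a') + c);
        else if (c < 36)     putchar('0' + (c - 26));
        else                 putchar(' ');  /* everything else: space here */
    }
    putchar('\n');
}

int main(void)
{
    /* "Hi 42": shift up, H, shift down, i, space, 4, 2 */
    unsigned char msg[] = { SHIFT_UP, 7, SHIFT_DOWN, 8, 40, 30, 28 };
    decode(msg, sizeof msg);
    return 0;
}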
Andy Walker
2020-04-22 11:09:22 UTC
Permalink
Post by Douglas Miller
52 of the 64 bit patterns only leaves 12 symbols. [...]
Your math is based on a misconception. There are only 26 alphabetic
symbols. There are codes to "shift up" and "shift down" (or some
equivalent) so UC and LC do not both take up codes. In fact, the
special characters and digits are also predicated by shifting, so you
have closer to 128 symbols possible, total.
Yes. There is a code table at [eg]

[image: Flexowriter code table]

Their codes differ somewhat from the ones we had -- our machines
included half, pi, squared, alpha and beta, and perhaps others I've
forgotten -- but it shows the idea. The order looks illogical in
the Gif, but that's because of the parity bit.
--
Andy Walker,
Nottingham.
Peter Flass
2020-04-21 22:01:39 UTC
Permalink
Post by Dan Espen
Post by Andy Walker
Post by Quadibloc
Post by Andy Walker
Our Flexowriters were 7-track, but that was 6-bit plus
parity, and the six bits included upper-case and lower-case characters.
For certain values of "includes".
What 6-bit Flexowriter codes actually included were shift codes. So
the letters only appeared once in the 6-bit code, but there were
characters to shift the machine into upper-case and lower-case.
If that's not what I said, it's at least an approximation to
what I meant to say! The real point is that within a 6-bit code it
was perfectly possible to write programs that used both UC and LC
letters, digits, punctuation and a decent [tho' idiosyncratic] range
of other symbols.
52 of the 64 bit patterns only leaves 12 symbols.
One of those symbols has to be space, NULL and FF are going to cause
problems. So, we're down to 9 special characters, assuming we don't
need to use the binary control characters, ESC, STX, EOT, backspace,
etc.
1. Period
2. Comma
3. Dash
4. Open Paren
5. Close Paren
6. Dollar
7. Percent
8. Asterisk
9. Underscore
Of course I'm guessing and trying to see how bad it is...
Looks pretty bad to me, not enough symbols to write C but COBOL
looks feasible. No ">" but COBOL doesn't need it.
026 keypunches (BCD) used % and <lozenge> for ( and ). no underscore.
--
Pete
John Levine
2020-04-21 22:07:06 UTC
Permalink
Post by Thomas Koenig
Why are eight-bit bytes so common today?
We don't have to guess, since we can read the April 1964 IBM Systems
Journal article "Architecture of the IBM System/360".

They wanted it to be character addressable for commercial programs, so
they had to decide between 6 and 8 bit bytes. They said the deciding
factor was that they could store two BCD digits in an 8 bit byte, and
there was a lot more numeric than alphabetic data at the time. They
also mentioned allowing a larger alphabetic character set (they knew
that STRETCH had upper and lower case) and it fit with their decision
to do 32/64 bit floats rather than 48, and 16 bit instruction fields.
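A minimal C sketch of the packed-decimal economy the article cites, with
two BCD digits stored per 8-bit byte; the function names are just for
illustration.

#include <stdio.h>

/* Pack two decimal digits (0-9 each) into one byte, high digit in the
   upper nibble - the "two BCD digits per 8-bit byte" point above. */
unsigned char bcd_pack(unsigned hi, unsigned lo)
{
    return (unsigned char)((hi << 4) | (lo & 0x0f));
}

void bcd_unpack(unsigned char b, unsigned *hi, unsigned *lo)
{
    *hi = b >> 4;
    *lo = b & 0x0f;
}

int main(void)
{
    unsigned hi, lo;
    unsigned char b = bcd_pack(4, 2);      /* stores "42" in a single byte */
    bcd_unpack(b, &hi, &lo);
    printf("byte 0x%02X holds digits %u and %u\n", b, hi, lo);
    return 0;
}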

Since the 360 was such a huge success, other systems copied its
addressing architecture. I have never been able to figure out why DEC
made the PDP-11 little-endian rather than big-endian, despite looking
through a lot of books and papers.

https://www.researchgate.net/publication/220498837_Architecture_of_the_IBM_System360
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Scott Lurndal
2020-04-21 23:56:26 UTC
Permalink
Post by John Levine
Post by Thomas Koenig
Why are eight-bit bytes so common today?
We don't have to guess, since we can read the April 1964 IBM Systems
Journal article "Architecture of the IBM System/360".
They wanted it to be character addressable for commercial programs, so
they had to decide between 6 and 8 bit bytes. They said the deciding
factor was that they could store two BCD digits in an 8 bit byte, and
there was a lot more numeric than alphabetic data at the time. They
also mentioned allowing a larger alphabetic character set (they knew
that STRETCH had upper and lower case) and it fit with their decision
to do 32/64 bit floats rather than 48, and 16 bit instruction fields.
Since the 360 was such a huge success, other systems copied its
addressing architecture. I have never been able to figure out why DEC
made the PDP-11 little-endian rather than big-endian, despite looking
through a lot of books and papers.
Even Burroughs moved from 6-bit BCD to 8-bit EBCDIC to be compatible with IBM peripherals.
Andy Walker
2020-04-22 11:02:21 UTC
Permalink
[...] I have never been able to figure out why DEC
made the PDP-11 little-endian rather than big-endian, despite looking
through a lot of books and papers.
I had a lot of trouble understanding the PDP 11 until I
realised that it made more sense to imagine storage right-to-left;
that is, bytes were numbered

... 10 9 8 7 6 5 4 3 2 1 0 [-1 -2 ...]

rather than

[... -1 -2] 1 0 3 2 5 4 7 6 9 8 11 ...
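A tiny C check of that picture on a little-endian host (PDP-11 style):
byte 0 of a 16-bit word is its low-order byte, which is why counting
addresses from the right makes the layout read naturally.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t w = 0x4241;               /* 'B' << 8 | 'A' */
    unsigned char b[2];
    memcpy(b, &w, sizeof w);           /* look at the word byte by byte */
    printf("byte 0 = %c, byte 1 = %c\n", b[0], b[1]);  /* A then B here */
    return 0;
}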
--
Andy Walker,
Nottingham.
Quadibloc
2020-04-22 23:00:53 UTC
Permalink
Post by John Levine
I have never been able to figure out why DEC
made the PDP-11 little-endian rather than big-endian, despite looking
through a lot of books and papers.
I agree that little-endian is an unfortunate choice.

I've learned that DEC was so successful as a computer company because its prices
were lower.

If you add two numbers together that are longer than a computer word, little-
endian order lets you fetch two things you can add right away, having the carry
ready for the next step.

Putting the two character bytes in the 16-bit word in little-endian order then
lets you make the machine as consistent as a big-endian one.

So I've always felt that that's all the explanation that is needed; fetching the
end of the number first would have taken a little extra circuit complexity.
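A minimal C sketch of that point about carries, assuming 16-bit limbs
stored least-significant first: walking memory in increasing address
order delivers each limb exactly when its incoming carry is known.

#include <stdint.h>
#include <stdio.h>

/* Add two multi-word numbers stored least-significant limb first.
   Scanning in increasing index order means the carry out of each limb
   is ready just as the next limb is fetched. */
void add_multiword(const uint16_t *a, const uint16_t *b, uint16_t *sum, int limbs)
{
    uint32_t carry = 0;
    for (int i = 0; i < limbs; i++) {        /* low-order limb first */
        uint32_t t = (uint32_t)a[i] + b[i] + carry;
        sum[i] = (uint16_t)t;
        carry  = t >> 16;
    }
}

int main(void)
{
    /* 0x0001FFFF + 0x00000001 = 0x00020000, in 16-bit limbs */
    uint16_t a[2] = { 0xFFFF, 0x0001 };
    uint16_t b[2] = { 0x0001, 0x0000 };
    uint16_t s[2];
    add_multiword(a, b, s, 2);
    printf("0x%04X%04X\n", s[1], s[0]);      /* prints 0x00020000 */
    return 0;
}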

John Savard
Jon Elson
2020-04-22 21:52:54 UTC
Permalink
Post by Thomas Koenig
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
But, they botched it even worse on the 360. Single precision float had a
24-bit mantissa, but they normalized in 4-bit steps. So, any particular
result could have up to 3 most significant zeroes in the mantissa,
squeezing out up to 3 bits of precision. This allowed the exponent to
cover an 8X larger range, as it was a power of 16, not power of 2.

This played hob with numerical approximations by iteration, as when you got
close to the final result, the random jumping of precision was bigger than
the epsilon of the approximation.
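For concreteness, a small C sketch (not IBM code) that decodes the S/360
single-precision layout described above: a sign bit, a 7-bit excess-64
exponent of 16, and a 24-bit fraction that is only normalized to a nonzero
leading hex digit, so up to three of its leading bits can be zero.

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* bit 31 = sign, bits 30-24 = exponent (excess-64, power of 16),
   bits 23-0 = fraction interpreted as 0.ffffff in hex. */
double ibm360_to_double(uint32_t w)
{
    int    sign = (w >> 31) & 1;
    int    exp  = (int)((w >> 24) & 0x7f) - 64;           /* power of 16 */
    double frac = (double)(w & 0x00ffffff) / 16777216.0;  /* divide by 2^24 */
    double v    = frac * pow(16.0, exp);
    return sign ? -v : v;
}

int main(void)
{
    /* 1.0 = 0.1 (hex) * 16^1: exponent field 0x41, fraction 0x100000.
       Note the fraction's top three bits are zero - the precision wobble. */
    printf("%g\n", ibm360_to_double(0x41100000u));  /* prints 1 */
    printf("%g\n", ibm360_to_double(0x42640000u));  /* 0x64/256 * 16^2 = 100 */
    return 0;
}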

Jon
Ron Shepard
2020-04-18 17:16:27 UTC
Permalink
Post by Thomas Koenig
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
This was an interesting time for microcomputers and numerical computing.
In 1985, there were several VAX models available, 780, 750, 730, and I
think the microvaxes were becoming available about that time. The
microvaxes cost about $10k and were maybe 2x or 3x slower than the
flagship 780 model, which cost about $100k-$200k.

I think a C64 cost about $500 at that time, so it was maybe 20x cheaper
than a microvax and ran the linpack code about 1000x slower. Maybe I'm
off a factor of two here or there, but those should be roughly correct.

Of course there were other major differences than just the floating
point performance. The C64 had 16-bit addressing, the VAX had 32-bit
addressing, you could communicate with ethernet with the VAX, you could
put "large" 300MB disk drives on the VAX, and so on.

Basically what happened was that the minicomputers in the late 1970s and
early 1980s downsized faster than the microcomputers upsized, so they
eventually just got squeezed out of the market regarding floating point
computation.

In the early 1980s you could also buy FPS attached processors for the
VAX. The ones I used cost about $100k and ran about 50x the speed for
the linpack benchmark. These were word addressable 64-bit floating point
machines (physical memory, not virtual, no time sharing). The cross
compiler ran on the VAX front end, then offloaded the execution onto the
array processor. This was probably the most cost effective way to
compute in the early 1980s.

Then in the late 1980s came all of the RISC microprocssors (MIPS, SPARC,
RS6000, etc.) which reduced computing costs by about another factor of
10. These were ganged together to make parallel machines, and that was
probably the most cost effective way to compute throughout the 1990s.

Then by the early 2000s, Intel/AMD microcomputers began to catch up in
speed, and they were ganged together to form parallel machines. That is
pretty much where we are today, 20 years later, with the twist that now
there are multiple cores per chip, and they have vector engines and
"graphical" coprocessors to offload the floating point computations.

The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing. They downsized,
taking their PC and Macintosh applications with them, and eventually
just displaced the microprocessors. These are mostly just appliances
now, not programming devices.

That is my perspective of what happened to the microprocessor computing
effort in the late 1970s. Some of the problems it encountered were
technical, and some were based on market forces.

BTW, the author Nicholas Higham is still today one of the most respected
numerical analysts.

https://en.wikipedia.org/wiki/Nicholas_Higham

$.02 -Ron Shepard
Scott Lurndal
2020-04-18 20:01:08 UTC
Permalink
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
Two points:

1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,

https://developer.arm.com/docs/101726/latest/explore-the-scalable-vector-extension-sve/what-is-the-scalable-vector-extension

multiple privilege levels, scalable vector lengths up to 2048 bits, and
a reference manual containing 8128 pages (just for the architecture,
instruction set, and registers). Then there is tens of thousands
of additional pages of documentation for the I/OMMU, Interrupt Controller,
coresight (external debug/trace), et alia.

2) The latest ARMv8 processors from Marvell are used in supercomputers:

https://www.top500.org/system/179565
https://www.marvell.com/products/server-processors/thunderx2-arm-processors.html
https://www.hpcwire.com/2020/03/17/marvell-talks-up-thunderx3-and-arm-server-roadmap/

Fujitsu also as an ARM-based supercomputer:

https://www.nextplatform.com/2019/11/22/arm-supercomputer-captures-the-energy-efficiency-crown/
Peter Flass
2020-04-19 00:55:45 UTC
Permalink
Post by Scott Lurndal
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
--
Pete
John Levine
2020-04-19 02:25:06 UTC
Permalink
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
RISC was and is a perfectly good idea, to take stuff out of the
hardware that software can do better. Hardware is a lot more capable
than it was in the 1980s so stuff like simple instruction decoding
that mattered then doesn't now.

Oracle released the SPARC design as the open source OpenSPARC in the
late 2000s. I gather it's still used in embedded designs.

MIPS processors are common in routers and switches.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Scott Lurndal
2020-04-19 14:56:01 UTC
Permalink
Post by John Levine
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
RISC was and is a perfectly good idea, to take stuff out of the
hardware that software can do better. Hardware is a lot more capable
than it was in the 1980s so stuff like simple instruction decoding
that mattered then doesn't now.
Oracle released the SPARC design as the open source OpenSPARC in the
late 2000s. I gather it's still used in embedded designs.
MIPS processors are common in routers and switches.
Fewer and fewer each year, as ARM has been winning new designs.

Cavium started moving from MIPS to ARM in 2012.
J. Clarke
2020-04-19 03:41:30 UTC
Permalink
On Sat, 18 Apr 2020 17:55:45 -0700, Peter Flass
Post by Peter Flass
Post by Scott Lurndal
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
There's ARM (Acorn RISC Machine), which is doing very well in cell
phones--they're also used in the Raspberry Pi.
Scott Lurndal
2020-04-19 14:59:36 UTC
Permalink
Post by J. Clarke
On Sat, 18 Apr 2020 17:55:45 -0700, Peter Flass
Post by Scott Lurndal
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
There's ARM (Acorn RISC Machine), which is doing very well in cell
phones--they're also used in the Raspberry Pi.
However, they can't really be considered 'reduced instruction set' any more,
even the armv7 (which has two complete instruction sets, a32 and t32), much
less the armv8 which is in most cell phones built in the last four years,
and in newer pi's (3 and higher) and hardocp, etc.
Douglas Miller
2020-04-19 15:09:53 UTC
Permalink
Post by Scott Lurndal
...
However, they can't really be considered 'reduced instruction set' any more,
even the armv7 (which has two complete instruction sets, a32 and t32), much
less the armv8 which is in most cell phones built in the last four years,
and in newer pi's (3 and higher) and hardocp, etc.
(Un)Like the rest of the world today, there's little room for extremism. RISC processors have incorporated complexity, CISC have incorporated things learned from RISC. Both have moved towards the center.
r***@gmail.com
2020-04-19 07:00:40 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
RISC had its adherents.
However, creating a machine whose instructions did only very basic
operations meant that you needed a high-speed channel supplying the
instructions to the CPU to be executed.
At the same time, the instructions being executed made memory
references that competed for access to memory.
That's a sort of bottleneck.

On the other hand, instructions that do a lot of work
reduce the rate at which instructions need to be fed
from memory.

Taking the IBM System z as an example,
a load instruction to a register may require an index.
Given that the index is already held in a register,
the contents need to be multiplied by 2, or 4, or 8 [shifted]
and then used by the Load instruction.
Thus:
LR 3,2 copy the index from register 2.
SLL 3,2 a shift of 2 places left multiplies by 4.
L 5,0(3,6) An indexed load gets the value.

If the Load instruction achieved the shift of 2 places
as part of its execution, you need only write
L 5,0(2,6)

which saves loading and executing 2 instructions.
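In C terms, the sequence above is just what a compiler has to emit for an
ordinary subscripted load when the ISA's indexed addressing does not scale
the index; with a scaled-index mode the same source needs only the load
itself. A minimal sketch, with invented names:

#include <stdint.h>
#include <stdio.h>

/* Fetch a[i] from an array of 4-byte words.  Without a scaled-index
   addressing mode, the compiler must materialize i*4 first (the LR/SLL
   pair above); with one, this compiles to the single load. */
int32_t fetch(const int32_t *a, long i)
{
    return a[i];                 /* effective address = a + i*4 */
}

int main(void)
{
    int32_t table[4] = { 10, 20, 30, 40 };
    printf("%d\n", fetch(table, 2));   /* prints 30 */
    return 0;
}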
Quadibloc
2020-04-19 07:37:08 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
It all depends on what you call RISC.

If by RISC you mean - all the instructions are exactly 32 bits long, only load
and store instructions reference memory - SPARC, PowerPC, MIPS, and the wildly
successful ARM are all RISC.

But _originally_, when the idea of RISC was first presented, one of the features
included in its definition was that *every instruction would execute in one
cycle*. This would kind of put paid to hardware divide, let alone hardware
floating-point.

That flopped, but then I don't think it was a good idea anyway.

So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.

Modern VLIW designs are RISC-like in many respects; RISC-V has variable-length
instructions, which means it isn't quite RISC, but given its name, it's still
intended to be mostly RISC-like.

John Savard
Peter Flass
2020-04-19 17:47:44 UTC
Permalink
Post by Quadibloc
So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.
They’re both RISC, or some variant of RISC, under the covers, it’s just not
user-accessible. It’s just sad that CISC architectures like VAX aren’t
currently commercial, a worse architecture than x86 is hard to imagine,
although that might be a good project ;-)
--
Pete
Scott Lurndal
2020-04-20 15:58:40 UTC
Permalink
Post by Quadibloc
So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.
They’re both RISC, or some variant of RISC, under the covers, it’s just not
user-accessible. It’s just sad that CISC architectures like VAX aren’t
currently commercial, a worse architecture than x86 is hard to imagine,
Then you don't have a very good imagination, or you're assuming that the 8086 was
the be-all and end-all of intel architecture.

Hint: The current i5/i7/i9 processors are much closer to the VAX architecturally than you
think and far more capable than the vax ever was. 8086 segments haven't been used
for almost forty years now.
Peter Flass
2020-04-21 00:45:02 UTC
Permalink
Post by Scott Lurndal
Post by Peter Flass
Post by Quadibloc
So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.
They’re both RISC, or some variant of RISC, under the covers, it’s just not
user-accessible. It’s just sad that CISC architectures like VAX aren’t
currently commercial, a worse architecture than x86 is hard to imagine,
Then you don't have a very good imagination, or you're assuming that the 8086 was
the be-all and end-all of intel architecture.
Hint: The current i5/i7/i9 processors are much closer to the VAX architecturally than you
think and far more capable than the vax ever was. 8086 segments haven't been used
for almost forty years now.
True, but (AFAIK) they’re all heavily microcoded, and the microcode is
RISC-like.
--
Pete
Scott Lurndal
2020-04-21 14:46:58 UTC
Permalink
Post by Scott Lurndal
Post by Quadibloc
So RISC was an idea... the good parts of which have succeeded. RISC is basically
the new standard; x86 and zArchitecture are still around today representing
CISC, but they're legacy architectures; new commercial designs are almost always
RISC.
They’re both RISC, or some variant of RISC, under the covers, it’s just not
user-accessible. It’s just sad that CISC architectures like VAX aren’t
currently commercial, a worse architecture than x86 is hard to imagine,
Then you don't have a very good imagination, or you're assuming that the 8086 was
the be-all and end-all of intel architecture.
Hint: The current i5/i7/i9 processors are much closer to the VAX architecturally than you
think and far more capable than the vax ever was. 8086 segments haven't been used
for almost forty years now.
True, but (AFAIK) they’re all heavily microcoded, and the microcode is
RISC-like.
I wouldn't say "heavily microcoded", but there is microcode for many of the
more complex instructions (string instructions, certain privileged instructions,
etc).
Jorgen Grahn
2020-04-19 08:31:02 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
Post by Ron Shepard
The RISC processors are still all around us, in our phones, cars,
tablets, and so on, but they aren't doing floating point science, they
are doing mostly networking and signal processing.
1) I think the term RISC can no longer be applied to ARM processors;
the latest ARMv8 processors have thousands of instructions, up to
three distinct instruction sets,
So what’s left in the RISC space (commercially)? Is RISC another
good idea that flopped?
I seem to see a trend of people assuming everything is Intel x86 with
features like little-endianness, unaligned accesses which work most of
the time, and tolerant SMP designs. Also it seems to me ARM tries to
emulate these properties.

Which leaves me a bit bitter, because I learned to write valid C code
when targeting intolerant processors like PowerPC and SPARC.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Thomas Koenig
2020-04-19 09:37:38 UTC
Permalink
Post by Peter Flass
So what’s left in the RISC space (commercially)? Is RISC another good idea
that flopped?
The POWER architecture is still pretty RISCy, is being opened up
and is still very much in commercial use.

Apparently, they have an extremely large memory bandwidth for graphics
cards, which makes them good for supercomputers.

You can even buy a desktop PC or a mainboard, and not from IBM :-)
For people who are concerned about things like the Intel Management
Engine, that one is completely open source.
dpb
2020-04-19 21:53:36 UTC
Permalink
Post by Thomas Koenig
[F'up]
Wow.
Seems like somebody actually took the time and effort to do
some assembly version of Linpack routines on a few micros (C64,
BBC Micro, plus a few others) and see how fast they are, both in
their native Basic dialects and hand-coded assembly which used
the native floating point format.
http://eprints.maths.manchester.ac.uk/2029/1/Binder1.pdf
One thing that's also interesting from this is that the floating
point format of these machines was actually not bad - a 40 bit
word using a 32 bit mantissa actually gives much better roundoff
results than today's 32 bit single precision real variables.
Also interesting is the fact that a C64 was around a factor of 2000
slower than a VAX 11/780 (with the C64 using optimized assembly).
Which computer gave you more flops for the buck I don't know
because I don't know the price of a VAX at the time :-)
Used a DX-64 (the C-64 "portable" w/ 5" color monitor)
<https://www.c64-wiki.com/wiki/Executive_64>. Apparently I was one of
only a few that actually managed to purchase a "D" instead of "C" with
the two drives. Not having had a regular Commodore before, the lack of
cassette interface didn't bother me.

I used both a Fortran compiler and the built-in BASIC, but mostly a Forth
interpreter (I forget its origin) that had access to the sprites
and all... it was a multitasking kernel and was really quite slick.

Did not incorporate floating point, though, as few Forths did at the time.

--