Discussion:
An Argument for Big-Endian: Packed Decimal
Quadibloc
2019-08-05 21:35:27 UTC
Permalink
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.

Historically, since computers were developed in the Western world, storage of
characters in computer words was from "left" to "right", that is, from the most
significant part to the least significant part, basically without thought - it
was simply assumed to be the natural way to do it, as it corresponded to how
numbers and words were written by us on paper.

When advances in technology made it useful to build small computers whose word
length was shorter than the numbers one might want to calculate with, some of
those computers (one example is the Honeywell 316 and related machines) would
put the least significant part of a 32-bit number in the first of the 16-bit
words making it up.

This was done for the practical reason that adding two numbers together could
then proceed simply: go to the addresses where they're stored, get the least
significant parts first, add them, then use the carry from that for adding the
two most significant parts together.
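
In rough C, the word-at-a-time idea looks something like this - just a sketch of
the least-significant-word-first layout, not any particular machine's hardware:

#include <stdint.h>

/* Add two 32-bit numbers stored as pairs of 16-bit words, least
   significant word first (a[0] and b[0] hold the low halves). */
static void add32(const uint16_t a[2], const uint16_t b[2], uint16_t sum[2])
{
    uint32_t low = (uint32_t)a[0] + b[0];        /* low halves first */
    sum[0] = (uint16_t)low;
    uint32_t carry = low >> 16;                  /* carry out of the low word */
    sum[1] = (uint16_t)(a[1] + b[1] + carry);    /* then the high halves */
}

The point is that the adder can start at the lowest address and simply walk
forward, carrying as it goes.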

The PDP-11 computer was a 16-bit computer made by Digital Equipment Corporation.
This company made aggressively priced minicomputers, but the PDP-11 was intended
to be more than just a run-of-the-mill minicomputer; instead of using an old-
fashioned ISA like that of DEC's own PDP-8 and PDP-4/7/9/15, or 16-bit machines
like the Honeywell 316 or the Hewlett-Packard 2115A, it was to be modern and
advanced, and in some ways similar to mainframes like the IBM System/360.

So, because it was to have a low price, it handled 32-bit numbers with their
least significant 16 bits first.

To make it less like an ordinary minicomputer, and more like a mainframe, the
byte ordering within a 32-bit integer was going to be consistent - like on the
System/360. With the less significant word first, it couldn't be ABCD; but
instead of being CDAB, it could at least be the consistent DCBA. Just put the
first character in a 16-bit word in the least significant part, and give that
part the lower byte address!

This is how the idea of little-endian byte order was born.
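
To make the byte orders concrete: for a 32-bit value whose bytes are A B C D
(A most significant), big-endian memory order is A B C D, the mixed order above
is C D A B, and fully little-endian is D C B A. A tiny C check of what the host
machine actually does, purely for illustration:

#include <stdio.h>
#include <stdint.h>

/* Dump the bytes of 0xAABBCCDD (A=AA ... D=DD) at increasing addresses;
   a big-endian host prints AA BB CC DD, a little-endian one DD CC BB AA. */
int main(void)
{
    uint32_t v = 0xAABBCCDDu;
    const unsigned char *p = (const unsigned char *)&v;
    for (int i = 0; i < 4; i++)
        printf("%02X ", (unsigned)p[i]);
    printf("\n");
    return 0;
}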

The general belief today among computer scientists, it seems to me, is that the
industry should standardize on little-endian. Yes, big-endian seems more
familiar to people out of habit, but byte-order is just a convention.

Personally, I like big-endian. But I can see that little-endian has the advantage
that, given that we address objects in memory by their lowest-addressed byte, the
place value of the first part is the same regardless of the object's length.
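
A small illustration of that advantage (the memcpy calls are just to keep the
byte access legal in C): on a little-endian machine, the 8-, 16- and 32-bit
reads below all start at the same lowest address and all see the same value.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t word = 42;
    uint8_t  b; uint16_t h; uint32_t w;
    memcpy(&b, &word, sizeof b);    /* narrower reads from the same address */
    memcpy(&h, &word, sizeof h);
    memcpy(&w, &word, sizeof w);
    printf("%u %u %u\n", (unsigned)b, (unsigned)h, (unsigned)w);
    return 0;                       /* prints 42 42 42 on little-endian */
}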

But having the world of binary numbers used for arithmetic follow one rule, and
numbers as strings of ASCII characters that are read in and printed follow the
opposite rule... only works if those two worlds really are separate.

When the IBM System/360 came out, the PDP-11, which it partly helped to inspire,
of course didn't exist yet, so little-endian was not yet an alternative to
consider.

But despite that, if there had been a choice, big-endian would have been the
right choice for System/360.

The IBM System/360 mainframe, just like the microcomputer on your desktop,
handled character strings in its memory, even if they were in EBCDIC instead of
ASCII, and it also handled binary two's complement integers. (That was a bit of
a departure for IBM, as the 7090 used sign-magnitude.)

But it also had something else, something that Intel and AMD x86-architecture
processors give only very limited support to: packed decimal numbers.

A packed decimal number is a way to represent a number inside a computer as a
sequence of decimal digits, each one represented by four binary bits holding the
binary-coded-decimal code for that digit.

The System/360 had pack and unpack instructions, to convert between numbers in
character string form and numbers in packed decimal form. Obviously, having both
kinds of numbers in the same order, with the first digit in the lowest address,
made those instructions easier to implement.
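
In spirit - ignoring the sign nibble and the zoned-decimal details of the real
PACK and UNPK instructions - packing amounts to something like this sketch (it
assumes an even number of digits):

#include <stddef.h>

/* Pack decimal digit characters, most significant first, two digits
   per byte as 4-bit BCD nibbles. */
static void pack_digits(const char *digits, size_t ndigits, unsigned char *out)
{
    for (size_t i = 0; i < ndigits; i += 2) {
        unsigned hi = (unsigned)(digits[i]     - '0');
        unsigned lo = (unsigned)(digits[i + 1] - '0');
        out[i / 2] = (unsigned char)((hi << 4) | lo);
    }
}

Packing "1234" this way gives the bytes 0x12 0x34, in the same left-to-right
order as the character string, which is the point being made here.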

But the System/360 also *did arithmetic* with packed decimal quantities. And
they often used the same ALU, with a decimal-adjust feature driven by the nibble
carries, to perform that arithmetic. So having the most significant part on
the same side of a number for binary numbers and packed decimal numbers made
_that_ easier.
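
Roughly, here is what a binary adder with a decimal-adjust feature does for one
byte (two packed digits) - a sketch of the technique, not of any actual S/360
implementation:

/* Add two bytes that each hold two BCD digits; returns the two-digit
   BCD sum and sets *carry_out for the decimal carry. */
static unsigned bcd_add_byte(unsigned a, unsigned b, unsigned *carry_out)
{
    unsigned sum = a + b;                  /* plain binary add */
    if (((a & 0x0F) + (b & 0x0F)) > 9)     /* low digit carried past 9? */
        sum += 0x06;                       /* skip the six unused codes */
    if ((sum >> 4) > 9) {                  /* high digit (or overflow) past 9? */
        sum += 0x60;
        *carry_out = 1;
    } else {
        *carry_out = 0;
    }
    return sum & 0xFF;
}

For example, 0x19 + 0x28 comes out as 0x47 with no decimal carry, and
0x99 + 0x99 as 0x98 with a carry, matching 19+28 and 99+99.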

Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.

John Savard
Ahem A Rivet's Shot
2019-08-06 06:44:00 UTC
Permalink
On Mon, 5 Aug 2019 14:35:27 -0700 (PDT)
Post by Quadibloc
Historically, since computers were developed in the Western world,
storage of characters in computer words was from "left" to "right", that
is, from the most significant part to the least significant part,
basically without thought - it was simply assumed to be the natural way
to do it, as it corresponded to how numbers and words were written by us
on paper.
However a little extra thought will reveal that we generally
manipulate numbers starting at the least significant end (except for long
division).
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Scott Lurndal
2019-08-06 13:38:42 UTC
Permalink
Post by Quadibloc
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.
Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.
Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.

Burroughs medium systems would do the arithmetic from the most significant
digit (logically adding leading zeros to the shorter operand). This allowed
overflow to be detected quickly.

To do this, they add corresponding digits and if they carry, overflow is signaled.
If not and the result was '9', they increment a 9's counter and add the next pair
of digits; if a subsequent pair of digits produces a carry, and the nines counter
is nonzero, an overflow is signaled.

One can find a flow-chart of the algorithm in the B3500 RefMan on Bitsavers.
1025475_B2500_B3500_RefMan_Oct69.pdf
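
In rough C, the idea might look like this - just a sketch of the scheme as
described; the overflow test here requires the run of nines to reach all the way
to the most significant digit, and the authoritative flow chart is the one in
the manual above:

#include <stddef.h>

/* a[] and b[] hold one decimal digit per element, most significant
   first, already padded with leading zeros to the same length n.
   The sum goes to r[]; returns 1 as soon as overflow is certain. */
static int dec_add_msd_first(const unsigned a[], const unsigned b[],
                             unsigned r[], size_t n)
{
    size_t nines = 0;                 /* consecutive 9s ending at r[i-1] */
    for (size_t i = 0; i < n; i++) {
        unsigned s = a[i] + b[i];
        if (s >= 10) {                /* carry into more significant digits */
            if (nines == i)
                return 1;             /* the 9s run off the top: overflow */
            r[i - nines - 1] += 1;    /* first non-9 above absorbs the carry */
            for (size_t k = i - nines; k < i; k++)
                r[k] = 0;             /* intervening 9s roll over */
            r[i] = s - 10;
            nines = 0;
        } else {
            r[i] = s;
            nines = (s == 9) ? nines + 1 : 0;
        }
    }
    return 0;
}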
Peter Flass
2019-08-06 20:50:58 UTC
Permalink
Post by Scott Lurndal
Post by Quadibloc
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.
Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.
Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.
Burroughs medium systems would do the arithmetic from the most significant
digit (logically adding leading zeros to the shorter operand). This allowed
overflow to be detected quickly.
To do this, they add corresponding digits and if they carry, overflow is signaled.
If not and the result was '9', they increment a 9's counter and add the next pair
of digits; if a subsequent pair of digits produces a carry, and the nines counter
is nonzero, an overflow is signaled.
One can find a flow-chart of the algorithm in the B3500 RefMan on Bitsavers.
1025475_B2500_B3500_RefMan_Oct69.pdf
Big-endian has always made more sense to me, but this is a discussion that
will never be resolved. We’re long past the time where hardware
considerations of either are significant.
--
Pete
Scott Lurndal
2019-08-06 20:58:03 UTC
Permalink
Post by Peter Flass
Post by Scott Lurndal
Post by Quadibloc
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.
Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.
Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.
Burroughs medium systems would do the arithmetic from the most significant
digit (logically adding leading zeros to the shorter operand). This allowed
overflow to be detected quickly.
To do this, they add corresponding digits and if they carry, overflow is signaled.
If not and the result was '9', they increment a 9's counter and add the next pair
of digits; if a subsequent pair of digits produces a carry, and the nines counter
is nonzero, an overflow is signaled.
One can find a flow-chart of the algorithm in the B3500 RefMan on Bitsavers.
1025475_B2500_B3500_RefMan_Oct69.pdf
Big-endian has always made more sense to me, but this is a discussion that
will never be resolved. We’re long past the time where hardware
considerations of either are significant.
That's not the case. The processor we just taped out can be configured
for big-endian or little-endian independently in all privilege levels of
the processor (Arm AArch64). Big-endian is popular with networking appliances
(because multibyte data on IEEE 802 networks is interpreted as big-endian),
whereas little-endian is popular in most other contexts. Little-endian is
somewhat easier to work with once one becomes accustomed to it. In any case,
99% of all programmers don't know or care what the endianness is.
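
For the networking case, portable code just converts explicitly with the
standard POSIX helpers; a trivial example:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl, ntohl */

int main(void)
{
    uint32_t host = 0x0A000001;      /* e.g. the address 10.0.0.1 */
    uint32_t wire = htonl(host);     /* big-endian "network byte order" */
    printf("%08X -> %08X\n", (unsigned)host, (unsigned)ntohl(wire));
    return 0;
}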

With Arm64 processors, Linux looks at the executable's ELF header and sets the
endianness for the process from the header flags during the exec(2) system
call.
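
A sketch of what that check looks like from user space, using the EI_DATA byte
of the ELF identification (constants from the usual Linux <elf.h>):

#include <stdio.h>
#include <elf.h>    /* EI_NIDENT, EI_DATA, ELFDATA2LSB, ELFDATA2MSB */

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;
    unsigned char ident[EI_NIDENT];
    if (fread(ident, 1, sizeof ident, f) == sizeof ident) {
        if (ident[EI_DATA] == ELFDATA2LSB)
            puts("little-endian ELF");
        else if (ident[EI_DATA] == ELFDATA2MSB)
            puts("big-endian ELF");
        else
            puts("unknown/invalid data encoding");
    }
    fclose(f);
    return 0;
}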
Charlie Gibbs
2019-08-07 16:25:51 UTC
Permalink
Post by Peter Flass
Big-endian has always made more sense to me, but this is a discussion that
will never be resolved. We’re long past the time where hardware
considerations of either are significant.
8080 One little,
8085 Two little,
8086 Three little-endians...
8088 Four little,
80186 Five little,
80286 Six little-endians...
80386 Seven little,
80386SX Eight little,
80486 Nine little-endians...
Pentium <segment fault>
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
Gene Wirchenko
2019-08-07 20:21:48 UTC
Permalink
Post by Charlie Gibbs
Post by Peter Flass
Big-endian has always made more sense to me, but this is a discussion that
will never be resolved. We’re long past the time where hardware
considerations of either are significant.
8080 One little,
8085 Two little,
No Z-80?
Post by Charlie Gibbs
8086 Three little-endians...
8088 Four little,
80186 Five little,
80286 Six little-endians...
80386 Seven little,
80386SX Eight little,
80486 Nine little-endians...
Pentium <segment fault>
Sincerely,

Gene Wirchenko
Charlie Gibbs
2019-08-08 03:10:33 UTC
Permalink
Post by Gene Wirchenko
Post by Charlie Gibbs
8080 One little,
8085 Two little,
No Z-80?
Maybe I could replace the 80186. But I can't have too many
or the rhyme wouldn't work.

There once was a fellow named Dan,
Whose limericks wouldn't quite scan.
When told this was so,
He replied, "Yes, I know,
But I always try to get as many syllables into the last line as I possibly can."
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
Gareth's was W7 now W10 Downstairs Computer
2019-08-08 08:58:19 UTC
Permalink
Post by Charlie Gibbs
Post by Gene Wirchenko
Post by Charlie Gibbs
8080 One little,
8085 Two little,
No Z-80?
Maybe I could replace the 80186. But I can't have too many
or the rhyme wouldn't work.
There once was a fellow named Dan,
Whose limericks wouldn't quite scan.
When told this was so,
He replied, "Yes, I know,
But I always try to get as many syllables into the last line as I possibly can."
There was a young fellow from Trinity,
Who, though he could trill like a linnet, he
Could never complete
Any verses with feet,
For, as he said, "Look, you fools, what I am writing here is free verse.".
Charlie Gibbs
2019-08-09 00:16:47 UTC
Permalink
On 2019-08-08, Gareth's was W7 now W10 Downstairs Computer
Post by Gareth's was W7 now W10 Downstairs Computer
Post by Charlie Gibbs
Post by Gene Wirchenko
Post by Charlie Gibbs
8080 One little,
8085 Two little,
No Z-80?
Maybe I could replace the 80186. But I can't have too many
or the rhyme wouldn't work.
There once was a fellow named Dan,
Whose limericks wouldn't quite scan.
When told this was so,
He replied, "Yes, I know,
But I always try to get as many syllables into the last line as I possibly can."
There was a young fellow from Trinity,
Who, though he could trill like a linnet, he
Could never complete
Any verses with feet,
For, as he said, "Look, you fools, what I am writing here is free verse.".
There once was a man from Purdue,
Whose limericks stopped at line two.

There once was a fellow named Dunn,

(There was something about someone named Nero,
but I can't find a reference...)
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
Dan Espen
2019-08-07 01:06:17 UTC
Permalink
Post by Scott Lurndal
Post by Quadibloc
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.
Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.
Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.
On S/360 the packed decimal instructions have the address of the first
byte and a length. Using that, the hardware should know where
ALL the digits are.

Typically, the packed number is unpacked and printed with
either UNPK, ED, or EDMK. I suppose ED and EDMK could be
engineered to reverse nibble order, but we'd need a new instruction
to UNPK in reverse order.

I can't see how little-endian is an optimization. How hard is it
for a machine to find all the bytes in a word or half word? Just because
the instruction references the first byte doesn't mean the machine
can't start working on the second or fourth.
--
Dan Espen
Quadibloc
2019-08-07 02:05:54 UTC
Permalink
Post by Dan Espen
I can't see how little endian is an optimization, how hard is it
for a machine to find all the bytes in a word, half word...just because
the instruction references the first byte, doesn't mean the machine
can't start working on the second or fourth.
Fundamentally, it isn't. But remember, this started back in the days of small-
scale integration, if not discrete transistors.

John Savard
Dan Espen
2019-08-07 11:55:05 UTC
Permalink
Post by Quadibloc
Post by Dan Espen
I can't see how little endian is an optimization, how hard is it
for a machine to find all the bytes in a word, half word...just because
the instruction references the first byte, doesn't mean the machine
can't start working on the second or fourth.
Fundamentally, it isn't. But remember, this started back in the days of small-
scale integration, if not discrete transistors.
Hardly an excuse. There were all kinds of machines built before then
that were big endian and worked fine, including IBM 14xx.
--
Dan Espen
John Levine
2019-08-07 21:12:27 UTC
Permalink
Post by Dan Espen
Post by Quadibloc
Fundamentally, it isn't. But remember, this started back in the days of small-
scale integration, if not discrete transistors.
Hardly an excuse. There were all kinds of machines built before then
that were big endian and worked fine, including IBM 14xx.
As far as I can tell, until the DEC PDP-11 every machine that had a
byte order was big-endian. Even the -11 has some big-endianness in
some of its multi-word arithmetic hardware.

I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2019-08-07 21:43:45 UTC
Permalink
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
That may be, but I would have thought the answer was obvious. The PDP-11 had
general registers. It was an attempt to get fancy, like a System/360. But DEC
was the low-price leader in the minicomputer business. So they wanted to keep
things simple, and have the least significant word of a 32-bit quantity first -
it saved a few transistors. And other machines, like the Honeywell 316, did it
that way; it was a common practice.

Going little-endian for byte addressing and for packing characters in a word
gave you consistency "for free". That's an obvious enough fact that I don't
think it really is speculation to assume this is the cause.

Yes, it would be nice to have a primary source, but if, for whatever reason, the
specific engineer responsible wishes to remain anonymous, we may never know for
sure.

John Savard
Peter Flass
2019-08-07 22:06:56 UTC
Permalink
Post by Quadibloc
Yes, it would be nice to have a primary source, but if, for whatever reason, the
specific engineer responsible wishes to remain anonymous,
I can understand why!
--
Pete
John Levine
2019-08-07 23:13:39 UTC
Permalink
Post by Quadibloc
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. ...
That may be, but I would have thought the answer was obvious. The PDP-11 had
general registers. It was an attempt to get fancy, like a System/360. But DEC
was the low-price leader in the minicomputer business. So they wanted to keep
things simple, and have the least significant word of a 32-bit quantity first -
it saved a few transistors. And other machines like the Honeywell 316 did it
that way, it was a common practice.
Actually, the early PDP-11s handled nothing bigger than a 16-bit
word. (I programmed a PDP-11/20, so I'm speaking from experience
here.) Any 32-bit values were done in software. As others have
noted, when they added some optional multiple-precision hardware
later, they got the word order wrong, leading to "middle-endian".

For character strings, byte order doesn't really matter, since you
address the bytes rather than the words they might be packed into.
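
A trivial way to see it: the string below occupies the same successive byte
addresses on either kind of machine; endianness only shows up once bytes are
grouped into wider integers.

#include <stdio.h>

int main(void)
{
    const char s[] = "UNIX";
    for (int i = 0; s[i] != '\0'; i++)
        printf("offset %d: '%c'\n", i, s[i]);   /* same on any byte order */
    return 0;
}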

The DDP 316 was word addressed. I never programmed one, but the
reference manual at bitsavers says that in double precision values,
the most significant 15 bits are in the first word and the less
significant 15 bits are in the second word. The sign is the high
bit of the first word. Floating point also put the high part first.

For that and other reasons I doubt it was an influence on DEC.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Bob Eager
2019-08-08 08:00:17 UTC
Permalink
Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
(I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
bit values were done in software.
Me too. It was luxury when we got bigger 11s that had hardware for it!
The DDP 316 was word addressed. I never programmed one, but the
reference manual at bitsavers says that in double precision values,
the most significant 15 bits are in the first word and the less
significant 15 bits are in the second word. The sign is the high bit of
the first word. Floating point also put the high part first.
Yes, I was programming a 516 (pretty well the same) at around the same
time. Completely different. It was more fun for me though, because I
ended up modifying the CPU to change the instruction set a bit.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Peter Flass
2019-08-09 00:41:26 UTC
Permalink
Post by Bob Eager
Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
(I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
bit values were done in software.
Me too. It was luxury when we got bigger 11s that had hardware for it!
Didn’t the original -11s have to have the bootstrap toggled in from the
console? I seem to recall some machine where a boot prom was an option.
--
Pete
Gareth's was W7 now W10 Downstairs Computer
2019-08-09 08:30:13 UTC
Permalink
Post by Peter Flass
Post by Bob Eager
Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
(I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
bit values were done in software.
Me too. It was luxury when we got bigger 11s that had hardware for it!
Didn’t the original -11s have to have the bootstrap toggled in from the
console? I seem to recall some machine where a boot prom was an option.
Altogether now ...

16701
26
12702
352
5211
105711 ...

Can't remember much more, but it got cemented in my
mind 48 years ago :-)
googlegroups jmfbahciv
2019-08-10 19:08:14 UTC
Permalink
Post by Peter Flass
Post by Bob Eager
Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
(I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
bit values were done in software.
Me too. It was luxury when we got bigger 11s that had hardware for it!
Didn’t the original -11s have to have the bootstrap toggled in from the
console? I seem to recall some machine where a boot prom was an option.
--
Pete
Toggling to read in a paper tape was de rigueur in 1971.

/BAH
Alan Bowler
2020-07-22 00:51:09 UTC
Permalink
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes. Did that a few times.
John Levine
2020-07-22 03:14:44 UTC
Permalink
Post by Alan Bowler
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes. Did that a few times.
True, but it generally didn't take 11 months to do.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Gareth Evans
2020-07-22 10:05:15 UTC
Permalink
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...

16701
26
12702
352

... but I forget the rest!
Quadibloc
2020-07-22 16:45:52 UTC
Permalink
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
I was able to Google a page which preserves it:

http://gunkies.org/wiki/PDP-11_Bootstrap_Loader

Set the address to 7744.

Then put the loader in:

7744 016701
7746 000026
7750 012702
7752 000352
7754 005211
7756 105711
7760 100376
7762 116162
7764 000002
7766 007400
7770 005267
7772 177756
7774 000765

and then the next word needs to contain the address of the boot device, which
may vary between systems.

John Savard
Gareth Evans
2020-07-22 18:50:40 UTC
Permalink
Post by Quadibloc
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
http://gunkies.org/wiki/PDP-11_Bootstrap_Loader
Set the address to 7744.
7744 016701
7746 000026
7750 012702
7752 000352
7754 005211
7756 105711
7760 100376
7762 116162
7764 000002
7766 007400
7770 005267
7772 177756
7774 000765
and then the next word needs to contain the address of the boot device, which
may vary between systems.
Yes, thanks, and that triggers my memory of being able to read
the machine code in its neat orthogonal octal!
Bob Eager
2020-07-23 00:43:00 UTC
Permalink
Post by Quadibloc
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from
the console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701 26 12702 352
... but I forget the rest!
http://gunkies.org/wiki/PDP-11_Bootstrap_Loader
Set the address to 7744.
7744 016701 7746 000026 7750 012702 7752 000352 7754 005211 7756 105711
7760 100376 7762 116162 7764 000002 7766 007400 7770 005267 7772 177756
7774 000765
and then the next word needs to contain the address of the boot device,
which may vary between systems.
As it happens, I have that printed on the programming card right beside
me on the desk (I am doing PDP-11 stuff).
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Gareth Evans
2020-07-23 11:08:28 UTC
Permalink
Post by Bob Eager
Post by Quadibloc
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from
the console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701 26 12702 352
... but I forget the rest!
http://gunkies.org/wiki/PDP-11_Bootstrap_Loader
Set the address to 7744.
7744 016701 7746 000026 7750 012702 7752 000352 7754 005211 7756 105711
7760 100376 7762 116162 7764 000002 7766 007400 7770 005267 7772 177756
7774 000765
and then the next word needs to contain the address of the boot device,
which may vary between systems.
As it happens, I have that printed on the programming card right beside
me on the desk (I am doing PDP-11 stuff).
Yes; should've thought of that, as I have PDP11 programming
cards from 1971 and also a fair selection of PDP8 and PDP11
manuals from that time.

PDP11 - undergraduate internship in the summer of 1971
and then a PDP11 assembler programmer from 1972 to 1981.

PDP8 - Final year at Essex in 1972 studying Computer
and Communications Engineering as a 3rd year specialism
of Electronics, we had the PDP8 as study examples both
for hardware and software.

Also, from 1971 I've a sales glossy, "Digital Products
and Applications" which covered the whole of their then
range.

(Reminder to self; must gird the loins and actually
assemble the PiDP8 and PiDP11 kits :-) )
Andy Walker
2020-07-23 12:09:36 UTC
Permalink
[...]
Post by Bob Eager
Post by Quadibloc
Set the address to 7744.
7744 016701 [...]
As it happens, I have that printed on the programming card right beside
me on the desk (I am doing PDP-11 stuff).
I was doing that in the '70s, and have now [of course] forgotten
all the details, but I have a vague memory that someone found a way of
shortening that by a couple of words, which makes a big difference when
you're setting switches by hand. Any takers?

Anyway, it was a great moment when the 11/05 was replaced by an
11/34, which just booted up into Unix "automatically".

The 11/05 was in a room carved out of a basement area previously
used as a "Gents". There had regularly been puddles on the floor, which
were ascribed to carelessness by the users. But when it was a computer
room, it became clear that the problem lay elsewhere. The University
wasn't disposed to do much about it -- yes, the building had been built
directly over a stream, yes, the [expensive] architect had bungled, but
what's the problem, really? "Well, there's a puddle half-way across
the floor, it's raining, the puddle is growing, and if it gets to where
the computers are, it could blow them, and cost ..." "Oh, /computers/.
Right." There was a pump installed within the hour.
--
Andy Walker,
Nottingham.
Gareth Evans
2020-07-23 13:57:46 UTC
Permalink
[...]
Post by Bob Eager
Post by Quadibloc
Set the address to 7744.
7744 016701 [...]
As it happens, I have that printed on the programming card right beside
me on the desk (I am doing PDP-11 stuff).
    I was doing that in the '70s, and have now [of course] forgotten
all the details, but I have a vague memory that someone found a way of
shortening that by a couple of words, which makes a big difference when
you're setting switches by hand.  Any takers?
You could save the input device directly in the first instruction as ...

12701
177560

instead of picking it up from the end, but can't see how
to save another word!
r***@gmail.com
2020-07-23 00:39:56 UTC
Permalink
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.
Peter Flass
2020-07-23 02:56:49 UTC
Permalink
Post by r***@gmail.com
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.
The only real machine I know that couldn’t do this was the CDC 6400. S/360
had a three-card loader that you could IPL from the card reader.
--
Pete
Charlie Gibbs
2020-07-23 05:58:04 UTC
Permalink
Post by Peter Flass
Post by r***@gmail.com
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.
The only real machine I know that couldn’t do this was the CDC 6400. S/360
had a three-card loader that you could IPL from the card reader.
I wrote a one-card loader for the Univac 9300 (sort of like a 360/20)
which could load up to 16 subsequent cards (1280 bytes) into contiguous
memory locations and jump to the beginning. If you couldn't do what you
wanted in 1280 bytes (e.g. my 3-card memory dump), you could always write
a loader that would bring in whatever you wanted. (Mind you, that's what
the standard cards on the front of a binary deck did...)
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
r***@gmail.com
2020-07-23 08:55:54 UTC
Permalink
Post by Peter Flass
Post by r***@gmail.com
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.
The only real machine I know that couldn’t do this was the CDC 6400. S/360
had a three-card loader that you could IPL from the card reader.
A number of the early machines had the loader in some form
of magnetic storage. I think that these computers had only
paper tape input, so the loader needed to assemble instruction
words from successive rows of paper tape.

As I said, ACE (1951) and DEUCE did not have any in-store
loader. For any program, the operator placed a deck of
program cards in the card reader and pressed a key on
the card reader, which cleared the high-speed store
and started the card reader.
The self-loading instructions on the first four rows of
a set of 3 cards were ordinary 32-bit binary-image instructions.
(as were the contents of the next 32 rows of the three cards).
Bob Eager
2020-07-23 09:33:47 UTC
Permalink
Post by Peter Flass
Post by r***@gmail.com
Post by Gareth Evans
Post by Peter Flass
Didn’t the original -11s have to have the bootstrap toggled in from
the console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701 26 12702 352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions to
read in the remaining 32 rows of the cards and to store them directly
in the high speed store.
The only real machine I know that couldn’t do this was the CDC 6400.
S/360 had a three-card loader that you could IPL from the card reader.
Three cards? The Elliott 4100 series could do it on 12 rows (4 words) of
paper tape.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Ahem A Rivet's Shot
2020-07-23 12:00:14 UTC
Permalink
On Wed, 22 Jul 2020 19:56:49 -0700
Post by Peter Flass
S/360 had a three-card loader that you could IPL from the card reader.
The 1130 had a single-card loader that IPL'd from the card reader;
attempting to copy one in an 029 was a bad idea.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Scott Lurndal
2020-07-23 15:11:47 UTC
Permalink
Post by r***@gmail.com
Post by Gareth Evans
Didn’t the original -11s have to have the bootstrap toggled in from the
console?
Yes.  Did that a few times.
Altogether now, off the top of my head, from 46 years ago ...
16701
26
12702
352
... but I forget the rest!
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.
The only real machine I know that couldn’t do this was the CDC 6400. S/360
had a three-card loader that you could IPL from the card reader.
Burroughs medium systems used a one-card loader, and the 'load' button
was hardwired to issue a read from the card reader (or strapped to read
the first sector of disk) and transfer control to the buffer after the
read completed.
Joe Pfeiffer
2020-07-23 16:36:30 UTC
Permalink
I remember the Nova with floppy disk drive had a two-instruction
loader -- first one started a DMA transfer from the disk to address 0,
second one was a jmp to itself.
Alfred Falk
2020-07-23 23:45:05 UTC
Permalink
Post by Joe Pfeiffer
I remember the Nova with floppy disk drive had a two-instruction
loader -- first one started a DMA transfer from the disk to address 0,
second one was a jmp to itself.
Correct. In full:

Reset
000376
Examine Sets address for subsequent deposits
0601xx NIOS xx Start IO on device xx (typically 33 or 37)
Deposit
000377 JMP 377
Deposit Next
000376
Start

(I had to look that up. Later machines with automatic load made it even
simpler:

Reset
xx device address (typically 33 or 37)
Start
googlegroups jmfbahciv
2019-08-10 19:04:56 UTC
Permalink
Post by John Levine
Post by Dan Espen
Post by Quadibloc
Fundamentally, it isn't. But remember, this started back in the days of small-
scale integration, if not discrete transistors.
Hardly an excuse. There were all kinds of machines built before then
that were big endian and worked fine, including IBM 14xx.
As far as I can tell, until the DEC PDP-11 every machine that had a
byte order was big-endian. Even the -11 has some big-endianness in
some of its multi-word arithmetic hardware.
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
--
Regards,
Please consider the environment before reading this e-mail. https://jl.ly
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.

/BAH
Bob Eager
2019-08-10 20:33:48 UTC
Permalink
Post by googlegroups jmfbahciv
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.
For the sake of those Googling:

"The Soul of a New Machine", by Tracy Kidder
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
John Levine
2019-08-10 20:46:38 UTC
Permalink
Post by googlegroups jmfbahciv
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from.
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.
No, Ed de Castro's 16-bit machine was apparently more like what ended
up as the Nova, which was word addressed. The PDP-11 was Gordon
Bell's project.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Bob Eager
2019-08-10 20:56:46 UTC
Permalink
Post by googlegroups jmfbahciv
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from.
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.
No, Ed de Castro's 16-bit machine was apparently more like what ended up
as the Nova, which was word addressed. The PDP-11 was Gordon Bell's
project.
The only DEC machine that de Castro was really involved in was the word
addressed PDP-8 (he was project manager).
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Quadibloc
2019-08-10 22:08:47 UTC
Permalink
Post by googlegroups jmfbahciv
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.
The Soul of a New Machine, by Tracy Kidder, was about a group of young designers
hired by Data General to design their 32-bit version of the Eclipse.

Edson de Castro, who designed the original Nova, and used to work at DEC... left
because DEC went with the PDP-11 instead of his design. So he wasn't working on
the PDP-11.

So I doubt it would have any information of use.

John Savard
Bob Eager
2019-08-10 23:16:58 UTC
Permalink
Post by Quadibloc
Post by googlegroups jmfbahciv
Post by John Levine
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.
The Soul of a New Machine, by Tracy Kidder, was about a group of young
designers hired by Data General to design their 32-bit version of the
Eclipse.
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So
he wasn't working on the PDP-11.
So I doubt it would have any information of use.
It doesn't. I re-read it recently.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Rich Alderson
2019-08-13 21:47:29 UTC
Permalink
Post by Quadibloc
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So he
wasn't working on the PDP-11.
This is the mythology. The real story is a little more complicated, as I
learned last October at the ***@50 celebration hosted by Bruce Ray of Wild
Hare. I met a number of original DG folks.

I've seen the 16-bit design which EdC pitched to DEC management. It looks like
a 16-bit extended PDP-8, rather than either a Nova or a PDP-11.

The Nova came out and was in production for a full year before DEC started on
the PDP-11. It was beating out the PDP-8/i (then the latest model) for sales,
which is why DEC went with a 16-bit system.

So no, DEC didn't go with EdC's 16-bit design--but neither did Data General.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
John Levine
2019-08-13 22:13:12 UTC
Permalink
Post by Rich Alderson
I've seen the 16-bit design which EdC pitched to DEC management. It looks like
a 16-bit extended PDP-8, rather than either a Nova or a PDP-11.
Was it more like a stretched -8 or a squashed -9?
Post by Rich Alderson
The Nova came out and was in production for a full year before DEC started on
the PDP-11. It was beating out the PDP-8/i (then the latest model) for sales,
which is why DEC went with a 16-bit system.
The Nova was a good computer, a lot of performance from a low-cost
design. It eventually had the same problem DEC did: nobody wanted
a mini any more when they could get commodity micros.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Anne & Lynn Wheeler
2019-08-13 23:19:30 UTC
Permalink
Post by John Levine
The Nova was a good computer, a lot of performance from a low cost
design. It eventually had the same problem DEC did, nobody wanted
a mini any more when they could get commodity micros.
An old post I've periodically referenced: a decade of DEC VAX sales, sliced
and diced by year, model, and US/non-US ... the drop in sales around the mid-80s
effectively shows the mid-range market moving to large PCs and workstations
http://www.garlic.com/~lynn/2002f.html#0

IBM 4300s sold in the same mid-range market with similar numbers (modulo
large corporate orders of 100+ for distributed computing placed out
in departmental areas) ... and experienced similar decline.

I had a project that I was told needed some IBM content, so I went to order
some Series/1 ... but this was just after IBM had bought ROLM ... ROLM
ran DG machines, and a year's output of Series/1 was ordered to replace the
DG. I had known the person running the ROLM datacenter (even before IBM
bought them) several years earlier at IBM ... and horse traded some help
with their build&test for some Series/1.

later in SCI meetings
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

both Sequent and Data General went with four-i486-processor (shared
cache) boards ... with SCI memory interface (allowing up to 64 boards,
256 processors). Never did see any of the data general machines
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
... but did see some number of the sequent
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

this was before IBM bought them and shut them down. I had left in the
early 90s ... but did some consulting for Steve Chen ... who
was CTO of Sequent at the time
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)
--
virtualization experience starting Jan1968, online at home since Mar1970
Quadibloc
2019-08-13 23:58:57 UTC
Permalink
Post by Rich Alderson
So no, DEC didn't go with EdC's 16-bit design--but neither did Data General.
Well, of course he had to change it, at least a little! He couldn't use something
he designed while an employee of DEC; it would still have belonged to them. Plus,
since he had the PDP-11 to compete with, designing something that looked like the
HP 211x or the Honeywell x16 would not have been successful.

John Savard
Quadibloc
2019-08-14 00:00:33 UTC
Permalink
Post by Rich Alderson
Post by Quadibloc
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So he
wasn't working on the PDP-11.
This is the mythology.
Where did I say that the Edson de Castro design that DEC didn't go with was
the same design he later used for the Nova?

John Savard
Quadibloc
2020-07-24 10:57:14 UTC
Permalink
Post by Quadibloc
Post by Rich Alderson
Post by Quadibloc
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So he
wasn't working on the PDP-11.
This is the mythology.
Where did I say that the design of Edson de Castro that DEC didn't go with - was
the same design as he used later for the Nova?
After all, since he worked on his original PDP-11 proposal while employed by
DEC, they could have sued him if he just used it without changes. And if the
PDP-11 was supposed to be 'better', then instead of an old-fashioned 'me-too'
design that looked like the HP 211x or the Honeywell 316/516, a new design that
he could argue was 'the best of both worlds' might be just the thing.

John Savard
John Levine
2020-07-24 15:47:22 UTC
Permalink
Post by Quadibloc
Post by Quadibloc
Post by Rich Alderson
Post by Quadibloc
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So he
wasn't working on the PDP-11.
This is the mythology.
Where did I say that the design of Edson de Castro that DEC didn't go with - was
the same design as he used later for the Nova?
After all, since he worked on his original PDP-11 proposal while employed by
DEC, they could have sued him if he just used it without changes. And if the
PDP-11 was supposed to be 'better', then instead of an old-fashioned 'me-too'
design that looked like the HP 211x or the Honeywell 316/516, a new design that
he could argue was 'the best of both worlds' might be just the thing.
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Jon Elson
2020-07-24 17:32:39 UTC
Permalink
Post by John Levine
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8. The PDP-11 was a
bold step into a new way of thinking, a whole new concept of CPU
architecture. Hopefully I don't have to detail the differences here.
(And, my memory of the Nova has faded largely into the distant past,
I last used one 45 years ago!)

Jon
Peter Flass
2020-07-24 18:45:22 UTC
Permalink
Post by Jon Elson
Post by John Levine
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8. The PDP-11 was a
bold step into a new way of thinking, a whole new concept of CPU
architecture. Hopefully I don't have to detail the differences here.
(And, my memory of the Nova has faded largely into the distant past,
I last used one 45 years ago!)
Reading Bell’s paper it appears that the Nova was based on the DEC “PDP-X”
design that was rejected in favor of the PDP-11.
--
Pete
Peter Flass
2020-07-25 01:03:00 UTC
Permalink
Post by Peter Flass
Post by Jon Elson
Post by John Levine
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8. The PDP-11 was a
bold step into a new way of thinking, a whole new concept of CPU
architecture. Hopefully I don't have to detail the differences here.
(And, my memory of the Nova has faded largely into the distant past,
I last used one 45 years ago!)
Reading Bell’s paper it appears that the Nova was based on the DEC “PDP-X”
design that was rejected in favor of the PDP-11.
Or maybe not-

http://simh.trailing-edge.com/docs/pdpx.pdf
--
Pete
John Levine
2020-07-24 19:28:52 UTC
Permalink
Post by Jon Elson
Post by John Levine
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8.
I agree the PDP-11 was more fun to program, but the Nova was an
excellent piece of engineering. It was two large circuit boards that
plugged into a simple backplane so it was cheap and easy to
manufacture. The Nova shipped in 1969 at a base price of $4K or $8K
for a usable configuration. The PDP-11/20 shipped a year later priced
at $20K, partly because it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.

Eventually the PDP-11 won as the extra complexity became cheaper to
implement, and the advantages of byte addressing became more
compelling, but DG sold a whole lot of computers, mostly through OEMs
who packaged them into something else so the end customer didn't do
the programming.

DEC's Omnibus in 1971 was a backplane for the PDP-8/E and DEC came
up with one for the PDP-11 around 1973.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Peter Flass
2020-07-25 02:43:07 UTC
Permalink
Post by John Levine
Post by Jon Elson
Post by John Levine
The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8.
I agree the PDP-11 was more fun to program, but the Nova was an
excellent piece of engineering. It was two large circuit boards that
plugged into a simple backplane so it was cheap and easy to
manufacture. The Nova shipped in 1969 at a base price of $4K or $8K
for a usable configuration. The PDP-11/20 shipped a year later priced
at $20K, partly because it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.
“Flip Chips?”
Post by John Levine
Eventually the PDP-11 won as the extra complexity became cheaper to
implement, and the advantages of byte addressing became more
compelling, but DG sold a whole lot of computers, mostly through OEMs
who packaged them into something else so the end customer didn't do
the programming.
DEC's Omnibus in 1971 was a backplane for the PDP-8/E and DEC came
up with one for the PDP-11 around 1973.
--
Pete
r***@gmail.com
2020-07-25 03:14:46 UTC
Permalink
Post by Peter Flass
I agree the PDP-11 was more fun to program...
...it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.
“Flip Chips?”
Not really. Flip Chips was DEC's name for their line of logic boards which
became somewhat obsolete when 14 and 16 pin DIP packaged RTL, DTL, and TTL
circuits became available. The PDP-11 modules used the same physical board
design as the Flip Chips but housed much more complex circuits, designed
purely for the PDP-11.

Bob Netzlof
Rich Alderson
2020-07-25 18:03:16 UTC
Permalink
I agree the PDP-11 was more fun to program...
...it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.
“Flip Chips?”
Not really. Flip Chips was DEC's name for their line of logic boards which
became somewhat obsolete when 14 and 16 pin DIP packaged RTL, DTL, and TTL
circuits became available. The PDP-11 modules used the same physical board
design as the Flip Chips but housed much more complex circuits, designed
purely for the PDP-11.
Bob Netzlof
If you look at the DEC documentation, FlipChip(TM) was applied to every
single-height board they produced, and sometimes to the dual, quad, and hex
height boards as well. The complexity of the circuits on the boards has
nothing to do with what they are called.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
John Levine
2020-07-25 03:19:11 UTC
Permalink
Post by John Levine
at $20K, partly because it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.
“Flip Chips?”
Yes. DEC had very bad experiences with the large boards in the PDP-6
and it was a long time before they used them again.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Alfred Falk
2020-07-25 15:48:30 UTC
Permalink
In article
Post by John Levine
at $20K, partly because it was a more complex design, but also
because it was built from many small modules that plugged into a
custom wired backplane.
“Flip Chips?”
Yes. DEC had very bad experiences with the large boards in the PDP-6
and it was a long time before they used them again.
I no longer remember where I heard or read it, but this person* claimed part
of DEC's reason for rejecting De Castro's design was that the large boards
would be impractical to test. I do remember seeing an ad somewhere from a
company claiming that they were the only one that could provide DG with a
test system for their very large (15" square), hence complex, boards.

* might have been a DG salesman
Rich Alderson
2020-07-25 18:12:44 UTC
Permalink
Post by John Levine
Post by John Levine
at $20K, partly because it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.
“Flip Chips?”
Yes. DEC had very bad experiences with the large boards in the PDP-6
and it was a long time before they used them again.
The large boards in the PDP-6 were an outgrowth of the original System Module(TM)
family, DEC's first product. These were used to build the PDP-1, PDP-4, and
PDP-5 successfully.

The problem with the PDP-6 boards, which are sui generis, is that they require
solder connections along two opposite edges of the board. Repair/replacement of
a board in a unit (for example, a 36 bit register, where each board is a bit)
requires unsoldering every neighboring board and moving the wires out of the way
in order to reach the board in question.

I have seen an actual PDP-6 (at SAIL, autumn 1984) and had the issue demonstrated
to me by people who knew whereof they spoke, so I'm not guessing about this.

The FlipChip(TM) was invented for the PDP-7, as an alternative to the physically
larger System Modules(TM). *Every* DEC computer manufactured after the
introduction of the PDP-7 used FlipChips (whether with discrete transistors or
with integrated circuits, gold fingers on one side of the board or both, single,
dual, quad, or hex height) and exactly one kind of backplane connector.

*No* DEC computer after the PDP-6 used PDP-6 style boards.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
John Levine
2020-07-25 19:29:06 UTC
Permalink
Post by Rich Alderson
The problem with the PDP-6 boards, which are sui generis, is that they require
solder connections along two opposite edges of the board. Repair/replacement of
a board in a unit (for example, a 36 bit register, where each board is a bit)
requires unsoldering every neighboring board and moving the wires out of the way
in order to reach the board in question.
I have seen an actual PDP-6 (at SAIL, autumn 1984) and had the issue demonstrated
to me by people who knew whereof they spoke, so I'm not guessing about this.
I actually used a PDP-6 but never got to look inside the cabinet. I gather the
connectors were also a problem and sites had a rubber mallet to tap the boards
to reseat them when the computer was flaky.
Post by Rich Alderson
The FlipChip(TM) was invented for the PDP-7, as an alternative to the physically
larger System Modules(TM). *Every* DEC computer manufactured after the
introduction of the PDP-7 used FlipChips (whether with discrete transistors or
with integrated circuits, gold fingers on one side of the board or both, single,
dual, quad, or hex height) and exactly one kind of backplane connector.
For quite a while, yes, but the later models of PDP-8 and PDP-11 used
larger boards they didn't call flip chips.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Alfred Falk
2019-08-14 19:34:27 UTC
Permalink
Post by Rich Alderson
Post by Quadibloc
Edson de Castro, who designed the original Nova, and used to work at
DEC... left because DEC went with the PDP-11 instead of his design. So
he wasn't working on the PDP-11.
This is the mythology. The real story is a little more complicated, as
of Wild Hare. I met a number of original DG folks.
Wild Hare? They still exist?! Wow!
Post by Rich Alderson
I've seen the 16-bit design which EdC pitched to DEC management. It
looks like a 16-bit extended PDP-8, rather than either a Nova or a
PDP-11.
I have heard that a factor in DEC's rejection of DeCastro's design was the
15" boards with 200+ contacts, which was deemed impractical to test and
manufacture reliably.
Post by Rich Alderson
The Nova came out and was in production for a full year before DEC
started on the PDP-11. It was beating out the PDP-8/i (then the latest
model) for sales, which is why DEC went with a 16-bit system.
So no, DEC didn't go with EdC's 16-bit design--but neither did Data General.
John Levine
2019-08-14 22:29:18 UTC
Permalink
Post by Alfred Falk
I have heard that a factor in DEC's rejection of DeCastro's design was the
15" boards with 200+ contacts, which was deemed impractical to test and
manufacture reliably.
I can believe it. The PDP-6 had large boards which had reliability problems. I gather
a standard piece of the kit was a rubber mallet to tap all the boards and reseat the
connectors.

The PDP-7 through PDP-10 and early PDP-11s used small Flip Chip cards and wire wrapped
backplanes which worked well.
--
Regards,
John Levine, ***@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2020-07-24 10:51:26 UTC
Permalink
Post by googlegroups jmfbahciv
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a New Machine_ might give hints.
The guy who left to start up Data General, Edson de Castro, designed the PDP-5
architecture which was later used in the PDP-8. He left specifically because they
rejected his design for their 16-bit computer, and made the PDP-11 instead.

John Savard
Quadibloc
2020-07-24 10:54:15 UTC
Permalink
Post by Quadibloc
Post by googlegroups jmfbahciv
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a New Machine_ might give hints.
The guy who left to start up Data General, Edson de Castro, designed the PDP-5
architecture which was later used in the PDP-8. He left specifically because they
rejected his design for their 16-bit computer, and made the PDP-11 instead.
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.

https://history-computer.com/ModernComputer/Electronic/PDP-11.html

John Savard
Quadibloc
2020-07-24 11:00:52 UTC
Permalink
Post by Quadibloc
Post by Quadibloc
Post by googlegroups jmfbahciv
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a New Machine_ might give hints.
The guy who left to start up Data General, Edson de Castro, designed the PDP-5
architecture which was later used in the PDP-8. He left specifically because they
rejected his design for their 16-bit computer, and made the PDP-11 instead.
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
And this page

http://hampage.hu/pdp-11/birth.html

has some more information.

John Savard
John Levine
2020-07-24 15:54:57 UTC
Permalink
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.

So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2020-07-24 16:23:18 UTC
Permalink
Post by John Levine
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
Incidentally, from some other sources, I see that McFarland brought over the
"nucleus" of the PDP-11 design. In "What Have We Learned from the PDP-11",
Gordon Bell notes that giving it byte addressing was one of the significant ways
in which it was an improvement on previous architectures.

In the absence of a more detailed account of the development of the PDP-11, I'm
afraid that "speculation" is all we have.

John Savard
Gareth Evans
2020-07-24 16:28:05 UTC
Permalink
Post by John Levine
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
But Little Endian is the obvious and logical approach,
otherwise when dealing with multi precision you have to fart
about to get to the least significant byte when presented
with the address of the variable in memory.

The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.

HTH YMMV EOE
Peter Flass
2020-07-24 16:56:56 UTC
Permalink
Post by Gareth Evans
Post by John Levine
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
But Little Endian is the obvious and logical approach,
otherwise when dealing with multi precision you have to fart
about to get to the least significant byte when presented
with the address of the variable in memory.
The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.
No, big-endian is the logical approach for any machine bigger than an 8008,
since words are operated on, and brought into the ALU, as a unit.
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
--
Pete
Radey Shouman
2020-07-24 18:08:21 UTC
Permalink
Post by Peter Flass
Post by Gareth Evans
Post by John Levine
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
But Little Endian is the obvious and logical approach,
otherwise when dealing with multi precision you have to fart
about to get to the least significant byte when presented
with the address of the variable in memory.
The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.
No, big-endian is the logical approach for any machine bigger than an 8008,
since words are operated on, and brought into the ALU, as a unit.
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.

Least significant digit first is also exactly how numbers are written
when they are being calculated by hand, which was the point of
zero-based notation in the first place.

* I would guess this is true of other languages using Arabic script,
like Farsi and Urdu. No idea about Hebrew.
r***@gmail.com
2020-07-25 00:54:21 UTC
Permalink
Post by Radey Shouman
Post by Peter Flass
Post by Gareth Evans
Post by John Levine
Post by Quadibloc
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.
https://history-computer.com/ModernComputer/Electronic/PDP-11.html
I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
But Little Endian is the obvious and logical approach,
otherwise when dealing with multi precision you have to fart
about to get to the least significant byte when presented
with the address of the variable in memory.
No, big-endian is the logical approach for any machine bigger than an 8008,
since words are operated on, and brought into the ALU, as a unit.
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Least significant digit first is also exactly how numbers are written
when they are being calculated by hand, which was the point of
zero-based notation in the first place.
ACE and DEUCE held values internally in "Chinese" binary, that is, with the
least-significant bit on the left.

Being serial machines, the least-significant bit of a word
emerged first from storage on its way to the adders.
Quadibloc
2020-07-25 08:57:37 UTC
Permalink
Post by Radey Shouman
Post by Peter Flass
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!

But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".

John Savard
Gareth Evans
2020-07-25 10:57:19 UTC
Permalink
Post by Quadibloc
Post by Radey Shouman
Post by Peter Flass
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
The writing of numbers by humans has nothing whatsoever to do
with the internal operation of computers.

But am I right in remembering that the PDP11 had a mix of Big
and Little Endian; the basic processor being Little but some of
the floating point options being Big?
John Levine
2020-07-25 17:23:40 UTC
Permalink
Post by Gareth Evans
But am I right in remembering that the PDP11 had a mix of Big
and Little Endian; the basic processor being Little but some of
the floating point options being Big?
Yes, some of the multi-word arithmetic formats were middle-endian.

They straightened it all out on the VAX, but the fact that it was
so hard to make things consistently little-endian tells us that
the argument that it's more "natural" is silly.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2020-07-25 18:39:52 UTC
Permalink
Post by Gareth Evans
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
The writing of numbers by humans has nothing whatsoever to do
with the internal operation of computers.
Computers operate on character strings internally, and those character strings
get sent to printers to produce text for humans to read.
Post by Gareth Evans
But am I right in remembering that the PDP11 had a mix of Big
and Little Endian; the basic processor being Little but some of
the floating point options being Big?
Yes. I think it was big-endian due to a lack of communication... as the PDP-11
was the first attempt to make a computer consistently little-endian, this was a
novel and unfamiliar concept, and so if the team making the floating-point add-
on didn't have that concept clearly and firmly explained to them, naturally they
would just make the floating-point processor in the way they would assume to be
right and natural and the same as every other computer used.

John Savard
Peter Flass
2020-07-25 21:43:37 UTC
Permalink
Post by Quadibloc
Post by Gareth Evans
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
The writing of numbers by humans has nothing whatsoever to do
with the internal operation of computers.
Computers operate on character strings internally, and those character strings
get sent to printers to produce text for humans to read.
Post by Gareth Evans
But am I right in remembering that the PDP11 had a mix of Big
and Little Endian; the basic processor being Little but some of
the floating point options being Big?
Yes. I think it was big-endian due to a lack of communication... as the PDP-11
was the first attempt to make a computer consistently little-endian, this was a
novel and unfamiliar concept, and so if the team making the floating-point add-
on didn't have that concept clearly and firmly explained to them, naturally they
would just make the floating-point processor in the way they would assume to be
right and natural and the same as every other computer used.
I thought I read somewhere that FP in the early -11s was originally
software only. (maybe from Bell?)
--
Pete
Bob Eager
2020-07-25 22:21:38 UTC
Permalink
Post by Peter Flass
Post by Quadibloc
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian
is "the way humans think of numbers".
The writing of numbers by humans has nothing whatsoever to do with the
internal operation of computers.
Computers operate on character strings internally, and those character
strings get sent to printers to produce text for humans to read.
But am I right in remembering that the PDP11 had a mix of Big and
Little Endian; the basic processor being Little but some of the
floating point options being Big?
Yes. I think it was big-endian due to a lack of communication... as the
PDP-11 was the first attempt to make a computer consistently
little-endian, this was a novel and unfamiliar concept, and so if the
team making the floating-point add- on didn't have that concept clearly
and firmly explained to them, naturally they would just make the
floating-point processor in the way they would assume to be right and
natural and the same as every other computer used.
I thought I read somewhere that FP in the early -11s was originally
software only. (maybe from Bell?)
Pretty sure our 11/20 didn't have it.

As it happens, I've been doing a disassembler for the PDP-11 over the
last couple of days. I identified the EIS floating point (four
instructions, very simple) and the FP-11 floating point (lots of
instructions, a bit arcane).
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
John Levine
2020-07-25 22:49:01 UTC
Permalink
Post by Bob Eager
Post by Peter Flass
I thought I read somewhere that FP in the early -11s was originally
software only. (maybe from Bell?)
Pretty sure our 11/20 didn't have it.
I happen to have a pdp11/20/15/r20 handbook in my hand and I can
assure you there was no floating point available, only an optional
extended arithmetic peripheral that did what EIS later did, in a
different way.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
John Levine
2020-07-25 22:34:37 UTC
Permalink
Post by Peter Flass
I thought I read somewhere that FP in the early -11s was originally
software only. (maybe from Bell?)
The FP hardware was always optional although on the larger machines
I think everyone got it.

The original 11/20 had an optional Unibus arithmetic device that
did multiply and divide and multiple shift, with no hardware FP at all.

On the 11/45 and 11/70 it was an additional full sized board that fit
into a reserved backplane slot. For the later 11/23 it was a couple of
chips that plugged into the CPU board.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Radey Shouman
2020-07-25 18:38:14 UTC
Permalink
Post by Quadibloc
Post by Radey Shouman
Post by Peter Flass
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!
That would require knowing how they write their numbers. I don't, do you?
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
John Savard
--
Quadibloc
2020-07-25 18:42:10 UTC
Permalink
Post by Radey Shouman
That would require knowing how they write their numbers. I don't, do you?
As a matter of fact, as a coin collector, yes, I know that old coins from India,
Thailand, Burma and so on that have dates written using their versions of the
digits from India have the most significant digit on the left.

John Savard
Peter Flass
2020-07-25 21:43:34 UTC
Permalink
Post by Radey Shouman
Post by Quadibloc
Post by Radey Shouman
Post by Peter Flass
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!
That would require knowing how they write their numbers. I don't, do you?
How are abacuses (abaci?) organized? My impression is big-endian.
Post by Radey Shouman
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
John Savard
--
Pete
John Levine
2020-07-25 22:35:47 UTC
Permalink
Post by Peter Flass
How are abacuses (abaci?) organized? My impression is big-endian.
It's entirely up to the operator. There's no mechanical connection between the columns.

I always used mine big-endian but if I were left handed I probably
would have done it little-endian.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Joe Pfeiffer
2020-07-25 22:54:45 UTC
Permalink
Post by Peter Flass
Post by Radey Shouman
Post by Quadibloc
Post by Radey Shouman
Post by Peter Flass
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.
Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!
That would require knowing how they write their numbers. I don't, do you?
How are abacuses (abaci?) organized? My impression is big-endian.
The normal convention is most significant digits are on the left, just
like writing numbers. But since there is no byte order in an abacus I
don't see how the terms even apply to it.
Post by Peter Flass
Post by Radey Shouman
Post by Quadibloc
But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".
John Savard
Quadibloc
2020-07-25 09:03:42 UTC
Permalink
Post by Peter Flass
No, big-endian is the logical approach for any machine bigger than an 8008,
since words are operated on, and brought into the ALU, as a unit.
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.
Basically, Danny Cohen, in his article "On Holy Wars and a Plea for Peace",
is _basically_ right that the differences between big-endian and little-endian
are small: little-endian allows addition to start with the first piece of a
number read from memory, and big-endian allows comparison to begin with the
first piece of a number read from memory.

And I can't agree that "big-endian is the way people think of numbers" in the
sense of being how humans are hard-wired to think of numbers; the Arabs manage
to write decimal numbers in little-endian fashion.

But because big-endian matches the cultural convention for writing numbers in
the Western world, I noted one small advantage of big-endian that Danny Cohen
overlooked.

That's where I began this thread:

Inside a computer, numbers that appear in _text strings_ are in big-endian
order.

So if a computer happens to have a packed decimal data type, and the machine
is big-endian, then there is no conflict between packed decimal numbers

- having the same endianness as character strings, for ease of conversion,

- having the same endianness as binary numbers, to allow the ALU to use common
circuitry for decimal and binary arithmetic with less overhead.

This is a small thing, but I think it's enough to tilt the balance.

John Savard
Ahem A Rivet's Shot
2020-07-25 09:54:07 UTC
Permalink
On Sat, 25 Jul 2020 02:03:42 -0700 (PDT)
Post by Quadibloc
- having the same endianness as binary numbers, to allow the ALU to use
common circuitry for decimal and binary arithmetic with less overhead.
This is a small thing, but I think it's enough to tilt the balance.
I would think this a large thing in the days of discrete transistors
and SSI to MSI integration.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
John Levine
2020-07-25 17:25:08 UTC
Permalink
Post by Ahem A Rivet's Shot
Post by Quadibloc
- having the same endianness as binary numbers, to allow the ALU to use
common circuitry for decimal and binary arithmetic with less overhead.
This is a small thing, but I think it's enough to tilt the balance.
I would think this a large thing in the days of discrete transistors
and SSI to MSI integration.
The VAX had decimal number formats but by then transistors were pretty cheap.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Quadibloc
2020-07-24 17:32:36 UTC
Permalink
Post by Gareth Evans
The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.
In that case, Big-Endian does not go far enough. Clearly we also have to change
computers over from doing their arithmetic in incomprehensible binary to
calculating everything in decimal so that the contents of storage will make
sense.

We have the technology to do so now without wasting copious amounts of memory -
Chen-Ho encoding!

John Savard
Peter Flass
2020-07-24 18:45:19 UTC
Permalink
Post by Quadibloc
Post by Gareth Evans
The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.
In that case, Big-Endian does not go far enough. Clearly we also have to change
computers over from doing their arithmetic in incomprehensible binary to
calculating everything in decimal so that the contents of storage will make
sense.
Made sense years ago. It also allows unlimited-precision arithmetic, with
no concerns about word size.
Post by Quadibloc
We have the technology to do so now without wasting copious amounts of memory -
Chen-Ho encoding!
John Savard
--
Pete
Scott Lurndal
2020-07-24 19:32:26 UTC
Permalink
Post by Peter Flass
Post by Quadibloc
Post by Gareth Evans
The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.
In that case, Big-Endian does not go far enough. Clearly we also have to change
computers over from doing their arithmetic in incomprehensible binary to
calculating everything in decimal so that the contents of storage will make
sense.
Made sense years ago. It also allows unlimited-precision arithmetic, with
no concerns about word size.
Well, to be fair, even a BCD machine generally has an underlying
"word" size of some form. Burroughs medium systems, while supporting
operand lengths of up to 100 digits, still fetched from memory in 10 digit
(40 bit) chunks.

Note that on those systems the most significant digit had the lowest
address, and the hardware adder started with the most significant digit
(if the field lengths of both operands were different, the shorter had
implied leading zeros). The algorithm worked from MSD to LSD so that
it could catch overflow immediately and not store a partial result on
overflow. It did this by counting leading digit-by-digit sums of
9 until overflow was detected or a single digit add summed to less
than 9 (proving the overflow of the receiving field was not possible)
before storing the result.
Ahem A Rivet's Shot
2020-07-24 19:54:24 UTC
Permalink
On Fri, 24 Jul 2020 10:32:36 -0700 (PDT)
Post by Quadibloc
In that case, Big-Endian does not go far enough. Clearly we also have to
change computers over from doing their arithmetic in incomprehensible
binary to calculating everything in decimal so that the contents of
storage will make sense.
Careful now, you'll re-invent ENIAC soon.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Quadibloc
2020-07-25 08:55:40 UTC
Permalink
Post by Ahem A Rivet's Shot
On Fri, 24 Jul 2020 10:32:36 -0700 (PDT)
Post by Quadibloc
In that case, Big-Endian does not go far enough. Clearly we also have to
change computers over from doing their arithmetic in incomprehensible
binary to calculating everything in decimal so that the contents of
storage will make sense.
Careful now, you'll re-invent ENIAC soon.
Although I did have tongue in cheek as I typed that, it was the IBM 7070 I was
thinking of.

ENIAC is interesting for other entirely different reasons: basically, it's what
we now call a dataflow architecture.

John Savard
Dan Espen
2020-07-24 20:40:47 UTC
Permalink
Post by Quadibloc
The only justification for Big Endian seems to come from lazy
programmers who need their hands held and nose wiped when looking at
core dumps.
In that case, Big-Endian does not go far enough. Clearly we also have
to change computers over from doing their arithmetic in
incomprehensible binary to calculating everything in decimal so that
the contents of storage will make sense.
We have the technology to do so now without wasting copious amounts of
memory - Chen-Ho encoding!
Calculating everything in decimal makes a lot of sense. For business
applications the input and output must be decimal and the data has a
small amount of calculation done to it before it must be printed or
displayed.

Back in the day, I ran performance tests for decimal add vs. the CVB
and CVD instructions. The 2 conversion instructions took 10 times
longer than an add.

The S/360 does decimal arithmetic, but only after the data is packed.
That's a trade off of space for ease of use. Having started programming
on 14xx equipment, I miss the simplicity. I don't think the space
saving was a good trade-off.

The 14xx gave us numbers only limited in magnitude by storage size. The
key to that technology was an extra delimiting bit (the wordmark) in a
character. As much as I liked the word mark concept, I'm unsure that it
should have been carried forward.

I did a lot of Assembler and COBOL for business applications on S/360.
I can't think of any time it made sense to take our decimal input and
convert it to binary for efficiency reasons.
--
Dan Espen
John Levine
2020-07-24 19:30:36 UTC
Permalink
Post by John Levine
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
But Little Endian is the obvious and logical approach, ...
Like I said, we have plenty of uninformed speculation, but no actual
facts.

Personally, I've done more programming on little-endian machines than
big-endian but I don't feel strongly about it.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Ahem A Rivet's Shot
2020-07-24 20:06:08 UTC
Permalink
On Fri, 24 Jul 2020 15:54:57 -0000 (UTC)
Post by John Levine
So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.
I don't know either but I'd bet it wasn't any single reason but
rather an assessment that on balance it was probably the better option.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Christian Brunschen
2019-08-10 21:08:24 UTC
Permalink
Post by Quadibloc
Post by Dan Espen
Post by Quadibloc
Fundamentally, it isn't. But remember, this started back in the days
of small-scale integration, if not discrete transistors.
Hardly an excuse. There were all kinds of machines built before then
that were big endian and worked fine, including IBM 14xx.
As far as I can tell, until the DEC PDP-11 every machine that had a
byte order was big-endian. Even the -11 has some big-endianness in
some of its multi-word arithmetic hardware.
I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.
An excellent discussion of endianness - both regarding bytes and bits -
is IEN137, https://www.ietf.org/rfc/ien/ien137.txt, published on 1st
April 1980, and this actually uses the PDP11 as one of the examples
(though no, it does not claim to describe its origin either).

// Christian
Andrew Swallow
2019-08-07 08:24:05 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Quadibloc
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.
Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.
Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.
On S/360 the packed decimal instructions have the address of the first
byte and a length. Using that, the hardware should know where
ALL the digits are.
Typically, the packed number is unpacked and printed with
either UNPK, ED, or EDMK. I suppose ED and EDMK could be
engineered to reverse nibble order, but we'd need a new instruction
to UNPK in reverse order.
I can't see how little endian is an optimization, how hard is it
for a machine to find all the bytes in a word, half word...just because
the instruction references the first byte, doesn't mean the machine
can't start working on the second or fourth.
It is easy if you have index registers and instructions that can extract
the digit or byte. Where you have to shift and mask to extract and
insert it gets harder.