Discussion:
[fonc] The Elements of Computing Systems
Erik Terpstra
2011-01-03 15:51:02 UTC
Permalink
A book called 'The Elements of Computing Systems' [1] describes the
construction of a very simple computing system including its hardware
and software.
You start with a NAND gate and, working gradually through the chapters,
you implement memory, a CPU, and later on an assembler, compiler, VM and
a very basic shell.
All this is implemented in an emulator that is provided on the book's
website [2].
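
(To give a flavor of the early chapters: everything is built up from a
single NAND primitive. A toy rendering of that idea in Python -- not the
book's own HDL, and all the names below are made up for the illustration:)

def nand(a, b):
    """The one primitive: 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

def inv(a):            # NOT from a single NAND
    return nand(a, a)

def and_(a, b):        # AND = NOT(NAND)
    return inv(nand(a, b))

def or_(a, b):         # OR via De Morgan: NAND(NOT a, NOT b)
    return nand(inv(a), inv(b))

def xor(a, b):         # XOR from four NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a, b):  # first step on the way to an ALU
    return xor(a, b), and_(a, b)   # (sum, carry)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))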

I was wondering whether there are people familiar with the FONC/STEPS
project who know this book, and what their thoughts are on where the
implementation strategy taken in this book would differ from the
implementation of a (VERY basic) STEPS-like system.
I'd imagine the hardware implementation would not differ very much, but
that the software would take an entirely different route early on
(probably focusing on an OMeta-like implementation as early as possible).
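
(For anyone who hasn't seen OMeta: the attraction is that parsing and
translation rules are ordinary values in a tiny kernel, so very little has
to exist before you can start describing languages. A crude
parser-combinator sketch of that flavor in Python -- my own illustration,
not OMeta or anything from the STEPS code base, and every name in it is
invented for the example:)

def lit(ch):                        # match one literal character
    def p(s, i):
        return (ch, i + 1) if i < len(s) and s[i] == ch else None
    return p

def alt(*ps):                       # ordered choice, like '|' in OMeta
    def p(s, i):
        for q in ps:
            r = q(s, i)
            if r is not None:
                return r
        return None
    return p

def seq(*ps):                       # match rules one after another
    def p(s, i):
        out = []
        for q in ps:
            r = q(s, i)
            if r is None:
                return None
            v, i = r
            out.append(v)
        return out, i
    return p

digit = alt(*[lit(c) for c in "0123456789"])

def number(s, i):                   # one or more digits -> int
    r = digit(s, i)
    if r is None:
        return None
    text = ""
    while r is not None:
        text, i = text + r[0], r[1]
        r = digit(s, i)
    return int(text), i

def expr(s, i):                     # expr := number '+' expr | number
    r = seq(number, lit("+"), expr)(s, i)
    if r is not None:
        (a, _, b), j = r
        return ("+", a, b), j
    return number(s, i)

if __name__ == "__main__":
    print(expr("1+20+3", 0))        # (('+', 1, ('+', 20, 3)), 6)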

Any opinions on this would be greatly appreciated. I am quite curious whether
a STEPS-like strategy would be more efficient, easier or more succinct.

--Erik

[1]
http://www.amazon.com/Elements-Computing-Systems-Building-Principles/dp/026214087X

[2] http://www1.idc.ac.il/tecs/

P.S.: There is also a video that demonstrates some aspects of TECS:
http://youtu.be/JtXvUoPx4Qs
P.S.2: You can find some sample chapters from the book here:
http://www1.idc.ac.il/tecs/plan.html
John Zabroski
2011-01-03 16:03:25 UTC
Permalink
It's a classic book. It is up there with Turbak and Gifford's Design
Concepts in Programming Languages.

The trouble with these classic texts is that most good programmers only find
out about them after they've already learned the subject from an inferior
source.
Alan Kay
2011-01-03 16:05:23 UTC
Permalink
Hi Erik (and all)

I read this a few years ago and I really wanted to like this book much more than
I did.


I love the basic idea behind it. And I think a really good rendering of the
"from atoms to life" chain of meaning in computing would help many people,
especially students.

However, it was the way they decided to cut corners (with both HW and SW) that
was really bothersome (and not necessary). For example, their target language is
really quite weak and also unreliable in many ways, but there was no need for
them to go in a kind of "weak C" direction. With less work and better techniques
they could get a very strong, very high-level language directly supported by the
HW. Similarly, I don't think they had looked at the Mead-Conway book from the
late 70s to get a handle on simple ways to think of hardware organization, or at
Chuck Thacker's Alto architecture from PARC in the 70s to see how a few thousand
gates can really be organized into a powerful computation engine for emulation,
etc.

Cheers,

Alan




John Zabroski
2011-01-03 16:34:01 UTC
Permalink
Alan,

So, if I understand your criticism correctly: would I be putting words in your
mouth to say that what you really don't like about the book is that it isn't
what FONC is setting out to accomplish? In other words, you don't like it as
much as you could because you think you can do better. (Which is fine; I am the
same way with a lot of things I read, but when I am learning something for the
first time, I don't have such high standards, since I don't have enough material
to compare it to.)

I had not heard of Carver Mead's book before. I just picked it up via an
online used-book store. Thanks.
Alan Kay
2011-01-03 17:06:06 UTC
Permalink
Hi John,

I was just criticizing the book in general, and not from the standpoint of the
STEPS project (for one thing, I think STEPS itself has too many goals and
details and sophistications to really serve relatively early in a learner's
pathway to understanding computing).


And, as I said, I really like the idea of starting with "atoms" and working up
to "life" -- for all of us, and not just students -- and it's a good exercise to
think about how to do this without (a) having too much material, or (b)
weakening deep ideas.

The times I've thought about it previously involved choosing a language target
carefully and then working backwards to the minimal understandable architecture
that would support this. I think this is the big flaw in the organization of
TECS. They could just as easily have gone for a much higher level, more powerful,
more realistic language, and more easily implemented it both from below and from
"aside". I'm guessing the authors are more versed in HW than in SW or in
programming languages.


For example, the simplest better thing they could have done is to implement a
Lisp, and then write a compiler and assembler in Lisp (just as the original MIT
folks did in the phases from Lisp 1 to 1.5). They could have gone a step further
to ISWIM, or the mythical Lisp 2, or the real M-Lisp, or the more interesting
Lisp 70. They could have made their basis Meta II, and generated everything from
there. Etc.
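
(For a rough sense of the scale being talked about here -- a generic
illustration only, not Lisp 1.5, Meta II, or anything proposed above --
the core of a toy Lisp evaluator fits in a few dozen lines of Python; all
the names are made up for the sketch:)

import operator

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)                      # drop the ')'
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok                         # a symbol

GLOBAL = {"+": operator.add, "-": operator.sub,
          "*": operator.mul, "<": operator.lt}

def eval_(x, env=GLOBAL):
    if isinstance(x, str):                 # symbol lookup
        return env[x]
    if not isinstance(x, list):            # number literal
        return x
    head = x[0]
    if head == "quote":
        return x[1]
    if head == "if":
        return eval_(x[2] if eval_(x[1], env) else x[3], env)
    if head == "define":
        env[x[1]] = eval_(x[2], env)
        return env[x[1]]
    if head == "lambda":
        params, body = x[1], x[2]
        return lambda *args: eval_(body, {**env, **dict(zip(params, args))})
    fn = eval_(head, env)
    return fn(*[eval_(a, env) for a in x[1:]])

if __name__ == "__main__":
    eval_(parse(tokenize(
        "(define fact (lambda (n) (if (< n 2) 1 (* n (fact (- n 1))))))")))
    print(eval_(parse(tokenize("(fact 5)"))))   # 120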

Some previous candidates in the times I've thought about this have been Wirth's
(1960's) Euler with a better parsing and microcode scheme, ISWIM (ditto), a
somewhat different version of Smalltalk-72, etc. Today, some of Ian's and Alex's
minimal bootstraps suggest themselves, not just for STEPS, but also for a very
simple atoms-to-life model.

The Alto did not have a lot of transistors in it, but was a very fine
meta-machine, so one might think about how to posit an even simpler Alto, but
that would still have its meta-capabilities. The Mead-Conway "regular
architectures" (also developed at PARC) hint strongly about how easy it is to
make HW and how best to abstract it. The first RISC chip is also quite nice in
this regard (and real, but not nearly as well thought out as the Alto was).


So, my criticisms of TECS are mild -- but there's no doubt that it could be done
a lot better, and for the sake of the students it should be done a lot better.
That being said, I've read at least one other book with similar aims, and all I
remember about it was that it was quite a bit worse.

Cheers,

Alan





John Zabroski
2011-01-03 18:19:06 UTC
Permalink
Kind of a tangent, but something that has bugged me about the Alto and
pretty much every modern machine since is the ad-hoc design of the bootstrap
mechanism for fetching the machine image from somewhere on the network.

In modern machines, the bootstrap is pretty primitive and something like
pxeboot. Red Hat has hired hordes of programmers to build tools to support
this process, which is really just a testimony to how ad-hoc the tool chain
is; how do we know when Red Hat has perfected the process? What design
wisdom is there to suggest that the Alto was in any way better?
Post by Alan Kay
The Alto did not have a lot of transistors in it, but was a very fine
meta-machine, so one might think about how to posit an even simpler Alto,
but that would still have its meta-capabilities. The Mead-Conway "regular
architectures" (also developed at PARC) hint strongly about how easy it is
to make HW and how best to abstract it. The first RISC chip is also quite
nice in this regard (and real, but not nearly as well thought out as the
Alto was).
Alan Kay
2011-01-03 19:01:59 UTC
Permalink
Please say more ...

The Alto didn't have any hardware for this ... nor did it have any regular code
... it was microcoded and almost a Turing Machine in many ways. The main feature
of the hardware architecture was the 16-way zero-overhead multitasking of the
microcode pre-triggered by simple events.
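
(Very roughly: each microcode task keeps its own micro-program counter,
and on every cycle the hardware simply runs the highest-priority task
whose wakeup line is raised, so there is nothing to save or restore. A
loose conceptual model in Python of that one idea -- not the real Alto
microarchitecture, and the task names and events are invented:)

class Task:
    def __init__(self, name, program):
        self.name = name
        self.pc = 0                     # each task keeps its own micro-PC
        self.program = program          # list of "microinstructions" (thunks)
        self.wakeup = False             # raised by some simple hardware event

    def step(self):                     # execute one microinstruction
        self.program[self.pc]()
        self.pc = (self.pc + 1) % len(self.program)

def run(tasks, cycles):
    # tasks are listed highest priority first; every cycle the "hardware"
    # just executes one instruction of the first task whose wakeup is raised,
    # with no save/restore cost because state never leaves the task.
    for _ in range(cycles):
        for t in tasks:
            if t.wakeup:
                t.step()
                break

if __name__ == "__main__":
    log = []
    disk = Task("disk", [lambda: log.append("disk word")])
    emu = Task("emulator", [lambda: log.append("emulate one instruction")])
    emu.wakeup = True                   # the emulator task is always ready
    tasks = [disk, emu]                 # disk outranks the emulator

    run(tasks, 3)                       # only the emulator runs
    disk.wakeup = True                  # a sector event raises the disk wakeup
    run(tasks, 2)                       # disk work preempts at zero cost
    disk.wakeup = False
    run(tasks, 1)
    print(log)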

Are you actually commenting on the way the first Ethernet interface was done?

Cheers,

Alan





John Zabroski
2011-01-03 22:59:51 UTC
Permalink
Post by Alan Kay
Please say more ...
The Alto didn't have any hardware for this ... nor did it have any regular
code ... it was microcoded and almost a Turing Machine in many ways. The
main feature of the hardware architecture was the 16-way zero-overhead
multitasking of the microcode pre-triggered by simple events.
Are you actually commenting on the way the first Ethernet interface was done?
Cheers,
Alan
Thought about this...

I think I confused three different projects: the Interim Dynabook OS for
Smalltalk-72, the Alto, and Butler Lampson's much later work on secure
distributed computing
(http://research.microsoft.com/en-us/um/people/blampson/Systems.html#DSsecurity).

I don't think I've ever read a hardware description of the Alto. I'm 26
years old and there are only so many bits an eye can see ;-)
Alan Kay
2011-01-04 04:33:24 UTC
Permalink
Hi John

Yes, we are talking about ideas spread out over time. The Alto happened in 1973
and the Ethernet was put into practical use a year or so later. And after that
came the Dolphin and Dorado computers (which were Alto-like but faster and
bigger). Smalltalk images in the latter part of the 70s included three microcode
"personalities", one for each of the machines, that would turn them into the
same Smalltalk Virtual Machine. So one could get the image from the distributed
file system and run it efficiently regardless of what machine one was on.


Gerry Popek from UCLA spent a year at PARC and decided to make an "on the fly"
distributed process load-balancing network OS (called the LOCUS Distributed
Operating System -- there's a book of the same name from MIT Press in the late
80s -- it is well worth reading). This was essentially a modified Unix (he was
not at PARC anymore), with portable processes that could be automatically moved
around the network while in process. The implementation was working quite well
by 1985 on heterogeneous collections of PDP-11s, Macs, and PCs (using similar
"personality" hooks for the code) on the commercial Ethernet of the day. I tried
to get Apple to buy this, but to no avail.

So the idea that hardware on networks should just be caches for movable process
descriptions and the processes themselves goes back quite a ways. There's a real
sense in which MS and Apple never understood networking or operating systems (or
what objects really are), and when they decided to beef up their OSs, they went
to (different) very old bad mainframe models of OS design to try to adapt to
personal computers.

Cheers,

Alan




Andrew Gaylard
2011-01-04 05:30:24 UTC
Permalink
John,

[ warning: somewhat off-topic ]
Post by John Zabroski
Kind of a tangent, but something that has bugged me about the Alto and
pretty much every modern machine since is the ad-hoc design of the bootstrap
mechanism for fetching the machine image from somewhere on the network.
In modern machines, the bootstrap is pretty primitive and something like
pxeboot. Red Hat has hired hordes of programmers to build tools to support
this process, which is really just a testimony to how ad-hoc the tool chain
is; how do we know when Red Hat has perfected the process? What design
wisdom is there to suggest that the Alto was in any way better?
I can't speak about the Alto, but OpenFirmware [1] is an excellent way to
bootstrap a system. Modern PCs offer brilliant performance at an affordable
price, but the BIOS remains terribly primitive, even with PXE. I haven't seen
(U)EFI, so I can't comment on it.

The trouble with bootstrap firmware is that, as people think of more things
they'd like to do at install-/boot-time, so the code size and complexity grow.

For instance, suppose that we limit the PROM functionality to "just (a) provide
a console interface, and (b) be able to load an image (kernel) and run it".
We decide that we'd like to load images over a serial port (the simplest thing
possible). So we need something like xmodem to transfer it. That's pretty
minimal, so we're looking good.
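
(It really is minimal: the receive side of an XMODEM-style transfer is one
short loop over fixed-size framed packets. A rough Python sketch,
simplified from the real protocol (checksum variant, no timeouts or
retries), against an imaginary byte-stream object exposing blocking
read(n)/write(bytes) -- the FakeLink class exists only to make the example
self-contained:)

SOH, EOT, ACK, NAK = 0x01, 0x04, 0x06, 0x15

def xmodem_receive(link):
    """Receive an image over an XMODEM-style link (checksum variant,
    no timeouts or retries -- a sketch, not the full protocol)."""
    image = bytearray()
    expected = 1
    link.write(bytes([NAK]))                    # ask the sender to start
    while True:
        (kind,) = link.read(1)
        if kind == EOT:                         # sender is finished
            link.write(bytes([ACK]))
            return bytes(image)
        if kind != SOH:
            continue                            # ignore line noise
        blk, blk_inv = link.read(1)[0], link.read(1)[0]
        data = link.read(128)
        csum = link.read(1)[0]
        good = (blk_inv == 0xFF - blk
                and blk == expected & 0xFF
                and csum == sum(data) & 0xFF)
        if good:
            image += data
            expected += 1
            link.write(bytes([ACK]))
        else:
            link.write(bytes([NAK]))            # ask for a resend

class FakeLink:                                 # in-memory stand-in for a UART
    def __init__(self, payload):
        data = payload.ljust(128, b"\x1a")      # 0x1A is the traditional pad byte
        pkt = bytes([SOH, 1, 0xFF - 1]) + data + bytes([sum(data) & 0xFF])
        self.rx = bytearray(pkt + bytes([EOT]))
        self.tx = bytearray()
    def read(self, n):
        out, self.rx = bytes(self.rx[:n]), self.rx[n:]
        return out
    def write(self, data):
        self.tx += data

if __name__ == "__main__":
    print(xmodem_receive(FakeLink(b"hello, kernel"))[:13])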

Then we get smart: why not allow an IP link over the serial port? So we add PPP,
UDP, and TFTP. All this needs configuration information, so we need a way to
edit and store the settings (IP address of the TFTP server, etc.), and we need
a second serial port for the console.

Then it'd be nice to be able to ping the computer to confirm the PPP link's up,
so we add ICMP.

Then we realise that TFTP is slow due to network latency (no sliding window).
So we add TCP. Now we can boot images via http:// URLs. That's cool.

But we'd like to be able to resolve names, to avoid having IP numbers in URLs.
So we add a DNS resolver. But that requires a new setting: nameserver.

Then we realise that it'd be pretty easy to extend the console interface to work
over TCP too, so we add a telnet server. Now we don't need a physical console
on the machine, so that's also cool.

Then someone complains that http and telnet are insecure, so we change to
https and ssh. That means the SSL PKI libraries.

Then we'd like IPv6.

... you get the picture. Before long, the PROM *is* a whole OS.

So one of the reasons I like OpenFirmware is that it's a reasonable balance
between having too little capability (the BIOS), and too much (the
example above).

Another is that it's completely programmable (FORTH) by the user.

Also, it's extensible, so add-in cards can contain their own code to extend the
base functionality. So a USB interface card could contain code to explain to the
PROM that it is a bus that needs walking, that certain bus devices are bootable
(disks, memory sticks), and others can be console I/O (serial). The basic
firmware never needs a USB protocol stack; it only needs to know the properties
of console and block devices.

Lastly, it's bytecoded, so it's independent of CPU architecture.


- Andrew

[1] http://en.wikipedia.org/wiki/IEEE1275
Casey Ransberger
2011-01-05 00:30:22 UTC
Permalink
I think the secret sauce with OpenFirmware lies in
Casey Ransberger
2011-01-05 00:41:21 UTC
Permalink
D'oh, I love touch screens. What I tried to say was that OF is mostly just Forth, and I think that was what made it so lovely to work with.

It was nice to be able, for example, to interactively troubleshoot PPC-based Mac hardware using Open Firmware. I seem to be having some trouble articulating the thought, but I think it really jibes with the idea that an operating system is a collection of things that don't fit into a language (paraphrasing Design Principles Behind Smalltalk).

It dawns on me that if you only include the barest essentials, it's not hard to have the box pull the rest of its "not-so-firm" ware from the network, even redefine assumptions about what the firmware should and shouldn't do.

I'm picturing something like the Maru bootstrap, where one simply defines one's (Lisp, Forth, what have you) in firmware, but only for bootstrapping its replacement (which may salt the implementation to taste) on boot...

Or perhaps I'm repeating something that's already been said or is obvious?

It seems as though soft and firm might converge.
John Zabroski
2011-01-05 15:52:27 UTC
Permalink
Post by Casey Ransberger
D'oh, I love touch screens. What I tried to say was that OF is mostly just
Forth, and I think that was what made it so lovely to work with.
It was nice to be able, for example, to interactively troubleshoot PPC
based Mac hardware using Open Firmware. I seem to be having some trouble
articulating the thought, but I think it really jibes with the idea that an
operating system is a collection of things that don't fit into a language
(paraphrasing Design Principles Behind Smalltalk.)
I remember reading a Chuck Moore quote similar to Dan Ingalls's, but with a
slightly different perspective. I know it's been quoted on Lambda the
Ultimate before. Anyone know what I am thinking of? Can't recall it.
Basically, as I understand Chuck Moore's way of thinking, he kind of
questions everything, to the point of asking, "Why do we even need a file
system [abstraction]?" and so on.
K. K. Subramaniam
2011-01-05 16:27:38 UTC
Permalink
Post by John Zabroski
I remember reading a Chuck Moore quote similar to Dan Ingalls, but with a
slightly different perspective.
http://www.ultratechnology.com/fsc99.htm

.. Subbu
Alan Kay
2011-01-05 16:43:33 UTC
Permalink
A nice Goethe quote: "We should all share in the excitement of discovery without
vain attempts to claim priority".

So we can be happy that Chuck Moore did a few things when he did, without
worrying about when the ideas first appeared. Computing -- like natural science
-- has always been ripe for multiple discoveries of the same ideas -- and more
so than natural science because our "field that never quite became a field"
doesn't really care about its own history. (This often leads to "reinventions of
the wheel that are actually flat tires", but this is a side point.)

On the other hand, I personally cherish the real inventions and inventors in our
field -- for example John McCarthy, Ivan Sutherland, etc., who also built on the
past but in startling and even almost magical ways to produce qualitatively
different and more powerful POVs which are so needed in our design-centric
field.

Cheers,

Alan





Reuben Thomas
2011-01-05 19:55:41 UTC
Permalink
Post by Alan Kay
On the other hand, I personally cherish the real inventions and inventors in
our field -- for example John McCarthy, Ivan Sutherland, etc., who also built
on the past but in startling and even almost magical ways to produce
qualitatively different and more powerful POVs which are so needed in our
design-centric field.
might have been expressly written?
--
http://rrt.sc3d.org
Carl Gundel
2011-01-07 22:06:36 UTC
Permalink
I wouldn't be a subscriber to this list if not for Charles Moore and his
Forth system, which led me to recognize great ideas in Smalltalk, which led
me here.

-Carl

Casey Ransberger
2011-01-07 22:28:40 UTC
Permalink
Haha, while we're sharing: I missed the interactivity of a Basic I used as a kid (the REPL was also the editor), which led me from Java to Obj-C (more dynamic, but memory management sucked to worry about), to Perl and Ruby (dynamism with UNIX integration, and I got my garbage collector back), then Lisp and Squeak (conceptual clarity and simplicity, and I had ceased to care as much about UNIX as long as my bits were portable), whereupon I arrived at this list. It's been a fun trek :)
Reuben Thomas
2011-01-07 22:23:55 UTC
Permalink
Post by Alan Kay
Computing -- like natural
science -- has always been ripe for multiple discoveries of the same ideas
-- and more so than natural science because our "field that never quite
became a field" doesn't really care about its own history.
This is an illusion, BTW, caused by computer science's fecundity over
its brief period of existence.
--
http://rrt.sc3d.org
Julian Leviston
2011-01-06 02:33:28 UTC
Permalink
Yes! I couldn't agree more. We *need* this kind of thought. Every time we think about something, we think about it for the first time... this is the kind of thought that yields "pink" ideas not limited in creativity.

Julian.
Post by John Zabroski
I remember reading a Chuck Moore quote similar to Dan Ingalls's, but with a slightly different perspective. I know it's been quoted on Lambda the Ultimate before. Anyone know what I am thinking of? Can't recall it. Basically, as I understand Chuck Moore's way of thinking, he kind of questions everything, to the point of asking, "Why do we even need a file system [abstraction]?" and so on.
Shawn Morel
2011-01-04 00:43:19 UTC
Permalink
Somewhat of a tangent, since Alan mentioned the Alto architecture: Chuck Thacker is working on a few interesting HW prototyping platforms at MS Research.

The FPGA prototype system:
http://research.microsoft.com/en-us/news/features/bee3.aspx

More interestingly, though, is beehive:
http://projects.csail.mit.edu/beehive/Beehive-2010-01-MIT.pdf

shawn
Post by Jecel Assumpcao Jr.
I am *very* interested in this subject - not only do I hope that the
Squeak computer I am building will be itself an educational object, but
I am also helping two related projects. I'll briefly describe those two
projects before making comments on the "Nand to Tetris" course, but I
should mention that both of these take the "students should learn the
way I learned" approach, which I don't agree with at all. And it
isn't easy to help while letting them explore on their own. Watching
them happily test several little programs without pointing out that none
of their examples use indexed addressing and that they will need it a
lot later on is not easy for me.
Etienne Delacroix might be known to some in this list, but probably not
to most. An artist and a physicist, he is now back in his native Belgium
but spent most of the past decade roaming Brazil, Uruguay and other
countries in the region. He gave workshops to both university students
and to children where he used electronic components salvaged from old
computers to create interesting art. They would learn what transistors
are, how to use TTL chips and then would mix JavaScript software and
such to get their results. Etienne's own projects often included a Z80
processor.
Lately he became interested in opening up the black box that is the Z80
or Pentium II and studied several online texts to see how to build a
processor out of TTLs. He was not interested in simulators or FPGA
implementations but wants something that he, and his students, can
touch. I helped him play around with such radical stuff as the
Subtract-And-Branch-If-Negative processor (around 12 TTLs) and he was
able to simulate several versions of his e-cpu, which currently only has
an 8 bit address. Together we evaluated some 16 to 20 educational
processors (including "Hack" of the "Elements" course). If there is
interest I could compile a list of links to the ones that are available
online.
The other project is a book that two professors at my university want to
write. They teach introduction to digital logic and at the end of the
course the students have traditionally built a multiplication circuit
with an adder and control logic, but recently they have been doing
simple processors. Their idea is that the book (to be in Portuguese,
which will seriously limit their audience) would be roughly the course
they have been giving, followed by a course on compiler writing with a C
compiler for their processor. Their processor is a simple 16 bit RISC
with instructions for reading from the keyboard and writing to the
screen (with redefinable characters - the students have done several
classic video games with this) implemented on entry-level FPGA
development boards. They are considering adopting this board for their
text: http://www.altera.com/b/nios-bemicro-evaluation-kit.html
This option doesn't have the video output like their older boards. The
student doing the compiler has not yet taken any courses on compilers,
which I see as a major problem and they see as an advantage. Their idea,
like Etienne, is that someone who is also learning is in a far better
position to teach than some expert who doesn't remember what it was like
not to know stuff. I also advised them that they should have some
interpreted environment running on their machine, not just
cross-compiled C. Even TinyBasic would do. Otherwise their
readers/students will have an initial experience in computing more
typical of the 1960s than the late 1970s/early 1980s.
Given this context, I found the material in "The Elements of Computing
Systems" very interesting. I got a little worried when I saw that the
authors seemed confused about what "von Neumann" and "Harvard"
architectures mean, but the rest of their stuff is great. The use of
simulators instead of actual hardware lowers the cost for their
audience, but I do feel that there is some educational value in actually
being able to touch something you built. The exaggerated simplicity of
Hack reduces the time spent designing the hardware but makes programming
it much more complicated (and not at all typical of assembly languages
people actually program in). But the idea is to develop only a single
program for Hack: a much nicer virtual machine. So it might be best to
think of Hack assembly as microcode.
One thing that is very hard to balance in an educational object is the
"low floor, high ceiling" thing. You have to make it simple enough to be
learned in a short time, but powerful enough to be useful in real life.
The "Elements" objects focus on the floor, so as soon as students have
learned their lesson the objects are thrown away never to be seen again.
Since they are just simulations anyway, they could hardly have any
practical value after the course so this seems like a good decision. If
I had to choose between teaching someone Logo to only later have them
replace it with something else or starting with something like C++ or
Java that they might use for years and years, I would go with the Logo
approach every time. But I would find it sad to see it thrown away.
While Basic is itself limited, for Logo the problem is more the
implementations than the language.
Etienne's solution will lead to a throw-away TTL processor. Once you
have understood what it does, you will want a Z80 instead for your next
project. The design of the two professors is a bit limited, but there is
no reason why it couldn't be used instead of Nios II or Microblaze even
in commercial projects. Chuck Thacker's series of TinyComputer
processors also have an extremely high ceiling. One detail I mentioned
to the professors is that if they take VHDL or Verilog as their starting
point (as Chuck does and they are currently doing) then they will be
left with a huge black box. The "start with just NAND" solution that
"Elements" and Etienne took will result in a much deeper understanding.
They don't have to be mutually exclusive - it is probably good to have
the students design the same processor both ways.
The result of STEPS will be something both learnable and usable - you
won't want to throw it away to go back to your Windows/Linux/Mac PC
instead. I see that as the major difference compared to "Elements" and
the two projects I described.
-- Jecel
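
(Aside: the Subtract-And-Branch-If-Negative machine mentioned above is
essentially the classic one-instruction computer, and a behavioural
simulator for it is only a handful of lines. A Python sketch of the
general idea under one common convention -- not Etienne's 12-TTL design,
and the little demo program is invented for the example:)

# Behavioural sketch of a subtract-and-branch-if-negative (SBN) machine.
# Convention used here:
#   SBN a, b, t   means   mem[b] -= mem[a]; if mem[b] < 0 then goto t
# A branch to a negative address halts.

def run_sbn(mem, pc=0, max_steps=10_000):
    for _ in range(max_steps):
        a, b, t = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] < 0:
            if t < 0:
                break               # halt
            pc = t
        else:
            pc += 3
    return mem

if __name__ == "__main__":
    # Word-addressed memory: three instructions, then data cells
    # X(@9)=7, Y(@10)=35, Z(@11)=0 scratch, ONE(@12)=1.
    # The program computes Y += X, then forces a negative result to halt.
    mem = [9, 11, 3,      # Z -= X          (Z becomes -X)
           11, 10, 6,     # Y -= Z          (Y becomes Y + X)
           12, 11, -1,    # Z -= 1, now negative, so branch to -1: halt
           7, 35, 0, 1]
    print(run_sbn(mem)[10])   # 42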
Jecel Assumpcao Jr.
2011-01-07 22:41:20 UTC
Permalink
Shawn Morel wrote on Mon, 03 Jan 2011 16:43:19 -0800
Post by Shawn Morel
Somewhat of a tangent - since Alan mentioned the Alto architecture.
Chuck Thacker is working on a few interesting HW prototyping
platforms at MS research.
http://research.microsoft.com/en-us/news/features/bee3.aspx
http://projects.csail.mit.edu/beehive/Beehive-2010-01-MIT.pdf
I had mentioned this, but thanks for providing these links.
There are interesting differences between the various versions of Tiny
Computer. I'll try to compile a list with more links and some comments
about these differences from an educational viewpoint next week.

-- Jecel
