Discussion:
[fonc] The problem with programming languages
John Pratt
2012-05-07 14:14:32 UTC
Permalink
The problem with programming languages and computers in general is that they hijack existing human concepts and words, usurping them from everyday usage and flattening out their meanings.
Carl Gundel
2012-05-07 14:26:21 UTC
Permalink
People do that every day without using a programming language at all. ;-)

-Carl

BGB
2012-05-07 14:48:03 UTC
Permalink
Post by Carl Gundel
People do that every day without using a programming language at all. ;-)
I think pretty much every field does this.

programmers, doctors, lawyers, engineers, ... all have their own
specialized versions of the language, with many terms particular to the
domain, and many common-use terms which are used in particular ways with
particular definitions which may differ from those of the "common use"
of the words.

I decided not to go into examples.


sometimes, mixed with "tradition" and similar, this can lead to some
often rather confusing language-use patterns: constructions involving
obscure grammar, words and phrases from different languages (Latin,
Italian, French, ...), ...

so, programming is by no means unique here.
Clinton Daniel
2012-05-08 06:07:31 UTC
Permalink
The other side of that coin is burdening users with a bunch of new
terms to learn that don't link to existing human concepts and words.
"Click to save the document" is easier for a new user to grok than
"Flarg to flep the floggle" ;)

Seriously though, in the space of programming language design, there
is a trade-off in terms of quickly conveying a concept via reusing a
term, versus coining a new term to reduce the impedance mismatch that
occurs when the concept doesn't have exactly the same properties as an
existing term.

Clinton
Julian Leviston
2012-05-08 06:13:32 UTC
Permalink
I disagree. We do our best. This is always the case.

The problem with language is ... there is no problem. The "problem" is with people and their lack of awareness.

I agree that "our best" currently sucks, though.

Words aren't the things they refer to - they're just pointers. The only way to precisely use language is to realise that it's not precise, and therefore stipulate DSLs.

What's your point?

Julian
David Barbour
2012-05-08 06:28:15 UTC
Permalink
Post by Julian Leviston
What's your point?
I like my PLs to be point free, as much as possible. ;)
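(For anyone outside the joke: "point-free" style defines functions without ever naming their arguments -- the "points". A minimal JavaScript sketch, all names invented:)

var compose = function (f, g) {
  return function (x) { return f(g(x)); };
};
var exclaim = function (s) { return s + "!"; };
var upcase  = function (s) { return s.toUpperCase(); };
var shout = compose(exclaim, upcase);  // no argument ever mentioned
// shout("hi") === "HI!"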

Regards,

Dave
--
bringing s-words to a pen fight
Julian Leviston
2012-05-08 06:55:23 UTC
Permalink
But I wasn't asking you. :P

:)
Julian Leviston
2012-05-08 10:45:50 UTC
Permalink
This is why tiny languages (Alan calls them POLs, I believe: problem-oriented languages) are so important.

A language being anything that involves "communication"... including user interface interaction.
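For a concrete flavor, a whole POL can be small enough to show in one breath. A toy sketch in JavaScript (the language and every name in it are invented for illustration):

// A four-word problem-oriented language for pen plotting.
var program = "down forward forward turn forward up";
var turtle = { x: 0, y: 0, heading: 0, pen: false };
var words = {
  down:    function (t) { t.pen = true; },
  up:      function (t) { t.pen = false; },
  turn:    function (t) { t.heading = (t.heading + 90) % 360; },
  forward: function (t) {
    if (t.heading === 0)   t.x += 1;
    if (t.heading === 90)  t.y += 1;
    if (t.heading === 180) t.x -= 1;
    if (t.heading === 270) t.y -= 1;
  }
};
program.split(" ").forEach(function (w) { words[w](turtle); });
// turtle ends at x: 2, y: 1 with the pen up

Four words and one rule for running them: there is almost nothing to learn, which is the point.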

Julian
Post by Clinton Daniel
I suppose my point is that for new users, the analogies formed by
reusing existing terms are uncertain in that you don't know which
parts of the analogy carry across to the concept in question. Once
you're familiar with the concept itself, you know which parts apply
and which don't, but the point of reusing terms in the first place is
to help in learning the concept.
If you invent a new term, you don't get the problem of inferring
properties that don't carry across (or missing properties that aren't
analogous), but you burden new users with finding analogies
themselves.
In the end I agree that people are the problem, but I think we should
make things as easy as possible to learn by using analogies where
appropriate and inventing new terms where analogies would be
counter-productive. Where that line rests, however, is much of what
makes the issue difficult.
Clinton
Julian Leviston
2012-05-08 13:11:06 UTC
Permalink
Sorry it wasn't obvious what I was saying there...

They're important because when they're tiny, it's very easy to learn them...

Julian
Post by Julian Leviston
This is why tiny languages (Alan calls them POLs, I believe: problem-oriented languages) are so important.
David Barbour
2012-05-08 06:36:37 UTC
Permalink
Post by Clinton Daniel
Seriously though, in the space of programming language design, there
is a trade-off in terms of quickly conveying a concept via reusing a
term, versus coining a new term to reduce the impedance mismatch that
occurs when the concept doesn't have exactly the same properties as an
existing term.
Yeah. I've had trouble with this balance before. We need to acknowledge the
path dependence in human understanding.

My impression: it's connotation, more than denotation, that interferes with
human understanding.

"Naming is two-way: a strong name changes the meaning of a thing, and a
strong thing changes the meaning of a name." - Harrison Ainsworth (@hxa7241)

Regards,

Dave
--
bringing s-words to a pen fight
Julian Leviston
2012-05-08 06:56:30 UTC
Permalink
Naming poses no problem so long as you define things a bit. :P

Humans parsing documents without proper definitions are like coders trying to read programming languages that have no comments.

(pretty much all the source code I ever read unfortunately)
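A two-line illustration, with invented numbers, of what a definition buys the reader:

var total = 100;
total = total * 1.08;              // uncommented: what is 1.08?

var SALES_TAX = 0.08;              // the definition...
total = total * (1 + SALES_TAX);   // ...makes the intent readable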

J
David Goehrig
2012-05-08 14:18:48 UTC
Permalink
Post by Julian Leviston
Humans parsing documents without proper definitions are like coders trying to read programming languages that have no comments
One of the underappreciated aspects of systems like TeX, with its ability to do embedded programming, or a system like Self, with its annotations as part of the object, or even Python's .__doc__ attributes, is that they provide context for the programmer.
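The same idea can be approximated in JavaScript, since functions are objects that can carry their own annotations. A sketch; the doc property here is an invented convention, not a standard:

function area(r) { return Math.PI * r * r; }
area.doc = "area(r): area of a circle of radius r; " +
           "r is in whatever units you want squared in the result.";

// A REPL, browser, or editor can then surface the context on demand:
console.log(area.doc);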

A large part of the reason these are underappreciated is that most programs aren't sufficiently well factored to take advantage of these capabilities. Since a human description of what the code does and why will invariably take about a paragraph of text per line of code, a 20-line function requires a pamphlet of documentation to provide sufficient context.

Higher order functions, objects, actors, complex messaging topologies, exception handling (and all manner of related nonlocal exits), and the like only compound the context problem as they are "non-obvious". Most of the FP movement is a reaction against "non-obvious" programming. Ideally this would result in a positive "self-evident" model, but in the real world we end up with Haskell monads (non-obvious functional programming).

In the end the practical art is to express your code in such a way that the interpretation of the written word and the effective semantics of the program are congruent. Or in human terms: "you say what you mean, and the program does what it says". I have a code sample I use in programming interviews which reads, effectively:

function fn(name) {
  var method = this[name];                       // look the method up by name
  return method.apply(this, arguments.after(0)); // call it with the rest
}

After showing the definition of after, I ask the simple question: if I call this function as

fn('add', 1, 2)

What is the value of arguments.after(0)?
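For readers without the interview sheet in front of them, a plausible definition of after -- an assumption here, not the one actually shown in the interview -- would be:

// Hypothetical helper: everything in the array after index i.
Array.prototype.after = function (i) {
  return this.slice(i + 1);
};

// arguments isn't a real Array, so the snippet above also assumes a
// conversion such as Array.prototype.slice.call(arguments) somewhere.
// With it, fn('add', 1, 2) sees arguments as ['add', 1, 2], so
// arguments.after(0) is [1, 2].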

In about 2 out of 25 interviews for senior level devs I get the right answer. For almost all non-programmers I've talked to, with the little bit of context "programmers often start counting from 0" they get the right answer, without having read the formal definition in the base language 2 lines earlier. What I've learned in asking this question in interviews is that the context one carries with them often colors their interpretation of the question. Usually 5 out of 25 will be confused because their favorite framework defines "after" to mean something else entirely, and can't grok the new contextual definition.

The interesting bit is the other 18 people who either fail to answer the question entirely, don't know how functions pass arguments, or come up with bizarrely wrong answers. Usually these 18 fail because they cannot interpret what the program does in the specific context of a function call. They don't have a model of the state machine in their head. No amount of formal definition will let them process that information. These programmers get by through cribbing and trial and error. As one described his methodology: "I feed it inputs until I get what looks like the right answer".

For these people precise definitions, formal language, clever idioms, or finely tuned mathematical constructs do not matter, because they flip burgers with more care. And therein lies the crux of the issue, we may be smart enough to understand these machines, but the majority of people working in industry do not. And the programmers who become managers at large firms choose obtuse, inexpressive, cumbersome languages like Java, because they're hiring those 23 I'm turning down.
Jarek Rzeszótko
2012-05-08 16:20:43 UTC
Permalink
Natural languages are commonly much more ambiguous and, you could say,
"fuzzy" (as in fuzzy logic) than (currently popular) programming languages,
and hence switching between the two is bound to cause some difficulties.

Example: I have been programming in Ruby for 7 years now, for 5 years
professionally, and yet when I face a really difficult problem the best way
still turns out to be to write out a basic outline of the overall algorithm
in pseudo-code. It might be a personal thing, but for me there are just too
many irrelevant details to keep in mind when trying to solve a complex
problem using a programming language right from the start. I cannot think
of classes, method names, arguments etc. until I get a basic idea of how
the given computation should work on a very high level (and with the
low-level details staying "fuzzy"). I know there are people who feel the
same way, there was an interesting essay from Paul Graham followed by a
very interesting comment on MetaFilter about this:

http://www.paulgraham.com/head.html
http://www.metafilter.com/64094/its-only-when-you-have-your-code-in-your-head-that-you-really-understand-the-problem#1810690

There is also the Pseudo-code Programming Process from Steve McConnell and
his "Code Complete":

http://www.coderookie.com/2006/tutorial/the-pseudocode-programming-process/
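The process itself fits in a few lines. A sketch in JavaScript (the problem and all names are invented): the routine is written first as pseudo-code comments, and the code then grows underneath them.

function dedupe(items) {
  // keep a record of everything seen so far
  var seen = {};
  // walk the list once, keeping only first occurrences
  var out = [];
  for (var i = 0; i < items.length; i++) {
    var key = String(items[i]);
    if (!seen[key]) {      // not seen yet: keep it
      seen[key] = true;
      out.push(items[i]);
    }
  }
  return out;
}
// dedupe([3, 1, 3, 2, 1]) -> [3, 1, 2]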

Another thing is that the code tends to evolve quite rapidly as the
constraints of a given problem are explored. Plenty of things in almost any
program end up being the way they are because of those constraints that
frequently were not obvious at the start and might not be obvious from just
reading the code - that's why people often rush to do a complete rewrite of
a program just to run into the same problems they had with the original
one. The question now is how much more time would documenting those
constraints in the code take and how much time would it save with future
maintenance of the code. I guess the amount of this context that would be
beneficial varies with applications a lot.

Since you mention TeX, I think literate programming is pretty relevant to this
discussion too, and I am personally looking forward to trying it out one
day. Knuth himself said he would not be able to write TeX without literate
programming, and the technique is of course partially related to what I've
said above regarding pseudocode:

http://www.literateprogramming.com/

Cheers,
Jarosław Rzeszótko
Julian Leviston
2012-05-08 21:56:35 UTC
Permalink
Isn't this simply a description of your "thought clearing process"?

You think in English... not Ruby.

I'd actually hazard a guess and say that really, you think in a semi-verbal, semi-physical pattern language, and not a very well formed one, either. This is the case for most people. This is why you have to write hard problems down... you have to bake them into physical form so you can process them again and again, slowly developing what you mean into a shape.

Julian
BGB
2012-05-09 01:07:49 UTC
Permalink
Post by Julian Leviston
Isn't this simply a description of your "thought clearing process"?
You think in English... not Ruby.
in my case I think my thinking process is a good deal different.

a lot more of my thinking tends to be a mix of visual/spatial thinking,
and thinking in terms of glyphs and text (often source-code, and often
involving glyphs and traces which I suspect are unique to my own
thoughts, but are typically laid out in the same "character cell grid"
as all of the text).

I guess it could be sort of like if text were rammed together with
glyphs and PCB traces or similar, with the lines weaving between the
characters, and sometimes into and out of the various glyphs (many of
which often resemble square boxes containing circles and dots, sometimes
with points or corners, and sometimes letters or numbers, ...).

things may vary somewhat, depending on what I am thinking about at the time.


my memory is often more like collections of images, or almost like
"pages in a book", with lots of information drawn onto them, usually in
a white-on-black color-scheme. there is typically very little color or
movement.

sometimes it may include other forms of graphics, like pictures of
things I have seen, objects I can imagine, ...


thoughts may often use natural-language as well, in a spoken-like form,
but usually this is limited either to when talking to people or when
writing something (if I am trying to think up what I am writing, I may
often hear "echoes" of various ways the thought could be expressed, and
of text as it is being written, ...). reading often seems to bypass this
(and go more directly into a visual form).


typically, thinking about programming problems seems to be more like
being in a "storm" of text flying all over the place, and then bits of
code flying together from the pieces.

if any math is involved, often any relevant structures will be
themselves depicted visually, often in geometry-like forms.

or, at least, this is what it "looks like", I really don't actually know
how it all works, or how the thoughts themselves actually work or do
what they do.

I think all this counts as some form of "visual thinking" (though I
suspect probably a non-standard form based on some stuff I have read,
given that "colors, movement, and emotions" don't really seem to be a
big part of this).


or such...
Jarek Rzeszótko
2012-05-09 07:13:19 UTC
Permalink
There is an excellent video by Feynman on a related note:

http://youtu.be/Cj4y0EUlU-Y

A damn good way to spend six minutes IMO...

Cheers,
Jarosław Rzeszótko
BGB
2012-05-09 12:48:39 UTC
Permalink
Post by Jarek Rzeszótko
http://youtu.be/Cj4y0EUlU-Y
A damn good way to spend six minutes IMO...
yep.


I was left previously trying to figure out whether "thinking using text"
was more linguistic/verbal or visual thinking, given it doesn't really
match well with either:
verbal thinking is generally described as people thinking with words and
sounds;
visual thinking is generally described as pictures / colors / emotions / ...

so, one can wonder, where does text fit?...

granted, yes, there is some mode-changing as well, as not everything
seems to happen the same way all the time, and I can often "push things
around" if needed (natural language can alternate between auditory and
textual forms, ...).

I have determined though that I can't really read and also "visualize"
the story (apparently, many other people do this), as all I can really
see at the time is the text. probably because my mind is more busy
trying to buffer up the text, and the space is already used up and so
can't be used for drawing pictures (unless I use up a lot of the space
for drawing a picture, in which case there isn't much space for holding
text, ...).

I can also write code while also listening to someone talk, such as in a
technical YouTube video or similar, since the code and person talking
are independent (and usually relevant visuals are sparse and can be
looked at briefly). but, I can't compose an email and carry on a
conversation with someone at the same time, because they interfere (but
I can often read and carry on a conversation though, though it is more
difficult to entirely avoid "topical bleed-over").


despite thinking with lots of text, I am also not very good at math, as
I still tend to find both arithmetic and "symbolic manipulation" type
tasks fairly painful (but, these are used heavily in math classes).

when actually working with math, in a form that I understand, it is
often more akin to wireframe graphics. for example, I can "see" the
results of a dot-product or cross-product (I can see the orthogonal
cross-bars of a cross-product, ...), and can mentally let the system
"play out" (as annotated/diagrammed 3D graphics) and alter the results
and see what happens (and the "math" is the superstructure of lines and
symbols interconnecting the objects).

yet, I can't usually do this effectively in math classes, and usually
have to resort to much less effective strategies, such as trying to
convert the problem into a C-like form, and then evaluating this
in-head, to try to get an answer. similarly, this doesn't work unless I
can figure out an algorithm for doing it, or just what sort of thing the
question is even asking for, which is itself often problematic.

another irony is that I don't really like flowcharts, as I personally
tend to see them as often a very wasteful/ineffective way of
representing many of these sorts of problems. despite both being
visually-based, my thinking is not composed of flow-charts (and I much
prefer more textual formats...).


or such...
Jarek Rzeszótko
2012-05-09 07:11:35 UTC
Permalink
Yes, your description of the "thought clearing process" is very accurate,
the main point being two interesting issues this process raises:

A. Considering that the pseudo-code could have contributed more to my
understanding of the problem than the final code, how much is lost by not
having such "intermediate" artefacts somehow be part of the final code?
What would be the best way to incorporate them? There is a partial analogy
in mathematics where the mathematicians job is pretty much generating
proofs but there is much less information about how those were discovered.

B. Can programming languages be "brought" closer to the way we think yet
still stay executable by the computer... ? (So that A is not relevant
anymore)

B1. ... via a particular programming style?

B2. ... by an appropriate design?

Such division helps to systematise some of the approaches you already
mentioned.

For example, with literate programming which implements a solution to A) I
could keep the pseudo-code as part of the program in an interesting way,
but with the amount of cross-referencing that goes on, I wonder how easy it
is to remember where each part of a large program is located. Also, I have
a hard time imagining working with multiple people concurrently on a large
literate program... But I don't have much experience here.

For B1), I learned lots of little ways of writing "readable" code from
Martin Fowlers "Refactoring", tricks like this:

http://martinfowler.com/refactoring/catalog/introduceExplainingVariable.html
http://martinfowler.com/refactoring/catalog/decomposeConditional.html
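The first of those, close to Fowler's own example (transcribed here to JavaScript, with stand-in values so the sketch runs): the condition is replaced by variables that name each question being asked.

// Stand-ins for the sketch:
var platform = "Mac OS", browser = "IE 6", resize = 1;
function wasInitialized() { return true; }

// Before: the condition hides its meaning.
if (platform.toUpperCase().indexOf("MAC") > -1 &&
    browser.toUpperCase().indexOf("IE") > -1 &&
    wasInitialized() && resize > 0) {
  // ... special-case behavior ...
}

// After: explaining variables make the condition read like prose.
var isMacOs    = platform.toUpperCase().indexOf("MAC") > -1;
var isIE       = browser.toUpperCase().indexOf("IE") > -1;
var wasResized = resize > 0;
if (isMacOs && isIE && wasInitialized() && wasResized) {
  // ... special-case behavior ...
}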

But this can only go so far - it works in business programming but with
really complicated things it isn't enough to make an "ordinary" programming
language a good "tool for thought".

B2) is more serious, with the aforementioned Lisp macros, POLs etc. I
again (unfortunately) do not have experience programming in this style.

I will be very vague here, but maybe you could somehow invert B) and
instead of finding a way to program that's similar to the way we currently
think, find a programming language that would be convenient to reason in. I
have things like program derivation in mind here, an interesting modern
example is this paper:

http://www.cs.tufts.edu/~nr/comp150fp/archive/richard-bird/sudoku.pdf

Maybe a good idea for an advanced programming course would be to take two
weeks off from ordinary work, pick a couple of non-trivial algorithmic
problems and, say, try to solve two of them via literate programming, two
of them via constructing a language bottom-up in Lisp or Smalltalk, two of
them via derivation etc. On the other hand, it seems quite reasonable to
ask whether this matters at all during problem solving; maybe the same
mostly unconscious mental faculties do the work anyway, and then this would
matter only for communicating the solution to other people. Anyway, maybe
that's what I will try to do during my vacation this year :)

Cheers,
Jarek
Julian Leviston
2012-05-09 07:43:45 UTC
Permalink
Yeah, Martin is always quite good at explaining these things...

I found most of his works quite confirming.

The trouble seems to be (and this is an issue I have with computers) as Alan points to... (but perhaps broader than he intended)... Optimisation should be separated from Intent Expression.

Higher-level languages exist, by definition, to aid humans in understanding - and yet they are themselves not able to be decoded / de-composited.

For CRAP's sake, we use the word "code" to talk about programming... but that's not what it should be... imagine a world of layered information processing systems... all dealing with a tiny slice... all smalltalk-like compiled down to optimised systems behind the scenes...(ie on the fly compilation and optimisation)...

At its most useful, each "system" should be a machine that uses its own language, defined for inspection in the system itself... and explains its requirements... each tiny bit of code should be expressed in no more than a paragraph.

This would then engender two things:
- infinite pedagogic inspection, down to the "bare metal"
- infinite accessibility, because things would be separated into their proper detail scope

One of the troubles seems to be that people don't seem to feel that you can engineer this into a "language" or process (for want of a better term).

So we end up with extremely fragile systems such as Smalltalk (when I first tried it, I broke my windows... I didn't know what to click, and I had to "STOP" everything, and not save changes to the world because I ended up in some context I had no idea about).

As system designers, I think we could learn a fair bit from game designers...

Blizzard / Diablo III for example:


In particular, the comment "At Blizzard, we don't like standalone tutorials very much... we don't like people showing up and giving you a wall of text... we want you to learn to play the game... by playing it"

The equivalent of a wall of text to read would be learning a programming language... ideally programming is fun... the fun part is when you get the computer to do what you want it to do. Programming languages stand in the way of doing this. Ideally programming a computer should be MORE fun than playing a video game (more fun because it should be as fun as playing a game with the added advantage of being useful in the "real world", too).

Thinking about ToyLog... as an extremely tiny toy version of a programming language, or TileScript as a first step towards having a proper non-text-based programming language. Seriously guys, it's 2012... why is it even possible that I can make syntax errors any more?

What Blizzard and many other game shops have been doing for a long time is building the learning into the game itself... "Hardcore Games For Everyone Is What Blizzard Does"... they were talking about how they bring EVERYONE up to the level of hardcore gamer (that's to say someone who's keenly interested in gaming)... they do this through a slow ramp up process (actually it's as fast as you want to go)... and this is what language designers need to do.

Anyway enough of me ranting. I gotta go do some stuff so that I can build these systems I keep ranting about.

Julian
Alan Kay
2012-05-08 22:51:42 UTC
Permalink
Hi Jarek

I think your main point is a good one ... this is why I used to urge Smalltalk programmers to initially stay away from the libraries full of features and just use the kernel language as a "runnable pseudo-code" to sketch an outline of a simple solution. As you say, this helps to gradually find a better architecture for the more detailed version, and perhaps gives some hints of where to look in the library when it is time to optimize. This was perhaps even easier and nicer in Smalltalk-72 where you could make up on the fly a "pseudo-code that actually ran and could be debugged" (and it had almost no library full of features....)

This is one of many good arguments for finding ways to "separate meaning from optimization".
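A minimal illustration of the idea in JavaScript rather than Smalltalk (names invented): the meaning is stated once, as simply as possible, and any optimized version is checked against it rather than replacing it.

// Meaning: "runnable pseudo-code" -- slow, but obviously right.
function sumOfSquaresMeaning(xs) {
  return xs.map(function (x) { return x * x; })
           .reduce(function (a, b) { return a + b; }, 0);
}

// Optimization: must agree with the meaning on every input.
function sumOfSquaresFast(xs) {
  var s = 0;
  for (var i = 0; i < xs.length; i++) s += xs[i] * xs[i];
  return s;
}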

Cheers,

Alan
Kevin Jones
2012-05-27 22:03:53 UTC
Permalink
Hi Alan,

In the videotaped presentation from HPIK (http://www.bradfuller.com/squeak/Kay-HPIK_2011.mp4) you made reference to the Burroughs 5000-series implementing capabilities. Could you elaborate on how capabilities were structured, stored and processed in the B5000 series, or point me to appropriate reading material?

Best regards,

Kevin Jones

P.S. - I really enjoy the work going on at VPRI.
Shawn Morel
2012-05-28 01:50:37 UTC
Permalink
Kevin,

I'll quote one of my earlier questions to the list - in it I had a few pointers that you might find a useful starting place.
Post by Kevin Jones
In the videotaped presentation from HIPK (http://www.bradfuller.com/squeak/Kay-HPIK_2011.mp4) you made reference to the Burroughs 5000-series implementing capabilities.
There's also a more detailed set of influences / references to Bob Barton and the B* architectures in part 3 of the early history of smalltalk: http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_III.html

"I liked the B5000 scheme, but Butler did not want to have to decode bytes, and pointed out that since an 8-bit byte had 256 total possibilities, what we should do is map different meanings onto different parts of the "instruction space." this would give us a "poor man's Huffman code" that would be both flexible and simple. All subsequent emulators at PARC used this general scheme." [Kay]

You should take the time to read that entire essay, it's chock-full of great idea launching points :)

Note that the Alto could simulate (I believe) 16 "instances". Not quite a full-on bare-metal VM the way VMware grossly virtualized an entire x86 system, but much more capable than what you'd call a hardware thread (e.g. processor cores or hyperthreading).
Post by Kevin Jones
Could you elaborate on how capabilities were structured, stored and processed in the B5000 series or point me to appropriate reading material?
I've been able to find some good pointers to the B5000, like this gem: http://www.cs.virginia.edu/brochure/images/manuals/b5000/descrip/descrip.html

and of course Barton's "A New Approach to the Functional Design of a Digital Computer".

Another approach in the same "style" of rich system design would be the Whirlwind: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD694615&Location=U2&doc=GetTRDoc.pdf

The part I still can't find info about is the B220 data tape program written by an Air Force officer, which had its own bootstrapping code on how to read its data format (very Forth-esque).

shawn
Alan Kay
2012-05-28 12:44:47 UTC
Permalink
In a nutshell, the B5000 embodied a number of great ideas in its architecture. Design paper by Bob Barton in 1961, machine appeared ca 1962-3.

-- multiple CPUs
-- rotating drum secondary memory

-- automatic process switching

-- no assembly code programming, all was done in ESPOL (executive systems problem oriented language), an extended version of ALGOL 60.
-- this produced polish postfix code from a "pass and a half" compiler.
-- Tron note: the OS built in ESPOL was called the "MCP" (master control program).


-- direct execution of Polish postfix code
-- code was reentrant

-- code in terms of 12-bit "syllables" (we would call them "bytes"), 4 to a 48-bit word

-- automatic stack
-- automatic stack frames for parameters and temporary variables
-- code did not have any kind of addresses
The most unusual part was the treatment of memory and environments:

-- every word was marked with a "flag bit" -- which was "outside" of normal program space -- and determined whether a word was a "value" (a number) or a "descriptor"


-- code was "granted" an "environment" in the form of (1) a "program reference table" (essentially an object instance) containing values and descriptors, and (2) a stack with frame. Code could only reference these via offsets to registers that themselves were not in code's purview. (This is the basis of the first capability scheme)

-- the protected descriptors were the only way resources could be accessed and used.


-- fixed and floating point formats were combined for numbers


-- array descriptors automatically checked for bounds violations, and allowed automatic swapping (one of the bits in an array descriptor was "presence": if "not present" the address was a disk reference, if "present" the address was a core storage reference, etc.) -- a toy sketch of this descriptor scheme appears after this list.

-- procedure descriptors pointed to code.


-- a code syllable could ask for either a value to be fetched to the top of the stack ("right hand side" expressions), or a "name" (address) to be fetched (computing the "left hand side").

-- if the above ran into a procedure descriptor with a "value call" then the procedure would execute as we would expect. If it was a "name call" then a bit was set that the procedure could test so one could compute a "left hand side" for an expression. In other words, one could "simulate data". (The difficulty of simulating a sparse array efficiently was a clue for me of how to think of object-oriented simulation of classical computer structures.)
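A toy sketch of the descriptor scheme above (my own model, not the real B5000 word formats -- the actual machine used 48-bit words plus a hardware flag bit, and richer descriptors). The key point is that the flag bit sits outside the program's reach, so code cannot forge a descriptor:

// Toy model, illustration only.
function value(n) { return { flag: 0, val: n }; }     // flag 0: plain number
function arrayDesc(base, len, present) {              // flag 1: protected descriptor
  return { flag: 1, kind: "array", base: base, len: len, present: present };
}

function load(word, index, core) {
  if (word.flag === 0) return word.val;               // values are just numbers
  if (word.kind === "array") {
    if (index < 0 || index >= word.len)               // automatic bounds check
      throw new Error("bounds violation");
    if (!word.present)                                // presence bit: page in from drum
      throw new Error("presence fault: fetch from drum, set present, retry");
    return core[word.base + index];
  }
  throw new Error("descriptor is not indexable");
}

// Example:
var core = [10, 20, 30, 40];
var d = arrayDesc(0, 4, true);
load(d, 2, core);     // 30
// load(d, 9, core)   // -> throws "bounds violation"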

-----------------


As for the Alto in 1973, it had a register bank, plus 16 program counters into microcode (which could execute 5-6 instructions within each 750ns main memory cycle). Conditions/signals in the machine were routed through separate logic to determine which program counter to use for the next microinstruction. There was no delay between instruction executions i.e. the low level hardware tasking was "zero-overhead".

(The overall scheme designed by Chuck Thacker was a vast improvement over my earlier Flex Machine -- which had 4 such program counters, etc.)

The low level tasks replaced almost all the usual hardware on a normal computer: disk and display and keyboard controllers, general I/O, even refresh for the DRAM, etc.
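As a rough illustration of the zero-overhead tasking (my own toy model, not Thacker's hardware): keep one program counter per task, and between every pair of microinstructions let priority logic pick which counter fetches next, so switching tasks costs nothing:

// Toy model, illustration only: 16 micro-PCs, hardware-style priority pick.
var pc = [], wakeup = [];
for (var i = 0; i < 16; i++) { pc[i] = 0; wakeup[i] = false; }
wakeup[0] = true;   // task 0, the "emulator", is the always-ready fallback

function step(microcode) {
  var t = 15;                         // higher task number = higher priority
  while (t > 0 && !wakeup[t]) t--;    // falls through to the emulator if idle
  var instr = microcode[t][pc[t]];    // each task fetches via its own PC
  pc[t] = instr();                    // execute; the instruction yields the next PC
}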


This was a great design: about 160 MSI chips plus memory.

Cheers,

Alan
Post by Carl Gundel
________________________________
Sent: Sunday, May 27, 2012 6:50 PM
Subject: Re: [fonc] Question about the Burroughs B5000 series and capability-based computing
Kevin,
I'll quote one of my earlier questions to the list - in it I had a few pointers that you might find a useful starting place.
Post by Kevin Jones
In the videotaped presentation from HPIK (http://www.bradfuller.com/squeak/Kay-HPIK_2011.mp4) you made reference to the Burroughs 5000-series implementing capabilities.
There's also a more detailed set of influences / references to Bob Barton and the B* architectures in part 3 of The Early History of Smalltalk: http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_III.html
"I liked the B5000 scheme, but Butler did not want to have to decode bytes, and pointed out that since an 8-bit byte had 256 total possibilities, what we should do is map different meanings onto different parts of the "instruction space." This would give us a "poor man's Huffman code" that would be both flexible and simple. All subsequent emulators at PARC used this general scheme." [Kay]
You should take the time to read that entire essay; it's chock-full of great idea launching points :)
Note that the Alto could simulate (I believe) 16 "instances". Not quite a full-on bare-metal VM the way VMware virtualizes an entire x86 system, but much more capable than what you'd call a hardware thread (e.g. processor cores or hyperthreading).
Post by Kevin Jones
Could you elaborate on how capabilities were structured, stored and processed in the B5000 series or point me to appropriate reading material?
I've been able to find some good pointers to the B5000, like this gem: http://www.cs.virginia.edu/brochure/images/manuals/b5000/descrip/descrip.html
and of course Barton's "A New Approach to the Functional Design of a Digital Computer".
Another approach in the same "style" of rich system design would be the Whirlwind: http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD694615&Location=U2&doc=GetTRDoc.pdf
The part I still can't find info about is the B220 data tape program, written by an Air Force officer, that carried its own bootstrapping code describing how to read its data format (very Forth-esque).
shawn
Post by Kevin Jones
Best regards,
Kevin Jones
P.S. - I really enjoy the work going on at VPRI.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
Julian Leviston
2012-05-08 22:58:10 UTC
Permalink
By the way,

This paragraph from Graham's essay, and in fact his constant reiteration of it in most of his work, is perhaps the most underrated idea we have in the programming industry. Actually, it's not just the programming industry... My emphasis added:

You can magnify the effect of a powerful language by using a style called bottom-up programming, where you write programs in multiple layers, the lower ones acting as programming languages for those above. If you do this right, you only have to keep the topmost layer in your head.

This isn't just a style - it's programming to a micro-interface, and programming in extremely tiny chunks... it's also what FONC seems to be doing with the idea of POLs and OMeta translating between them. The interface *is* the micro-language... what's inside the interface is simply the implementation of the micro-language (or POL, if you like).
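A toy sketch of the layering (my own example, not from the essay) - each layer is written entirely in the vocabulary of the layer below, so the topmost layer is the only one you keep in your head:

// Layer 1: a tiny vocabulary of drawing words (stubbed out here).
function line(x1, y1, x2, y2) { /* device-specific drawing goes here */ }

// Layer 2: speaks only layer 1's vocabulary.
function box(x, y, w, h) {
  line(x, y, x + w, y);
  line(x + w, y, x + w, y + h);
  line(x + w, y + h, x, y + h);
  line(x, y + h, x, y);
}

// Layer 3: speaks only layer 2's vocabulary -- the "topmost layer".
function grid(rows, cols, size) {
  for (var r = 0; r < rows; r++)
    for (var c = 0; c < cols; c++)
      box(c * size, r * size, size, size);
}

grid(2, 3, 10); // reads as the problem statement, not as line-drawing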

One technique I use that is particularly helpful is giving my "variables" really long descriptive names. Effectively I use variable names as comments. But this is just because I program in languages that don't support a visual combining of infinitely recursive sub-languages. The Lisps apparently support this, according to Graham, but in the end when I program in Lisp I still end up writing files of text, often using arcane symbols, feeling like a fyre-wielding mage from yester-epoch. That feels like an epic fail to me.

Julian
Post by Carl Gundel
Natural languages are commonly much more ambiguous and, you could say, "fuzzy" (as in fuzzy logic) than (currently popular) programming languages, and hence switching between the two has to cause some difficulties.
http://www.paulgraham.com/head.html
http://www.metafilter.com/64094/its-only-when-you-have-your-code-in-your-head-that-you-really-understand-the-problem#1810690
http://www.coderookie.com/2006/tutorial/the-pseudocode-programming-process/
Another thing is that the code tends to evolve quite rapidly as the constraints of a given problem are explored. Plenty of things in almost any program end up being the way they are because of constraints that frequently were not obvious at the start and might not be obvious from just reading the code - that's why people often rush to do a complete rewrite of a program, only to run into the same problems they had with the original one. The question then is how much more time documenting those constraints in the code would take, and how much time it would save in future maintenance of the code. I guess the amount of this context that would be beneficial varies a lot between applications.
http://www.literateprogramming.com/
Cheers,
Jarosław Rzeszótko
Post by Julian Leviston
Humans parsing documents without proper definitions are like coders trying to read programming languages that have no comments
One of the underappreciated aspects of systems like TeX with the ability to do embedded programming, or a system like Self with its Annotations as part of the object, or even Python's .__doc__ attributes, is that they provide context for the programmer.
A large part of the reason that these are underappreciated is that most programs aren't sufficiently well factored to take advantage of these capabilities. Since a human description of what the code does and why will invariably take about a paragraph of text per line of code, a 20-line function requires a pamphlet of documentation to provide sufficient context.
Higher order functions, objects, actors, complex messaging topologies, exception handling (and all manner of related nonlocal exits), and the like only compound the context problem as they are "non-obvious". Most of the FP movement is a reaction against "non-obvious" programming. Ideally this would result in a positive "self-evident" model, but in the real world we end up with Haskell monads (non-obvious functional programming).
In the end the practical art is to express your code in such a way that the interpretation of the written word and the effective semantics of the program are congruent. Or, in human terms, "you say what you mean, and the program does what it says". I have a code sample I use in programming interviews which reads, effectively:
function fn(name) {
  var method = this[name];
  return method.apply(this, arguments.after(0));
}
And the question I typically ask, after showing the definition of after, is simple: if I call this function as
fn('add', 1, 2)
what is the value of arguments.after(0)?
In about 2 out of 25 interviews for senior-level devs I get the right answer. For almost all non-programmers I've talked to, with the little bit of context that "programmers often start counting from 0", they get the right answer without having read the formal definition in the base language 2 lines earlier. What I've learned from asking this question in interviews is that the context one carries with them often colors their interpretation of the question. Usually 5 out of 25 will be confused because their favorite framework defines "after" to mean something else entirely, and they can't grok the new contextual definition.
The interesting bit is the other 18 people, who either fail to answer the question entirely, don't know how functions pass arguments, or come up with bizarrely wrong answers. Usually these 18 fail because they cannot interpret what the program does in the specific context of a function call. They don't have a model of the state machine in their head. No amount of formal definition will let them process that information. These programmers get by through cribbing and trial and error. As one described his methodology: "I feed it inputs until I get what looks like the right answer".
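For anyone following along: the definition of after actually shown in the interview isn't reproduced in this excerpt, but a minimal definition consistent with the intended answer (an assumption on my part, including the Object.prototype placement) would be:

// Hypothetical reconstruction, not the interview's actual definition:
// "after(n)" = everything after index n. (Extending Object.prototype is
// poor practice; it is done here only so that array-likes such as
// `arguments` pick the method up directly.)
Object.prototype.after = function (n) {
  return Array.prototype.slice.call(this, n + 1);
};

function fn(name) {
  var method = this[name];
  return method.apply(this, arguments.after(0));
}

// Inside fn('add', 1, 2), arguments is ['add', 1, 2], so
// arguments.after(0) is [1, 2] -- everything after index 0.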
For these people, precise definitions, formal language, clever idioms, and finely tuned mathematical constructs do not matter, because they would flip burgers with more care. And therein lies the crux of the issue: we may be smart enough to understand these machines, but the majority of people working in industry are not. And the programmers who become managers at large firms choose obtuse, inexpressive, cumbersome languages like Java, because they're hiring the 23 I'm turning down.
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
BGB
2012-05-08 14:21:28 UTC
Permalink
Post by Julian Leviston
Naming poses no problem so long as you define things a bit. :P
Humans parsing documents without proper definitions are like coders
trying to read programming languages that have no comments
(pretty much all the source code I ever read unfortunately)
I think this is probably why (at least in my case) I tend to "think in
code" a lot more than "think in natural language" or "think in concepts".

like, a person is working with code, so they have this big pile of code
in mind, and see it and think about its behavior, ...


because, yes, comments often are a bit sparse.

personally though, I think that the overuse of comments to describe what
code is doing is kind of pointless, as the person reading the code will
generally figure this out easily enough on their own ("x=i;
//assign i to x" - yes, I really needed to know this...).

comments are then more often useful to provide information about why
something is being done, or information about intended behavior or
functioning.

as well as describing things which "should be" the case, but aren't yet
(stuff which is not yet implemented, design problems or bugs, ...).
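for example (a made-up illustration of the difference):

var i = 0;

var x = i;    // assign i to x   <- the useless "what" comment

// why: the device latches the index one cycle late, so snapshot it here
// before the interrupt handler can overwrite it (invented scenario).
var snapshot = i;

// FIXME: should also handle negative indices; not yet implemented.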


nevermind side-uses for things like:
putting in license agreements;
putting in documentation comments;
embedding commands for various kinds of tools (although, in many cases,
"magic macros" may make more sense, such as in C);
...


oddly, I suspect I may be a bit less brittle than some people when it
comes both to reading natural language and reading code, especially
given how many long arguments about "pedantics A vs pedantics B" there
seem to be going around.

this is usually justified with claims like "programming is all about
being precise", even when it amounts to a big argument over things like
"whether or not X and Y are 'similar' despite being
not-functionally-equivalent due to some commonality in terms of the ways
they are used or in terms of the aesthetic properties of their interface
and similarity in terms of externally visible behavior", or "does
feature X if implemented as a library feature in language A still count
as X, when X would normally be implemented as a compiler-supported
feature in language B", ... (example, whether or not it is possible to
implement and use "dynamic typing" in C and C++ code).

and, also recently, an argument over "waterfall method" vs "agile
development", ...

nevermind the issue of "meaning depends on context" vs "meaning is built
on absolute truths".

I have usually been more on the "lax side" of "the externally visible
behavior is mostly what people actually care about" and "it doesn't
really matter if a feature is built-in to the compiler or a
library-provided feature, provided it works" (and, yes, I also just so
happen to believe that "meaning depends on context" as well, as well as
that the "waterfall method" is also "inherently broken", ...).

but, alas, better would be if there were a good way to avoid these sorts
of arguments altogether.


but alas...
Post by Julian Leviston
J
On Mon, May 7, 2012 at 11:07 PM, Clinton Daniel
The other side of that coin is burdening users with a bunch of new
terms to learn that don't link to existing human concepts and words.
"Click to save the document" is easier for a new user to grok than
"Flarg to flep the floggle" ;)
Seriously though, in the space of programming language design, there
is a trade-off in terms of quickly conveying a concept via reusing a
term, versus coining a new term to reduce the impedance mismatch that
occurs when the concept doesn't have exactly the same properties as an
existing term.
Yeah. I've had trouble with this balance before. We need to
acknowledge the path dependence in human understanding.
My impression: it's connotation, more than denotation, that
interferes with human understanding.
"Naming is two-way: a strong name changes the meaning of a thing, and
a strong thing changes the meaning of a name." - Harrison Ainsworth
Regards,
Dave
--
bringing s-words to a pen fight
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc
_______________________________________________
fonc mailing list
http://vpri.org/mailman/listinfo/fonc