Discussion:
[Axiom-developer] A modest proposal
d***@axiom-developer.org
2007-06-28 08:32:56 UTC
Now that we've had a full, frank, and open debate about William Sit's
proposal I have a counter-proposition. It can be summed up in a single
word:

Contribute.

A contribution to an open source project is one in which a developer
submits a diff -Naur patch that fixes a bug or adds a feature. The
diff -Naur patch is against either the shipping version or the upcoming
version. It's not a novel concept. It is used by hundreds of projects.
Posted today is a 40KB patch to SBCL for ANSI-compatible modern casing
(similar to the Axiom downcase). Posted today are diff -Naur changes to
GNU binutils. Both arrived in my email inbox. They contain diff -Naur
changesets by people who contribute to those projects: people who did
the hard work of tracking down a bug and developing a fix, or the
tedious work of changing a global feature in hundreds of files.
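
For concreteness, here is a minimal sketch of what that looks like in
practice (the directory names "silver" and "silver.mywork" are only
placeholders for a pristine checkout and a patched working copy):

    # work in a copy of the tree, leaving the pristine checkout untouched
    cp -r silver silver.mywork
    cd silver.mywork
    # ... edit, build, test ...
    cd ..
    # produce a unified, recursive patch suitable for posting to the list
    diff -Naur silver silver.mywork > hyperdoc-fix.patch

    # a reviewer applies it from inside a clean checkout
    cd silver
    patch -p1 < ../hyperdoc-fix.patch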

Waldek, of course you support William's proposal. It implies that you
have no work to do and that I have to refit all of my changes into
your code base. Pretty sweet deal. Do you really find it beyond your
powers to decode the changes you made to fix hyperdoc? Did you
document the way hyperdoc works so we can all understand? Is the
changeset way too subtle? Or just a lot of hard, tedious work that
might slow you down? It took me the better part of the last 6 months
teasing out clean, single-issue diff -Naur changesets of the "Waldek"
branch and contributing them to silver. I know it is time-consuming
and slow. But your "contributions" were made despite a glaring lack of
effort on your part to contribute.

Waldek, your work is very valuable and we pay close attention to
it. It is likely that you are well-intentioned and are not trying to
co-opt the rest of the project. You can contribute if you would only
take the time. It would improve the trunk considerably.

Gaby, it is highly annoying that you continue to spout sarcasm:
(BillPage) So, please don't downcase all file names
(Gaby) You understand the value of incremental improvements.
I wish we were more in sharing that view.
(Please don't say I don't understand what you wrote.)
Yet you made a valuable, massive, monolithic change to the
build system. At no time did you try to incrementally improve the
trunk by submitting diff -Naur patches. Now you have the problem of
creating an autoconf changeset that will make the purely syntactic
downcase changeset look puny. If you really believed in that philosophy
we would have seen a stream of diff -Naur patches from you against the
trunk. But there has been no "incremental" stream of changesets. We know
that making these changesets is time-consuming and slow. It took me the
better part of the last 6 months teasing out diff -Naur patches of the
"BI" branch and contributing them to silver. Thus your "contributions"
were made despite a glaring lack of effort on your part to contribute.

Gaby, your work is very valuable and we pay close attention to
it. It is likely that you are well-intentioned and are not trying to
co-opt the rest of the project. You can contribute if you would only
take the time. It would improve the trunk considerably.

Bill Page, your comments about downcasing files are very poorly founded.
A mono-cased Axiom would eliminate the port issue to Windows (something
dear to your heart, according to you). It would also establish the
beginnings of a system-wide standard of using monocase everywhere.
Thus code can reliably downcase a string and expect to get the right
filesystem name. Do you find that global downcasing will cause a
disruptive impact on your code contributions? It hardly seems so. You
claim to only want to spend your time on new algebra (wouldn't we
all?). The downcase issue will have little impact on your new algebra.
But it will make future Windows ports easier. It is time-consuming and
slow to figure out how to build Axiom on Windows. But you're the
primary Windows person on the project. It would be nice if you did
more than just point at some repository that contains a "windows port",
and instead tried to figure out how to diff -Naur the changes as a
contribution. It would also be nice if you didn't criticize changesets
that don't impact you.

William, "contributing" to a project also implies supporting stated
project goals. You write
Let's forget about documentation for the moment because documentation
slows development effort.
Yet one of the PRIMARY Axiom goals is documentation. Every file is
literate. Sure it is time-consuming and slow. I know because I have
I know you are worried about correctness, but we can develop a plan
to verify correctness by co-opting resources from the mailing list
and parceling out specific tests to individuals. Your regression
tests can still be run after each major build.
Umm, no. It takes a lot of time to construct those regression tests.
Try it sometime. Where is the regression test suite for your code?
If you won't write it then who can you "co-opt" from the mailing list
So if it is not too difficult, merging your changes into wh-sandbox
would be the fastest way to a new release that "just works"
Oh, really? So it "just works" as in:
<http://wiki.axiom-developer.org/366Gcl267CrashesBuildingWhSandbox/diff>
from today's mailbox. wh-sandbox crashes in build. We're not talking about
The lack of documentation is not a big problem because it would be a
waste to document code that is not final.
You mean, like documenting the fast changing partial differential code
you wrote 20 years ago? Is documenting that code a waste of time? You
already have the technical papers written. Is it that hard to adapt
them to produce even minimal documentation?



All of these remarks are stinging and pointed. So were the remarks
directed at me. Raise your eyes. Look toward cooperating by contributing
your time and energy to goals that go beyond the personal. Spend SOME of
your time on documenting/merging/testing your work. We need to start
working like SBCL, like GNU binutils, like Linux, like every other
project. There are no quick fixes. It is all hard work. We need to
change the "best-branch-wins" attitude so we can work together to
build a great system.


So we've entertained the "Sit Proposal". Now it is time to entertain
the "Daly Proposal". We don't need a web page to vote. Here's how to
submit your vote: figure out a needed feature (e.g. a Hyperdoc fix),
make a diff -Naur changeset, document it, test it, and post it.

Contribute.

Tim
Ondrej Certik
2007-06-28 09:15:46 UTC
Hi,

yes, I think Axiom should have just one official branch and it should
just work. And that official branch should be the one in Debian (it
will get to Ubuntu automatically) and other distributions that people
use. This way, potential new contributors will know that their patch,
if accepted, will get into the Axiom that people use and into all
distributions. And that is the motivation: that their new code will be
used by people (and will not be lost somewhere in some branch that
maybe will not "win").

Ondrej
Post by d***@axiom-developer.org
Now that we've had a full, frank, and open debate about William Sit's
proposal I have a counter-proposition. It can be summed up in a single
word: Contribute.
[...]
Ralf Hemmecke
2007-06-28 09:48:54 UTC
Post by d***@axiom-developer.org
All of these remarks are stinging and pointed. So were the remarks
directed at me. Raise your eyes. Look toward cooperating by contributing
your time and energy to goals that go beyond the personal. Spend SOME of
your time on documenting/merging/testing your work. We need to start
working like SBCL, like GNU binutils, like Linux, like every other
project. There are no quick fixes. It is all hard work. We need to
change the "best-branch-wins" attitude so we can work together to
build a great system.
So we've entertained the "Sit Proposal". Now it is time to entertain
the "Daly Proposal". We don't need a web page to vote. Here's how to
submit your vote: figure out a needed feature (e.g. a Hyperdoc fix),
make a diff -Naur changeset, document it, test it, and post it.
Tim, in order to help Gaby and Waldek submit patches (which at least
Gaby said he will do), it is very important that they know the svn
revision numbers of the changes that you took out.

*Please post to the list the ranges of SVN revisions* (something like
234:256, 332:341, etc.) that you took from Waldek's or Gaby's branch.
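
As a sketch of why the ranges matter: given such a range, anyone can
reproduce the corresponding changeset with stock Subversion commands
(the repository URL below is only a placeholder, and 234:256 is just
the example range above):

    # list the revisions made on the branch itself
    svn log --stop-on-copy http://example.org/svn/axiom/branches/wh-sandbox

    # extract one contiguous range of revisions as a single patch
    svn diff -r 234:256 http://example.org/svn/axiom/branches/wh-sandbox \
        > wh-sandbox-r234-256.patch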

If that is done then we probably would not have to vote about taking
over Waldek's branch and just moving it to trunk. That this would cause
problems for you is clear. But at the moment, there is no easy way to
relate the patches that you have applied to trunk to the revision
numbers of Gaby's and Waldek's branches. That is one of the current
problems. It is not an unwillingness to contribute to trunk.
It is the problem that we cannot agree on *one* SCM and that the
relation between trunk and build-improvements and wh-sandbox is unclear.

Ralf
Waldek Hebisch
2007-06-28 10:48:39 UTC
Post by d***@axiom-developer.org
<http://wiki.axiom-developer.org/366Gcl267CrashesBuildingWhSandbox/diff>
from today's mailbox. wh-sandbox crashes in build. We're not talking about
"just works" here.
Tim, have you tried Gentoo? You will probably (I write probably
because two Gentoo systems may be quite different) notice that
Gentoo gcl crashes on simple Lisp code that normally gcl handles
without problems. My conclusion is that Gentoo gcl is broken.

The problem vanishes if one uses a self-built gcl-2.6.8-pre. This
of course risks restarting the discussion about using tools provided
with the distribution versus our own tools. At least my experience
was that gcl-2.6.6 simply does not work for Axiom, but it is
still common in older installations. I have mixed experience
using gcl-2.6.7: on a few machines a self-built gcl-2.6.7 works
just fine, but on some others I noticed problems. If I were
making a release tarball now I would probably include gcl-2.6.8-pre
inside and make it the default.
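
For reference, building a private GCL is roughly the usual autoconf
steps; a rough sketch (the tarball name and install prefix below are
placeholders, and how the Axiom build is pointed at the result depends
on the branch in use):

    tar xzf gcl-2.6.8pre.tar.gz
    cd gcl-2.6.8pre
    ./configure --prefix=$HOME/local/gcl-2.6.8pre
    make
    make install
    # then arrange for the Axiom build to use this gcl instead of the
    # distribution's copy (branch-dependent; not shown here)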
--
Waldek Hebisch
***@math.uni.wroc.pl
Stephen Wilson
2007-06-28 13:10:23 UTC
Post by d***@axiom-developer.org
Now that we've had a full, frank, and open debate about William Sit's
proposal I have a counter-proposition. It can be summed up in a single
Contribute.
I have read virtually every email on this list since 2004. The basic
expectations have never changed for a developer who wishes to help
move Axiom forward. Work on a change, document it, submit a patch.

I am free to use whatever SCM I choose. I am free to pursue whatever
problem my heart desires. There are no restrictions, there are no
bottlenecks.

If I have a problem to solve I can hack away in whatever manner suits
me. I can document my work in the process. I can test and verify my
changes locally using whatever technique gives me confidence. Once
satisfied, using any tool of my choosing, I can post a diff against
Silver in a matter of seconds. It is my responsibility, and mine
alone, to ensure that the change is a quality change.

Silver is a moving target. If a conflict were to emerge between my
change and Silver, I deal with it.

The process is so blindingly simple I find it hard to believe it even
needs discussion.


Steve
William Sit
2007-06-28 17:46:57 UTC
Tim:
Post by d***@axiom-developer.org
> The lack of documentation is not a big problem because it would be a
> waste to document code that is not final.
You mean, like documenting the fast-changing partial differential code
you wrote 20 years ago? Is documenting that code a waste of time? You
already have the technical papers written. Is it that hard to adapt
them to produce even minimal documentation?

Okay, guilty as charged (although you ignored the phrase "that is not
final" in my argument, and so your example does not refute my argument;
I also have no pde software).

I am not a system developer and I am not really qualified to discuss
the technical work done by you or the others. So far I have also been
holding back my views on pamphlets. But since you press the issue, I
feel I should speak out.
I believe documenting the build improvements and the rearrangement of
the boot and compiling sequence faces the same issues described below
(perhaps simpler, if without the theory part).

I am not convinced of the merits of the pamphlet way of documentation.
Everyone else seems to be convinced of the usefulness of pamphlets. I
am not. I am also not against documentation, but the way a pamphlet is
composed is simply not the way I would do documentation for algebra
code. The flow of pamphlet content also does not reflect the way a
mathematical algorithm is developed and implemented. There are many
aspects to this process: theory, algorithm, data representation, and
code; most of the time in that order, but often new insight from
experimental implementation and computation provides feedback and one
cycles back to the beginning: to prove some more theoretical results
that lead to better algorithms, requiring improved data structures and
added or simplified code. To me, it would be a waste of time to
document carefully at that stage (yes, there has to be some
documentation to remind oneself: certainly proofs have to be written
down, reasons for a change of data structure noted, and why certain
parts of the code need changes -- but these need not be "carefully"
documented -- they only need to be for personal use; and during this
development cycle, I don't want to be distracted by the rigid format
required by a pamphlet, and besides, that format is still under
experimentation). When the project is completed to satisfaction, such
as when one is ready to write up a paper for publication, then it is
time to carefully document the final data structure and code. So, yes,
my Axiom code should be better documented, but not in a pamphlet.
Moreover, I believe the style of programming (for example, choosing
meaningful variable identifiers and function interfaces) is far more
important for clarity of the code than any documentation.

Your intention is that a pamphlet be "self-contained": to include
theory, algorithm, data structure and code, and examples (for
regression tests). Let's say I have already written up a paper on the
theory and algorithm, I have the code with embedded documentation, and
I have run examples. These are all in different files and they can be
easily read and understood by one "with ordinary skill in the art",
perhaps with some cross-referencing. Why should I now repackage all
these files into one huge pamphlet, broken into "chunks", and
intersperse explanations of data structure and code among the theory,
distracting from the flow of the theoretical development or the code
development? In order to "glue" these together coherently, the
original documents have to be rewritten, in a way that one cannot
recover them even after "noweave" or "notangle". The logical
development of the theory need not correspond to the logical
development of the code, requiring constant shuffling of these chunks.
By dicing up the code into chunks, one then has to test that, after
reassembly, no accidental errors were introduced, even when the
original code had been tested thoroughly. For what? What is wrong with
just placing all the original files into one directory or zipped file?

You also seem to think a pamphlet file should be a "tutorial" that
anyone can follow, and not only that, the pamphlet should capture the
thoughts during the development process as well. That is simply not
necessary. I can understand developing a few sample tutorials for
pedagogical purposes (like your dhmatrix pamphlet), but this is not
the most efficient method to transfer knowledge. While ideally and in
principle anyone should be able to learn any subject matter from
scratch given sufficient time and a good teacher or book, this is not
the case in the real world, because one must learn "fast" or else be
crowned "too slow" (that's a polite version of "stupid"). Everyone is
"too slow" for some subjects. It does not mean *every* book should be
written at the same level.

To be useful, documentation should be aimed at people who already have
sufficient background. It is a highly difficult skill to write at just
the right level for a targeted class of readers. Too much detail, and
people skim it or get bored. Not enough, or carelessly written, and it
is hard to follow. Is it worth it? Definitely, but not in the form of
a pamphlet.

So what I "propose" is simply a kind of "restart" for Axiom.
IIRC, even Linus said they simply "restarted" when they moved to git.
Obviously, if you feel that the burden is on you to merge your branch
into wh-sandbox, you are darn right. Your response is certainly not
unexpected. That is why my suggestion ("proposal" if you like) was
conditioned on your approval and qualified by "if it is not too
difficult". You are the one who asked me to make my suggestion public
and you said you would follow majority opinion. I certainly did not
originate this idea of "best branch wins". Rather, my starting point
was stated in my email:

"All I would like is simply to suggest, in my
uneducated opinion, a possibly more efficient path of lesser
resistance to bring the release version of Axiom up-to-date
asap so we can attract more users and developers."

Certainly, if you disagree, then it is not a "more efficient path of
lesser resistance".

Thanks for considering it anyway.

William
d***@axiom-developer.org
2007-06-28 22:54:09 UTC
William,

Rather than starting by refuting your points in detail, let me ask a
question... If you know that literate programming is a PRIMARY design
goal of Axiom and you don't support that goal, then wouldn't it make
sense to take the freely available original sources, which are not
literate, and start a new Sourceforge project (say, OpenAxiom,
RawAxiom, whatever)? That way you can define the goals to be anything
you like.

Axiom, as it exists in this project, is an experiment in developing
software designed to be maintained, modified, and expanded by people
at least 30 years in the future. The fundamental conjecture is that
without documentation complex code cannot survive.

To that end the Axiom project adopted Knuth's technology of literate
programming. To quote Patrick McPhee:

Without wanting to be elitist, the thing that will prevent
literate programming from becoming a mainstream method is that
it requires thought and discipline. The mainstream is established
by people who want fast results while using roughly the same
methods that everyone else seems to be using, and literate
programming is never going to have that kind of appeal. This
doesn't take away from the usefulness of the approach.




Indeed I hear, underlying email on this list, both kinds of complaint:

a) "we want fast results" ...therefore throw out the trunk and let the
latest, greatest, branch win. This is the essence of your argument.
So what do we do when Mark Botch shows up with the latest, greatest
branch? Or Stephen's branch suddenly has a new feature we want? Do
we throw Waldek's out, rinse and repeat? Surely this is the "fastest
way" to get the new features. Do we each "do our own thing" and
fail to contribute?

Fast is NOT a project goal. Literate Programming with deep documentation
is a project goal. Correctness is a project goal. Being a research
platform for new ideas is a project goal. But FAST is not. There is no
need to "get it running by September". Development that requires thought
and discipline takes time. We have time. There are no deadlines. The
Axiom project has a "30 year horizon".


b) "using roughly the same methods that everyone else seems to be using"...
which is exactly how Scratchpad was developed. William Sit came to
visit at IBM Research, wrote some code, documented nothing and left.
I don't object to this kind of programming and I am not advocating
that all projects should switch to literate programming everywhere.
But the Axiom project on Sourceforge and Savannah has a PRIMARY
goal of developing a literate CAS using Knuth's technology.
> To me, it would be a waste of time to document carefully at that
> stage (...[snip]...); it only needs to be for personal use.
That's fine. Do anything you want for personal use. But if you want to
write code in the Axiom project and you want it in the distribution
then your "personal code" in your "own branch" has graduated to the
stage that an unknown number of people will use it, maintain it,
modify it, and support it. So the "Axiom Distribution" strives for
literate documentation, written for people first, not machines.

If you spend the time to figure out how an algorithm works (e.g.
how the compiler resolves types, how hyperdoc builds pages, or
how the databases are structured -- see src/interp/daase.lisp),
then we need you to leverage that effort so that we can understand,
maintain, and modify it later. Otherwise it gets lost. It gets
"too hard to merge".
> and during the development cycle I don't want to be distracted by
> the rigid format required by a pamphlet. And besides, that format
> is still under experimentation.
I don't recall that we've defined the shape of pamphlets. Ralf has
done pioneering work with ALLPROSE. I've made some initial attempts
with DH matrices and quaternions. Pamphlets are hardly a rigid format.
They certainly don't constrain your development or experimentation.
They are only LaTeX files with some special tags. And in the near
future they will be pure LaTeX files, once we write LaTeX macros
that replace the current chunk-name syntax for the special tags.
So they are not "home-grown" technology but general-purpose technology
over 20 years old with a known-good example (see TeX: The Program by
Knuth, ISBN 0-201-13437-3).
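
To make the format concrete, here is a tiny hypothetical sketch of such
a file and of the noweb commands that split it into code and typeset
documentation (the file name, chunk name, and Lisp function are made up
for illustration; they are not files from the tree):

    \documentclass{article}
    \begin{document}
    \section{Greatest common divisor}
    A short explanation of the idea goes here, in ordinary LaTeX prose.
    <<gcd.lisp>>=
    ;; Euclid's algorithm; the surrounding text explains why it terminates.
    (defun my-gcd (a b)
      (if (zerop b)
          (abs a)
          (my-gcd b (mod a b))))
    @
    \end{document}

    notangle -Rgcd.lisp example.pamphlet > gcd.lisp   # extract the code chunk
    noweave -delay example.pamphlet > example.tex     # typeset the document
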
You state:
> So, yes, my Axiom code should be better documented, but not in a pamphlet.
Which raises two questions. First, why are you trying to associate yourself
with a project which has the goal of putting everything in a pamphlet?
Second, you're not doing documentation anyway so why is this an issue?
> I believe the style of programming (for example, choosing meaningful
> variable identifiers and function interfaces) is far more important
> for clarity of the code than any documentation.
That's a nice belief. And it is not wrong, just trivial. You surely
chose the variable names in your algebra code to be insightful. Or they
would be, if we understood the theory behind the code, the mindset that
you had when you wrote it, and had unit tests to see what it actually
does. I assure you that the variable names I chose in Axiom have, most
times, followed this. I certainly do this every time I program. So tell
me, how does the interpreter resolve types? Reference the variable
names, explain the algorithm by incanting the variable names in some
order, and show your work. You have one hour.
> Why should I now repackage all these files into one huge pamphlet,
> broken into "chunks", intersperse explanation of data-structure, and
> code among theory, distracting from the flow of the theoretical
> development.....
Why bother to use chapter 2 of a dissertation to prove a lemma?
Why introduce the theory of networks to explain the FFT algorithm?
Why not give students the final theorems in the course and simply
state "All the results follow from the theorems, which have carefully
chosen names"?

These pamphlet files are intended to be read by humans, learned by
humans, and maintained by humans. Pamphlets should explain the "why",
the motivation, the theory, the ideas, the magic. None of that is
in the code no matter how clever you are in choosing variable names.
> although you ignored the phrase "that is not final" in my argument
No, I didn't ignore it. I'm simply asking when you might consider your
work final. (Clearly Manuel Bronstein's work is final. He's dead.
However, he and his wife have given permission for Axiom to use his
work to document Axiom. That's a future task.) You are still around.
When is your work "final" and when do you think you'll document it?
Are we going to pass the documentation task on to the future? Why will
they be any more inclined to document your work than you are? Do you
really want some third-rate pseudo-mathematician like myself doing it?

If the new build system is stable (and this is entirely at the
discretion of the developer) and it is going to be adopted as the new
build system, then it needs to be documented. One of the primary
reasons is that, of all of the people, I use the build system on a
daily basis. For the past few years I've almost always had at least
one system doing a build (albeit with ancient, buggy, worthless
Makefiles that don't use standard methods). So of all of the people
this change will impact, I am surely center stage.


The Axiom project has no fundamental opinions about which SCM to
use. We argue over SVN, CVS, Git, Arch, etc. All of that can be
changed. Even the build system can be replaced eventually. But
literate programming is fundamental. The Axiom project, as it is
defined, is literate.


It is NOT important for the goals of this project that we "get there
FAST" or that "we use the standard methods". If those are your needs
then, by all means, start a Sourceforge project that achieves those
goals. Make a Scratchpad or FASTAxiom or STANDARDAxiom or SITAxiom
project. Take a copy of Waldek's branch and release it under that
name. Axiom, and by that I mean, THIS Sourceforge project, does not
have those goals. My ONLY request is that you choose a different name
for your new project and stop referring to it as "Axiom".



I freely admit that the goals of the project "Axiom" as it exists
today flow from my prior experience with IBM Axiom. When I got the
distribution from NAG and did all of the work to set up and build this
project I hoped that people would be inspired to achieve something new
and different. Would they build a literate CAS? And a CAS that stresses
correctness, possibly provable correctness. A CAS that can form the
basis of a science of computational mathematics for the next hundred
years. A CAS to carry on the research tradition with fundamental
changes like provisos. Will people be motivated to take the time to
"do it right"? Can people be motivated to work toward a future they
will never see?

If you want to work on this project please respect the fundamental goals.
That's not an unreasonable request.

Tim
William Sit
2007-06-29 02:36:41 UTC
***@axiom-developer.org wrote:
Post by d***@axiom-developer.org
You state:
> So, yes, my Axiom code should be better documented, but not in a
> pamphlet.
Which raises two questions. First, why are you trying to associate
yourself with a project which has the goal of putting everything in a
pamphlet? Second, you're not doing documentation anyway so why is this
an issue?
For the first, see below. For the second, no, it is not an issue for
me. It is one for YOU.

[snipped]
Post by d***@axiom-developer.org
If you want to work on this project please respect the fundamental
goals.
That's not an unreasonable request.
No, that is not unreasonable. I have been respecting this fundamental
goal of literate programming even though I disagree (that is the reason
I have kept quiet on the issue until recently "provoked" by your
sarcastic personal criticisms -- by the way, have I ever personally
criticized your work or even your ideas before now?). Respect does not
mean agreement. I associate with the Axiom project because there are
other goals besides literate programming, say to spread the original
ideas of Axiom and enlarge the user base.

It is not true that:
Post by d***@axiom-developer.org
William Sit came to visit at IBM Research, wrote some code, documented
nothing and left.
When Bob Sutor developed hyperdoc and wrote the Axiom book, I
documented, according to his hyperdoc requirements, the packages on
differential polynomial rings (I recall he explicitly said there was
not enough room to have similar documentation for the other packages I
wrote). There are tests you can run (in the Examples of the spad
source) as regression tests, which you can even try if hyperdoc is
running as designed. But hyperdoc has not been working. Without
hyperdoc working, Axiom (the algebra code) is very difficult to use. I
participated in MathAction mostly to help myself and others understand
Axiom's sometimes seemingly absurd outputs. Hyperdoc is not available
in its full form. Hyperdoc is a way of documentation, as designed by
the original Axiom team. It is still way ahead of current browser
technologies in many ways. (I have articulated on this; see the
archives.)

If "contributing to literate programming in the style of Knuth" is a
prerequisite for participation, then you are correct that I should not
hang around any more. But you are wrong. Even though you started the
revival of Axiom and made literate programming a primary goal, there
are many more aspects that users want. Not every user of Axiom is a
developer. Not every user wants to understand the theory behind a
particular algorithm. Not every user cares how Axiom is built. Every
user wants Axiom to "just work" on their platform of choice. That
should be another primary goal of your project. "Just work" is not
simply "just build" or "just compute" (both essential, but not
comprehensive).

My parametric linear equation package IS documented: in a published
paper in the Journal of Symbolic Computation, and in the source code.
It is not in a pamphlet style or what you call "literate with Knuth
technology". You have spent time converting my IBM Script source to
LaTeX, in the hope that I will expand it into a pamphlet (and I
haven't). But have you tried to read and understand the paper? If you
did not (and I believe you did not), then what good would a pamphlet
do? If you did, and have difficulty following the mathematics, would
putting the same mathematics in a pamphlet help? And did you ever ask?
I'll be very glad to explain or give talks on the subject. Where do
you think the paper needs improvement? Do you think that by literate
programming the paper and code, you can just lie down on your bed and
read and understand them without effort? Is literate programming the
panacea?

I respect your philosophy of literate programming, and I only ask you
to respect that there are other ways to make code (and theory)
understandable. You don't have to agree.

I do not intend to convince you or others of my way of working (or to
unconvince others of literate programming). I don't try to win
philosophical fights, only logical and mathematical discussions. I
have no need to create a SitAxiom branch because I have no agenda. I
am a user of Axiom. I'll use it when it suits my purpose and works.
I'll use another CAS if that CAS suits my purpose and works. I know my
limits and I won't be able to contribute to build-development work.
I'll contribute to algebra code when I start writing Axiom code again.
Meanwhile, I am still learning about the subtleties of Axiom (algebra
code).

Once again, I am not out to convince anyone of anything philosophical.
Please continue and carry on with your good work.

My apologies for bringing this to the forum. It was not my intention.
As Donnie Brasco said in the namesake movie, let's "forget about it".

William
Ralf Hemmecke
2007-06-29 10:07:15 UTC
Hello,

sorry if I prolong this thread although much has already been said.

Literate programming can currently be seen as a religion. Even though
TeX has survived until today, there must be some reason why a lot of
people don't follow it.

I must say, I am happy that Tim set LP as a main goal for Axiom. I think
it is an important one.

But not only from Tim's and William's mails have I learned that it should
not be the only goal. Following LP too strictly does not work, at least
not at the moment. Currently, more important than LP is the need to
attract more developers. For now, the rule that everything must be
properly documented should be a bit relaxed. Axiom currently is not
documented properly and it will not be for another 10 years. We have a
lot of legacy code.

But without new and ambitious developers, Axiom will become even more
uninteresting. Axiom must spread to the world and attract users and
developers. If you set the entry barrier too high, Axiom is going to
become a Tim-only project.

Tim, that does *not* mean that I am against LP. Quite the contrary. LP
should be preached again and again. And it should definitely be the
essential goal of the Axiom project.

What I want to say is: Tim is right and William is right. (sic!)

But LP is not everything. Think about why there are two books "TeX the
Program" and "The TeXbook". The first one describes the program and the
second one is for users to teach them how to use the program.

I don't quite know whether Tim wants to have both kinds of documentation
in the Axiom project. If I understood correctly, that is the "facets of
the crystal" view. Yes, Axiom should be a big monster which not only
contains code.

It should be a collection of
1 code,
2 API descriptions (this is what the +++ comments are)
3 explanation of
motivations,
tricks that were used in the implementation,
things that are not used in the code (and the reason for it)
4 informal description of why the code is there
5 informal overall description of how some piece of code works
6 formal descriptions of the theory behind the code
7 proofs that the code fulfils its specification (its API)
8 test scripts that test the code (we don't have automated
program verification yet)
(This list is certainly not complete.)

William (I have not actually looked at your code so excuse me if I am
wrong), what you contributed was/is in the code and API part and maybe
in the test and formal description part (1,2,6,8). But what I think is
very important is also the interconnection between all of this. Can you
point me to some text in current Axiom that explains *why* (4) you have
designed your contribution as it is now?

I agree that some people are not interested in the code and in the
description of the code, but for developers who maintain code it is very
important to know about design issues. That is a different issue from
the actual mathematical theory and it is important for maintainers of
the code.

Axiom should have everything and it should be able to show to some
person exactly the amount of detail s/he wants to see. If someone is
only interested in the theory, extract something that doesn't show the
implementation details. But some people would not only like to see the
theory, but they want to see a "running theory", they want an
interactive paper where they see the theory and can compute with the
algorithms implemented by the paper. Other people like to see the actual
algorithm in a specific programming language. Some people do not want
to be distracted by a lot of text and want to concentrate on more
focused details of the implementation. Axiom should be able to extract
all these different forms in nicely human-readable formats. All that
should be provided by one (or several) pamphlet(s).

The important thing is that the information is there. I am not so sure
that we already have a good format for how a pamphlet should look.

I also agree that it is hard to actually write good LP documents. I have
made my own experience with Aldor-Combinat. Look at it at
http://www.risc.uni-linz.ac.at/people/hemmecke/AldorCombinat/ .
It has a lot of documentation. But I myself would not say that it is a
proper literate program. Try it out, read it. I guess you will not be
able to understand the overall project goals. I have not written that,
but still the program works. Now, if I required proper LP
documentation, I would never be able to release this code at all. I try
hard to add a lot of design decisions, but all that is still not enough.

And that Aldor-Combinat experience makes me think that it is better to
release code even if it is not properly documented. So in this sense I
do not fully agree with Tim. I find it better if I throw something at
the public and have referees who tell me: hey, there is something wrong,
this and that is not understandable. Then I have a chance to extend
code+documentation at the places where people complain. I am committed
to LP, but I also have only limited time, so it's better if there is
really a community development. I am all for feedback. That should be
our "quality assurance" way of making Axiom incrementally better.

That is my little comment on the issue...
I'm sorry if I wasted your time.

Ralf
Bill Page
2007-06-29 19:23:36 UTC
Post by Ralf Hemmecke
Literate programming can currently be seen as religion. Even though TeX
survived until today, there must be some reason why a lot of people
don't follow it.
I must say, I am happy that Tim set LP as a main goal for Axiom. I think
it is an important one.
But not only from Tim's and William's mails have I learned that it should
not be the only goal. Following LP too strictly does not work, at least
not at the moment. Currently, more important than LP is the need to
attract more developers. For now, the rule that everything must be
properly documented should be a bit relaxed. Axiom currently is not
documented properly and it will not be for another 10 years. We have a
lot of legacy code.
But without new and ambitious developers, Axiom will become even more
uninteresting. Axiom must spread to the world and attract users and
developers. If you set the entry barrier too high, Axiom is going to
become a Tim-only project.
...
I agree with both Ralf and William Sit on this issue. Like Ralf, I
think that I am a strong supporter of the *concept* of literate
programming, but that the experiment in literate programming as
defined by Tim Daly in the current Axiom open source project is (for
the most part) a failure. And I do not think that this is simply
because insufficient effort has been devoted to developing this part
of the project. Or rather I should say it the other way: insufficient
effort has been devoted to literate programming in the Axiom project
*because* the current approach to literate programming in the project
is a failure. I think the Knuth-style literate programming (pamphlet)
methodology is just not suitable to the task.

But I am not sure what to do about this. I think that already the
Axiom project has suffered a very significant and maybe even critical
loss of interest on the part of other possible contributors at least
in part because of the insistence on this approach. It complicates the
build environment and puts an extra layer between the developer and the
system. It is clear that developers do not want to be reading their
source code from a dvi viewer, two steps removed from the problem on
which they are focused. And at the same time the raw pamphlet format
source code is even more awkward and obscure than the original
"illiterate" source code by the interposed presence of coding and
documentation which is normally otherwise "out of the way".

These comments (by me, Ralf, William and others) should *not* be
construed as in any way being against documentation or even against the
concept of literate programming. But as Ralf says, we have to face up
to these uncomfortable facts or risk the death of the Axiom project
due to placing a barrier which no developer other than the one who
originated the idea is willing to climb.

Regards,
Bill Page.
Stephen Wilson
2007-06-29 20:45:08 UTC
Hi Bill,

I share some of the concerns about literate programming, but I'd like
to spell out some of my thoughts.
Post by Bill Page
I agree with both Ralf and William Sit on this issue. Like Ralf, I
think that I am a strong supporter of the *concept* of literate
programming, but that the experiment in literate programming as
defined by Tim Daly in the current Axiom open source project is (for
the most part) a failure. And I do not think that this is simply
because insufficient effort has been devoted to developing this part
of the project. Or rather I should say it the other way: insufficient
effort has been devoted to literate programming in the Axiom project
*because* the current approach to literate programming in the project
is a failure. I think the Knuth-style literate programming (pamphlet)
methodology is just not suitable to the task.
But I am not sure what to do about this. I think that already the
Axiom project has suffered a very significant and maybe even critical
loss of interest on the part of other possible contributors at least
in part because of the insistence on this approach. It complicates the
build environment and puts an extra layer between the developer and the
system.
The build environment is not an issue for me at all. If there is
complication, it is due to the fact that there is no clean integration
between the tools required to both build the system and extract the
code/document bits. This is one reason why I am excited about
asdf-literate, as it should significantly simplify the build as it
understands the model. We need the proper tools for the job, and if
they do not yet exist, we can create them.
Post by Bill Page
It is clear that developers do not want to be reading their
source code from a dvi viewer, two steps removed from the problem on
which they are focused.
I don't do that myself. I write my code as most other programmers do.
I write it and use traditional comments as a form of documentation. I
do most of my coding in Lisp, which for the most part does not care
about things like the order of definitions. So I can code and
document and build towards a literate document. It's not yet a
pamphlet, and the comments lack LaTeX markup, but it's a form of a
literate document. It is no different from any other piece of code
written by any other programmer, except that the comments might seem a
tad verbose.

Once I have polished a file, once it is looking good and stable and I'm
relatively sure it is up to snuff, it is pretty straightforward to
convert it to a pamphlet. I don't resent the need to convert the code.
I view it as an opportunity to audit and re-check -- something you can
never do enough of.

So the process of writing a pamphlet file is something of a support
structure for writing nice code -- something you want anyway before
you even think about pushing a change out to the mainstream.

Of course, others might write literately from the beginning, but that's
not how I do it. I would find that approach to be restrictive and
unproductive. But that's just me. Fortunately, nobody is telling me
how I should go about it.

Moreover, very few files in Axiom are truly literate. They are just
shells, just raw code waiting to be improved.
Post by Bill Page
And at the same time the raw pamphlet format source code is even
more awkward and obscure than the original "illiterate" source code
by the interposed presence of coding and documentation which is
normally otherwise "out of the way".
This is one point with which I partially agree. The format is
somehow `odd' -- it is not native to the programming language in which
you're writing. However, it is fairly native to the task of writing a
document, and that is what you're doing.

Leo, from the small amount I know of it, tries to blend the two notions
of coding and documenting. Some might prefer that way of working and
thinking. That's OK with me. I feel that the main issue is having
high-quality tools available. I do pretty well with Emacs, but it
certainly does not suit everyone.
Post by Bill Page
These comments (by me, Ralf, William and others) should *not* be
construed as in any way being against documentation or even against the
concept of literate programming. But as Ralf says, we have to face up
to these uncomfortable facts or risk the death of the Axiom project
due to placing a barrier which no developer other than the one who
originated the idea is willing to climb.
This is the main point of my post. If you agree in principle with the
goals of the Axiom project then there is no real hill. You can get to
the top either by doing somersaults or by taking the elevator. The
challenge is discovering for yourself what the path of least
resistance is, what works for you.

The Axiom project, quite naturally, is based on a few fundamental
axioms. These first principles are not the same as those found in
other projects. But just like in science or math, when a new set of
fundamental ideas is suggested as a foundation for the discipline,
there is always resistance and criticism, years upon years of
resistance and criticism.

If you believe in the goals, if you believe in the axioms and the
philosophies, stand by them and press on.


Sincerely,
Steve
Ralf Hemmecke
2007-06-29 22:28:37 UTC
Post by Stephen Wilson
Post by Bill Page
It is clear that developers do not want to be reading their
source code from a dvi viewer, two steps removed from the problem on
which they are focused.
Huh? I am sure you have not used ALLPROSE. Your code is only one click
away.
Post by Stephen Wilson
I don't do that myself. I write my code as most other programmers do.
Because you are trained in programming. But what you do is write the
program for yourself, not for other people.

Well, that is fine. I think I also do that to a great extent. But if I
write mathematical code, it is quite useful that I can see the formulas
in .dvi and have the code very close to it. I am not good at writing
ASCII art, and always putting something like ++ in front of a line is
painful.

Anyway, write your code as you like; nobody is forcing you. The only
thing is that one should not forget that LP is not about documenting a
program. It is about documenting an idea, where the documentation is
equipped with real code. Human-understandable, of course. If that is the
final product that you give to the public, nobody cares how it was
produced. I have not yet heard of someone who has good guidelines for
how to produce a nice literate document.

Additionally, one should also distinguish at least 3 cases.
1) An idea was already published together with pseudo code.
2) The idea is developed and at the same time coded (i.e. it is
relatively clear how to put it into a programming language.)
3) The idea might be clear but the design in the actual programming
language is not. It might be that the programming language does
not allow certain constructs that one has in mind. So there is some
time to experiment with the programming language first until one can
put a report on the different attempts into a nice pamphlet.

For 1) it should be better to start immediately with the text and just
turn the pseudo code into actual code. There might be some restructuring
necessary, but it is certainly the easiest task of the three.

I think 2) should be the future way of writing papers that describe
algorithms. Instead of pseudo code there should be a high-level language
that can be used instead of pseudo code. And LP helps here quite a lot.

Case 3) is something that has come up several times for me. We currently
face it in Aldor-Combinat. The theory is clear, it is written in a
book, but how this is translated into Aldor is totally unclear. There
are some ideas, but it seems that for the ideal version the Aldor
language is simply too weak. But to find that out I have to do some
experiments with the language. Sure, I do them without the pamphlet
burden, but in my case (i.e. with ALLPROSE) that only means that my code
chunks are huge and unstructured (similar to current Axiom). In the end
I might throw away a lot of code, but at least I should have an account
of what has been tried out and what turned out to be bad and why. For
future developers that might be important (even the bad and stupid code)
so they save time by not trying the wrong route again. Or they may see
that I falsely classified something as the wrong route. I'm not perfect.

Ralf
Stephen Wilson
2007-06-29 22:48:09 UTC
Post by Ralf Hemmecke
Post by Stephen Wilson
I don't do that myself. I write my code as most other programmers do.
Because you are trained in programming. But what you do is write the
program for yourself, not for other people.
Sorry, I don't understand why you would jump to that conclusion.
Although I have not released a literate document for inclusion in the
project, I certainly have stuff pending. I'm taking my time.

I will be writing for others, including myself. In ten years' time the
code I write today may as well have been written by another.

[...]
Post by Ralf Hemmecke
Anyway, write your code as you like; nobody is forcing you. The only thing
is that one should not forget that LP is not about documenting a
program. It is about documenting an idea where the documentation is
equipped with real code. Human understandable, of course.
I'm pretty sure we are on the same page. I'm not at all certain what it
was about my post that implied otherwise.
Post by Ralf Hemmecke
If that is the final product that you give to the public nobody
cares how it was produced. I have not yet heard of someone who has
good guidelines for how to produce a nice literate document.
Sure. Guidelines are probably a waste of time. As I said, the
challenge is for the individual to figure out what works for them.
Post by Ralf Hemmecke
Additionally, one should also distinguish at least 3 cases.
[...]

Yes, absolutely. There are at _least_ 3 cases. Probably half a dozen
more which any one of us will encounter during our work. One thing is
clear, at least to me: Literate programming does not get in the way
given a little bit of imagination.

Take care,
Steve
William Sit
2007-06-30 10:34:35 UTC
Post by Ralf Hemmecke
[snipped]
I don't quite know whether Tim wants to have both kinds of documentation
in the Axiom project. If I understood correctly, that is the "facets of
the crystal" view. Yes, Axiom should be a big monster which not only
contains code.
It should be a collection of
1 code,
2 API descriptions (this is what the +++ comments are)
3 explanation of
motivations,
tricks that were used in the implementation,
things that are not used in the code (and the reason for it)
4 informal description of why the code is there
5 informal overall description of how some piece of code works
6 formal descriptions of the theory behind the code
7 proofs that the code fulfils its specification (its API)
8 test scripts that tests the code (we don't have automated
program verification yet)
(This list is certainly not complete.)
William (I have not actually looked at your code so excuse me if I am
wrong), what you contributed was/is in the code and API part and maybe
in the test and formal description part (1,2,6,8). But what I think is
very important is also the interconnection between all of this. Can you
point me to some text in current Axiom that explains *why* (4) you have
designed your contribution as it is now?
The code for PLEQN (parametric linear equations) was my
first Axiom code and I must admit that there is room for
improvement (even the variable names there weren't good).
But I did spend some time designing the user interface
(or function interfaces for "psolve"). Bronstein complained
once that there were too many options (overloading psolve),
but he did not say which ones should be deleted. The code
was developed on and for an IBM mainframe when memory was
at a premium (16MB). The code (even the algorithm) is not
efficient (even though it may be theoretically orders of
magnitude better than Gaussian elimination methods -- see
Section 9 of the paper), and that is the reason for the options
to save partial computations to disk. This is possible
because the algorithm is by nature very parallel (which is
also the source of inefficiency; there is an efficiency-related
open problem stated in the paper and so far I have
heard nothing new).

My paper has a long section on implementation issues
(Section 7, although it is not Axiom specific, as required
by the journal referees) and SCRATCHPAD (former Axiom)
examples of using the code (Section 8). If you read these two
sections, you will find it is very close to the ideal of
literate programming (I didn't know what LP was at the
time). Interestingly, my submitted version intertwined the
theory with the code, but the referees wanted me to separate
them! Both the algorithm and the documentation can be improved
to give better results (but not necessarily more efficient ones)
using another package, QALGSET, that I developed about 6 years
later (1998). I have always wanted to rewrite the packages,
but trying to do it taking in all the aspects (1 to 8) above
(of course I meant similar standards in different words) is
a big, big project, even for me. I fell behind when Axiom
changed to A# (the former name of Aldor), some packages
became dead code, and I am even more behind currently. As a
researcher, my immediate concerns were (and still are) to
produce new results (not necessarily related to Axiom), not
to revisit old code or even bring it up to date
whenever the platform changes (Spad to A# to Aldor). But
actually, the day when I may refresh the packages is
getting closer because of applications to differential
equations.
Post by William Sit
I agree that some people are not interested in the code and in the
description of the code, but for developers who maintain code it is very
important to know about design issues. That is a different issue from
the actual mathematical theory and it is important for maintainers of
the code.
Axiom should have everything and it should be able to show to some
person exactly the amount of detail s/he wants to see. If someone is
only interested in the theory, extract something that doesn't show the
implementation details. But some people would not only like to see the
theory, but they want to see a "running theory", they want an
interactive paper where they see the theory and can compute with the
algorithms implemented by the paper. Other people like to see the actual
algorithm in a specific programming language. Some people are not
interested to be distracted by a lot of text and want to concentrate on
more focused details of the implementation. Axiom should be able to
extract all these different forms in nicely human readable formats. All
that should be provided by a (or several) pamphlet(s).
The important thing is that the information is there. I am not so sure
that we already have a good format for what a pamphlet should look like.
This is exactly my objection to the pamphlet format. It
would be easier from the author's viewpoint to create
different files for different uses, with cross-references
among them. It would be much harder to design one single
file that captures all possible views in a coherent way and
can still be unraveled into readable, separate views. To do
the latter you need a lot of "glue",
and the "glue" cannot simply be removed without
disconnecting the flow for individual views. To use a
compiler analogy, you need a lot of "ifdef"s, and this
complicates the creative process as well as the logical flow,
and reduces clarity because logical blocks may be dissected
into small chunks that spread across many pages.
Post by William Sit
I also agree that it is hard to actually write good LP documents. I made
that experience with Aldor-Combinat. Look at it at
http://www.risc.uni-linz.ac.at/people/hemmecke/AldorCombinat/ .
It has a lot of documentation. But I myself would not say that it is a
proper literate program. Try it out, read it. I guess you will not be
able to understand the overall project goals. I have not written that
down, but still the program works. Now, if I required proper LP
documentation, I would never be able to release this code at all. I try
hard to add a lot of design decisions, but all that is still not enough.
And that Aldor-Combinat experience makes me think that it is better to
release code even if it is not properly documented. So in this sense I
do not fully agree with Tim. I find it better if I throw something at
the public and have referees that tell me, hey, there is something wrong,
this and that is not understandable. Then I have a chance to extend
code+documentation at the places where people complain. I am committed to
LP, but I also have only limited time, so it's better if there is really
a community development. I am all for feedback. That should be our
"quality assurance" way of making Axiom incrementally better.
I just skimmed through AldorCombinat and, except for the
theory behind species, your documentation for the code and
usage is quite extensive. I am, however, overwhelmed by the
hundreds of chunks, and occasionally the extra link
information can be distracting (of course it is useful
for debugging and code changes). Just one question: in
Section 8.1, just before the bottom ToDo, the two formulae
at the ends of the lines -- do you mean
$\bigcup_{U \subseteq L,\, U \text{ finite}} \{U\}$ (and similarly for F[U])?

We all have limited time and that is why the priority of the
Axiom project should be to increase the user base. If there are
more users, Axiom will be used at more universities
(especially if a fully functioning Windows version is
available) and even commercial houses (but they don't
"count" in my book); we will have more students who can do
a lot of work such as documentation (writing pamphlets, if
that is the standard) as undergraduate or master's theses.
Doctoral students can develop better models and new and better
algorithms for Axiom's foundation and implement them.

William
Ralf Hemmecke
2007-06-30 12:46:17 UTC
Permalink
Post by William Sit
Post by Ralf Hemmecke
The important thing is that the information is there. I am not so sure
that we already have a good format for what a pamphlet should look like.
This is exactly my objection to the pamphlet format. It
would be easier from the author's viewpoint to create
different files for different uses, with cross-references
among them. It would be much harder to design one single
file that captures all possible views in a coherent way and
can still be unraveled into readable, separate views.
I don't disagree with you. As you might know, I would call a collection
of files that describe some idea/code/design issues by the name
pamphlet. So a pamphlet would be a kind of zip file, similar to what an
OpenOffice file is. But I should rather choose another name in order not
to confuse people with what is currently understood by pamphlet.

It is totally nontrivial to use the same information in different views.
That is a burden for the author and I haven't yet seen a good tool that
helps to break information into such information atoms.
Post by William Sit
I just skimmed through AldorCombinat and, except for the
theory behind species, your documentation for the code and
usage is quite extensive. I am, however, overwhelmed by the
hundreds of chunks, and occasionally the extra link
information can be distracting (of course it is useful
for debugging and code changes).
Yep. I don't claim it is the best that can be done. LP for me is an
experiment. The current form is quite helpful for development, but it is
not linearly human readable, I agree. I am trying to figure out for myself
what LP should look like in daily programming life. So any comment from
outside is welcome.
Post by William Sit
Just one question: in
Section 8.1, just before the bottom ToDo, the two formulae
at the ends of the lines -- do you mean
$\bigcup_{U \subseteq L,\, U \text{ finite}} \{U\}$ (and similarly for F[U])?
Oh, yes. But as you can probably tell from reading it, I need a much better
description. I'd like to formulate that in a categorical way, but I would
need a species to be not a functor F: B->B but an L-indexed something.
If you can think of a way to incorporate the type L business into a
categorical setting, I would be very grateful.
Post by William Sit
We all have limited time and that is why the priority of the
Axiom project should be to increase the user base. If there are
more users, Axiom will be used at more universities
(especially if a fully functioning Windows version is
available) and even commercial houses (but they don't
"count" in my book); we will have more students who can do
a lot of work such as documentation (writing pamphlets, if
that is the standard) as undergraduate or master's theses.
Doctoral students can develop better models and new and better
algorithms for Axiom's foundation and implement them.
I support that view very much.

Ralf
d***@axiom-developer.org
2007-06-28 22:58:57 UTC
Permalink
Ondrej,
Post by Ondrej Certik
yes, I think Axiom should have just one official branch and it should
just work. And that official branch should be the one in Debian...
Camm Maguire did the Debian version. As far as I'm aware Camm is the
only Debian committer on this mailing list. Debian has a whole series
of constraints and rules which make it a challenge to maintain. It is
true that Debian would give Axiom considerably more exposure.

Are you a Debian committer? Can you take up the task of making a
Debian release of Gold?

Tim
Ondrej Certik
2007-06-28 23:12:57 UTC
Permalink
Post by d***@axiom-developer.org
Camm Maguire did the Debian version. As far as I'm aware Camm is the
only Debian committer on this mailing list. Debian has a whole series
of constraints and rules which make it a challenge to maintain. It is
true that Debian would give Axiom considerably more exposure.
Are you a Debian committer? Can you take up the task of making a
Debian release of Gold?
I am not a Debian developer, but I use Debian and I have some packages
in Debian:

http://qa.debian.org/developer.php?login=***@certik.cz

but I always need to find a sponsor (a Debian developer) who uploads
the package for me. Yes, I can update the package, but I don't have
time to sort out bugs and stuff. But if you help me with compiling (I
had problems with compiling the wh-sandbox), I can do that. But then I
need to find a sponsor, which is really hard.

Ondrej
d***@axiom-developer.org
2007-06-28 23:13:13 UTC
Permalink
Ralf,
Tim, in order to help Gaby and Waldek submit patches (which at least
Gaby said he will do), it is very important that they know the svn
revision numbers of the changes that you took out.
*Please post the ranges of SVN revisions* (something like
234:256, 332:341, etc.) that you took from Waldek's or Gaby's branch and
post them to the list.
If you look at the SVN revisions you'll see that the "changesets"
that are posted have two problems. First, a fair portion of the
changesets reference the new Makefiles in addition to changes made
in particular files. Thus, the revision cannot be applied.
Second, "changeset" changes -- that is, clean, complete, single-idea
changes -- do not occur everywhere in the revision list. Thus there are
"mixed changesets" that are partly applied, partly not applied, or partial.

At Gaby's request I reviewed all of the SVN revisions and my changes
and posted a document at
<http://lists.gnu.org/archive/html/axiom-developer/2007-05/msg00320.html>

These changes are arranged by "topic" and would have been complete
changesets in the new git/svn-trunk version but were made prior to that.

The kinds of diff-Naur patches I've asked for from Gaby (autoconf) and
Waldek (hyperdoc) are not represented in any of the changes I made.
Thus there should be minimal collisions.

Tim
d***@axiom-developer.org
2007-06-28 23:24:21 UTC
Permalink
Ondrej,

As noted, Camm both did the prior port and sponsored Axiom.
If we can do the port I believe he can be asked to sponsor it.

Tim
Ondrej Certik
2007-06-29 08:28:58 UTC
Permalink
Post by d***@axiom-developer.org
Ondrej,
As noted, Camm both did the prior port and sponsored Axiom.
If we can do the port I believe he can be asked to sponsor it.
OK. When you merge the repositories and make a release, let me know
which exact version you would like to have in Debian and if Camm
doesn't have time to update the package, I can do it.

Ondrej
Camm Maguire
2007-07-03 20:01:27 UTC
Permalink
Greetings!

Yes, I would be happy to upload a new version. But please be advised
that it is not a light task. We managed to get axiom working on all
12 Debian platforms, a feat which easily could take several months.
So it would be great if there was a clear slow-moving official release
target somewhere.

Of course, it could be argued that portability to all these machines
is not all that important, in which case we can configure the package
accordingly.

Take care,
Post by d***@axiom-developer.org
Ondrej,
As noted, Camm both did the prior port and sponsored Axiom.
If we can do the port I believe he can be asked to sponsor it.
Tim
--
Camm Maguire ***@enhanced.com
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah
Bill Page
2007-07-03 20:27:40 UTC
Permalink
Post by Camm Maguire
Yes, I would be happy to upload a new version. But please be advised
that it is not a light task. We managed to get axiom working on all
12 Debian platforms, a feat which easily could take several months.
So it would be great if there was a clear slow-moving official release
target somewhere.
I am confident that it will be *much* easier to get working versions
of Axiom based on the new build system in the build-improvements and
wh-sandbox branches. Both of these versions use an approach very
similar to the one that you used for Debian.
Post by Camm Maguire
Of course, it could be argued that portability to all these machines
is not all that important, in which case we can configure the package
accordingly.
I think the presence of an up to date version of Axiom on the Debian
platforms would be a very good thing. The wh-sandbox branch has many
critical fixes to both the algebra and to hyperdoc.

Regards,
Bill Page.
Camm Maguire
2007-07-04 17:38:16 UTC
Permalink
Greetings!
Post by Bill Page
Post by Camm Maguire
Yes, I would be happy to upload a new version. But please be advised
that it is not a light task. We managed to get axiom working on all
12 Debian platforms, a feat which easily could take several months.
So it would be great if there was a clear slow-moving official release
target somewhere.
I am confident that it will be *much* easier to get working versions
of Axiom based on the new build system in the build-improvements and
wh-sandbox branches. Both of these versions use an approach very
similar to the one that you used for Debian.
Post by Camm Maguire
Of course, it could be argued that portability to all these machines
is not all that important, in which case we can configure the package
accordingly.
I think the presence of an up to date version of Axiom on the Debian
platforms would be a very good thing. The wh-sandbox branch has many
critical fixes to both the algebra and to hyperdoc.
I would love Tim's and all the other developers' blessing as to which
snapshot of which branch is worthy to represent axiom in Debian. Do
we have consensus yet? I can work with either, new or old.

Take care,
Post by Bill Page
Regards,
Bill Page.
--
Camm Maguire ***@enhanced.com
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah
Ondrej Certik
2007-07-03 21:15:52 UTC
Permalink
Post by Camm Maguire
Of course, it could be argued that portability to all these machines
is not all that important, in which case we can configure the package
accordingly.
I am just curious - isn't it a problem for the Debian build servers
that the package takes 12 hours or even more to build?

Ondrej
Camm Maguire
2007-07-03 21:41:28 UTC
Permalink
Greetings!
Post by Ondrej Certik
Post by Camm Maguire
Of course, it could be argued that portability to all these machines
is not all that important, in which case we can configure the package
accordingly.
I am just curious - isn't it a problem for the Debian build servers
that the package takes 12 hours or even more to build?
Were it only that simple. Typically, many, many failures are required
to get a working build on a lesser known machine. gcl/axiom flushes
out instabilities in gcc, binutils, and several other very low-level
parts of the toolchain. That said, getting all the builds working
flushes out bugs in gcl too.

Witness the issues at present: gcl switch statements apparently
generate bogus jump table assembler on mips, gcc-object-inserted
symbols __divq/__remq on alpha are apparently unwrappable (*), new
GPREL_32 relocs in the new .rodata section needed support on alpha,
new GNU_HASH section types required binutils patches, etc.

(*) As you know, GCL loads object (.o) files into a running image,
which can then be dumped and executed later. It therefore needs the
ability to relocate all symbols in the .o file. Relocating to
addresses in external shared libraries is dangerous, as the lib may
not be in the same place on image restart. 2.6.x had a plt mechanism,
which attempted to force gcc to provide local addresses by compiling
in functions using the addresses of the external functions. This
broke with subsequent gcc developments, so even with the plt,
relocations were being set to shared library addresses on some
machines. Now, 2.7.0 redirects all such calls through a pointer in
the C source which is reset on image startup. __divq and the like,
alas, have no representative in the C source, and cannot be handled
thus. Ideally, we can define a wrapper function and relocate to that,
but this procedure has been problematic with mcount (for example) on
s390 and ppc. (just a taste ... :-)

Take care,
Post by Ondrej Certik
Ondrej
--
Camm Maguire ***@enhanced.com
==========================================================================
"The earth is but one country, and mankind its citizens." -- Baha'u'llah
Ondrej Certik
2007-07-04 19:01:33 UTC
Permalink
Post by Camm Maguire
Were it only that simple. Typically, many, many failures are required
to get a working build on a lesser known machine. gcl/axiom flushes
out instabilities in gcc, binutils, and several other very low-level
parts of the toolchain. That said, getting all the builds working
flushes out bugs in gcl too.
At least the bugs in the toolchain are discovered.
Post by Camm Maguire
Witness the issues at present: gcl switch statements apparently
generate bogus jump table assembler on mips, gcc-object-inserted
symbols __divq/__remq on alpha are apparently unwrappable (*), new
GPREL_32 relocs in the new .rodata section needed support on alpha,
new GNU_HASH section types required binutils patches, etc.
(*) As you know, GCL loads object (.o) files into a running image,
which can then be dumped and executed later. It therefore needs the
ability to relocate all symbols in the .o file. Relocating to
addresses in external shared libraries is dangerous, as the lib may
not be in the same place on image restart. 2.6.x had a plt mechanism,
which attempted to force gcc to provide local addresses by compiling
in functions using the addresses of the external functions. This
broke with subsequent gcc developments, so even with the plt,
relocations were being set to shared library addresses on some
machines. Now, 2.7.0 redirects all such calls through a pointer in
the C source which is reset on image startup. __divq and the like,
alas, have no representative in the C source, and cannot be handled
thus. Ideally, we can define a wrapper function and relocate to that,
but this procedure has been problematic with mcount (for example) on
s390 and ppc. (just a taste ... :-)
So the problem is actually in GCL, that it is still not stable enough
to do some serious work on top of it?

I am just surprised how many problems there are just with compiling.
I myself use Python/C/C++/Fortran and have never experienced any
problems like that. So maybe Lisp is not a mature platform for
larger projects? My naive opinion is that one should try to stick to
ways of building programs that everyone uses, so that such low-level bugs
are already fixed.

Ondrej

d***@axiom-developer.org
2007-06-29 06:27:46 UTC
Permalink
William,

I apologize that I've been sarcastic to you in public. As you know
from personal experience I hold you and your opinions in high esteem.
Please accept my apology.

Tim
William Sit
2007-06-29 19:44:08 UTC
Permalink
Post by d***@axiom-developer.org
William,
I apologize that I've been sarcastic to you in public. As you know
from personal experience I hold you and your opinions in high esteem.
Please accept my apology.
Tim
A bit of sarcasm was exactly what was needed to bring the
issues out! But be careful the next time you use it. Apology
(not needed) accepted. Thanks for your kind comment above.

William
d***@axiom-developer.org
2007-06-29 08:11:09 UTC
Permalink
William,
Post by William Sit
My parametric linear equation package IS documented: in a published
paper in the Journal of Symbolic computation, and in the source code.
It is not in a pamphlet style or what you call "literate with Knuth
technology.
You have spent time to convert my IBM script to LaTeX, in the hope
I will expand it to a pamphlet. (and I haven't).
Yes, I spent approximately 3 weeks of evenings recasting your paper
into LaTeX, hoping to lower your "personal cost" of contributing it
and its ideas as documentation. Despite that effort you still objected.
That's fine. It is your choice to contribute or not. I didn't demand
that you do; I simply tried to make it straightforward. I've done
other "behind the scenes" work, including rewriting Barry's thesis,
which has yet to bear fruit (due to lack of time; Barry has quite
generously agreed to let his work be used in pamphlets).
Post by William Sit
But have you tried to read the paper? If you did not (and I believe
you did not) then what good would a pamphlet do?
A pamphlet form would document the ideas for the future users of Axiom
in a form we can use. There are thousands of mathematics textbooks but
not many that explain the theory and the computational mathematics in
the context of a particular implementation and its choices. A pamphlet
is not just a copy of the paper. It should explain the "why" of the
code in the context of the theory. We need to know not just the theory
but how specific Axiom code implements it, what its design
constraints are, what its limitations are, etc.

I did read your paper, in detail, while converting it. But my
primary focus at the time was turning it into LaTeX. The effort was
not for me but for people who might need to understand your code.

If my goal had been to document your code rather than recast your paper I
would have spent considerably more time trying to understand the
mathematics. For instance, I documented dhmatrix by understanding how
the matrices work. I helped Scott use them as the basis for the pictures in
the Jenks book. I just started documenting the quaternions (see the
new quat.spad). You'll recall I sent you some private links related to
the theory of exterior and geometric algebras that I found during the
course of understanding quaternions. I'm still studying those papers
for the mathematics and hope to expand the quaternion and octonion
domains in a more general setting. Indeed, if I could find the time,
there are some wonderful ideas that need to be reduced to working
code. We've had discussions about your new algebra work, which I
believe I do understand -- but not to the level of being able to reduce
the ideas to code. You do, and you can.

The point I'm trying to make is that I believe I could have understood
the beginnings of your algebra on differential polynomials from a pamphlet
you wrote based on your paper. But my time was spent "on making the
machinery work", which is needed, rather than on understanding
the algebra to the point where I could document it (which would have
been a pleasant luxury). You're clearly much more qualified to explain
your own work.

That said, you have recently agreed to allow me to use the paper I
converted as the basis for a pamphlet form of your work. Since you
disagree so strongly about making pamphlets perhaps we can compromise.
I'll make an effort to understand your algebra and an effort to write
a pamphlet form. All I ask is that you have the patience to review it,
explain what I don't understand, and correct the nonsense. Perhaps when
you've seen it done you'll be more convinced of the power of literate
programming.
Post by William Sit
I respect your philosophy of literate programming, and I only ask you
to repect that there are other ways to make code (and theory)
understandable.
Of course there are other ways to make code understandable. But other
code, e.g. Microsoft Word, won't give the same answers in 100 years.
So there is a qualitative difference between computational mathematics
and any other code being written.

Almost all commercial code dies. All of the projects I've worked on
have died "in the belly of the corporation". I believe that this fate
awaits Mathematica and Maple, a topic which we've discussed. It
certainly happened to Macsyma and would have happened to Scratchpad,
but for the very generous efforts of Mike Dewar and other people at
NAG. I've been trying hard to get Derive put into a "dead code safe",
hoping that TI will agree to release the code if they lose interest in
it.

But getting the code is not enough. I've talked to the developers of
Mathematica and Maple. The kernel code of those systems is very, very
poorly documented, by the programmers' own estimates. Yet I've attended ISSAC
talks about the super-speed numerics at the heart of those systems. I do
not know, but I'd bet that the only existing documentation will be the
few pages of text from such a paper. Due to publication constraints an
ISSAC paper cannot address any real details about the code, the choice
of data structures, or the relation of that code to the theory. That
would take a pamphlet file.

The point is that if Mathematica or Maple dies, and if the failing
company were to release the code, it is unlikely that the code would be
brought back to life without the involvement of the original
developer(s). (Of course code is now considered "intellectual
property" (a non-legal concept, but...) and is considered a real asset
of the company. So if a company should go bankrupt they cannot "give
away" the code. They need to sell it to recover the asset's value. I
was quoted a price of $250k for the source code for Macsyma.
I don't have that kind of money.) Yet even in the unlikely
circumstance that they do give it away, you'll have a million lines of
C code with clearly chosen variable names and no documentation.

Thus we are faced with a choice. Do we let systems simply die and then
invest the thousands of man-years of work to build new ones, so the cycle
repeats? Or do we invest in the code and try to make it "live" in a
way that lets new users learn about the concepts and their specific
implementation? If we do that, it requires the effort of the original
developers to communicate with future generations in specific detail.
Post by William Sit
Is literate programming the panacea?
Would pamphlets help? We do not know for sure. But I firmly believe
they will and this project is an experiment on that thesis, among
other goals.

So while there are many ways to write clear code (e.g. variable name
choices) and various ways of documenting (ISSAC papers, textbooks,
specification documents, Rational relation diagrams, ...) I believe
that Knuth "got it right". He produced high-quality, nearly bug-free
code with deep documentation. He placed his focus on writing for
people rather than writing for the machine. And his program, TeX, has
outlasted many other similar tools, even those backed by big money
like IBM script.
Post by William Sit
As Donnie Brasco said in the namesake movie, let's "forget about it".
Drop the debate but not the fundamental goal. There really is a
guiding philosophy behind the choices that informs decisions about the
quality of a proposed direction.




All that said, I again apologize for being sarcastic. When the
fundamentals of the project are being questioned I need to be quite
clear when a debate misses the point. "Reductio ad sarcasm" is not a
proper way to debate and I'm sorry that I fell to that level. I hold
you in high personal esteem and I regret that mistake. Mea culpa.

Tim
William Sit
2007-06-29 20:11:30 UTC
Permalink
***@axiom-developer.org wrote:
[snipped]
Post by d***@axiom-developer.org
That said, you have recently agreed to allow me to use the paper I
converted as the basis for a pamphlet form of your work. Since you
disagree so strongly about making pamphlets perhaps we can compromise.
I'll make an effort to understand your algebra and an effort to write
a pamphlet form. All I ask is that you have the patience to review it,
explain what I don't understand, and correct the nonsense. Perhaps when
you've seen it done you'll be more convinced of the power of literate
programming.
The ability to review a pamphlet depends on familiarity with
that format, which I don't have. But I'll answer any
questions you may have on the paper or code. In fact, if you
(or anyone else) send me questions, I'll answer them and
maybe we can then piece together some form of documentation
satisfactory to all concerned. I think MathAction Wiki pages
may be one medium in which to start this.

There is one problem: I do not know whether copyright (held by the
Journal of Symbolic Computation) may be violated if large
chunks of my paper are reproduced on the web. It is fair use
to send a copy to an individual who asks me, but I believe any
mass distribution would require prior copyright clearance
(the same legal reason why a library cannot distribute
printed copies of articles in a pile, say for use in a
course, but may make an individual copy for anyone who
asks).

William
C Y
2007-06-29 21:39:48 UTC
Permalink
Post by William Sit
There is one problem: I do not know whether copyright (by
Journal of Symbolic Computation) may be violated if large
chunks of my paper is reproduced on the web.
Hard to say - I think this is the place to start:
http://www.elsevier.com/wps/find/supportfaq.cws_home/copyright

One item on the "rights the author retains" list gives hope:

"the right to prepare other derivative works, to extend the article
into book-length form, or to otherwise re-use portions or excerpts in
other works, with full acknowledgement of its original publication in
the journal."

Whether pamphlets qualify is probably a question for a lawyer.

I'm assuming anyone other than the original author has no special
rights, period.

This is why I think the logical approach to take for pamphlets on
subjects where we have no legal right to the original source material
is to write our own review paper in the process, outlining the key
points and weaving the original papers' ideas together into a whole
(which is also the point of the CAS code, after all - at least for the
well-established mathematical work that will probably form the focus of
most of our literate efforts for the first few years.) What to do
about original work without a larger body of literature is somewhat less
clear, although I think in most cases an article appropriate for
inclusion as a pamphlet will have to be slightly different from a
typical research article. (More background, context, etc.)

Hopefully, we can eventually make Axiom a driver for free availability
of new publications in mathematical research via some sort of Axiom
journal. If the goal is truly to spread knowledge and learning, expensive
commercial journals and their per-article or subscription fees present
a barrier to that goal. (Certainly I feel it, not being at a major
university - there are ways, but especially for older papers tracking
them down can be extremely difficult. We want knowledge to be easily
accessible. It's hard enough to get people to want to learn - why make
it any harder when they actually do try to learn?) I view this as a
secondary goal of pamphlets - if Axiom is structured correctly, the
pamphlets should eventually constitute a very high quality, complete
description of the mathematical landscape that is freely available to
everyone (and which just incidentally happens to have running CAS code to let
you immediately apply those same ideas).

I think the Axiom project might be a bit like the Free Software
Foundation in that respect - to me at least it's about more than just a
working CAS. It's about changing the landscape itself. Not replacing
the academic institutions and their work as they exist today, but
making them more visible and more readily applicable to the rest of the
world. That's a more ambitious project than just a working CAS, but
the potential rewards are even greater.

The analogy I have always liked is the advancement of transportation.
Take traveling west in the US, for example - people started doing it in
covered wagons because that was quicker and easier for them than
building anything better. But soon, people built railways that
dramatically improved just about everything where travel was concerned.
It made all sorts of things possible that were impossible before.
Same with the US highway system - two-lane roads will get you there,
but superhighways will do it much faster. For an
individual car, it makes more sense to use what is already there. When
many people rely on something, it's worth doing right even at the
expense of greater up-front cost and work. Hopefully Axiom will prove
to be an enabler for new types of mathematical research and new
levels of rigor and speed. That's worth doing right, even if we have
to spend the time to build the infrastructure that makes it possible first.

Cheers,
CY


William Sit
2007-07-01 20:22:04 UTC
Permalink
Post by C Y
Whether pamphlets qualify is probably a question for a lawyer.
I'm assuming anyone other than the original author has no special
rights, period.
A question for a lawyer means a question for the courts? If the
authors have the "residual" rights you mentioned,
wouldn't it be logical that the authors can assign such
"residual" rights to another entity?
Post by C Y
This is why I think the logical approach to take for pamphlets on
subjects where we have no legal right to the original source material
is to write our own review paper in the process, outlining the key
points and weaving the original papers' ideas together into a whole
(which is also the point of the CAS code, after all - at least for the
well-established mathematical work that will probably form the focus of
most of our literate efforts for the first few years.) What to do
about original work without a larger body of literature is somewhat less
clear, although I think in most cases an article appropriate for
inclusion as a pamphlet will have to be slightly different from a
typical research article. (More background, context, etc.)
Writing new material in survey form is certainly alright
(this already is a huge undertaking and will require
constant updating), especially if no literate articles are
available. But overviews and surveys, such as those at Wikipedia
or the Mathematica sites, are, I'm afraid, not what Tim has in
mind. He wants lock, stock, and barrel (the whole shebang) *in
one pamphlet*, so that one need not hunt for obscure outside
articles or be an expert in the field to follow, maintain
and improve the code. (That's why the COMBINAT code does not
yet pass muster.) That, I think, is kind of an oxymoron and
an unattainable ideal (try reading any new algorithm in
symbolic computation and it is an infinite descent if you
are not already an expert *in the particular problem the
algorithm solves*. COMBINAT comes to mind.) Every pamphlet
would eventually be a pretty thick book, even if there is
plenty of literature covering the topic (nothing
fundamentally wrong with that except for lack of
author-power and stretching the 30-year horizon). Perhaps
Gaby's "incremental improvement" idea may work better, but
that is exactly what we have been debating about. I prefer
Ralf's pragmatic approach: get the code working and
stabilized, add documentation at places where users or
developers find it lacking in detail (that is, let
documentation be "demand driven" rather than "supply
driven").
Post by C Y
Hopefully, we can eventually make Axiom a driver for free availability
of new publications in mathematical research via some sort of Axiom
journal. If the goal is truly to spread knowledge and learning, expensive
commercial journals and their per-article or subscription fees present
a barrier to that goal. (Certainly I feel it, not being at a major
university - there are ways, but especially for older papers tracking
them down can be extremely difficult. We want knowledge to be easily
accessible. It's hard enough to get people to want to learn - why make
it any harder when they actually do try to learn?) I view this as a
secondary goal of pamphlets - if Axiom is structured correctly, the
pamphlets should eventually constitute a very high quality, complete
description of the mathematical landscape that is freely available to
everyone (and which just incidentally happens to have running CAS code to let
you immediately apply those same ideas).
I agree with the goal, just not the means. Moreover, for
that vision to be realized, the prerequisite is a very large
user base. In its earlier days, SCRATCHPAD II had a fairly
respectable user base, contributing to hundreds of algebra
packages (the number of build developers was about the same
as we currently have: a handful). We should perhaps
investigate why so many have abandoned Axiom and why there are
so few new users. Documentation may be one reason, but history
suggests otherwise. Price may be another historic reason, but now it
is free. A steep learning curve? That did not stop the
earlier contributors, and surely with so much more support
now, the learning curve should be less steep.
Non-functioning hyperdoc? But hyperdoc was fully working in
the NAG days. Aldor? Well, Aldor is not faring that well
either. Perhaps the reasons are external to Axiom. Fewer
mathematicians or computer science doctorates? (that seems
to be the trend). Fewer people interested in computation?
(that can't be true! but the entry level has certainly
gotten higher). The National Science Foundation (U.S.) no longer
supports Axiom? (that's it!) Any non-U.S. government
stepping in? Would forming a non-profit, tax-exempt
organization be able to raise sufficient funds for
supporting graduate students? -- that's about $30,000 per
student per year. We need half a million at 6% interest for
one.

Next question: what can we do to increase the user base? (Let's
hear your ideas.)
Post by C Y
I think the Axiom project might be a bit like the Free Software
Foundation in that respect - to me at least it's about more than just a
working CAS. It's about changing the landscape itself. Not replacing
the academic institutions and their work as they exist today, but
making them more visible and more readily applicable to the rest of the
world. That's a more ambitious project than just a working CAS, but
the potential rewards are even greater.
Great vision! Would you outline some plans and actions? (I
share your view that the journals and publishers, in the math
and CS areas at least, are exploiting academics: authors and
researchers do all the work (writing, reviewing, editing,
proofreading) and get *nothing* other than a bibliography
item.)
Post by C Y
The analogy I have always liked is the advancement of transportation.
Take traveling west in the US, for example - people started doing it in
covered wagons because that was quicker and easier for them than
building anything better. But soon, people built railways that
dramatically improved just about everything where travel was concerned.
It made all sorts of things possible that were impossible before.
Same with the US highway system - two-lane roads will get you there,
but superhighways will do it much faster. For an
individual car, it makes more sense to use what is already there. When
many people rely on something, it's worth doing right even at the
expense of greater up-front cost and work. Hopefully Axiom will prove
to be an enabler for new types of mathematical research and new
levels of rigor and speed. That's worth doing right, even if we have
to spend the time to build the infrastructure that makes it possible first.
Note that the building of superhighways was historically
demand-driven (national security, commerce, mobility, and
lots of drivers).


William
C Y
2007-06-29 22:08:44 UTC
Permalink
Post by Stephen Wilson
The build environment is not an issue for me at all. If there is
complication, it is due to the fact that there is no clean
integration between the tools required to both build the system and
extract the code/document bits. This is one reason why I am excited
about asdf-literate, as it should significantly simplify the build
as it understands the model. We need the proper tools for the job,
and if they do not yet exist, we can create them.
Bingo. None of what we're doing with pamphlets or compiling the system
is fundamentally hard (except maybe the def* parsing, but that can come
later) and it's just a question of teaching the tools to deal with it
correctly.
Post by Stephen Wilson
I don't do that myself. I write my code as most other programmers do.
I write it and use traditional comments as a form of documentation.
I do most of my coding in Lisp, which for the most part does not care
about things like the order of definitions. So I can code and
document and build towards a literate document. It's not yet a
pamphlet, and the comments lack LaTeX markup, but it's a form of a
literate document. It is no different than any other piece of code
written by any other programmer, except that the comments might seem
a tad verbose.
I think CFFI's comments are like that, come to think of it...
Post by Stephen Wilson
Once I have polished a file, once it is looking good and stable and
I'm relatively sure it is up to snuff, it is pretty straightforward
to convert it to a pamphlet. I don't resent the need to convert the
code. I view it as an opportunity to audit and re-check -- something
you can never do enough of.
Amen.
Post by Stephen Wilson
So the process of writing a pamphlet file is something of a support
structure for writing nice code -- something you want anyway before
you even think about pushing a change out to the mainstream.
Of course, others might write literately from the beginning, but
that's not how I do it. I would find that approach to be restrictive
and unproductive. But that's just me. Fortunately, nobody is
telling me how I should go about it.
Right. The way I do it is close to that, but because I usually lack
the domain knowledge to just start writing code I will begin by writing
the background parts of the paper as a way to educate myself. (The
Units and Dimensions draft is an example, not yet finished.) Once I
understand it well enough to have an idea of what should be done, I
will fire up sbcl and start poking around, trying things and figuring
out how the pieces I am going to need should be done.
Bill Page
2007-06-29 22:33:44 UTC
Permalink
Post by C Y
Post by Stephen Wilson
The build environment is not an issue for me at all. If there is
complication, it is due to the fact that there is no clean
integration between the tools required to both build the system and
extract the code/document bits. This is one reason why I am excited
about asdf-literate, as it should significantly simplify the build
as it understands the model. We need the proper tools for the job,
and if they do not yet exist, we can create them.
Bingo. None of what we're doing with pamphlets or compiling the system
is fundamentally hard (except maybe the def* parsing, but that can come
later) and it's just a question of teaching the tools to deal with it
correctly.
...
I very strongly disagree. I do not think the AXIOM project should be
in the business of building literate programming tools. And I am
rather surprised that Stephen should think so since he is also against
the idea of building and maintaining intermediate tools like BOOT that
are much more intimately related to AXIOM than literate programming
tools.

I think one of the great advantages of open source is the ability to
build freely on the work of other open source projects. Tim had the
right idea (but the wrong tool) when he decided to use noweb for
literate programming in Axiom. Re-writing such things in Lisp is just
a diversion away from the real point of the Axiom project (at least
what the Axiom project should be). I cannot imagine that spending time
extending asdf to understand pamphlet format will be anything but a
similar diversion. The result of all this effort is just to build a
bigger ghetto in which Axiom will eventually die... :-(

Regards,
Bill Page.
Stephen Wilson
2007-06-29 23:03:15 UTC
Permalink
Post by Bill Page
I very strongly disagree. I do not think the AXIOM project should be
in the business of building literate programming tools. And I am
rather surprised that Stephen should think so since he is also against
the idea of building and maintaining intermediate tools like BOOT that
are much more intimately related to AXIOM than literate programming
tools.
Ok, no problem. Besides, Axiom is not in the business of doing
anything. It is the sum total of the efforts and skill of the
individual contributors. If I have a problem which I need a tool to
solve, and if no one else is already pursuing a solution, I'll create
it myself.

It just so happens that Boot doesn't solve anything for me. But that's
another, completely unrelated, issue.
Post by Bill Page
I think one of the great advantages of open source is the ability to
build freely on the work of other open source projects. Tim had the
right idea (but the wrong tool) when he decided to use noweb for
literate programming in Axiom. Re-writing such things in Lisp is just
a diversion away from the real point of the Axiom project (at least
what the Axiom project should be).
No one, to my knowledge, is rewriting noweb or some equivalent in Lisp.
Post by Bill Page
I cannot imagine that spending time extending asdf to understand
pamphlet format will be anything but a similar diversion. The result
of all this effort is just to build a bigger ghetto in which Axiom
will eventually die... :-(
Not sure what to say about that. There are technical reasons why asdf
is a reasonable direction to pursue. I have no idea what you mean by
`ghetto'.


Sincerely,
Steve
C Y
2007-06-29 23:14:18 UTC
Permalink
Post by Bill Page
I very strongly disagree. I do not think the AXIOM project should be
in the business of building literate programming tools. And I am
rather surprised that Stephen should think so since he is also
against the idea of building and maintaining intermediate tools
like BOOT that are much more intimately related to AXIOM than
literate programming tools.
Um. I would argue that literate programming tools are intimately
related to Axiom - Axiom may end up pushing literate programming in
directions no one has really gone before, depending on where the work
leads. (I'll avoid the BOOT question - the archives are full of that
debate.)
Post by Bill Page
I think one of the great advantages of open source is the ability to
build freely on the work of other open source projects. Tim had the
right idea (but the wrong tool) when he decided to use noweb for
literate programming in Axiom. Re-writing such things in Lisp is just
a diversion away from the real point of the Axiom project (at least
what the Axiom project should be).
I know my views about reducing the dependency tree are extreme, so I'll
just state that I find it an interesting direction to pursue - I like
working in Lisp and prefer to have Axiom rely only on Lisp, from a
long-term viability standpoint.
C Y
2007-06-29 23:25:43 UTC
Permalink
Post by Stephen Wilson
No one, to my knowledge, is rewriting noweb or some equivalent in
Lisp.
He's referring to cl-web, I believe. That can be viewed as a
functional replacement for noweb in Lisp, although it does not support
all of noweb's features. It is a requirement for asdf-literate to be
able to handle pamphlet files. In theory I could write a Lisp routine
that calls noweb every time (I have a test script I used for
comparisons which did something of the sort), but it seemed to me that
the better solution for portability and simplicity within the Lisp
environment was to have the abilities native. As a bonus, Waldek
provided a finite-state-based solution that appears to be faster at the
tangle operation than noweb - but that's not the primary benefit for
me.
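
(Just to sketch the "call noweb from Lisp" idea -- the following is a toy
example, not what cl-web or asdf-literate actually do. The function and
chunk names are made up, and it assumes SBCL with notangle on the PATH:

    ;; Toy sketch: shell out to the external notangle tool to extract a
    ;; named chunk from a pamphlet file and write it to OUTPUT-FILE.
    (defun tangle-chunk (pamphlet chunk output-file)
      (with-open-file (out output-file :direction :output
                                       :if-exists :supersede)
        (sb-ext:run-program "notangle"
                            (list (format nil "-R~A" chunk)
                                  (namestring pamphlet))
                            :search t        ; find notangle on the PATH
                            :output out)))

    ;; e.g. (tangle-chunk "quat.spad.pamphlet" "package QUAT" "quat.spad")

Something like that is easy enough, but it still leaves you depending on an
external C program, which is exactly what I wanted to avoid.)
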
Post by Stephen Wilson
Post by Bill Page
I cannot imagine that spending time extending asdf to understand
pamphlet format will be anything but a similar diversion. The result
of all this effort is just to build a bigger ghetto in which Axiom
will eventually die... :-(
Not sure what to say about that. There are technical reasons why
asdf is a reasonable direction to pursue. I have no idea what you
mean by `ghetto'.
If I understand correctly from previous posts, in this context "ghetto"
is being used to describe a large body of tools that are divorced from
mainstream directions being taken by the open source community.
Centering on Lisp already triggers some of those complaints, and Bill's
concern (if I understand correctly) is that if we home-grow too much we
will end up not being able to grow with the open source world and be
left behind with a bunch of non-standard tools no one wants to take the
time to understand.

Obviously I disagree that this is what will happen - Lisp I don't
regard as a ghetto and ASDF is the standard solution within the Lisp
world. It seems to be well designed and flexible. And the goal is to
develop tools such that given a working Lisp environment users and
developers will be able to focus on the Algebra without worrying about
the underlying tools. If they MUST work with them, I would like them
to be literate all the way down - no dark corners to get into trouble
with. But that's again just me.

Cheers,
CY



Stephen Wilson
2007-06-30 00:13:40 UTC
Permalink
Post by C Y
Post by Stephen Wilson
No one, to my knowledge, is rewriting noweb or some equivalent in
Lisp.
He's referring to cl-web, I believe.
Duh, of course. Sorry Cliff! Temporary blackout, I believe.
Post by C Y
If I understand correctly from previous posts, in this context "ghetto"
is being used to describe a large body of tools that are divorced from
mainstream directions being taken by the open source community.
Centering on Lisp already triggers some of those complaints, and Bill's
concern (if I understand correctly) is that if we home-grow too much we
will end up not being able to grow with the open source world and be
left behind with a bunch of non-standard tools no one wants to take the
time to understand.
Ok. I recall similar notions. New tools that solve new problems are
always non-standard by definition. I just never connected `ghetto'
with that perspective.
Post by C Y
Obviously I disagree that this is what will happen - Lisp I don't
regard as a ghetto and ASDF is the standard solution within the Lisp
world. It seems to be well designed and flexible. And the goal is to
develop tools such that given a working Lisp environment users and
developers will be able to focus on the Algebra without worrying about
the underlying tools. If they MUST work with them, I would like them
to be literate all the way down - no dark corners to get into trouble
with. But that's again just me.
You're certainly not alone :)


Cheers,
Steve
Bill Page
2007-06-30 04:52:06 UTC
Permalink
Post by Stephen Wilson
...
Post by C Y
If I understand correctly from previous posts, in this context "ghetto"
is being used to describe a large body of tools that are divorced from
mainstream directions being taken by the open source community.
Centering on Lisp already triggers some of those complaints, and Bill's
concern (if I understand correctly) is that if we home-grow too much we
will end up not being able to grow with the open source world and be
left behind with a bunch of non-standard tools no one wants to take the
time to understand.
Thanks for carrying my side of the conversation for me, Cliff. :-)
I've had my mind on other things for the last few hours... Yes, you
state very clearly and exactly my concerns.
Post by Stephen Wilson
Ok. I recall similar notions. New tools that solve new problems are
always non-standard by definition. I just never connected `ghetto'
with that perspective.
What "new problems" are you referring to here? I do not see any new
problems in this part of the Axiom project. The methodologies that we
are talking about here have been around nearly as long as Axiom
itself.
Post by Stephen Wilson
Post by C Y
Obviously I disagree that this is what will happen - Lisp I don't
regard as a ghetto and ASDF is the standard solution within the Lisp
world. It seems to be well designed and flexible.
I have a very great respect for Lisp, but I wonder how you can look
around the web and at the type and number of programs that have been
written in the last 10 years and not think that Lisp is essentially
already a ghetto (as you have so clearly and accurately defined it).
We might wish that that was not true, but all the evidence is clearly
there. Ask the people who teach computer science at university what
they think of Lisp. Ask them what their students think of Lisp. Gaby,
for example, has already said what the reaction of his colleagues was
to the fact that he has been spending so much time on an "old Lisp"
system like Axiom... When I mentioned Lisp to a room full of
enthusiastic Sage developers you should have seen the "tolerant
amusement" on the faces of those under 25 in the crowd. Man, did that
make me feel old... :-(

What does ASDF do that 'make' and other parts of the existing build
system do not already do?
Post by Stephen Wilson
Post by C Y
And the goal is to develop tools such that given a working Lisp
environment users and developers will be able to focus on the Algebra
without worrying about the underlying tools. If they MUST work with
them, I would like them to be literate all the way down - no dark corners
to get into trouble with. But that's again just me.
That part I completely agree with but I fail to see why that requires
doing the things that you and Stephen are proposing.
Post by Stephen Wilson
You're certainly not alone :)
That's what I like about the web! ;-)

Regards,
Bill Page.
Stephen Wilson
2007-06-30 06:26:04 UTC
Permalink
Hello Bill,
Post by Bill Page
Post by Stephen Wilson
...
Post by C Y
If I understand correctly from previous posts, in this context "ghetto"
is being used to describe a large body of tools that are divorced from
mainstream directions being taken by the open source community.
Centering on Lisp already triggers some of those complaints, and Bill's
concern (if I understand correctly) is that if we home-grow too much we
will end up not being able to grow with the open source world and be
left behind with a bunch of non-standard tools no one wants to take the
time to understand.
Thanks for carrying my side of the conversation for me, Cliff. :-)
I've had my mind on other things for the last few hours... Yes, you
state very clearly and exactly my concerns.
I'm glad Cliff reflected your thoughts accurately as well.
Post by Bill Page
Post by Stephen Wilson
Ok. I recall similar notions. New tools that solve new problems are
always non-standard by definition. I just never connected `ghetto'
with that perspective.
What "new problems" are you referring to here? I do not see any new
problems in this part of the Axiom project. The methodologies that we
are talking about here have been around nearly as long as Axiom
itself.
In some ways this is true. But take your own concern about a
complicated build environment as an example. There is no reason for
it.

I am skipping the next segment of your post because I do not feel
there is a need to promote or justify Lisp. I really would like to
avoid a language war.

[...]
Post by Bill Page
What does ASDF do that 'make' and other parts of the existing build
system do not already do?
There are several things.

Make has a specific attitude, if you can call it that, about what it
takes to build a system. It is very much rooted in the requirements
of C and similarly compiled software. In short, it invokes a compiler
repeatedly over a list of files, topologically sorted w.r.t. a DAG of
dependencies, and finally links an executable.

Lisp is different. You still need to consider dependencies, but you do
not need the iterative process and the final `link' stage to get a
working system. Everything can be done dynamically while the system
is running. For example, I use ASDF in the Axisp repo for all new
Lisp code. I can edit a piece of code in one Emacs buffer, and have a
running Axiom system in another. When I have made a change, I do the
equivalent of:

(1) -> )lisp (asdf:oos 'asdf:load-op :axisp)

In reality, I have this command bound to a key, to save typing. The
end result is a running Axiom with all dependencies tracked and
reloaded, without having to restart the system.
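
For illustration, a system definition for :axisp might look roughly
like the following sketch (the file names are invented for the example
and are not the actual contents of the Axisp repo):

  (asdf:defsystem :axisp
    :components ((:file "packages")
                 (:file "macros"  :depends-on ("packages"))
                 (:file "algebra" :depends-on ("macros"))))

Given such a definition, the load-op above recompiles and reloads
whatever has changed (and anything depending on it), in dependency
order, inside the running image.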

Think about the consequences. Already, there exists the basic
machinery to drag-and-drop a pamphlet file into a running system and
have it instantly available. The implications are far from being
limited to a simple convenience feature useful only to developers.


Another example, if you would bear with me. The iterative process of
compilation does not jibe well with a Lisp-based system. Consider the
algebra build, where we repeatedly invoke the Lisp system to compile
each and every file independently. This is necessary for C, but not
for Lisp. We could trivially define the dependencies in ASDF and
build the algebra in one shot. This is a simple-minded application
which would easily cut 15 minutes or more off the build time. You save,
at a minimum, the repeated autoloading of the code which implements
Axiom's compiler. Of course you could build a custom image which has
everything preloaded and save some time, but to me that is a gross
hack, and you lose the benefits which I alluded to above.
Furthermore, if you were to do something similar from within make you
would be reimplementing a poor man's ASDF, so why bother?
There are too many advantages to using ASDF directly.
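
To make that concrete, a sketch along the same lines as the :axisp
example above (the system and file names here are hypothetical, and
the real algebra involves far more files and dependencies):

  (asdf:defsystem :axiom-algebra
    :components ((:file "boolean")
                 (:file "integer" :depends-on ("boolean"))))

  ;; One call, one Lisp image: every file is compiled and loaded in
  ;; dependency order, so Axiom's compiler is autoloaded once rather
  ;; than once per file as with the current make-driven build.
  (asdf:oos 'asdf:load-op :axiom-algebra)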

I gave the above examples first as I feel they are the most `user
visible'. Certainly others can expand on the theme.

However, for the sake of completeness, one needs to consider the
design of ASDF itself -- what it means for developers. This is
Lisp-specific, so I won't go into great detail. But understand that it is
defined via CLOS, the Common Lisp Object System. The interface is
designed such that all entities over which the system operates are
objects (classes), and all functionality is exported as a method.
This is totally foreign to make and friends.

The system is intrinsically extensible, via subclassing and
specialization. Thus, if you want to dynamically tangle and weave a
pamphlet file, you can do so with a minimum of effort. No need to
grok make, shell scripts, sed regular expressions, M4 macros, etc.,
etc. You, of all people, should appreciate the benefits. It is a
mostly environment-neutral way of specifying such a process. It is,
for all practical purposes, an instantly portable solution to an
exceedingly wide variety of problems.
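
Just to sketch what that might look like (the pamphlet-file class and
the tangle-pamphlet helper are made-up names for this example, not
existing code), teaching ASDF about pamphlets could be as small as:

  ;; A new component class: a noweb pamphlet containing Lisp code.
  (defclass pamphlet-file (asdf:source-file) ())

  ;; Specialize compile and load so ASDF knows what to do with it.
  ;; `tangle-pamphlet' is a hypothetical helper that extracts the Lisp
  ;; chunks and returns the pathname of the tangled source.
  (defmethod asdf:perform ((op asdf:compile-op) (c pamphlet-file))
    (compile-file (tangle-pamphlet (asdf:component-pathname c))))

  (defmethod asdf:perform ((op asdf:load-op) (c pamphlet-file))
    (load (tangle-pamphlet (asdf:component-pathname c))))

No make rules, no shell, no sed -- just a class and two methods.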
Post by Bill Page
Post by Stephen Wilson
Post by C Y
And the goal is to develop tools such that given a working Lisp
environment users and developers will be able to focus on the Algebra
without worrying about the underlying tools. If they MUST work with
them, I would like them to be literate all the way down - no dark corners
to get into trouble with. But that's again just me.
That part I completely agree with but I fail to see why that requires
doing the things that you and Stephen are proposing.
It's not a requirement. There is nothing fundamental holding back
other approaches. But I truly do feel that this is a reasonable and
pragmatic approach, even given the pie-in-the-sky ideal we would all
like to see realized.
Post by Bill Page
Post by Stephen Wilson
You're certainly not alone :)
That's what I like about the web! ;-)
Indeed!


Take care,
Steve
Ralf Hemmecke
2007-06-30 07:21:55 UTC
Permalink
BTW, has someone thought about where we would be now if NAG had given
Axiom (including an autoconf build environment) to Tim?

We would have had something like BI, but probably totally undocumented.
Why can't we take that position now? I am sure that if the tests work
out fine, there is no reason not to quickly move to autoconf.

I am waiting for Gaby to submit a patch. I think that will be a better
road than simply switching to wh-sandbox and making some people angry.
We should stay together as a community. Please.

If Axiom were as well known as Linux, then we could become much more
restrictive about which patches go into Axiom. Then it would be time to
really "force" people to submit LP patches. But right now we should
make Axiom attractive!!!

Ralf
Stephen Wilson
2007-06-30 08:10:51 UTC
Permalink
Hi Ralf,
Post by Ralf Hemmecke
BTW, has someone thought about where we would be now if NAG had given
Axiom (including an autoconf build environment) to Tim?
We would have had something like BI, but probably totally undocumented.
Why can't we take that position now? I am sure that if the tests work
out fine, there is no reason not to quickly move to autoconf.
I really have no objection to the use of autoconf. There is a huge
investment of knowledge encapsulated in that system which we would be
remiss to ignore.

I do not see any conflict between my advocating ASDF and using autoconf, BTW.
Post by Ralf Hemmecke
I am waiting for Gaby to submit a patch. I think that will be a better
road than simply switching to wh-sandbox and making some people angry.
We should stay together as a community. Please.
I totally agree. All I hope for is that the patch is documented and
is in keeping with the clearly stated goals of the project. I want to
be able to understand such a patch in detail, without having to
second-guess the author's intent. I am encouraged by Gaby's statements w.r.t.
his willingness to expand on the details if something is not clear.
This is precisely the attitude we need if we hope to make documentation
a priority for the project.
Post by Ralf Hemmecke
If Axiom were as well known as Linux, then we could become much more
restrictive about which patches go into Axiom. Then it would be time to
really "force" people to submit LP patches. But right now we should
make Axiom attractive!!!
I believe we all have to agree that there exists a common goal. I
don't think there is a fast path to an amicable result. I have enough
people telling me that they want something done yesterday. Axiom, I
hope, is an oasis from such expectations.

Sincerely,
Steve
Ralf Hemmecke
2007-06-30 09:21:39 UTC
Permalink
Post by Stephen Wilson
Post by Ralf Hemmecke
I am waiting for Gaby to submit a patch. I think that will be a better
road than simply switching to wh-sandbox and making some people angry.
We should stay together as a community. Please.
I totally agree. All I hope for is that the patch is documented and
is in keeping with the clearly stated goals of the project.
It would be super good if the patch were properly documented, but that
is *not* my priority. Gaby is certainly willing to improve his autoconf
work with documentation when it is in trunk. How come people seem to
think that Gaby is not committed to LP?
Post by Stephen Wilson
I want to be able to understand such a patch in detail, without
having to second-guess the author's intent.
Do you know that we have a mailing list where you could simply ask?
The answer could then be used to improve the documentation. But see,
for me it would be much worse to lose Gaby than to have slightly
imperfect documentation in trunk. There is even more imperfection all
around already.

Don't you see that probably all current developers are LP believers?
It is just a question of how to arrive at an Axiom that is fully in an
LP style. Some people think that we must from now on do everything in
proper LP style. And some people would rather work for a while in the
usual programming paradigm and postpone proper documentation until we
have autoconf, hyperdoc, and the Windows port running. *Nobody* says
that he will not eventually document in LP style.

I agree that it would be best for documentation to get done while the
development happens, but that is *not* the most pressing part. It is
much more important to make more people aware of Axiom and attract more
developers. If we had 100 developers, then I would totally agree that no
patch should be admitted to trunk if it is not fully documented in the
way Tim would like. But until then let's just be a bit more relaxed.

With a handful of developers we will *never* be able to document all
the legacy algebra code. There is simply not enough time. Let's attract
developers first and preach some LP to them so that they know what the
vision of Axiom will be.

Note, that is not a compromise of Tim's vision of having everything
properly documented so that people can understand it. But what is the
use of documentation that only 10 people in the world are willing to
read? Axiom will die without developers.

Look at Aldor. That's a super language, but all the other languages just
adopt the ideas of Aldor, and Aldor will lose no matter how advanced
it is. Without developers Aldor is dead. And with only 10 developers,
Axiom is dead too.
Post by Stephen Wilson
I believe we all have to agree that there exists a common goal. I
don't think there is a fast path to an amicable result. I have enough
people telling me that they want something done yesterday. Axiom, I
hope, is an oasis from such expectations.
Of course there is no real time pressure. I push because I want to see
a well-documented, well-running Axiom during my lifetime. That is the
only thing pressing me to ask for more developers.

Ralf
Stephen Wilson
2007-06-30 10:33:05 UTC
Permalink
Post by Ralf Hemmecke
Post by Stephen Wilson
Post by Ralf Hemmecke
I am waiting for Gaby to submit a patch. I think that will be a better
road than simply switching to wh-sandbox and making some people angry.
We should stay together as a community. Please.
I totally agree. All I hope for is that the patch is documented and
is in keeping with the clearly stated goals of the project.
It would be super good if the patch were properly documented, but that
is *not* my priority. Gaby is certainly willing to improve his autoconf
work with documentation when it is in trunk. How come people seem to
think that Gaby is not committed to LP?
I don't think it's an issue of lack of commitment. I think it is a
divergence in process. Regardless of how great and shiny and new
anyone's patch is, I honestly do think it's worth the time to document
it first. The payoffs in the long run are worth it.
Post by Ralf Hemmecke
Post by Stephen Wilson
I want to be able to understand such a patch in detail, without
having to second-guess the author's intent.
Do you know that we have a mailing list where you could simply ask?
The answer could then be used to improve the documentation. But see,
for me it would be much worse to lose Gaby than to have slightly
imperfect documentation in trunk. There is even more imperfection all
around already.
I really don't see it that way. If I have a contribution, then I post
it to the list. Give the community the chance to study it and
comment. Such a post is almost certainly an initial release, save
trivial fixes. It is an opportunity for everyone to contribute, to ask
questions, to engage themselves in the process. One can propose a
patch with working code without it being literate from the start.
Nothing forbids that. wh-sandbox and build-improvements don't
implicitly discount such a process. Neither Gaby nor Waldek has ever
said `I don't care about suggestions regarding LP'. The fundamental
problem, from my perspective, is that it is exceedingly rare for there
to be a public patch. Something real that we can all comment on and
study, that we all have a vested interest in improving.

Fundamentally, what _is_ important is that when the code is ready to
go into silver, it is a documented, literate work. We do not need to
get that code in tomorrow. This is a basic principle of the project.
Post by Ralf Hemmecke
Don't you see that probably all current developers are LP believers?
It is just a question of how to arrive at an Axiom that is fully in an LP
style.
It's not really a question. Just write LP code for submission to
Silver. Whatever steps one takes to get to that point are an individual
decision.
Post by Ralf Hemmecke
Some people think that we must from now on do everything in
proper LP style. And some people would rather work for a while in
the usual programming paradigm and postpone proper documentation until
we have autoconf, hyperdoc, and the Windows port running. *Nobody* says
that he will not eventually document in LP style.
It's a totally classic scenario. Write the code, and promise to
document it. Almost invariably the promise turns out to be nonsense. I
would not be surprised if the original developers of Axiom entertained
exactly the same notions. Look what that got us.
Post by Ralf Hemmecke
I agree that it would be best for documentation to get done while the
development happens, but that is *not* the most pressing part. It is
much more important to make more people aware of Axiom and attract
more developers. If we had 100 developers, then I would totally agree
that no patch should be admitted to trunk if it is not fully documented
in the way Tim would like. But until then let's just be a bit more
relaxed.
Totally disagree. If we can't convince the handful of developers
currently involved, how on earth are we going to convince 100?

We need to stick to our guns and set an example. It might even be
inspiring if we do. Tim certainly inspired me, and that's why I'm here.
Post by Ralf Hemmecke
With a handful of developers we will *never* be able to document all
the legacy algebra code. There is simply not enough time. Let's
attract developers first and preach some LP to them so that they
know what the vision of Axiom will be.
There is a huge difference between preaching and doing.
Post by Ralf Hemmecke
Note, that is not a compromise of Tim's vision of having everything
properly documented so that people can understand it. But what is the
use of documentation that only 10 people in the world are willing to
read? Axiom will die without developers.
Axiom has developers; it will not die. The hope is that effort and
patience will win out in the end. That computational math becomes
both an art and a science. That Axiom represents the very best of an
ever so young discipline, one which will outlive us all.
Post by Ralf Hemmecke
Look at Aldor. That's a super language, but all the other languages
just adopt the ideas of Aldor, and Aldor will lose no matter how
advanced it is. Without developers Aldor is dead. And with only 10
developers, Axiom is dead too.
Aldor is dead because no one but a few privileged folk can read its
code, but even that is not entirely true. I think Aldor has a lot of
good ideas behind it, and am trying to implement them myself. Good
ideas always survive.
Post by Ralf Hemmecke
Post by Stephen Wilson
I believe we all have to agree that there exists a common goal. I
don't think there is a fast path to an amicable result. I have enough
people telling me that they want something done yesterday. Axiom, I
hope, is an oasis from such expectations.
Of course there is no real time pressure. I push because I want to
see a well-documented, well-running Axiom during my lifetime. That is
the only thing pressing me to ask for more developers.
I share the same perspective.


Cheers,
Steve
C Y
2007-06-30 12:31:15 UTC
Permalink
Post by Bill Page
Thanks for carrying my side of the conversation for me, Cliff. :-)
<turns red>. Sorry, didn't mean to presume Bill - please forgive my
rudeness. I just had a feeling I knew pretty well what that side of
the argument would be, and wanted to respond before I fell asleep ;-).
Post by Bill Page
I've had my mind on other things for the last few hours... Yes, you
state very clearly and exactly my concerns.
OK, good.
Post by Bill Page
I have a very great respect for Lisp, but I wonder how you can look
around the web and at the type and number of programs that have been
written in the last 10 years and not think that Lisp is essentially
already a ghetto (as you have so clearly and accurately defined it).
That's a fair question, and if you take the definition of "ghetto" as
laid out above I suppose it DOES satisfy that definition. Which leaves
the question of why I don't think of Lisp as a ghetto even though it
might technically satisfy the definition.

I would say my line of reasoning goes something like the following:

1. Once I was able to understand the basics of how Lisp works, the
simplicity of it and the power it offers were worth the initial
learning effort.

2. In the world of computer algebra specifically, Lisp is a staple and
has been for decades. Far, far back in the depths of the Maxima
archives, before I knew what I was doing, I proposed rewriting Maxima
in a language with more popular support. The archives no longer seem
to be online, but IIRC the response at some point was that learning
Lisp was a small effort compared to the effort required to properly
implement CAS abilities, and the benefits of using Lisp outweighed the
difficulties in this particular case. I have since come to agree with
this.

3. Lisp may not get the "popular interest" other languages do, but
much of the code written for it seems to be rather well written - it
solves problems well when it does solve them.
Post by Bill Page
We might wish that that was not true but all the evidence is clearly
there. Ask the people who teach computer science at university what
they think of Lisp. Ask them what their students think of Lisp. Gaby
for example has already said what the reaction of his colleagues was
to the fact that he has been spending so much time on an "old Lisp"
system like Axiom... When I mentioned Lisp to a room full of
enthusiastic Sage developers you should have seen the "tolerant
amusement" on the faces those under 25 in the crowd. Man, did that
make me feel old... :-(
Lisp is still a working, functional language despite having its roots
in the Fortran days - that says something about the robustness of the
ideas behind it, IMHO. In my opinion the question should be what the
language brings to the table. The only area I am aware of where
Lisp is seriously lacking compared to other languages is libraries
defining graphical tools.

Python has great library support for modern operating systems - I have
used it in the past because of this. However, for the 30 year horizon
what is popular today is of somewhat less concern to me. Who knows how
language trends will progress? Something new may become the next
"star" language. Lisp has been around for a very long time, and offers
a lot of power in exchange for being willing to think a little
differently. To me that's worth the tradeoff.
Post by Bill Page
Post by C Y
And the goal is to develop tools such that given a working Lisp
environment users and developers will be able to focus on the
Algebra without worrying about the underlying tools. If they
MUST work with them, I would like them to be literate all the
way down - no dark corners to get into trouble with. But that's
again just me.
That part I completely agree with but I fail to see why that requires
doing the things that you and Stephen are proposing.
OK, ask these questions - is autoconf (not our configure file but GNU
autoconf) literate? What about GCC? If those programs are in the
critical dependency tree, any problems that no one else takes care of
become ours to deal with, because without them we would have no working
program. noweb is at least literate, but it also requires gcc. GCL and
CLISP bootstrap off of GCC, true, but CMUCL and SBCL do not.

Being able to maintain a working CAS means that everything required to
go from source code to finished binary is a potential issue for the
Axiom developers to deal with. The easier it is to deal with any
potential issue at ANY point in the software stack, the better.
Perhaps most of the time Axiom will be built and used with tools that
require external support, but (like Maxima) I would like Axiom to be
able AT NEED to get up and running with only tools that are fully
literate. The less Axiom is at the mercy of ANY external requirements,
the more robust it will be for the 30 year horizon and beyond. If
autoconf someday dies, if noweb becomes unmaintained someday, we would
still be able to build as long as an ANSI Lisp environment can be
bootstrapped. That's robustness.

A consequence of this approach is that we rely less on mainstream
tools. To me, the approach should be to have the option of using
mainstream tools if they do something better/faster, but be able to
fall back on Axiom itself if something goes haywire with the external
requirements. Future-proofing Axiom means that the work put into the
Algebra code will still be usable indefinitely, so long as the
supporting Lisp environment can run. In some ways, it's the same
reason Java software is portable - it targets the virtual machine and
everything is written inside of it. Java was designed by some people
familiar with Lisp - I have a feeling this idea came from there. In
theory Lisp could provide many of the same benefits, if the work were
put into interfacing with the different graphics libraries properly.

"Mainstream" is a matter of definition, and the community defines it.
We are part of the community, engaged in a major project with wide
ranging implications. Rather than follow the trends, why don't we set
them? Find the best tools for the task, use them, and make them
mainstream?

The Spad/Aldor languages are even more of a ghetto than Lisp, yet we
are putting effort into them because they provide enough benefits by
virtue of their language constructs to be worth the effort. I
personally like Lisp enough to consider it worth the extra effort
(which isn't really all that much, to be honest - I have a LOT of
basics to learn and that would be the same in any language) to make
ASDF capable of handling pamphlets and do some other work as well to
make it close to an ideal environment. And I'm sure ASDF is a LOT
easier to make literate than autoconf, when the goal of "turtles all
the way down" with literate programming comes into play.

Anyway. We'll see - it's an experiment. All we can do is give it a
try.

Cheers,
CY





C Y
2007-07-03 10:24:25 UTC
Permalink
Post by William Sit
Post by C Y
Whether pamphlets qualify is probably a question for a lawyer.
I'm assuming anyone other than the original author has no special
rights period.
Question for a lawyer means question for the courts?
I meant it's a question for someone familiar with the actual laws on
the books which would pertain to this type of agreement. I am not
personally familiar with them.
Post by William Sit
If the authors have the "residual" rights as you mentioned,
wouldn't it be logical that the authors can assign such
"residual" rights to another entity?
It probably depends on the wording of the actual signed agreement.
Perhaps the rights retained by the author are non-transferable - it
would most likely depend on the specifics of that agreement.
Post by William Sit
Writing new material in survey form is certainly alright
(this already is a huge undertaking and will require
constant updating), especially if no literate articles are
available.
I also think it is a good fit for what Axiom actually needs - pamphlets
covering already well-established areas should include the best points
from all available research.
Post by William Sit
But overviews and surveys, such as those at Wikipedia
or Mathematica sites, are, I'm afraid, not what Tim has in
mind. He wants lock, stock, and barrel (the whole shebang) *in
one pamphlet* so that one need not hunt for obscure outside
articles or be an expert in the field to follow, maintain
and improve the code.
On the point about hunting for obscure articles, I agree - if there is
knowledge contained in such an article relevant to the pamphlet, it
should be in there, since many people simply will not be able to locate
that article. As much as is reasonably possible an introduction should
be provided, but there are limits - the art lies in finding the right ones.
Post by William Sit
(That's why the COMBINAT code does not
yet pass muster.) That, I think, is kind of an oxymoron and
unattainable ideal (try reading any new algorithm in
symbolic computation and it is an infinite descent if you
are not already an expert *in the particular problem the
algorithm solves.* COMBINAT comes to mind.)
The way to handle this, in my estimation, is to have the "basic"
pamphlet in the subject provide the introduction and have subsequent
pamphlets build on it. I don't think every pamphlet should have all
introductory material, but there should be a "basics" pamphlet that
both lays out the basics of the subject in Axiom and introduces them to
the developer. Units and Dimensions is intended to (eventually) serve
this purpose for that subfield - new algorithms in dimensional analysis
would reference the basic Units and Dimensions framework and build from
it.
Post by William Sit
Every pamphlet
would eventually be a pretty thick book, even if there are
plenty of literature covering the topic (nothing
fundamentally wrong with that except for lack of
author-power and stretching the 30 year horizon).
Indeed, even 30 years may not be enough to do this as it should be
done. However, I don't see that as a reason not to try - if the
foundation is strong enough it's worth the time and effort.
Post by William Sit
Perhaps
Gaby's "incremental improvement" idea may work better, but
that is exactly what we have been debating about. I prefer
Ralf's pragmatic approach: get the code working and
stabilized, add documentation at places where users or
developers find it lacking in details (that is, let
documentation be "demand driven" rather than "supply
driven").
I don't see any reason people can't work from "both ends" and meet "in
the middle", so to speak.

In my particular case, the effort required to go from where I am now to
a position where I would regard myself as able to make non-trivial algebra
contributions takes me through a lot of areas that may not have a lot
of immediate user demand but I think are worth paying attention to.
But that's just me.
Post by William Sit
I agree with the goal, just not the means. Moreover, for
that vision to be realized, the prerequisite is a very large
user base.
Right. So we have a bit of a chicken-and-egg problem, in two ways - to get
a user base we need a working program, but to attract users away from
current systems (already very good) we must offer something compelling
enough to warrant the switch. So we must be working soon, but we must
also be designing for the long term. Two goals, with (IMHO) two
different approaches needed.
Post by William Sit
Next question: what can we do to increase the user base? (Let's
hear yours.)
My suggestions? The only thoughts I have on the subject are that we
must offer something compelling enough and unique enough that people
are drawn to Axiom from other, working systems. The only feature I see
that I would estimate to be desirable enough to accomplish this is
formal proof logic integrated with CAS results, so that the results can
be trusted. (I.e. the hard road.)

Obviously, we might implement field-specific packages (e.g. Feyncalc)
that would appeal to specific problem domains, but first we need to
convince everyone to trust the results.
Post by William Sit
Post by C Y
I think the Axiom project might be a bit like the Free Software
Foundation in that respect - to me at least it's about more than
just a working CAS. It's about changing the landscape itself.
Not replacing the academic institutions and their work as they
exist today, but making them more visible and more readily
applicable to the rest of the world. That's a more ambitious
project than just a working CAS, but
the potential rewards are even greater.
Great vision! Would you outline some plans and actions?
Well, thoughts anyway:

1. Implement a framework flexible enough and powerful enough to enable
most features expected of a modern CAS and support interaction with
formal proof assistants.

2. Build (or re-build) Axiom's Algebra based upon modern Category
Theory. I think we are close, but we should make things feel as
natural as possible to mathematicians (who are the ones we are hoping
will extend the system in the end, after all).

3. Reproduce several known results of famous problems inside the CAS
itself, demonstrating its power on known, verifiable problems.

4. Tackle new problems, looking for solutions to as yet unsolved but
interesting problems. Use the effort to refine the tools in the CAS
for solving such problems.

5. Once #4 begins to show results that are significant to the
mathematical community, attention should begin to shift towards Axiom
in a positive way as a tool for new research work. As that happens,
the Axiom Journal can begin to organize as a serious publication.

Essentially, it's up to us to make our case. If we can demonstrate
with real results that Axiom is a uniquely effective tool for new work,
that will be the most powerful possible tool to drive its use.
Post by William Sit
(I share with you that the journals and publishers in the math
and CS areas at least are exploiting academics: authors and
researchers do all the work (writing, reviewing, editing,
proofreading) and get *nothing* other than a bibliography
item.)
I hear periodic rumblings about this from the scientific community, but
I'm not sure about mathematics per se.
Post by William Sit
Note that the building of superhighways was historically
demand-driven (national security, commerce, mobility, and
lots of drivers).
Correct - so if we can demonstrate with some non-trivial examples the
practicality of the superhighway, we may begin to get a lot more people
interested. (Hence the need for good tools - e.g. a car that performs
better than a horse.)

Cheers,
CY


