Discussion:
A quote of a quote in a quote
Jean-Luc Delatre
2005-08-23 08:01:40 UTC
Permalink
"Linguistic symbols thus free human cognition from the immediate
perceptual situation not only by enabling reference to things outside
this situation ("displacement" Hockett, 1960), but rather by enabling
multiple simultaneous representations of each and every, indeed all
possible, perceptual situations."

From "The Cultural Origins of Human Cognition" by Michael Tomasello
http://email.eva.mpg.de/~tomas/

Cited in
http://mixingmemory.blogspot.com/2005/08/cogblog-tomasello-chapter-1.html

Thus, trying to "agree" on the "one-true-view" about intelligence,
cognition or anything else is just going against the very basis of our
intellectual capabilities.

Being fixated on logic is likely among the worst possible blind alleys
we can get trapped in.

*Of course* the implementation of any piece of software cannot escape
logic and determinism (even when making choices dependent on a random
noise source).
But this is an engineering constraint, *not* in any way a useful or
wise design guideline.

Apart from this engineering constraint, another source of the obsession
with logic and ontology in Western thought obviously comes from the
Greek roots of our culture.
The Greeks tried to have their minds work as a computer before even
having one; consequently we were bound to invent computers in the end,
not such a bad thing *if we can now go beyond their limitations*.

The strong appeal of logic and precise ontological definitions is just a
*technical artifact* of the use of derivation rules.
No matter which flavor of logic fits your taste, at some point you build
a "consequent sentence" from a bunch of antecedents which are deemed
*true*, by substituting symbols into rule patterns which are themselves
deemed "valid", and you expect the "consequent sentence" to also be
"true" or "valid" and usable further in the process.
So far so good...
But what does this entail about rules and symbols?
Rules *have* to be explicitly or implicitly under the scope of some
sort of universal quantification!
Rules must be valid for *any* syntactically legal substitution; this
holds even if you think of extra applicability/validity conditions
for the rules to be used.
The preconditions just get aggregated to the rule to form the real,
practical rule, instead of the "shortened" version which comes more
readily to our mind and discourse.
Those preconditions *too* are under the scope of the "universal
quantification" of sorts.
(This is where many missteps of reasoning happen, by forgetting about
implied preconditions.)
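
To make this concrete, here is a toy Python sketch (the facts and rule are
invented purely for illustration): the "shortened" rule and the real,
practical rule with its implied precondition folded in, both applied to
*every* individual, i.e. implicitly universally quantified.

    # A rule is applied by substituting any individual into its pattern,
    # so it is implicitly universally quantified over all substitutions.
    facts = {("bird", "tweety"), ("bird", "opus"), ("penguin", "opus")}

    def shortened_rule(x):
        # "Birds fly" -- the version that comes readily to mind and discourse.
        return ("bird", x) in facts

    def practical_rule(x):
        # The real rule: the implied precondition is aggregated into it,
        # and it too falls under the same universal quantification.
        return ("bird", x) in facts and ("penguin", x) not in facts

    for x in ("tweety", "opus"):
        if practical_rule(x):
            facts.add(("flies", x))

    print(sorted(facts))  # ("flies", "tweety") is derived; nothing for opus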

Here now comes the Ontology, Being and Existence questions!
Because, what is a universal quantification if not the reverse of an
existential quantification?
"Every schoolboy knows" that (forall x R) is exactly (not (exists x (not
R)))
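
Over any finite domain this equivalence can even be checked mechanically;
a small Python sketch, purely illustrative:

    # (forall x R) is exactly (not (exists x (not R))), checked on a finite domain.
    domain = range(10)

    for R in (lambda x: x >= 0, lambda x: x % 2 == 0):
        forall_R = all(R(x) for x in domain)
        not_exists_not_R = not any(not R(x) for x in domain)
        assert forall_R == not_exists_not_R  # holds whether or not R is true of every x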

IN ORDER TO MAINTAIN CONSISTENCY OF OUR "DERIVATIONS" WE HAVE TO CHECK
FOR WHAT "EXISTS" OR NOT

But, but, this is only within our little games and plays with symbols
and discourse!
The "connection with reality" is *outside* this logic game, at the
stage where we choose to pretend that such or such word "refers to" or
"names" or (whatever you favorite wording is) a "thing in the world".
It is *deeply misleading* to confuse a technical requirement of symbolic
manipulations with constraints about "the world at large".
This leads to meaningless questions about non-problems which will not
contribute to a better "understanding of the world".

End of Rant... :-)

Now what can we do to improve our "understanding of the world"?
Language could help; in fact, language has to help because we have *no*
choice.
It is the only device we have to share knowledge among us and it is
hopefully "powerful enough", as witnessed by the very discussions on this
list which, in spite of all disagreements and misunderstandings, still
convey some "meaning" and "knowledge" between participants, however
partial or distorted these meanings can be.

To help us, the most useful achievement would be to manage to get
computers to interact with us in a manner closer to natural language than
to logic. Computers are already "perfectly logical"; short of hardware
failure they do *exactly* what the binary code tells them :-D

To have any hope of doing that we will have to somehow "understand" enough
about language to mimic at least part of it with logic (suitable for
computer implementation), NOT THE OTHER WAY AROUND (i.e. trying to
forcefully cast the richness of language into the straitjacket of logic).

It goes without saying that this DOES NOT MEAN that we have to have
"sentient computers" or "spiritual machines" or any delirious goals of
this kind, just being able to state things like "please use algorithm so
and so to deal with this statistical analysis" and get a sensible
outcome instead of scrambling for weeks patching the nitty-gritty
details for an unreliable result.
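
Just to fix ideas, a toy Python sketch of that kind of interaction (the
phrases, routines and data are all invented for illustration; nothing here
pretends to solve the real problem):

    import re
    import statistics

    # Hypothetical registry mapping plain-language phrases to routines.
    ALGORITHMS = {
        "mean": statistics.mean,
        "median": statistics.median,
        "standard deviation": statistics.stdev,
    }

    def handle_request(request, data):
        """Run whichever analysis is named, however loosely, in the request."""
        for phrase, func in ALGORITHMS.items():
            if re.search(phrase, request, re.IGNORECASE):
                return func(data)
        raise ValueError("no known analysis mentioned in: %r" % request)

    print(handle_request("Please use the median to deal with this analysis",
                         [1.0, 2.0, 2.5, 40.0]))   # -> 2.25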

Trying to eschew this effort because "it's too difficult" or "logic is
known to be reliable" would be acting like the proverbial drunkard
looking for his lost keys under the lamppost instead of where they
probably are because "it's too dark over there"...

Cheers,

JLD










========================
To post a message, send mail to cg-***@public.gmane.org
To unsubscribe, send mail to majordomo-***@public.gmane.org with the command 'unsubscribe cg' in the message body.
See http://news.gmane.org/gmane.comp.ai.conceptual-graphs for the mailing list archive.
See http://www.cs.uah.edu/~delugach/CG for the Conceptual Graph Home Page.
For help or administrative assistance, mail to owner-cg-***@public.gmane.org
John F. Sowa
2005-08-23 14:42:50 UTC
Permalink
Jean-Luc,

That's one of the major points I made in the paper
"The Challenge of Knowledge Soup":

JLD> Being fixated on logic is likely among the worst
possible blind alleys we can get trapped in.

That is true if the goal is to build a machine
with the capabilities of the HAL 9000.

JLD> *Of course* the implementation of any piece of
software cannot escape logic and determinism (even
when making choices dependent on a random noise source).

That is true if we are trying to use today's technology
to implement a program that we know how to build using
today's technology.

But we should always be open to new innovations from
whatever direction they come from.

Enough said.

John

Jean-Luc Delatre
2005-09-01 10:08:11 UTC
Permalink
On Tue, 23 Aug 2005 10:42:50 -0400
Post by John F. Sowa
Enough said.
I don't think so.
If we really have to care about "philosophy", could we focus on *recent* developments rather
than rehashing forever that Peirce said this and Aristotle said that and blah-blah-blah...
And George Bush is doubleplusnogood (I share this opinion but WTF is this doing on *this* list?)

I suggest as a seed:
http://cscs.umich.edu/~crshalizi/notebooks/scientific-method.html

Containing a quote from Peter Medawar:

"If the purpose of scientific methodology is to prescribe or expound a
system of enquiry or even a code of practice for scientific behavior,
then scientists seem to be able to get on very well without it.
Most scientists receive no tuition in scientific method,
but those who have been instructed perform no better as scientists than those who have not.
Of what other branch of learning can it be said that it gives its proficients no advantage;
that it need not be taught or, if taught, need not be learned? "


JLD
John F. Sowa
2005-09-01 14:25:10 UTC
Permalink
Jean-Luc,

If you had been reading these notes, you would have
noticed that Peter M's statement is patently false:
Post by Jean-Luc Delatre
http://cscs.umich.edu/~crshalizi/notebooks/scientific-method.html
"If the purpose of scientific methodology is to prescribe or expound
a system of enquiry or even a code of practice for scientific
behavior, then scientists seem to be able to get on very well without
it. Most scientists receive no tuition in scientific method,
but those who have been instructed perform no better as scientists
than those who have not. Of what other branch of learning can it be
said that it gives its proficients no advantage; that it need not be
taught or, if taught, need not be learned? "
Just look at the difference between Ernst Mach's philosophy and
Einstein's philosophy. Mach had a great deal of influence in
preventing physicists from recognizing and adopting Boltzmann's
methodology throughout the latter part of the 19th-century and
into the beginning of the 20th. With aid from Carnap and others,
Mach had a profound and devastating influence on 20th-century
psychology and linguistics. Einstein was the savior of physics
by demonstrating a totally new scientific methodology in his
three papers of 1905 and in his subsequent work over the next
20 years. The next great leap was taken by Niels Bohr and
Werner Heisenberg in quantum mechanics during the 1920s.

All five of the philosophers mentioned in that paragraph --
Mach, Carnap, Einstein, Bohr, and Heisenberg -- also happened
to be practicing scientists. (In earlier years, you can add
Newton and Aristotle to the list of research scientists who
had a profound influence on the philosophy of science.)

Medawar's observation is based on looking at philosophers
who had *zero* experience as research scientists. He is
correct in his observation that such philosophers are very
wisely ignored by scientists. But those philosophers who
were also research scientists are responsible for some of
the greatest advances (and some great disasters) in science.

That is one of my primary reasons for recommending Peirce.
He was a recognized research scientist in the forefront of
two fields at the same time: In the late 19th-century, he
was a pioneer in logic and one of the leading experimental
physicists in the measurement of gravity -- which earned him
the *first* invitation to a scientific congress in Europe
by *any* American scientist.

In conjunction with his work on gravity, Peirce was the *first*
person to recommend the use of a wavelength of light as a method
for measurement of length, he designed the apparatus for using
that method, and he used it to measure the lengths of his
pendulums for measuring gravity. When he talked about how
scientists do science, he really knew.

Summary: Philosophy of science has had a profound influence
on the way scientists do science, but unless the philosopher
also happens to be at the forefront of research in at least
one scientific discipline, he or she will be ignored. However,
being a research scientist is no guarantee of the ability to
do good philosophy of science, as Ernst Mach demonstrated.

John Sowa

Jean-Luc Delatre
2005-09-01 15:35:58 UTC
Permalink
On Thu, 01 Sep 2005 10:25:10 -0400
"John F. Sowa" <sowa-***@public.gmane.org> wrote:

.../...
Post by John F. Sowa
All five of the philosophers mentioned in that paragraph --
Mach, Carnap, Einstein, Bohr, and Heisenberg -- also happened
to be practicing scientists. (In earlier years, you can add
Newton and Aristotle to the list of research scientists who
had a profound influence on the philosophy of science.)
.../...
Post by John F. Sowa
Summary: Philosophy of science has had a profound influence
on the way scientists do science, but unless the philosopher
also happens to be at the forefront of research in at least
one scientific discipline, he or she will be ignored. However,
being a research scientist is no guarantee of the ability to
do good philosophy of science, as Ernst Mach demonstrated.
Fine!
Then *who* could be some of the "good" 21st century philosopher scientists?
We need to find them.
Mulling over and over the views of *previous* philosopher scientists may be of some use to straighten out current practices,
certainly not to discover new ones (new "paradigms", but this word is a bit tainted by hype and abuse).

JLD

Dominic Widdows
2005-09-01 18:12:24 UTC
Permalink
Post by Jean-Luc Delatre
Mulling over and over the views of *previous* philosopher scientists may
be of some use to straighten out current practices,
certainly not to discover new ones (new "paradigms", but this word is a
bit tainted by hype and abuse).
Dear Jean-Luc,

This is patently false. Simple example - 3 years ago I took some of the
definitions from Euclid's Elements of Geometry, some of mathematical
ramblings from Hermann Grassmann (19th century linguist and
philosopher), and used them to build negation operators into search
engines.

Result? Up to 87% reduction in unwanted synonyms or neighbors of
unwanted terms.
Reason this hadn't been done before? Ignorance of scholarship outside
our own timeframe.
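
For readers wondering what a negation operator in a vector-space search
engine can look like, here is a minimal sketch of orthogonal negation
(an illustrative reconstruction with made-up vectors, not the actual code
or data from those experiments):

    import numpy as np

    def negate(query, unwanted):
        """Project the query onto the subspace orthogonal to the unwanted term,
        so neighbors of the unwanted meaning drop out of the ranking."""
        u = unwanted / np.linalg.norm(unwanted)
        return query - np.dot(query, u) * u

    # Toy 3-dimensional term space: "suit NOT lawsuit"
    suit = np.array([0.9, 0.4, 0.1])     # clothing and legal senses mixed
    lawsuit = np.array([0.0, 1.0, 0.0])  # purely the legal sense
    print(negate(suit, lawsuit))         # legal component removed: [0.9 0.  0.1]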

Best wishes,
Dominic

John F. Sowa
2005-09-02 03:18:58 UTC
Permalink
Jean-Luc and Dominic,

That is the multimillion-dollar question:

JLD> Then *who* could be some of the "good" 21st century
philosopher scientists?
That is what everybody would love to know: who is the
next great author, actor, scientist, philosopher, statesman,
business executive, etc.?

There are two places to look: (1) at the up-and-coming
stars that everybody else is evaluating, or (2) at the
ones that have been around for a while, but have untapped
potential that has been neglected.

Dominic's example shows that previously neglected work
can often be a source of rich insights:

DM> Simple example - 3 years ago I took some of the definitions
from Euclid's Elements of Geometry, some of mathematical
ramblings from Hermann Grassmann (19th century linguist and
philosopher), and used them to build negation operators into
search engines.
Result? Up to 87% reduction in unwanted synonyms or neighbors
of unwanted terms. Reason this hadn't been done before?
Ignorance of scholarship outside our own timeframe.
As another example, I would cite Cecil Rhodes -- the man
who founded the De Beers diamond monopoly: he arrived in
South Africa after the diamond prospectors had discovered
all the best diamond mines and dug out all the diamonds.
Then he bought up the mineral rights to all those empty
diamond mines very cheaply. And he discovered that beneath
the upper level that had been dug out, there was a much
bigger and much, much richer lode.

The logician and philosopher Susan Haack, who has written
some respectable work about the philosophy of science
(although she herself is not really a scientist) made
a suggestion that I like: "Peirce is the first philosopher
of the 21st century!"

The reason for that comment is very similar to Dominic's:
(a) Peirce was an outstanding scientist of his generation
who also made important contributions to philosophy and
(b) the 20th-century philosophers in both the analytic
and the continental traditions ignored him. Therefore,
his writings are an option that has not yet been
properly evaluated.

That's the same kind of idea that led Felix Mendelssohn
to a treasure trove of music by a long-dead musician that
nobody listened to -- J. S. Bach.

John

Jean-Luc Delatre
2005-09-06 08:08:48 UTC
Permalink
John and Dominic,

Of course, I disagree ;-)
Post by John F. Sowa
JLD> Then *who* could be some of the "good" 21st century
philosopher scientists?
JS> That is what everybody would love to know: who is the
next great author, actor, scientist, philosopher, statesman,
business executive, etc.?
There are two places to look: (1) at the up-and-coming
stars that everybody else is evaluating, or (2) at the
ones that have been around for a while, but have untapped
potential that has been neglected.
Certainly some potential is neglected,
but what I see as the main problem today was *not* on the "ancient" thinkers' agenda and could not have been.
The handling of vast amounts of digitized and (badly) formalized text and various forms of data
is a challenge none of them could have foreseen and which was not even clearly anticipated maybe 30 or 40 years ago.
We were already at a loss to sort out semantics and meaning from corpora of very modest sizes,
even when dealing with, say, the lifetime works of one single philosopher or scientist;
today, in order to answer the very question above (who is the next great...), through what amount of material would we have to sift?
Looks like a chicken-and-egg problem.
We have to start someplace nevertheless, and I would rather suggest looking at the works of people like

Gerald Edelman http://en.wikipedia.org/wiki/Gerald_Edelman
or Stuart Kauffman http://www.edge.org/3rd_culture/bios/kauffman.html

Some of their premises may be questionable, but at least they try to "escape the box".
A better understanding of how complexity comes about and how we "naturally" handle it could help much more than ancient Metaphysics(TM).
Post by John F. Sowa
DM> Simple example - 3 years ago I took some of the definitions
from Euclid's Elements of Geometry, some of mathematical
ramblings from Hermann Grassmann (19th century linguist and
philosopher), and used them to build negation operators into
search engines.
Result? Up to 87% reduction in unwanted synonyms or neighbors
of unwanted terms. Reason this hadn't been done before?
Ignorance of scholarship outside our own timeframe.
I guess I have been misunderstood, because this is nearly irrelevant.
Though based on neglected insights from an earlier linguist, this is only a *technical* improvement,
even if a very good one and even if (maybe, you tell me...) it dramatically enhances the asymptotic performance;
that does not change the way of thinking about the whole field.
Or does it?

Many, many hidden gems lie in waiting, not only in forgotten publications but everywhere around us,
until some "crazy mind" hits upon the right metaphor which puts them to use.

But what I am talking about is changing the *approach* to the question, not "improving" the usual methods,
unless maybe in some cases by several orders of magnitude, which therefore also changes the nature of the field.


JLD

John F. Sowa
2005-09-06 13:29:42 UTC
Permalink
Jean-Luc,

There is no question that we need new ideas, new
input, new data, new research, and new insights.
And one thing we can be sure of is that they will be
found in the most unexpected places.
Of course, I disagree ;-)
You always do.

But Dominic's point about Grassmann was significant:
G's work on algebra was largely ignored even in his
lifetime, although a few mathematicians paid attention
to it. When I studied math, he was a respected, but
ignored label on the pantheon. But as Dominic pointed
out, his algebra can have major applications to information
retrieval today.

Clifford algebra is a related version, also from the
1870s, which was ignored for over a hundred years.
But the physicist David Hestenes resurrected it
from the dead in the 1980s and demonstrated that it
is key to unifying and simplifying a very wide range
of representations in physics that use vectors,
tensors, matrices, and miscellaneous ad hoc techniques.
Just type the following three words to Google:

hestenes clifford algebra
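
For a concrete taste, here is a hand-rolled sketch of the geometric product
in the two-dimensional Clifford algebra (an illustration only, assuming no
particular library):

    from dataclasses import dataclass

    @dataclass
    class MV2:
        """Multivector in Cl(2,0): s + e1 + e2 + e12 (scalar, two vectors, bivector)."""
        s: float = 0.0
        e1: float = 0.0
        e2: float = 0.0
        e12: float = 0.0

        def __mul__(self, o):
            # Geometric product, using e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1.
            return MV2(
                s=self.s*o.s + self.e1*o.e1 + self.e2*o.e2 - self.e12*o.e12,
                e1=self.s*o.e1 + self.e1*o.s - self.e2*o.e12 + self.e12*o.e2,
                e2=self.s*o.e2 + self.e2*o.s + self.e1*o.e12 - self.e12*o.e1,
                e12=self.s*o.e12 + self.e12*o.s + self.e1*o.e2 - self.e2*o.e1,
            )

    e1, e2 = MV2(e1=1.0), MV2(e2=1.0)
    print(e1 * e2)                # the unit bivector e12, an oriented area
    print((e1 * e2) * (e1 * e2))  # e12 squared is -1: the complex numbers fall out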

Re large corpora: There is nothing new there.
Just read how lexicographers develop dictionaries,
especially the Oxford English Dictionary. They used
very large corpora of real language over a century ago,
when they just had a room full of clerks who were
supplied with citations gathered by an army of volunteers
from all around the world.

In the late 19th century, there was a rival US project
to produce a competing dictionary, which was finally
published as the _Century Dictionary_. It was larger
than the current Merriam Webster unabridged dictionary,
but it was not as large as the OED. So it was used
for a while, but it went out of print, but it was
(and still is) an excellent resource. Since the
copyright expired, its definitions have been widely
copied and modified by dictionary editors everywhere.

And there was one associate editor of that dictionary,
who personally wrote, revised, or edited 16,000 entries
-- more than any other editor of the _Century Dictionary_.
I'm sure you've heard his name: Charles Sanders Peirce.

There are undoubtedly many people today whose computers
have churned away on more gigabytes of text, but I challenge
you to show me *one* who has personally analyzed as much of
the input and output in as great a depth as good old CSP.
He wrote the following to B. E. Smith, the overall editor:

The task of classifying all the words of language,
or what's the same thing, all the ideas that seek
expression, is the most stupendous of logical tasks.
Anybody but the most accomplished logician must break
down in it utterly; and even for the strongest man,
it is the severest possible tax on the logical equipment
and faculty.

He understood logic, lexicography, and large corpora of
real texts at a depth that is unique -- even today.

Re Gerald Edelman or Stuart Kauffman: That is certainly
interesting work, which may someday produce some important
insights. But it is also important to have some starting
guidelines to work with. Among the very interesting work
being done today is a field called biosemiotics with branches
called zoosemiotics for animals and phytosemiotics for plants.

You can check Google to see whose ideas those fields are
based on.

John

Jean-Luc Delatre
2005-09-06 15:51:23 UTC
Permalink
On Tue, 06 Sep 2005 09:29:42 -0400
Post by John F. Sowa
Among the very interesting work
being done today is a field called biosemiotics with branches
called zoosemiotics for animals and phytosemiotics for plants.
You can check Google to see whose ideas those fields are
based on.
Bwa, ha, ha, ha...
ROTFL !

Charles Sanders Peirce (who else?) + Intelligent Design
Follow the links:

http://www.gypsymoth.ento.vt.edu/~sharov/biosem/biosem.html#topics (second page on Google!)
http://members.iinet.net.au/~tramont/biosem.html A biosemiotic theory of cognition (by Stephen Springette)
http://members.iinet.net.au/~tramont/unscienc.html Unscience (absolutely sic!)

Sigh...

JLD
P.S. Did you really write "There is no question that we need new ideas," or is it a lengthy typo?
John F. Sowa
2005-09-06 20:17:48 UTC
Permalink
Jean-Luc,

That is a common English idiom:

JLD> Did you really write "There is no question
that we need new ideas," or is it a lengthy typo?
However, I should have used the clearer form
"There is no doubt..." If you check Google,
the form "There is no doubt" has 5,450,000 hits,
and "There is no question" has 1,840,000 hits.
Both forms are synonymous.

Re biosemiotics: As in any field, there is some good
work and some bad work. As I have said many times,
90% of everything is crap.

Your first citation, biosem.htm, cited some decent
people and also some marginal people. Your second
and third citations mixed biosemiotics with "meme",
which is a very popular, but very dubious coinage
by Richard Dawkins, an evolutionary biologist who
happens to be the Charles Simonyi Professor of the
Public Understanding of Science at Oxford University.

The notion of "meme" is an example of a new idea that
is 99.99% crap, even though it has been proposed and
popularized by a professor with very prestigious
credentials. A sure sign that it is crap is that
the word "meme" gets 51,900,000 hits on Google,
of which at most 0.01% or 5,190 have the slightest
chance of saying anything useful -- i.e., to show
how and why it is misguided.

John

Jean-Luc Delatre
2005-09-07 16:33:11 UTC
Permalink
On Tue, 06 Sep 2005 16:17:48 -0400
Post by John F. Sowa
However, I should have used the clearer form
"There is no doubt..." If you check Google,
the form "There is no doubt" has 5,450,000 hits,
and "There is no question" has 1,840,000 hits.
Both forms are synonymous.
An interesting case of semantic ambiguity :-)
I was perfectly aware that "There is no question" is a synonym of "There is no doubt".
So, what did my question mean?
(Hint: it would have been a very lengthy typo indeed!)

Best,

JLD

John F. Sowa
2005-09-07 23:15:22 UTC
Permalink
Jean-Luc,

I'm not sure what you meant. But in any case,
following is a summary of what I meant:

1. New ideas will undoubtedly be necessary
for any serious progress in addressing the
issues of language understanding, meaning,
truly intelligent systems, etc. Those ideas
can come from many different sources. Some
of them may lie in the future, and others
may already be available to us as "roads
not taken" at various stages in the past.

2. Many of the assumptions of 20th-century
analytic philosophy were seriously misguided,
and many of the directions that were guided
by those assumptions have reached a dead end,
as I outlined in my paper on the Challenge
of Knowledge Soup.

3. Among the dead ends are the expert systems of
the 1980s (including Cyc, which is a 1980-style
system pushed to its inevitable conclusion),
the formal systems of natural language semantics
built on a foundation of Kripke and Montague
modal and intensional logics, and the formal
ontologies that are trying to repeat the Cyc
experience but with different variations in
the foundation.

4. But even though the large systems have reached
a dead end, that does not mean that every aspect
of those systems was equally misguided. It is
important to take stock of what was achieved by
all those systems, including Cyc, Montague,
formal ontology, etc., and determine what, if
anything, is salvageable.

5. In looking at alternatives, we should not just
repeat the past with minor variations (which
is what the SemWeb seems to be doing), but take
a hard look at all the underlying assumptions
in order to understand what went wrong.

6. In examining any idea, the question whether it
is new or old is irrelevant. Some new ideas,
such as "memes", are just as pointless as the
ones that failed. And some old ideas, such as
Peirce's semiotics, suggest more promising
alternatives at the exact places where the
20th-century guys took some of the most
disastrous wrong turns.

In short, I don't believe that anybody has all the
answers, not even Peirce, or me, or you, or anybody
else you care to name. But maybe some of us have
some of the answers, and it's important to examine
them and see where they might lead and where major
innovations may be necessary.

John

Jean-Luc Delatre
2005-09-08 03:46:48 UTC
Permalink
John,

Except for your too heavy emphasis on Peirce, I wholeheartedly agree on all your points.
Unfortunately the current zeitgeist is not too propitious to the required sharing of ideas, as I mentioned on another semantics-related list:
http://groups.yahoo.com/group/emergentsemantics/message/20
Also, the paranoid expectations of the military and various brands of technological millenarists ("Singularity" and all) are not going to help.
Would nuclear physics have been developed so openly if the resulting technological impact had been anticipated?

There are two more complications.
- Given the magnitude of the problem, is our collective intelligence up to the task? That is, has our swarm of "little thinking bees" any chance of covering enough of the search space so that we can hope to hit some sensible answer some day?
- The final synthesis will have to be done by, and be embodied in, a single human mind; will it fit in any one?

Just like the original 20th-century Einstein, the 21st-century "new Einstein" will owe most if not all of his glory to the crowd of less prominent colleagues, but we need one.
http://www.mathpages.com/rr/s8-08/8-08.htm

Cheers,

JLD
John F. Sowa
2005-09-09 14:54:37 UTC
Permalink
Jean-Luc,

That is an encouraging sign:

JLD> Except for your too heavy emphasis on Peirce
I wholeheartedly agree on all your points.
Re Peirce: I don't expect anyone to believe anything
without seeing some results. The main point is to
keep looking at all promising alternatives.

Re Einstein: Yes, all good ideas have been anticipated
many times, usually in statements that failed to attract
much attention. That is one reason why I keep saying
that a promising place to look for significant new ideas
is in sources that have been neglected or overlooked.

JLD> Also, the paranoid expectations of the military and
various brands of technological millenarists ("Singularity"
and all) are not going to help. Would nuclear physics have
been developed so openly if the resulting technological
impact had been anticipated?
One point I'd like to make about security restrictions is
that what they classify with the highest levels of secrecy
are usually the least interesting and least important
results for fundamental research. When I was at IBM, for
example, the fundamental research had the lowest security
classification and could be approved for publication with
the least amount of red tape. Anything that was going into
a product, however, was confidential until the product
was announced.

The most highly guarded secrets were also the most trivial
for anybody other than the sales people: prices and announcement
dates of new products. And in all businesses and governments,
there is very high security for anything that would embarrass
the big guys at the top.

Re neuroscience: Following are some comments I made about
an article in today's _New York Times_.

John
__________________________________________________________________

That article shows, as do most studies of the brain,
that (a) neuroscience has learned an enormous amount
and (b) what they've learned is still a tiny fraction of what remains to be discovered.
Post by Jean-Luc Delatre
"Brain May Still Be Evolving, Studies Hint."
http://www.nytimes.com/2005/09/09/science/09brain.html
Note the word "hint" in the title, and note that everybody
who is quoted says, in one way or another, "we don't know."

The date of 60,000 years ago is significant because there
is a lot of archaeological evidence of an upsurge in
innovation, including the beginnings of the migrations
of modern humans out of Africa. Following is one comment
Post by Jean-Luc Delatre
Richard Klein, an archaeologist who has proposed that
modern human behavior first appeared in Africa because
of some genetic change that promoted innovativeness, said
the time of emergence of the microcephalin allele "sounds
like it could support my idea." If the allele did support
enhanced cognitive function, "it's hard to understand why
it didn't get fixed at 100 percent nearly everywhere," he said.
Dr. Klein suggested the allele might have spread for a
different reason, that as people colonizing East Asia and
Europe pushed north, they adapted to colder climates.
In other words, it's interesting, but nobody knows what it means.

John Sowa

Jean-Luc Delatre
2005-09-09 17:37:34 UTC
Permalink
On Fri, 09 Sep 2005 10:54:37 -0400
Post by John F. Sowa
Post by Jean-Luc Delatre
Richard Klein, an archaeologist who has proposed that
modern human behavior first appeared in Africa because
of some genetic change that promoted innovativeness, said
the time of emergence of the microcephalin allele "sounds
like it could support my idea." If the allele did support
enhanced cognitive function, "it's hard to understand why
it didn't get fixed at 100 percent nearly everywhere," he said.
Dr. Klein suggested the allele might have spread for a
different reason, that as people colonizing East Asia and
Europe pushed north, they adapted to colder climates.
In other words, it's interesting, but nobody knows what it means.
*Very interesting*, if true it at least means that there is a "single point of breakthrough"
between the apes and Einstein (plus individual differences and cumulative culture à la Tomasello, of course).
But still, a single allele!

JLD
John F. Sowa
2005-09-09 18:32:23 UTC
Permalink
Jean-Luc,

There have been many different studies of the
distribution of human genes:

JLD> *Very interesting*, if true it at least means
that there is a "single point of breakthrough"
Some of the studies of genetic diversity indicate
there had been a collapse of the early modern human
(homo sapiens) population to a rather small number of
individuals at some point around then (60K years ago)
-- possibly caused by a famine or other natural disaster.

Other studies based on the mitochondrial DNA, which
is only passed along through the egg, suggest that
all modern humans are descended from a single female
at some time around then. Of course, they call
her "Eve".

And some studies of the world's languages suggest
that they all diverged from a single protolanguage
that dates to around 35 to 60 thousand years ago.
(Four words from that language are mama, papa,
kaka, and aq'wa.) For a book on that topic, see

_The Origin of Language : Tracing the Evolution
of the Mother Tongue_ by Merritt Ruhlen.

But even if all these conjectures are true, they don't
tell us anything about the nature of intelligence or
how to build intelligent systems on our computers.

John

Bruce Philp
2005-09-03 00:35:38 UTC
Permalink
Hello
Post by John F. Sowa
Peter M's statement is patently false
about the effects of philosophy on 20th-century science <snip>
I sought a comment on a post from this conversation from my son, who's
a physicist postdoc working in epidemiology after a PhD in something
biological. His chatty reply may be of interest & he doesn't object
to my forwarding it: below, slightly edited. Not one word about Mach
and Einstein.

Regards
Bruce Philp

------------------

Interesting. I don't think Sowa is speaking the same language as
Medawar! They're not accounting for the difference between physics
(physicists) and biology (biologists).

IME there are many successful biologists and medicos (i.e. people who
Medawar saw) who couldn't describe the scientific method if pressed.
That is less generally true of physicists (who Sowa describes). The
difference being in physics you spend a lot of time thinking "what is
the nature of this?" but in biology you generally spend your time
thinking "what, if anything is happening, and how on earth could I see
it anyway?" (and then spend the next two weeks trying to figure out
how to dissolve a protein.)

The best biology students (biochem ones, anyway) most certainly can
discuss the scientific method and the philosophy of science. Not
always particularly well, and they aren't very interested in it. IME
better biologists spend more time thinking about "what is the nature
of this?" too -- but you don't need to do that to count barnacles.

Familiarity with HPS seems to correlate with competence in biology,
but I bet that enjoying Jane Austen correlates much the same way...
neither HPS nor Jane Austen causes one to be good at biology.

OTOH, epidemiologists and statisticians seem to go nuts about the
philosophy of science, maybe they worry about "what can we be doing?"
I told you that the word "epistemology" got mentioned in the first
three or four seminars I went to here... I'm also told by [] that
I'm clearly a Bayesian (AIUI rather than a frequentist) but I can't
for the life of me tell the difference. It is not the sort of thing
I'm used to worrying about, coming from bio/physics. 'twas also kind
of strange being categorised so quickly without even understanding the
category.

ISTM that philosophy is generally descriptive rather than prescriptive.
Even though he says "expound," Medawar seems to think that only
prescriptive philosophy could have any value; that's wrong. PS can be
shoehorned into becoming prescriptive when dealing with non-scientists
(or badly behaved scientists) e.g. "does smoking cause lung cancer?"
But even in cases like evolution I would argue that it is mostly
descriptive: in answer to challenges like "evolution is not science"
PS only really needs to describe what science does, describe what
evolution does, and see if they coincide.

(As I've commented to you before, the reason that plenty of biologists
don't accept evolution is that they don't have a good enough awareness
of HPS to understand how it could be science. They are easily
bamboozled by professional obfuscators, HPS would lend them clarity of
thought rather than new methods.)
Post by John F. Sowa
Post by Jean-Luc Delatre
Of what other branch of learning can it be
said that it gives its proficients no advantage; that it need not be
taught or, if taught, need not be learned?
That's often said by naughty kids in maths classrooms and their
economic rationalist cousins. We learn things firstly because they
are enriching and secondly because they might turn out to be useful.

HPS doesn't give *scientists* a huge advantage, but it does give HPS
practitioners an advantage. :-) I also suspect that George Bush has
not read the great political philosophers... and it would not have
conferred any great advantage if he had. It also seems that law
students only ever study H & P in order to ease their consciences...
competence in H & P probably confers a disadvantage in practice of
law. Philosophy does not generally give practitioners a huge
advantage -- if it did, everybody would queue up to do a philosophy
course at uni.

Sorry it's a ramble. I agree with Sowa that Medawar's (that name is
hard to type!) statement is daft, but I don't think he's quite hit the
nail on the head about why.

DJP
John F. Sowa
2005-09-03 15:53:47 UTC
Permalink
Bruce,

Your son made some interesting points.
Not one word about Mach and Einstein.
Very few people talk about Mach except professional
philosophers of science and jet pilots. "Einstein"
is just a label on the pantheon, like "Newton" or
"Galileo". So I wouldn't expect those names to come
up in casual conversation. My previous remarks were
about their legacy, which lingered for a long time in
the background assumptions that are seldom questioned.
IME there are many successful biologists and medicos
(i.e. people who Medawar saw) who couldn't describe the
scientific method if pressed. That is less generally true
of physicists (who Sowa describes). The difference being
in physics you spend a lot of time thinking "what is the
nature of this?" but in biology you generally spend your
time thinking "what, if anything is happening, and how on
earth could I see it anyway?" (and then spend the next two
weeks trying to figure out.
Part of the difference might be explained by the nature of the
subject: there is a greater depth of theory in physics than
biology, and physicists have to study the results of all that
blue-sky contemplation by Einstein, Bohr, Heisenberg, and others.
But some of the difference may be the typical split between
theoreticians and experimentalists in any field.
OTOH, epidemiologists and statisticians seem go nuts about
the philosophy of science, maybe they worry about "what can
we be doing?" I told you that the word "epistemology" got
mentioned in the first three or four seminars I went to here...
That kind of thinking is *essential* for statistics. Before
you can do any counting, you have to decide (a) what are you
going to count, (b) what would a meaningful answer look like,
and (c) how can you distinguish a meaningful one from a
meaningless one. Every good statistics course teaches those
points in exhaustive detail. (See that list of quotes from
_Statistics for Experimenters_ by George Box et al.)
ISTM that philosophy is generally descriptive rather than
prescriptive. Even though he says "expound," Medawar seems
to think that only prescriptive philosophy could have any value;
that's wrong....
Much, if not most of the difference is in style rather than
content. The most effective way to prescribe anything is to
get it entrenched in the unquestioned background assumptions.
Examples are often more effective than explicit descriptions
or commands -- precisely because they are not verbalized.
As I've commented to you before, the reason that plenty of
biologists don't accept evolution...
I can't imagine any biologist who doesn't accept evolution.
Even the intelligent design crowd assume evolution -- they're
just arguing over the relative frequency of random vs. guided
mutations. They should try teaching that in Sunday School --
it would bore the kids to death.

John Sowa

Jack Park
2005-09-03 17:03:42 UTC
Permalink
Post by John F. Sowa
Bruce,
<snip>
Post by John F. Sowa
Part of the difference might be explained by the nature of the
subject: there is a greater depth of theory in physics than
biology, and physicists have to study the results of all that
blue-sky contemplation by Einstein, Bohr, Heisenberg, and others.
But some of the difference may be the typical split between
theoreticians and experimentalists in any field.
<snip>
It might be that another part of the difference, as has been suggested
by Robert Rosen, is that much (but not all) of physics has been teased
out of nature by reductionist methods, while most of our understandings
in biology, while starting from information gleaned by reductionist
methods, rely heavily on relational modeling owing to the complex
nature of feedbacks, context, and more, some of which we don't yet have
the tools to see, much less represent. A crude paraphrase might be that
physics is easier than biology.

Slightly off subject, I suspect that conceptual graphs can play a much
larger role in biological modeling than they presently do. At the
moment, I am exploring the potential to combine cg structures with topic
maps. It strikes me that this approach combines the indexical
properties of topic maps with the relational modeling properties of
existential graphs.
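
A rough sketch of the flavor of that combination (entirely hypothetical
Python, not the actual design): topic-map style topics serve as indexical
entry points into a set of conceptual-graph style relations.

    from collections import defaultdict

    # Conceptual-graph style assertions: (relation, arg1, arg2).
    relations = [
        ("inhibits", "protein-A", "gene-X"),
        ("expresses", "gene-X", "protein-B"),
        ("binds", "protein-A", "protein-B"),
    ]

    # Topic-map style index: each topic points at every relation it occurs in,
    # giving indexical entry points into the relational model.
    index = defaultdict(list)
    for rel in relations:
        for topic in rel[1:]:
            index[topic].append(rel)

    # Everything asserted about gene-X, reached through the topic index:
    for rel in index["gene-X"]:
        print(rel)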

Jack

Rich Cooper
2005-09-03 18:58:14 UTC
Permalink
Recently, there has been a research topic called "Systems Biology" that
applies linear systems theory (control theory) methods to the biological
pathways we know about. With the troves of data coming out of genomics
and protein arrays, the number we know about is growing pretty quickly.
I've talked with a few of the people at Cal Tech about SysBio, and they
tell me it was started in Japan, and that there are a number of research
projects that are developing models, testing them against measured
results, and using the results to simulate the effects of biological
pathways. So it shouldn't be too long before biology is at least as
scientific as physics.
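
To give the flavor of such models, here is a toy linear state-space pathway
in Python (the rate constants and species are invented, not any published
pathway): a substrate S is produced at a constant rate, converted into a
product P, and both decay.

    import numpy as np

    # dx/dt = A @ x + B * u  for x = [S, P], with constant input u.
    A = np.array([[-0.5,  0.0],    # S: conversion plus decay
                  [ 0.4, -0.1]])   # P: produced from S, slow decay
    B = np.array([1.0, 0.0])
    u = 2.0

    x = np.zeros(2)                # start with no S and no P
    dt = 0.01
    for _ in range(20000):         # simple Euler integration over 200 time units
        x = x + dt * (A @ x + B * u)

    print(x)  # settles near the steady state -inv(A) @ (B*u) = [4.0, 16.0]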

Talk to physicists, and they like to spend time in the lab building
machinery and experiments, so at heart, they're just as reality prone as
biologists. As John said, it's just the type of material available for study
in the past that has led to the different focus of the two. As more
biology becomes systematized, biologists will start to spend more and
more of their time on computers just like the rest of us. Instead of
CAD and RAD tools there will evolve BAD tools!

Rich

John F. Sowa
2005-09-04 02:29:55 UTC
Permalink
RC> Talk to physicists, and they like to spend time
in the lab building machinery and experiments,
so at heart, they're just as reality prone as
biologists.
There is a cultural difference, however, between
theoretical and experimental physicists. They
would joke that you could measure the brilliance
of theoretical physicists by how fast the equipment
would break whenever they entered a lab.

A story about Niels Bohr described one of his train
trips from Denmark to attend a conference in Zurich.
It happened that at the moment when his train was
passing through Göttingen, a major physics experiment
at the university blew up. They said that was the
ultimate proof of Bohr's genius.

John

Jack Park
2005-09-04 06:24:01 UTC
Permalink
Post by Rich Cooper
Post by John F. Sowa
Post by John F. Sowa
Bruce,
<snip>
Post by John F. Sowa
Part of the difference might be explained by the nature of the
subject: there is a greater depth of theory in physics than
biology, and physicists have to study the results of all that
blue-sky contemplation by Einstein, Bohr, Heisenberg, and others.
But some of the difference may be the typical split between
theoreticians and experimenalists in any field.
<snip>
It might be than another part of the difference, as has been suggested
by Robert Rosen, is that much (but not all) of physics has been teased
out of nature by reductionist methods, while most of our
understandings in biology, while starting from information gleened by
reductionist methods, relys heavily on relational modeling owing to
the complex nature of feedbacks, context, and more, some of which we
don't yet have the tools to see, much less represent. A crude
paraphrase might be that physics is easier than biology.
Slightly off subject, I suspect that conceptual graphs can play a much
larger role in biological modeling than they presently do. At the
moment, I am exploring the potential to combine cg structures with
topic maps. It strikes me that this approach combines the indexical
properties of topic maps with the relational modeling properties of
existential graphs.
Jack
Recently, there has been a research topic called "Systems Biology" that
applies linear systems theory (control theory) methods to the biological
pathways we know about. With the groves of data coming out of genomics
and protein arrays, the number we know about is growing pretty quickly.
I've talked with a few of the people at Caltech about SysBio, and they
tell me it was started in Japan, and that there are a number of research
projects that are developing models, testing them against measured
results, and using the results to simulate the effects of biological
pathways. So it shouldn't be too long before biology is at least as
scientific as physics.
Talk to physicists, and they like to spend time in the lab building
machinery and experiments, so at heart, they're just as reality prone as
biologists. As John said, it's just the type of material available for study
in the past that has led to the different focus of the two. As more
biology becomes systematized, biologists will start to spend more and
more of their time on computers just like the rest of us. Instead of
CAD and RAD tools there will evolve BAD tools!
Rich
That's cute, Rich. Relational Biology was started by Nicholas Rashevsky
in 1954 with his paper "Topology and Life". He's the fellow who gave us
mathematical biology in the first place. Then, around the time that
Watson and Crick went public, he realized that we can tease apart a
living cell and count some of the parts in it, but we can't put it back
together again, and we don't yet have the tools to understand why;
something was missing. He launched a program originally based on graph
theory, then migrated it to what he called organismic set theory. Robert
Rosen came along and married category theory to the enterprise and
opened the door to a number of important realizations. Right now, I
don't think anybody has a key to *the final word* on modeling really
complex systems. There remains much to do, and the tribes are too busy
belittling each other to see the lights on in a forest of potential
synergies.

Jack
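
The pairing of conceptual graphs with topic maps that Jack mentions
above -- topic maps supplying the index over subjects, CG-style
relations supplying the structure among them -- might look roughly like
the following minimal sketch in Python. The class names and the tiny
example are hypothetical, chosen only to show the shape of the
combination, not any existing topic-map or CG library.

# Minimal sketch: topics index subjects by name; CG-style relations
# carry the typed links among those subjects.
from collections import defaultdict

class Relation:
    """A conceptual-graph-style relation: a typed link over topic ids."""
    def __init__(self, rtype, *arguments):
        self.rtype, self.arguments = rtype, arguments
    def __repr__(self):
        return "(" + self.rtype + " " + " ".join(self.arguments) + ")"

class TopicMap:
    def __init__(self):
        self.topics = {}                     # topic id -> set of names (indexical layer)
        self.occurrences = defaultdict(set)  # topic id -> relations it appears in

    def add_topic(self, tid, *names):
        self.topics[tid] = set(names)

    def index(self, relation):
        for tid in relation.arguments:
            self.occurrences[tid].add(relation)

tm = TopicMap()
tm.add_topic("glucose", "glucose", "dextrose")
tm.add_topic("glycolysis", "glycolysis")
tm.index(Relation("input-of", "glucose", "glycolysis"))

# The topic map answers "where does this subject occur?" (indexical);
# the relations answer "how are subjects connected?" (relational).
print(tm.occurrences["glucose"])   # {(input-of glucose glycolysis)}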

Bullard, Claude L (Len)
2005-09-09 15:20:18 UTC
Permalink
One of the notions that TimBL espouses that I tend to
agree with is the idea of independent invention. That is,
when an idea emerges independently from multiple sources,
it is likely a good idea to adopt into a system. The
hornswanger in that is patent IP protection, of course,
but that is a digression.

When I wrote and published my paper on Information Ecosystems,
it was because of a deadline to present at an upcoming HyTime
conference. I was on a jet coming home, reading a Scientific
American article about biological alleles and ecosystem
organization, and I noted the overlaps with the ideas we had
been discussing about treating open integrated bibliographic
information systems (usually called hypertext) as taxons. That
was all the input I had.

The concept of treating an information system as if it were
an ecosystem has probably emerged from half a dozen other
sources before and after I published that paper. Who knows? It
wasn't a well-noticed idea. Doug Lenat made fun of it at the
conference, as his presentation followed mine. OTOH, today it is
a very popular idea, just as the ideas behind CYC are receiving
criticism.

Is it a useful idea? Not really, but danged if it isn't
popular. Beware the trendy. If an idea is to be
adopted for more than an illustration to make more complex
ideas more palatable, it has to be implementable, testable,
provable, or lead to an idea that is.

On the other hand, the papers I wrote for GE Aircraft
Systems on Enterprise Engineering are still classified
15 years after they were published, even though, with
XML, most of them are now realized in standard practice
with other people's names on them, because once one got
the hang of markup, they were obvious. Even the British
Library couldn't get a copy.

Not a great record for innovation, eh?

Attention is an opportunity to gig, but not the act. Act.
If you want the loudest applause, check the order on the marquee
and who is introducing the acts. It doesn't really matter
if it is original material; just that it is presented as such.
Paying audiences don't care about that; they care that they
get to tell their grandchildren that they were there. Sad but so.

len


From: owner-cg-***@public.gmane.org [mailto:owner-cg-***@public.gmane.org] On Behalf Of John F. Sowa

Jean-Luc,

That is an encouraging sign:

JLD> Except for your too heavy emphasis on Peirce,
I wholeheartedly agree on all your points.
Re Peirce: I don't expect anyone to believe anything
without seeing some results. The main point is to
keep looking at all promising alternatives.

Re Einstein: Yes, all good ideas have been anticipated
many times, usually in statements that failed to attract
much attention. That is one reason why I keep saying
that a promising place to look for significant new ideas
is in sources that have been neglected or overlooked.

JLD> Also, the paranoid expectations of the military and
various brands of technological millenarists ("Singularity"
and all) are not going to help. Would nuclear physics have
been developed so openly if the resulting technological
impact had been anticipated?
One point I'd like to make about security restrictions is
that what they classify with the highest levels of secrecy
are usually the least interesting and least important
results for fundamental research. When I was at IBM, for
example, the fundamental research had the lowest security
classification and could be approved for publication with
the least amount of red tape. Anything that was going into
a product, however, was confidential until the product
was announced.

The most highly guarded secrets were also the most trivial
for anybody other than the sales people: prices and announcement
dates of new products. And in all businesses and governments,
there is very high security for anything that would embarrass
the big guys at the top.

Re neuroscience: Following are some comments I made about
an article in today's _New York Times_.
John F. Sowa
2005-09-09 16:11:27 UTC
Permalink
Len,

That remark has also been said before:

CLB> One of the notions that TimBL espouses that I
tend to agree with is the idea of independent invention.
That is, when an idea emerges independently from multiple
sources, it is likely a good idea to adopt into a system.
A related version:

Everything of importance has been said before,
by someone who did not discover it.
Alfred North Whitehead

And another good line:

Pereant qui ante nos nostra dixerunt.
(Damn those who said our stuff before we did.)
Donatus

CLB> The hornswanger in that is patent IP protection...

And that is why it's important to have a better system
for challenging patents on the basis of prior art
without a costly lawsuit. There are indeed novel ideas
that are worth patenting, but there is an enormous amount
of "obvious" or "prior art" that is getting patented.

John

Bullard, Claude L (Len)
2005-09-09 17:12:28 UTC
Permalink
I don't think he claims to have said it
first. He promotes it as a principle for
distinguishing which ideas are worth
pursuing. It is a decent metric.

I spent last weekend with a friend
of mine who is a software patent examiner.
I told him it wouldn't be a good idea to
tell anyone else that. :-)
He mentioned that it is very important to put
publication dates, copyright dates, etc.,
on published ideas. Putting aside
the cost issue, just validating a patent
is a tough job. There is so much hype,
branding, relabeling, etc., in computer science
that, unless one was there, it can be tough to
know WHEN a feature was first implemented to
the level of really being prior art.

Let's take a current brouhaha for example. There
are plenty of comp sci folks here with unique
experiences with respect to the evolution of
the current GUI/client/server systems and networks:

o Is a browser really a unique kind of software?
Did HTML make any difference in what Mosaic and
Viola are beyond what Mac HyperCard or even
Engelbart's Augment provided?

o What is the patentable difference between
an embedded object in a browser and an embedded
object in ANY windows app?

o Is interactivity really more than an API in
the context of an embedded object and what if
anything is unique about that in the context of a
so-called 'browser'?

o What if anything about the WWW is truly prior art?

The first victim of the web was history.

For sure, we need a better system, or
at least, we should be better at playing the
one we have.

len


Bullard, Claude L (Len)
2005-09-09 19:21:41 UTC
Permalink
Maybe it tells us that there are multiple ways to do that,
some of which will be more successful for more general tasks
and some for less general ones.

Maybe it tells us that detailed organic evolution is the
wrong model for creating artificial intelligence because
it is inefficient and takes too long to be profitable.

Maybe it tells us that intelligence is not really one
thing with a single nature, and that a single-model
approach is likely a cul-de-sac even if that is not
provable.

There are some aspects of the web evolution I like:

o focused on the bits of computer science that divergent
systems can share, such as a single addressing model, a
shared syntax for cases where sharing a syntax is useful,
and a shared character map (Unicode); in short, useful
agreements on the easy bits.

o otherwise, everyone does their own thing and what
works survives until it is replaced. Cooperation
is a better strategy.

o processes that, even if messy and haphazard, superficially
resemble organic evolution, so we may not have to prove it;
it may prove itself by demonstration.

What I don't like:

o conflating names with addresses. Bad linguistics.

o Emphasis on memeHype such as the current "Web 2.0:
the web as a programmable system" idea. It isn't wrong, but it
isn't a reason to put a version number on the web, which is
itself a collection of dynamic, versionable software.

o a blithe disregard for the relationships between its
evolution and the evolution of human habits that then
affect cultural evolution and eventually human evolution.
This is unproven but an interesting conjecture. We can
observe the cultural evolution, but the human evolution
may take too long to prove until it is a fait accompli.

I think we do better making computers more intelligent
on their own terms: faster pattern recognition, faster
email, more secure access, more memory, improved GUIs
(that one seems to be a very tough problem), less
getting in the way of what I want to get done.

A computer should be like a water faucet: consistent,
cheap, it always works the same way and I can throw it
away without much sentiment.

len

From: owner-cg-***@public.gmane.org [mailto:owner-cg-***@public.gmane.org] On Behalf Of John F. Sowa

Jean-Luc,

There have been many different studies of the
distribution of human genes:

JLD> *Very interesting*, if true it at least means
that there is a "single point of breakthrough"
Some of the studies of genetic diversity indicate
there had been a collapse of the early modern human
(Homo sapiens) population to a rather small number of
individuals at some point around then (60K years ago)
-- possibly caused by a famine or other natural disaster.

Other studies based on the mitochondrial DNA, which
is only passed along through the egg, suggest that
all modern humans are descended from a single female
at some time around then. Of course, they call
her "Eve".

And some studies of the world's languages suggest
that they all diverged from a single protolanguage
that dates to around 35 to 60 thousand years ago.
(Four words from that language are mama, papa,
kaka, and aq'wa.) For a book on that topic, see

_The Origin of Language: Tracing the Evolution
of the Mother Tongue_ by Merritt Ruhlen.

But even if all these conjectures are true, they don't
tell us anything about the nature of intelligence or
how to build intelligent systems on our computers.