Discussion:
Dark numbers (4)
WM
2020-11-13 13:13:18 UTC
Permalink
Descending Sequences of Natural Numbers

Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.

Every sequence of natural numbers descending from ω to 0 is finite. This follows from the axiom of foundation. But above all it is dictated by the practical impossibility to define individually actually infinitely many predecessors of ω.

Obviously ℵo numbers are existing, but are not available as destinations for the leap from ω.

Regards, WM
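A minimal sketch of the finiteness claim about descending sequences, assuming only that the first step down from ω must land on some natural number n (the names OMEGA and random_descending_sequence are illustrative, not anyone's notation in this thread):

import random

OMEGA = "omega"  # stand-in sentinel for the first transfinite ordinal

def random_descending_sequence(seed=0):
    # Build a strictly descending sequence omega > n_1 > n_2 > ... > 0.
    rng = random.Random(seed)
    seq = [OMEGA]
    current = rng.randrange(10**6)  # the step below omega lands on SOME natural number
    while True:
        seq.append(current)
        if current == 0:
            return seq
        current = rng.randrange(current)  # any strictly smaller natural number

for s in range(3):
    seq = random_descending_sequence(s)
    # Finite: after omega comes a natural number n, and at most n further
    # strict descents are possible before 0 is reached.
    print(len(seq), seq[:5], "...")

However the first step below ω is chosen, the length of the rest of the walk is bounded by that choice; that is the whole content of the finiteness claim.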
Gus Gassmann
2020-11-13 14:33:00 UTC
Permalink
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
Every sequence of natural numbers descending from ω to 0 is finite. This follows from the axiom of foundation. But above all it is dictated by the practical impossibility to define individually actually infinitely many predecessors of ω.
Obviously ℵo numbers are existing, but are not available as destinations for the leap from ω.
Wrong. You have clearly no clue about infinity. There is no largest integer, hence there is no last integer before you reach omega, nor a first integer if you try to count down. This is still a defect of your mind.
WM
2020-11-13 18:21:55 UTC
Permalink
Post by Gus Gassmann
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
Every sequence of natural numbers descending from ω to 0 is finite. This follows from the axiom of foundation. But above all it is dictated by the practical impossibility to define individually actually infinitely many predecessors of ω.
Obviously ℵo numbers are existing, but are not available as destinations for the leap from ω.
You have clearly no clue about infinity.
Do you have? Then you could answer my question:

What comes before ω? Is it a number or is it a space?
Note that ω is obliged to come next after all natural numbers.
Post by Gus Gassmann
There is no largest integer, hence there is no last integer before you reach omega,
How do you reach ω? Passing a space? Many spaces?

Regards, WM
FredJeffries
2020-11-13 19:04:56 UTC
Permalink
Post by WM
How do you reach ω?
The same way you 'reach' the horizon or the end of the rainbow or the east pole
mitchr...@gmail.com
2020-11-13 19:09:40 UTC
Permalink
The beginning of mathematics is the dark no quantity empty zero number.
Dark zeros are used to set bases...
Calculus is infinitesimal for its first fundamental quantity.
That is the unlimited small math.
WM
2020-11-13 19:13:38 UTC
Permalink
Post by FredJeffries
Post by WM
How do you reach ω?
The same way you 'reach' the horizon or the end of the rainbow or the east pole
No, that is a bad analogy. It is not an analogy at all. You cannot walk beyond the end of the rainbow. But you can count ω+1.

In "Dark Numbers (5)" it will become clear that ω is reached.

Regards, WM
Alan Smaill
2020-11-14 18:50:13 UTC
Permalink
Post by WM
Post by FredJeffries
Post by WM
How do you reach ω?
The same way you 'reach' the horizon or the end of the rainbow or the east pole
No, that is a bad analogy. It is not an analogy at all. You cannot
walk beyond the end of the rainbow. But you can count ω+1.
Go on then.
here is the start:

1,2,3,
Post by WM
Regards, WM
--
AS
Dan Christensen
2020-11-13 14:34:25 UTC
Permalink
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
VERY good!
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N. And there is only a finite number of possibilities remaining for the other terms. But, so what???? It's just basic elementary-school math. No need to concoct any of your mysterious "dark" numbers to obtain this result.


Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com
WM
2020-11-13 18:22:22 UTC
Permalink
Post by Dan Christensen
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
VERY good!
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when counting downward you cannot forget to instantiate them.

Regards, WM
Gus Gassmann
2020-11-13 18:33:33 UTC
Permalink
On Friday, 13 November 2020 at 14:22:31 UTC-4, WM wrote:
[...]
Post by WM
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
You continue to display your infinite ignorance, along with a mistaken or misleading translation. Cantor did *not* say "next to", he said "next after". That is a big difference, my friend, and I hope this "next to" is not what you have been hanging your reputation on for the last 15 or so years. There is no immediate predecessor of ω, nor does Cantor ever hint that there should be.
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Well, what of it? FISONS are finite and end segments are not. Stop the presses!!!
WM
2020-11-13 18:52:48 UTC
Permalink
Post by Gus Gassmann
[...]
Post by WM
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
Cantor did *not* say "next to", he said "next after". That is a big difference,
Not at all. The next after is the next to.
Post by Gus Gassmann
There is no immediate predecessor of ω, nor does Cantor ever hint that there should be.
What is between all natural numbers and the number *next* after them?
Post by Gus Gassmann
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Well, what of it? FISONS are finite and end segments are not.
Yes, for all definable numbers n this is the case. But the elements of endsegments are also numbers, alas aleph_0 of them are not suitable to be ends of FISONs. If all were in a definable FISON, then there would be a definable empty endsegment.

Regards, WM
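For reference, the standard quantifier fact that the replies below keep circling, as a finite spot-check (E(n) abbreviates the end segment {n, n+1, n+2, ...}; the helper name in_end_segment is mine, not notation from the thread):

def in_end_segment(m, n):
    # m ∈ E(n), where E(n) = {n, n+1, n+2, ...}
    return m >= n

# every end segment is non-empty: n itself lies in E(n)
assert all(in_end_segment(n, n) for n in range(1000))

# but no single natural number lies in ALL end segments: m is missing from E(m+1)
assert all(not in_end_segment(m, m + 1) for m in range(1000))

So "every end segment is non-empty" and "no number belongs to every end segment" are compatible; the order of the quantifiers does the work.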
Me
2020-11-13 18:57:24 UTC
Permalink
Post by WM
Post by Gus Gassmann
Cantor did *not* say "next to", he said "next after". That is a big difference,
Not at all. The next after is the next to.
Cranks have a tendency to "translate" certain statements (the significance of which they do not understand) into nonsense.
Gus Gassmann
2020-11-13 19:17:53 UTC
Permalink
Post by WM
Post by Gus Gassmann
[...]
Post by WM
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
Cantor did *not* say "next to", he said "next after". That is a big difference,
Not at all. The next after is the next to.
Post by Gus Gassmann
There is no immediate predecessor of ω, nor does Cantor ever hint that there should be.
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer, and no integer is "next to" ω. This should not be too difficult to understand. (But evidently it is far beyond your understanding.)
Post by WM
Post by Gus Gassmann
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Well, what of it? FISONS are finite and end segments are not.
Yes, for all definable numbers n this is the case. [...trash taken out...]
Please stop with this nonsense about "definable" naturals. I won't comment on your last two sentences, because they represent gobbledygook.
Sergio
2020-11-13 21:04:46 UTC
Permalink
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
[...]
Post by WM
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
Cantor did *not* say "next to", he said "next after". That is a big difference,
Not at all. The next after is the next to.
Post by Gus Gassmann
There is no immediate predecessor of ω, nor does Cantor ever hint that there should be.
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer, and n integer is "next to" ω. This should not be too difficult to understand. (But evidently it is far beyond your understanding.)
Post by WM
Post by Gus Gassmann
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Well, what of it? FISONS are finite and end segments are not.
Yes, for all definable numbers n this is the case. [...trash taken out...]
Please stop with this nonsense about "definable" naturals. I won't comment on your last two sentences, because they represent gobbledygook.
I'm sure he can explain further, unless he forgets to instantiate his
dark memory cells when counting down but not up, while endsegments +
FISONs = definable - refinable = "next to" fictional imaginational
"powdered Ants" instantiated (with water) => POOF!! into;
"Large Ants that do not bullshit"
WM
2020-11-14 12:08:41 UTC
Permalink
Post by Gus Gassmann
Post by WM
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer,
Cantor said that ω is following next upon all finite integers. So it is next to them because nothing is in between.

Regards, WM
Gus Gassmann
2020-11-14 13:47:20 UTC
Permalink
Post by WM
Post by Gus Gassmann
Post by WM
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer,
Cantor said that ω is following next upon all finite integers. So it is next to them because nothing is in between.
Give me the German quote (properly, so I can check it!). And define what it means to be "next to *all* integers". This is your damned ambiguity about quantifiers, the better to twist them when logic drives you into a corner. You are a deceitful asshole.
WM
2020-11-15 15:33:01 UTC
Permalink
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
Post by WM
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer,
Cantor said that ω is following next upon all finite integers. So it is next to them because nothing is in between.
Give me the German quote (properly, so I can check it!).
With pleasure. Here are some:

Enthalten die Zahlen β keine größte, dann besitzen sie (nach dem zweiten Erzeugungsprinzip) eine "Grenze" β', welche auf alle β zunächst folgt [p. 208f]. Diese Erklärung Ernst Zermelos stützt sich auf Cantors Grundsatz für Wohlordnungen, dass "zu jeder beliebigen endlichen oder unendlichen Menge von Elementen ein bestimmtes Element gehört, welches das ihnen allen nächstfolgende Element in der Sukzession ist" [p. 168]. [Translation: If the numbers β contain no greatest one, then (by the second principle of generation) they possess a "limit" β' which follows next upon all the β. This explanation by Ernst Zermelo rests on Cantor's principle for well-orderings, that "to every arbitrary finite or infinite set of elements there belongs a definite element which is the element next following them all in the succession."]

Georg Cantor definiert für die natürlichen Zahlen, "daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt, d. h. größer zu nennen ist als jede der Zahlen ν" [p. 195], dass allerdings der Abstand "ω - ν immer gleich ω ist" [p. 395]. [Translation: Georg Cantor defines, for the natural numbers, "that ω shall be the first whole number which follows upon all the numbers ν, i.e. is to be called greater than each of the numbers ν", but that the distance "ω - ν is always equal to ω".]

"Die Gesamtheit aller endlichen Kardinalzahlen ν bietet uns das nächstliegende Beispiel einer transfiniten Menge; wir nennen die ihr zukommende Kardinalzahl 'Alef-null', in Zeichen ℵo" [p. 293]. [Translation: "The totality of all finite cardinal numbers ν offers us the most immediate example of a transfinite set; we call the cardinal number belonging to it 'aleph-null', in symbols ℵo."]
Post by Gus Gassmann
And define what it means to be "next to *all* integers".
You should ask Cantor. He obviously thought that no definition was necessary. Perhaps I may assist him: In my opinion he says that there is nothing between the natural numbers and ω - like there is nothing between 0 and (0, 1].

Regards, WM
Alan Smaill
2020-11-15 18:07:22 UTC
Permalink
--
AS
mitchr...@gmail.com
2020-11-15 19:33:07 UTC
Permalink
--
AS
There is dark zero and sub magnitude math.
Sergio
2020-11-17 03:39:00 UTC
Permalink
Post by ***@gmail.com
--
AS
There is dark zero and sub magnitude math.
rearranging words;

Math magnitude and dark sub zero is there.

Zero Math there, sub dark magnitude.
Gus Gassmann
2020-11-15 20:27:30 UTC
Permalink
Post by WM
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
Post by WM
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer,
Cantor said that ω is following next upon all finite integers. So it is next to them because nothing is in between.
Give me the German quote (properly, so I can check it!).
Enthalten die Zahlen β keine größte, dann besitzen sie (nach dem zweiten Erzeugungsprinzip) eine "Grenze" β', welche auf alle β zunächst folgt [p. 208f]. Diese Erklärung Ernst Zermelos stützt sich auf Cantors Grundsatz für Wohlordnungen, dass "zu jeder beliebigen endlichen oder unendlichen Menge von Elementen ein bestimmtes Element gehört, welches das ihnen allen nächstfolgende Element in der Sukzession ist" [p. 168].
Georg Cantor definiert für die natürlichen Zahlen, "daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt, d. h. größer zu nennen ist als jede der Zahlen ν" [p. 195], dass allerdings der Abstand "ω - ν immer gleich ω ist" [p. 395].
"Die Gesamtheit aller endlichen Kardinalzahlen ν bietet uns das nächstliegende Beispiel einer transfiniten Menge; wir nennen die ihr zukommende Kardinalzahl 'Alef-null', in Zeichen ℵo" [p. 293].
Post by Gus Gassmann
And define what it means to be "next to *all* integers".
You should ask Cantor. He obviously thought that no definition was necessary. Prhaps I may assist him: In my opinion he says that there is nothing between the natural numbers and ω - like there is nothing between 0 and (0, 1].
Fuck you. First you misquote Cantor, then you refuse to provide a precise quote, now you don't even define your own terms.

And then it comes out that Cantor (of course!) never said "omega is next to all integers". In fact omega is never "next to" *any* integer, let alone all of them. You were wrong, I was right.

You lying piece of shit! Did you actually think you could get away with your falsifications?
zelos...@gmail.com
2020-11-16 08:20:17 UTC
Permalink
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
Post by WM
What is between all natural numbers and the number *next* after them?
ω is not "next to" a (finite) integer,
Cantor said that ω is following next upon all finite integers. So it is next to them because nothing is in between.
Give me the German quote (properly, so I can check it!).
Enthalten die Zahlen β keine größte, dann besitzen sie (nach dem zweiten Erzeugungsprinzip) eine "Grenze" β', welche auf alle β zunächst folgt [p. 208f]. Diese Erklärung Ernst Zermelos stützt sich auf Cantors Grundsatz für Wohlordnungen, dass "zu jeder beliebigen endlichen oder unendlichen Menge von Elementen ein bestimmtes Element gehört, welches das ihnen allen nächstfolgende Element in der Sukzession ist" [p. 168].
Georg Cantor definiert für die natürlichen Zahlen, "daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt, d. h. größer zu nennen ist als jede der Zahlen ν" [p. 195], dass allerdings der Abstand "ω - ν immer gleich ω ist" [p. 395].
"Die Gesamtheit aller endlichen Kardinalzahlen ν bietet uns das nächstliegende Beispiel einer transfiniten Menge; wir nennen die ihr zukommende Kardinalzahl 'Alef-null', in Zeichen ℵo" [p. 293].
Post by Gus Gassmann
And define what it means to be "next to *all* integers".
You should ask Cantor. He obviously thought that no definition was necessary. Prhaps I may assist him: In my opinion he says that there is nothing between the natural numbers and ω - like there is nothing between 0 and (0, 1].
Fuck you. First you misquote Cantor, then you refuse to provide a precise quote, now you don't even define your own terms.
And then it comes out that Cantor (of course!) never said "omega is next to all integers". In fact omega is never "next to" *any* integer, let alone all of them. You were wrong, I was right.
You lying piece of shit! Did you actually think you could get away with your falsifications?
He always does, that is why he is a crank
WM
2020-11-16 10:28:46 UTC
Permalink
Post by Gus Gassmann
And then it comes out that Cantor (of course!) never said "omega is next to all integers". In fact omega is never "next to" *any* integer, let alone all of them.
Try to learn German.
" das ihnen allen nächstfolgende Element in der Sukzession ist"
"daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt"

Regards, WM
Gus Gassmann
2020-11-16 11:00:14 UTC
Permalink
Post by WM
Post by Gus Gassmann
And then it comes out that Cantor (of course!) never said "omega is next to all integers". In fact omega is never "next to" *any* integer, let alone all of them.
Try to learn German.
" das ihnen allen nächstfolgende Element in der Sukzession ist"
"daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt"
Piss off. If you think that you can translate "nächstfolgende" or "welche auf alle Zahlen ν folgt" with "next to", then you have a problem with both English and German, in addition to all your well-documented blind spots in mathematics.
Sergio
2020-11-17 03:25:56 UTC
Permalink
Post by Gus Gassmann
Post by WM
Post by Gus Gassmann
And then it comes out that Cantor (of course!) never said "omega is next to all integers". In fact omega is never "next to" *any* integer, let alone all of them.
Try to learn German.
" das ihnen allen nächstfolgende Element in der Sukzession ist"
"daß ω die erste ganze Zahl sein soll, welche auf alle Zahlen ν folgt"
Piss off. If you think that you can translate "nächstfolgende" or "welche auf alle Zahlen ν folgt "with "next to", then you have a problem with both English and German, in addition to all your well-documented blind spots in mathematics.
WMs blind spots are gaps filled with Dark Numbers
Sergio
2020-11-13 20:51:35 UTC
Permalink
Post by WM
Post by Gus Gassmann
[...]
Post by WM
What is the immediate predecessor of ω? Is it a space? If not: what causes ω to stay in a distance from all natural numbers? Especially since Cantor defined that ω comes next to all natural numbers: It is immediately following. A space is excluded.
Cantor did *not* say "next to", he said "next after". That is a big difference,
Not at all. The next after is the next to.
wrong. "next after" is not "next to"

one implies an ordering, the other does not.
Post by WM
Post by Gus Gassmann
There is no immediate predecessor of ω, nor does Cantor ever hint that there should be.
What is between all natural numbers and the number *next* after them?
did you blow a fuse ?

obviously;

1,2,3,4,5,...

commas are what is between all natural numbers.
Post by WM
Post by Gus Gassmann
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Well, what of it? FISONS are finite and end segments are not.
Yes, for all definable numbers n this is the case. But the elements of endsegments are also numbers, alas aleph_0 of them are not suitable to be ends of FISONs. If all were in a definable FISON, then there would be a definable empty endsegment.
is it definable only when you look at it ?
Post by WM
Regards, WM
WM
2020-11-14 12:11:11 UTC
Permalink
Post by Sergio
Post by WM
Not at all. The next after is the next to.
wrong. "next after" is not "next to"
one implies an ordering, the other does not.
1, 2, 3, ..., omega is under discussion and is an order.

Regards, WM
Dan Christensen
2020-11-13 19:55:15 UTC
Permalink
Post by WM
Post by Dan Christensen
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
VERY good!
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
It has no immediate predecessor.

[snip]
Post by WM
Post by Dan Christensen
And there is only a finite number of possibilities remaining for the other terms.
When counting upwards, there is an infinite number of natural numbers - at least if you "forget" to instantiate them. But when countimg downward you cannot forget to instantiate them.
Pure gibberish!

Once again you have failed to prove the existence of this mysterious set of "dark" or "undefinable" numbers using the axioms of set theory. When will you learn, Mucke?


Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com
WM
2020-11-14 12:13:42 UTC
Permalink
Post by Dan Christensen
Post by WM
Post by Dan Christensen
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
It has no immediate predecessor.
Then there is a void as the immediate predecessor. How large is it?

Regards, WM
Mostowski Collapse
2020-11-14 13:25:47 UTC
Permalink
Still fighting windmills?

Limit ordinals don't have direct predecessors.

        ω
  / /  | \  \ ....
  |  |  |  |  |
  0  1  2  3  4

How do you want to get a direct predecessor if there is "...".

LoL
Post by WM
Post by Dan Christensen
Post by WM
Post by Dan Christensen
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
It has no immediate predecessor.
Then there is a void the immediate predecessor. How large is it?
Regards, WM
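A one-line spot-check of the "no direct predecessor" point made above, with math.inf standing in for ω (an illustration in a toy model, not a construction of ω):

import math

OMEGA = math.inf  # stand-in: larger than every natural number in this toy model

# for every sampled natural n, n+1 still lies strictly between n and OMEGA,
# so no natural number sits directly below OMEGA
assert all(n < n + 1 < OMEGA for n in range(10**6))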
Timothy Golden
2020-11-14 14:28:02 UTC
Permalink
Post by Mostowski Collapse
Still fighting windmills?
Limit ordinals dont have direct predecessors.
ω
/ / | \ \ ....
| | | | |
0 1 2 3 4
How do you want to get a direct predecessor if there is "...".
LoL
Post by WM
Post by Dan Christensen
Post by WM
Post by Dan Christensen
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
It has no immediate predecessor.
Then there is a void the immediate predecessor. How large is it?
Regards, WM
Mathematicians are so fond of their variables. Omega lacks any constraint and so is undefined. To define a system via an undefined value is far more troublesome than the other way around. One should utter:
Suppose w = 1000.
If no validity can be found within the construction under the stricture then likewise no validity will be found when the constraint is lifted. w-1 is as undefined as w is. I think working in this territory could be productive in that an ordinary variable can take a value, and so then if w is not an ordinary variable then what sort is it? Here I think mathematicians have happily mixed many types without a care, but structured thinking, as in the sort that includes compiler level integrity, will not allow the construction w-1. Whether nature abides by this principle as well ; this is a valuable conversation I think; particularly in the dimensional context.

Already the methods in use in declaring a large w expose a problem: Our computers arguably do suffer an omega problem. We continue to build larger word sizes, but also we have other ways around the fixed form. We engage in a radix hierarchy of multiple terms. In that this method is already in use to discuss a large w then if we use it again should the problem change form? The compact version should utter the radix construction formally rather than as an assumption. Here we see that w was 10 all along; in any base; any time anywhere the answer is 10. That this ambiguity goes unresolved in modern mathematics while we carry on constructing radix like things such as polynomials to cure the woes... well maybe I've gotten aleph and double you mixed up a bit, but that mix lends a bit of accuracy to the puzzle I think.
Mostowski Collapse
2020-11-14 18:02:08 UTC
Permalink
We have 1000 ∈ ω, therefore, by the axiom of regularity:

1000 =\= ω

Since it is refutable that a set contains itself. Case closed.

Here is a picture:

          ω
   / /  |  \  \ ....  \ ...
   |  |  |  |  |       |
   0  1  2  3  4      1000

Hope This Helps!
Post by Timothy Golden
Suppose w = 1000.
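A small sketch of the regularity argument above, coding the finite von Neumann ordinals as nested frozensets (0 = {}, n+1 = n ∪ {n}); ω itself is the infinite set of all of them and of course cannot be built as a Python object:

def von_neumann(n):
    # finite von Neumann ordinal n as a frozenset: 0 = {}, k+1 = k ∪ {k}
    ordinal = frozenset()
    for _ in range(n):
        ordinal = ordinal | {ordinal}
    return ordinal

three = von_neumann(3)
assert von_neumann(2) in three   # 2 ∈ 3
assert three not in three        # no set is an element of itself (foundation/regularity)

# ω is by definition the set of ALL finite ordinals, so 1000 ∈ ω,
# and by the same foundation argument 1000 =\= ω.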
Jim Burns
2020-11-14 18:26:35 UTC
Permalink
Post by Timothy Golden
Mathematicians are so fond of their variables.
I can see why they would be so fond of variables.
Variables are one way for finite beings to discuss,
describe, argue about infinitely many objects, tasks,
whatever. This is a neat trick.

In non-mathematics, there is nothing controversial about
referring to _one of_ some multiple of individuals.
"A stitch in time saves nine" doesn't refer to some
particular stitch. It's _a stitch_

In my opinion, in mathematics, a variable is the formalization
of an _indefinite reference_ to one of whatever-it-is that
we are discussing, describing, arguing about. Making a
claim using an indefinite reference can be understood as
making that claim for each individual that might or might not
have been referred to.

If there are infinitely many individual whatever-it-is that
might or might not have been referred to, then we have
_in effect_ made infinitely many claims -- but we've done so
_finitely_ This is a neat trick.

Describe a natural number.
That is to say, make statements that would be true
no matter _which_ natural number is referred to.

A natural number has a unique successor.
Different numbers have different successors.
A natural number does not have the natural number zero
as a successor.

We have just described infinitely many natural numbers.
And there are no natural numbers for which those are
false. If we can draw conclusions from these, they will
also be true of all the infinitely many natural numbers.
Because of variables (indefinite references).

What we have NOT done is name them all or calculate them
all or do anything to them all. And we don't NEED to do
that when we reason in this way.
Post by Timothy Golden
Omega lacks any constraint and so is undefined.
We can describe omega as coming after each finite ordinal
and coming before any other ordinal which comes after each
finite ordinal.

Describing omega this way involves an indefinite reference
to finite ordinals and ordinals coming after omega.
So, from a certain point of view, I have just made
infinitely many assertions about omega.

I think that making infinitely many assertions is supposed
to be controversial, what with me being finite and all.
"How can a finite being perform infinitely many tasks?"

The answer to that objection is:
Didn't you see what I just did? That's how.
Post by Timothy Golden
To define a system via an undefined value is far more
Suppose w = 1000.
If no validity can be found within the construction under
the stricture then likewise no validity will be found
when the constraint is lifted.
Uhm? No.
It doesn't work like that at all.
If I say "Suppose 2 + 2 = 5" no validity will be found
under that stricture in regular old arithmetic.
Because 2 + 2 is not 5.

That doesn't mean that regular old arithmetic is
inconsistent. It just doesn't work like that.

Did you mean to write "invalidity" for "validity" there?
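A finite spot-check of the three descriptive statements above, using the ordinary successor n -> n+1 on Python ints (an illustration only; the statements themselves quantify over all natural numbers at once):

def successor(n):
    return n + 1

SAMPLE = range(10_000)

# a natural number has a unique successor (successor is a function)
assert all(successor(n) == n + 1 for n in SAMPLE)

# different numbers have different successors (injectivity)
assert len({successor(n) for n in SAMPLE}) == len(SAMPLE)

# zero is not the successor of any natural number
assert all(successor(n) != 0 for n in SAMPLE)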
Timothy Golden
2020-11-16 16:58:58 UTC
Permalink
Post by Jim Burns
Post by Timothy Golden
Mathematicians are so fond of their variables.
I can see why they would be so fond of variables.
Variables are one way for finite beings to discuss,
describe, argue about infinitely many objects, tasks,
whatever. This is a neat trick.
In non-mathematics, there is nothing controversial about
referring to _one of_ some multiple of individuals.
"A stitch in time saves nine" doesn't refer to some
particular stitch. It's _a stitch_
In my opinion, in mathematics, a variable is the formalization
of an _indefinite reference_ to one of whatever-it-is that
we are discussing, describing, arguing about. Making a
claim using an indefinite reference can be understood as
making that claim for each individual that might or might not
have been referred to.
If there are infinitely many individual whatever-it-is that
might or might not have been referred to, then we have
_in effect_ made infinitely many claims -- but we've done so
_finitely_ This is a neat trick.
Describe a natural number.
That is to say, make statements that would be true
no matter _which_ natural number is referred to.
A natural number has a unique successor.
Different numbers have different successors.
A natural number does not have the natural number zero
as a successor.
We have just described infinitely many natural numbers.
And there are no natural numbers for which those are
false. If we can draw conclusions from these, they will
also be true of all the infinitely many natural numbers.
Because of variables (indefinite references).
OK, but omega does not fit this description. Especially if you believe that
w =/= 1 , w =/= 2, w =/= 3, etc.
then omega is in an inverted form of your own description of a 'variable'. So omega is arguably of a type that has not been carefully defined.
This ambiguity is much like our radix 10 counting system which develops the ability to construct large numbers. In hindsight every radix system is radix 10. It is totally meaningless to declare or construct values this way. The aleph maintains this obtuse character even in finite systems:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A
The A in the above set will never be used to generate a value. We should not universally characterize this A, for it is only characterized by its predecessors. It is only by stating those predecessors that this A takes any meaning. In this regard the symbols prior to this A are a form of cryptography; they are an informational source and I might as well have used any symbols there; it's just that these are symbols we've agreed upon already; something known as the Arabic numerals. How strange that a culture which can claim to be the ultimate scientific source in modernity has gone awry in militant religiosity. Come on in here Muslims and credit yourselves; your Christian elders have your back right? And their Jewish masters too. Well, not this is getting racy...
Post by Jim Burns
What we have NOT done is name them all or calculate them
all or do anything to them all. And we don't NEED to do
that when we reason in this way.
Post by Timothy Golden
Omega lacks any constraint and so is undefined.
We can describe omega as coming after each finite ordinal
and coming before any other ordinal which comes after each
finite ordinal.
Describing omega this way involves an indefinite reference
to finite ordinals and ordinals coming after omega.
So, from a certain point of view, I have just made
infinitely many assertions about omega.
I think that making infinitely many assertions is supposed
to be controversial, what with me being finite and all.
"How can a finite being perform infinitely many tasks?"
Didn't you see what I just did? That's how.
Post by Timothy Golden
To define a system via an undefined value is far more
Suppose w = 1000.
If no validity can be found within the construction under
the stricture then likewise no validity will be found
when the constraint is lifted.
Uhm? No.
It doesn't work like that at all.
If I say "Suppose 2 + 2 = 5" no validity will be found
under that stricture in regular old arithmetic.
Because 2 + 2 is not 5.
That doesn't mean that regular old arithmetic is
inconsistent. It just doesn't work like that.
Did you mean to write "invalidity" for "validity" there?
Gus Gassmann
2020-11-16 17:34:24 UTC
Permalink
On Monday, 16 November 2020 at 12:59:13 UTC-4, ***@gmail.com wrote:
[...]
Post by Timothy Golden
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A
The A in the above set will never be used to generate a value.
Less'n, of course, you are working with hexadecimal numbers...
Jim Burns
2020-11-16 20:14:11 UTC
Permalink
Post by Gus Gassmann
On Monday, 16 November 2020 at 12:59:13 UTC-4,
Post by Timothy Golden
This ambiguity is much like our radix 10 counting system
which develops the ability to construct large numbers.
In hindsight every radix system is radix 10. It is
totally meaningless to declare or construct values this
way. The aleph maintains this obtuse character even in
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A
The A in the above set will never be used to generate
a value.
Less'n, of course, you are working with hexadecimal
numbers...
And don't forget the duoquadragesimals!
Timothy Golden
2020-11-16 20:58:22 UTC
Permalink
Post by Jim Burns
Post by Gus Gassmann
On Monday, 16 November 2020 at 12:59:13 UTC-4,
Post by Timothy Golden
This ambiguity is much like our radix 10 counting system
which develops the ability to construct large numbers.
In hindsight every radix system is radix 10. It is
totally meaningless to declare or construct values this
way. The aleph maintains this obtuse character even in
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A
The A in the above set will never be used to generate
a value.
Less'n, of course, you are working with hexadecimal
numbers...
And don't forget the duoquadragesimals!
The freedom to construct new numerical types exists beneath existing mathematics. AA claims to be a basis for all, but it is a lie.

Jim, I falsify the support for w that you seem to be for (though I do suppose you do see through your own argument), but really then shouldn't we be taking this situation more seriously? As we discuss large values and exist in a system of education that begets them with an ambiguity... How can this not set off your nose? Yes, I believe that their means of construction is valid, but the existence of aleph as a necessity has gone untaught.
The answer really is w = 10, but you must understand that there is a cryptographic interpretation here that goes ignored. The question of whether we ought to utter a radix argument twice or thrice or however many times we like or find useful is called the freedom to construct. I would like to call this the axiom of choice too, but that phrase has come to mean something quite different. This is how modern mathematics is: it is a narrow interpretation that encompasses all possibilities in layers of terminology so that specialties are built... oh dear the fiscal flareup is coming... no another time.

Locking numbers become necessary in order to obtain physical correspondence. Path values are in essence locked numbers; connected together one to the next. Should we evaluate the path at a given position we will have that raw value and that is fine, but beneath this level of geometrical interpretation physical systems betray far more and not at all a singular specific value. Should you attempt to adjust a path for instance then these locked numbers are in essence holding their lock. It may be true that they will cleave or twist or some such; this is an open area of research. Anyway resting at the path level summation is implied and a series notation suffices. It is a simplest form. We'd like to work up to solids from it. It's like legitimating string theory without really needing any of their complicated arguments. This theory would not stoop below quantum physics as the stringers have done. It would be forced to reconstruct atomic theory and molecular theory through and through; no small task.Can I do that? No, I cannot, but I can still work in this place and it is its own basis. To choose a unique basis is to choose a unique math. Academia can be portrayed as a farce through this lens. There is where you wind up I suppose.
Timothy Golden
2020-11-16 18:04:15 UTC
Permalink
Post by Timothy Golden
Post by Jim Burns
Post by Timothy Golden
Mathematicians are so fond of their variables.
I can see why they would be so fond of variables.
Variables are one way for finite beings to discuss,
describe, argue about infinitely many objects, tasks,
whatever. This is a neat trick.
In non-mathematics, there is nothing controversial about
referring to _one of_ some multiple of individuals.
"A stitch in time saves nine" doesn't refer to some
particular stitch. It's _a stitch_
In my opinion, in mathematics, a variable is the formalization
of an _indefinite reference_ to one of whatever-it-is that
we are discussing, describing, arguing about. Making a
claim using an indefinite reference can be understood as
making that claim for each individual that might or might not
have been referred to.
If there are infinitely many individual whatever-it-is that
might or might not have been referred to, then we have
_in effect_ made infinitely many claims -- but we've done so
_finitely_ This is a neat trick.
Describe a natural number.
That is to say, make statements that would be true
no matter _which_ natural number is referred to.
A natural number has a unique successor.
Different numbers have different successors.
A natural number does not have the natural number zero
as a successor.
We have just described infinitely many natural numbers.
And there are no natural numbers for which those are
false. If we can draw conclusions from these, they will
also be true of all the infinitely many natural numbers.
Because of variables (indefinite references).
OK, but omega does not fit this description. Especially if you believe that
w =/= 1 , w =/= 2, w =/= 3, etc.
then omega is in an inverted form of your own description of a 'variable'. So omega is arguably of a type that has not been carefully defined.
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A
The A in the above set will never be used to generate a value. We should not universally characterize this A, for it is only characterized by its predecessors. It is only by stating those predecessors that this A takes any meaning. In this regard the symbols prior to this A are a form of cryptography; they are an informational source and I might as well have used any symbols there; it's just that these are symbols we've agreed upon already; something known as the Arabic numerals. How strange that a culture which can claim to be the ultimate scientific source in modernity has gone awry in militant religiosity. Come on in here Muslims and credit yourselves; your Christian elders have your back right? And their Jewish masters too. Well, not this is getting racy...
I have a bit more to contribute here before I get censored as a racist bigot. The Weyl's and the Einstein's aren't so proud of my expression here possibly, but then the underdog can take his place; the atheist in deep Islam deeply covets his secret disbelief; the lone wolf effect forming the ultimate atheist. I'd like to know how they think. I am second generation open atheist and even find it abysmal just how religious mathematics has become.

Yes I have more. I suspect I have locking numbers. Already the real numbers allot open and closed ends to their segments. Is it just a game or can we put a loop there? If we allow a loop there then we have locking numbers. We have physical operators. Further a physics of locking numbers will carry incredible correspondence with molecular behavior. Already I state that the ray is more fundamental than the line and that this simple mistake has caused a four century blunder; an overlooked place to work in general from. That the ray is zero dimensional; Descartes suggested that it was a trick of the eye. I'll quote him here sometime, but not right now.
Post by Timothy Golden
Post by Jim Burns
What we have NOT done is name them all or calculate them
all or do anything to them all. And we don't NEED to do
that when we reason in this way.
Post by Timothy Golden
Omega lacks any constraint and so is undefined.
We can describe omega as coming after each finite ordinal
and coming before any other ordinal which comes after each
finite ordinal.
Describing omega this way involves an indefinite reference
to finite ordinals and ordinals coming after omega.
So, from a certain point of view, I have just made
infinitely many assertions about omega.
I think that making infinitely many assertions is supposed
to be controversial, what with me being finite and all.
"How can a finite being perform infinitely many tasks?"
Didn't you see what I just did? That's how.
Post by Timothy Golden
To define a system via an undefined value is far more
Suppose w = 1000.
If no validity can be found within the construction under
the stricture then likewise no validity will be found
when the constraint is lifted.
Uhm? No.
It doesn't work like that at all.
If I say "Suppose 2 + 2 = 5" no validity will be found
under that stricture in regular old arithmetic.
Because 2 + 2 is not 5.
That doesn't mean that regular old arithmetic is
inconsistent. It just doesn't work like that.
Did you mean to write "invalidity" for "validity" there?
Jim Burns
2020-11-16 21:01:09 UTC
Permalink
On Saturday, November 14, 2020 at 1:26:49 PM UTC-5,
Post by Jim Burns
If there are infinitely many individual whatever-it-is
that might or might not have been referred to, then we
have _in effect_ made infinitely many claims -- but we've
done so _finitely_ This is a neat trick.
Describe a natural number.
That is to say, make statements that would be true
no matter _which_ natural number is referred to.
A natural number has a unique successor.
Different numbers have different successors.
A natural number does not have the natural number zero
as a successor.
We have just described infinitely many natural numbers.
And there are no natural numbers for which those are
false. If we can draw conclusions from these, they will
also be true of all the infinitely many natural numbers.
Because of variables (indefinite references).
OK, but omega does not fit this description.
Especially if you believe that
w =/= 1 , w =/= 2, w =/= 3, etc.
then omega is in an inverted form of your own description
of a 'variable'. So omega is arguably of a type that has
not been carefully defined.
omega is a constant, it is not a variable.

Also, omega is not a natural number, and my description
is of a natural number. So, it would be all right if
omega did not fit that description. Many non-natural-number
things do not fit that description.

However, it's not true that omega does not fit that
description. I hope this is clear: the above describes
natural numbers and describes other things as well, including
omega. For comparison, a square has four corners. Anything
which does not have four corners is not a square. However,
there are also plane figures with four corners which are not
squares.

omega is the first infinite ordinal. That is to say...

omega is an ordinal.
What's typically meant by being an ordinal is that
every set of ordinals either contains a first element
or it's empty.

omega is an infinite ordinal.
There are infinitely many ordinals that come before omega.
As a consequence, any ordinal alpha that comes after omega
also has infinitely many ordinals before alpha, and
alpha is infinite.

omega is the first infinite ordinal.
Any ordinal k that comes before omega is finite.
The set { 0,1,...,k } is finite, for k < omega.

"The first infinite ordinal" is actually quite a lot of
description of omega, although it needs some unpacking.
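A compact restatement of that description in the usual notation (my summary of the three points above, not a quotation):

\omega \;=\; \min\{\alpha \in \mathrm{Ord} \;:\; n < \alpha \text{ for every } n \in \mathbb{N}\},
\qquad
k < \omega \;\Longrightarrow\; \{0, 1, \ldots, k\} \text{ is finite.}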
Timothy Golden
2020-11-18 00:34:22 UTC
Permalink
Post by Jim Burns
On Saturday, November 14, 2020 at 1:26:49 PM UTC-5,
Post by Jim Burns
If there are infinitely many individual whatever-it-is
that might or might not have been referred to, then we
have _in effect_ made infinitely many claims -- but we've
done so _finitely_ This is a neat trick.
Describe a natural number.
That is to say, make statements that would be true
no matter _which_ natural number is referred to.
A natural number has a unique successor.
Different numbers have different successors.
A natural number does not have the natural number zero
as a successor.
We have just described infinitely many natural numbers.
And there are no natural numbers for which those are
false. If we can draw conclusions from these, they will
also be true of all the infinitely many natural numbers.
Because of variables (indefinite references).
OK, but omega does not fit this description.
Especially if you believe that
w =/= 1 , w =/= 2, w =/= 3, etc.
then omega is in an inverted form of your own description
of a 'variable'. So omega is arguably of a type that has
not been carefully defined.
omega is a constant, it is not a variable.
Also, omega is not a natural number, and my description
is of a natural number. So, it would be all right if
omega did not fit that description. Many non-natural-number
things do not fit that description.
However, it's not true that omega does not fit that
description. I hope this is clear: the above describes
natural numbers and desribes other things as well, including
omega. For comparison, a square has four corners. Anything
which does not have four corners is not a square. However,
there are also plane figures with four corners which are not
squares.
omega is the first infinite ordinal. That is to say...
omega is an ordinal.
What's typically mean by being an ordinal is that
every set of ordinals either contains a first element
or it's empty.
omega is an infinite ordinal.
There are infinitely many ordinals that come before omega.
As a consequence, any ordinal alpha that comes after omega
also has infinitely many ordinals before alpha, and
alpha is infinite.
omega is the first infinite ordinal.
Any ordinal k that comes before omega is finite.
The set { 0,1,...,k } is finite, for k < omega.
"The first infinite ordinal" is actually quite a lot of
description of omega, although it needs some unpacking.
I don't feel as though I have a dog in this fight really, but I do have a sidelong gaze at this thing. Your term 'ordinal' is formalized by your language to not be a natural number. I do find in these sorts of categorization another interesting option: to claim that two is a subset of three and so forth is an admission that three contains two. The successor / predecessor language portrays something different. This distinction is not present in the raw value, but its properties on the continuum are easily understood. Three as a segment that goes from zero to three is a different construction than the point three point zero. Thus it is that two types of geometry can be portrayed by the same raw value. Other oddities have already been admitted such as a segment that is open at three. I never saw this with so much character as I see it now. That character promises more such characters.

So now that you've dodged away from your extensive diatribe on what a variable is should we now delve into what a constant is? Are you able to go into the same level of exactness? It seems to me that so far you have 'not a number' within your own requirements of the constant omega. I do see what you mean, and I think I can agree that omega is not a number. NAN10 could do the trick; radix error: Hardware revision required.
NAN10: radix error; Hardware revision required. Unhandled exception at 0xfb3578ac90d18.
System going down now...

As we entertain numerical constructions I would remind you of the freedoms that solid state physics takes in its own characters:
Phonons, Holes, Plasmons, Polaritons, Polarons, Excitons, Cooper pairs, Magnons, Vacancies, Dislocations, Donors, Acceptors
This is the list of Basic Model Concepts as laid out in Kittel's 5th Introduction to Solid State Physics.
No discussion of the abysmally slow rate of heat conduction is conducted, convected, nor radiated to the end user.

Our ordinary numbers are not really doing physics very well. They are lacking physical correspondence. We need a more interdimensional understanding. This w almost goes there. But if you cannot specify its type then I don't think it is fair to call it a constant as if the familiar constant were valid; say w=3. No. By your own definition it is not a number. I'll stand by w=10.
Jim Burns
2020-11-18 18:27:20 UTC
Permalink
On Monday, November 16, 2020 at 4:01:25 PM UTC-5,
Post by Jim Burns
Also, omega is not a natural number, and my description
is of a natural number. So, it would be all right if
omega did not fit that description. Many non-natural-number
things do not fit that description.
However, it's not true that omega does not fit that
description. I hope this is clear: the above describes
natural numbers and desribes other things as well, including
omega. For comparison, a square has four corners. Anything
which does not have four corners is not a square. However,
there are also plane figures with four corners which are not
squares.
Your term 'ordinal' is formalized by your language to
not be a natural number.
It seems to me that so far you have 'not a number' within
your own requirements of the constant omega. I do see what
you mean, and I think I can agree that omega is not a number.
NAN10 could do the trick;
radix error: Hardware revision required.
NAN10: radix error; Hardware revision required.
Unhandled exception at 0xfb3578ac90d18.
System going down now...
omega is an ordinal. Natural numbers are ordinals.
However, omega is the first infinite ordinal,
and natural numbers are finite ordinals.
They are alike in some ways, different in others.

omega is not a natural number. But it's not an error.

We describe natural numbers and omega --
we make statements which we know are true of them because
we know what natural numbers and omega are --
and we reason from those statements to other statements
which we know are also true of them because of how we only
use truth-preserving inferences to do this.

This is one way in which we can reason about infinity,
about infinitely many things, about infinite things.
(I can't say for sure that other ways don't exist.
I can say for sure this way exists.)

You have a different way of working with things that
breaks down if you try to use it with infinitely many things.

----
So now that you've dodged away from your extensive
diatribe on what a variable is should we now delve into
what a constant is? Are you able to go into the same level
of exactness?
In my second-most-recent post to you, I talked about
variables and why mathematicians are so fond of them.
A variable is the formalization of an indefinite reference.

In the same vein, a constant is a _definite_ reference.
In order to justify my claim that omega is a constant,
I should be able to prove that (i) at least one first
infinite ordinal exists, and (ii) no more than one first
infinite ordinal exists. I'll spare you the details,
but I can do this.

----
This seems to be where the incompatibility with infinity
enters your way of doing things: your imagined computers
only deal with constants, and constants are NOT indefinite
references to one of infinitely many. Even what programmers
call variables are not variables in the sense of which
mathematicians are fond.

There are programs which can verify proofs which use these
mathematicians' variables. It seems to me that these programs
avoid any hint of infinity by only dealing in finite
manipulations of conceivably-meaningless finite strings.
Timothy Golden
2020-11-19 23:17:30 UTC
Permalink
Post by Jim Burns
On Monday, November 16, 2020 at 4:01:25 PM UTC-5,
Post by Jim Burns
Also, omega is not a natural number, and my description
is of a natural number. So, it would be all right if
omega did not fit that description. Many non-natural-number
things do not fit that description.
However, it's not true that omega does not fit that
description. I hope this is clear: the above describes
natural numbers and desribes other things as well, including
omega. For comparison, a square has four corners. Anything
which does not have four corners is not a square. However,
there are also plane figures with four corners which are not
squares.
Your term 'ordinal' is formalized by your language to
not be a natural number.
It seems to me that so far you have 'not a number' within
your own requirements of the constant omega. I do see what
you mean, and I think I can agree that omega is not a number.
NAN10 could do the trick;
radix error: Hardware revision required.
NAN10: radix error; Hardware revision required.
Unhandled exception at 0xfb3578ac90d18.
System going down now...
omega is an ordinal. Natural numbers are ordinals.
However, omega is the first infinite ordinal,
and natural numbers are finite ordinals.
They are alike in some ways, different in others.
omega is not a natural number. But it's not an error.
We describe natural numbers and omega --
we make statements which we know are true of them because
we know what natural numbers and omega are --
and we reason from those statements to other statements
which we know are also true of them because of how we only
use truth-preserving inferences to do this.
This is one way in which we can reason about infinity,
about infinitely many things, about infinite things.
(I can't say for sure that other ways don't exist.
I can say for sure this way exists.)
You have a different way of working with things that
breaks down if you try to use it with infinitely many things.
----
So now that you've dodged away from your extensive
diatribe on what a variable is should we now delve into
what a constant is? Are you able to go into the same level
of exactness?
In my second-most-recent post to you, I talked about
variables and why mathematicians are so fond of them.
A variable is the formalization of an indefinite reference.
In the same vein, a constant is a _definite_ reference.
In order to justify my claim that omega is a constant,
I should be able to prove that (i) at least one first
infinite ordinal exists, and (ii) no more than one first
infinite ordinal exists. I'll spare you the details,
but I can do this.
----
This seems to be where the incompatibility with infinity
enters your way of doing things: your imagined computers
only deal with constants, and constants are NOT indefinite
references to one of infinitely many. Even what programmers
call variables are not variables in the sense of which
mathematicians are fond.
There are programs which can verify proofs which use these
mathematicians' variables. It seems to me that these programs
avoid any hint of infinity by only dealing in finite
manipulations of conceivably-meaningless finite strings.
Yes I am fond of the computer style of representation and am wary of the mathematicians who never had to abide by a compiler. I have little doubt that the details you spare me of have some wishy-washy content. Already the terminology of ordinal as if it is a pure concept is trouble. Well, you are free to construct and I am free to construct, and I do accept that you are a higher grade mathematician than I am. Infinity is not really my thing, but I have been getting into number theory a bit. Isn't the aleph the omega here? I do think you'll find that the implementation of a radix style implementation on this problem is productive but it exposes a problem: the means in use to develop these large numbers already contain the radix construction. I'm sure in your terms you deny this fact, yet good luck to you naming a large number that you so fondly work with. To what degree can we reradix? The sweet part of this is that there is a need to go back to the original useful number theory and find an aleph down there that goes ignored in standard curricular activities. I suppose the ultimate form is to pose that all sets carry an aleph. Indeed the notion of the set as a container could demand such a sealing end. Aleph is not however in the contents of the set: it is part of the container itself. Yet it will be needed to construct large numbers cleanly. Does the ambiguity down low cause the ambiguity up high? Are they the same ambiguity? All number systems are radix 10. The usage of this value as if it is fundamental is a fallacy. We should not construct large numbers from large numbers. We construct from a finite set
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A }
The A shall never be used to construct a value... just like omega. Under the interpretation that I just laid out we may as well just tie the A up with the closing brace, except its notion is needed when we suppose a value like
135 = 1 AA + 3 A + 5
Keeping things simple this then bridges us directly into polynomials... and again the puzzle of reusing the radix concept and to what degree that can be valid without stating more formally the fact that it already is in use... the uniqueness diminishes somewhat. I don't have any final answer on this but these connections I do see as present.
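(A minimal sketch of the positional construction from a finite digit set, assuming radix ten; to_digits and from_digits are names invented for this sketch, not anything from the thread.)

    # Build a "large number" from a finite set of digits, and back again.
    def to_digits(n, radix=10):
        digits = []
        while True:
            n, d = divmod(n, radix)
            digits.append(d)
            if n == 0:
                break
        return digits[::-1]                    # most significant digit first

    def from_digits(digits, radix=10):
        value = 0
        for d in digits:
            value = value * radix + d
        return value

    print(to_digits(135))                      # -> [1, 3, 5], i.e. 1*10^2 + 3*10 + 5
    print(from_digits([1, 3, 5]))              # -> 135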

Under dimensional interpretation to what degree our ordinary large numbers are already multidimensional... you see there is room here for interpretation. It's casting open quite a lot of puzzles through the same lens. I don't believe that the exponential form style of number construction can really hold up:
1 x 10 ^ 2 + 3 x 10 ^ 1 + 5 x 10 ^ 0
again because we are building a large number from a large number. It may have to be built up to from a cascade effect that children learn their large values by. If you aren't convinced by that one then try
2 x 10 ^ 32 + 5 x 10 ^ 12 + 7
The thing is we know that these numbers work. The ambiguity clears up more as we settle into modulo counting as our basis of theory. To what degree does Aleph actually mean 'return'? Now I am arguably out of touch with the careful use of these terms as you use them. What I do know is that the modulo form carries deep consequences that are not fully leveraged in modernity. I know this because I have discovered polysign numbers, whose modulo sign behaviors are supported by the real value as P2: the two signed numbers. That the three-signed numbers construct the complex numbers within their own format from exactly the same rules that begat P2; that
P1 P2 P3 P4 P5 P6 ...
all exist as siblings; a full family of number systems; none any more fundamental than any other; general dimensional geometry demanded by their nature:
- 1 + 1 = 0 : P2
- 1 + 1 * 1 = 0 : P3
- 1 + 1 * 1 # 1 = 0 : P4
...
- 1 = 0 : P1

P1 really belongs at the top but its zero dimensional unidirectional algebra goes neglected in modernity. So I'm being nice putting P2 first; but it is a lie. If your naturals fit P1 then they are actually zero dimensional in geometric rendering. Integers are two-signed and take the one dimensional stage. The ray is more fundamental than the line. The P1 ray is a trick of the eye according to Descartes. The rays of light that allow us to view that point on a piece of paper... those two values you can assign to that point... zero dimensional? two dimensional point? What's going on here? Not another ambiguity! We have a fraud that is perpetrated on the human race known as mathematics. Is it any wonder that the physicist lands in particle / wave duality with such a flim-flam basis? Which is it Jim? Is that point zero dimensional or two dimensional? Three dimensional? Four dimensional? With time unidirectional the zero dimensional interpretation of time has arrived formalized within polysign. It's all about modulo behaviors. The form
s x
where s is sign and x is magnitude is mostly what I work in. Pushing this onto the naturals is not really my cup of tea, but there is no problem in doing so. You could declare your work to be within the x of sx, and I would be pushing the P1 form which does carry sign. Of course to get to the P2 form and so on we need sign. To get to the two-signed integer legitimates the P1 form where x is taken as discrete, and yet this P1 form is dimensionally collapsed; it is zero dimensional geometrically speaking. You can sort of see it in its unidirectional nature; there is no predecessor; just the successor; and so it is greatly diminished. At P3 (Maybe I should be calling this discrete form N3) you have three directions to step in. Step once in each direction
- 1 + 1 * 1
and you'll wind up back where you started. It's a simplex coordinate system. These N3 map to the plane; N4 to 3D traditional space. Oh, did you abide by the six unique directions of physical space? I only need four. N3 are already complex valued even though I haven't described the product to you; I'm sure you can do that. Nn are algebraically well behaved. Pn too of course.
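(A rough sketch of the P3 addition just described, under the stated cancellation - 1 + 1 * 1 = 0: a value is read here as three non-negative components, one per sign, and one unit in every direction cancels. The tuple representation and function names are inventions of this sketch, not Tim's notation.)

    def p3_reduce(v):
        m = min(v)                             # cancel one unit in each direction
        return tuple(x - m for x in v)

    def p3_add(a, b):
        return p3_reduce(tuple(x + y for x, y in zip(a, b)))

    # step once in each of the three directions and you are back at zero
    print(p3_add((1, 0, 0), p3_add((0, 1, 0), (0, 0, 1))))   # -> (0, 0, 0)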
Jim Burns
2020-11-21 17:35:00 UTC
Permalink
On Wednesday, November 18, 2020 at 1:27:33 PM UTC-5,
Post by Jim Burns
This seems to be where the incompatibility with infinity
enters your way of doing things: your imagined computers
only deal with constants, and constants are NOT indefinite
references to one of infinitely many. Even what programmers
call variables are not variables in the sense of which
mathematicians are fond.
There are programs which can verify proofs which use these
mathematicians' variables. It seems to me that these programs
avoid any hint of infinity by only dealing in finite
manipulations of conceivably-meaningless finite strings.
Yes I am fond of the computer style of representation and
am wary of the mathematicians who never had to abide by
a compiler.
There are programs which can verify proofs which use these
mathematicians' variables. This isn't a joke. For example:
https://en.wikipedia.org/wiki/Metamath
You're welcome.

At some point, if you continue your studies, you will
realize mathematicians had to "abide by a compiler"
long before there were any computers. The ideas that we
make physical in our computers are mathematical ideas.
I have little doubt that the details you spare me of have
some wishy-washy content.
I see that you don't have to work very hard to convince
yourself.

What we mean by "ordinal" is
Every non-empty class C of ordinals contains a first
ordinal, an ordinal c such that, for all b in C, c =< b

Assume that some infinite ordinal exists.
Let C be the class of infinite ordinals.
C is non-empty.

The non-empty class C of infinite ordinals contains
_at least one_ ordinal c such that, for all b in C, c =< b

Suppose that, for both c and c' in C,
for all b in C, c =< b and c'=< b
Then c =< c' and c' =< c.
Thus c = c'.

If any infinite ordinal exists,
the first infinite ordinal omega exists and is unique.

Enjoy.
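(A small illustrative sketch, by analogy only: a finite set of numbers stands in for the class of infinite ordinals, and min plays the role of the well-ordering. The names are invented for the sketch; nothing here constructs omega itself.)

    # Every non-empty finite set of numbers has a first element, and the
    # uniqueness argument mirrors the one above: if c =< b and c' =< b
    # for every b in C, then c =< c' and c' =< c, hence c = c'.
    def first(C):
        assert len(C) > 0                      # non-empty class required
        c = min(C)                             # existence of a first element
        least = {x for x in C if all(x <= b for b in C)}
        assert least == {c}                    # ...and it is the only one
        return c

    print(first({3, 1, 4, 1, 5}))              # -> 1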
Already the terminology of ordinal as if it is a pure
concept is trouble.
Maybe "as if it is a pure concept" means something to you.

I've described ordinals.
Every non-empty class of them contains a first.
There are consequences to that description being
true of them.

Perhaps the trouble you refer to is not being able
to assign a decimal numeral to omega.
Well, you can't always get what you want.
But maybe, in this case, you get what you need,
omega without a decimal numeral.
Well, you are free to construct and I am free to construct,
We are both constrained by our desire to be understood,
so we only use well-known words in well-known senses.

Suppose, for example, someone decided that omega = 10
instead of that it was the first infinite ordinal.
The punishment for this would be that what they wrote
would be turned into gibberish. No organization or person
exists to administer this punishment, it's how it is.
And there is no one to appeal to for mercy, either.
and I do accept that you are a higher grade mathematician
than I am.
That sounds flattering.
However, whatever my own grade is, as a mathematician,
what we have before us is not higher-grade mathematics.

These are all pretty much introductory topics. I may have
added my personal touch here or there, in an effort to
explain these topics, but a very large percentage of folks
see them and master them, even without my help, and move on.

The exceptions (most of the world's supply of exceptions post
to sci.logic) seem to be people who, for their own reasons,
are determined to not understand them.

If you want to understand, I'm fairly confident that
you will be able to.
Sergio
2020-11-21 17:47:15 UTC
Permalink
Post by Jim Burns
On Wednesday, November 18, 2020 at 1:27:33 PM UTC-5,
Post by Jim Burns
This seems to be where the incompatibility with infinity
enters your way of doing things: your imagined computers
only deal with constants, and constants are NOT indefinite
references to one of infinitely many. Even what programmers
call variables are not variables in the sense of which
mathematicians are fond.
There are programs which can verify proofs which use these
mathematicians' variables. It seems to me that these programs
avoid any hint of infinity by only dealing in finite
manipulations of conceivably-meaningless finite strings.
Yes I am fond of the computer style of representation and
am wary of the mathematicians who never had to abide by
a compiler.
There are programs which can verify proofs which use these
https://en.wikipedia.org/wiki/Metamath
You're welcome.
At some point, if you continue your studies, you will
realize mathematicians had to "abide by a compiler"
long before there were any computers.
?? there were no compilers.

Math has formal proofs long before that.


OR if you mean we used "math" to design compilers and computers, yes we
did, computers are all math. Google " Karnaugh Maps " then " Boolean Logic "
Post by Jim Burns
The ideas that we
make physical in our computers are mathematical ideas.
I have little doubt that the details you spare me of have
some wishy-washy content.
I see that you don't have to work vary hard to convince
yourself.
yep WM ideas are flaky
Post by Jim Burns
What we mean by "ordinal" is
Every non-empty class C of ordinals contains a first
ordinal, an ordinal c such that, for all b in C, c =< b
Assume that some infinite ordinal exists.
Let C be the class of infinite ordinals.
C is non-empty.
The non-empty class C of infinite ordinals contains
_at least one_ ordinal c such that, for all b in C, c =< b
Suppose that, for both c and c' in C,
for all b in C, c =< b and c'=< b
Then c =< c' and c' =< c.
Thus c = c'.
If any infinite ordinal exists,
the first infinite ordinal omega exists and is unique.
Enjoy.
Already the terminology of ordinal as if it is a pure
concept is trouble.
Maybe "as if it is a pure concept" means something to you.
I've described ordinals.
Every non-empty class of them contains a first.
There are consequences to that description being
true of them.
Perhaps the trouble you refer to is not being able
to assign a decimal numeral to omega.
Well, you can't always get what you want.
But maybe, in this case, you get what you need,
omega without a decimal numeral.
Well, you are free to construct and I am free to construct,
We are both constrained by our desire to be understood,
so we only use well-known words in well-known senses.
Suppose, for example, someone decided that omega = 10
instead of that it was the first infinite ordinal.
would be turned into gibberish. No organization or person
exists to administer this punishment, it's how it is.
And there is no one to appeal to for mercy, either.
and I do accept that you are a higher grade mathematician
than I am.
That sounds flattering.
However, whatever my own grade is, as a mathematician,
what we have before us is not higher-grade mathematics.
These are all pretty much introductory topics. I may have
added my personal touch here or there, in an effort to
explain these topics, but a very large percentage of folks
see them and master them, even without my help, and move on.
The exceptions (most of the world's supply of exceptions post
to sci.logic) seem to be people who, for their own reasons,
are determined to not understand them.
If you want to understand, I'm fairly confident that
you will be able to.
Timothy Golden
2020-11-22 16:34:17 UTC
Permalink
Post by Jim Burns
On Wednesday, November 18, 2020 at 1:27:33 PM UTC-5,
Post by Jim Burns
This seems to be where the incompatibility with infinity
enters your way of doing things: your imagined computers
only deal with constants, and constants are NOT indefinite
references to one of infinitely many. Even what programmers
call variables are not variables in the sense of which
mathematicians are fond.
There are programs which can verify proofs which use these
mathematicians' variables. It seems to me that these programs
avoid any hint of infinity by only dealing in finite
manipulations of conceivably-meaningless finite strings.
Yes I am fond of the computer style of representation and
am wary of the mathematicians who never had to abide by
a compiler.
There are programs which can verify proofs which use these
https://en.wikipedia.org/wiki/Metamath
You're welcome.
At some point, if you continue your studies, you will
realize mathematicians had to "abide by a compiler"
long before there were any computers. The ideas that we
make physical in our computers are mathematical ideas.
I have little doubt that the details you spare me of have
some wishy-washy content.
I see that you don't have to work vary hard to convince
yourself.
What we mean by "ordinal" is
Every non-empty class C of ordinals contains a first
ordinal, an ordinal c such that, for all b in C, c =< b
Assume that some infinite ordinal exists.
Let C be the class of infinite ordinals.
C is non-empty.
The non-empty class C of infinite ordinals contains
_at least one_ ordinal c such that, for all b in C, c =< b
Suppose that, for both c and c' in C,
for all b in C, c =< b and c'=< b
Then c =< c' and c' =< c.
Thus c = c'.
If any infinite ordinal exists,
the first infinite ordinal omega exists and is unique.
Enjoy.
Already the terminology of ordinal as if it is a pure
concept is trouble.
Maybe "as if it is a pure concept" means something to you.
I've described ordinals.
Every non-empty class of them contains a first.
There are consequences to that description being
true of them.
Perhaps the trouble you refer to is not being able
to assign a decimal numeral to omega.
Well, you can't always get what you want.
But maybe, in this case, you get what you need,
omega without a decimal numeral.
Well, you are free to construct and I am free to construct,
We are both constrained by our desire to be understood,
so we only use well-known words in well-known senses.
Suppose, for example, someone decided that omega = 10
instead of that it was the first infinite ordinal.
would be turned into gibberish. No organization or person
exists to administer this punishment, it's how it is.
And there is no one to appeal to for mercy, either.
and I do accept that you are a higher grade mathematician
than I am.
That sounds flattering.
However, whatever my own grade is, as a mathematician,
what we have before us is not higher-grade mathematics.
These are all pretty much introductory topics. I may have
added my personal touch here or there, in an effort to
explain these topics, but a very large percentage of folks
see them and master them, even without my help, and move on.
The exceptions (most of the world's supply of exceptions post
to sci.logic) seem to be people who, for their own reasons,
are determined to not understand them.
If you want to understand, I'm fairly confident that
you will be able to.
I think all that is needed is a radix concept atop your ordinal, which does appear to be very clean. Then you'll have w=10. If this notation is acceptable for a finite ordinal:
( a, b, c, d, e ) = C1
( a, b, c, d, e, f ) = C2
we cannot claim that f is the w of C1. Each ordinal as stated is free standing and demands unique symbolic notation. There is no equality nor comparison possible between the a of C1 and the a of C2 right? Particularly if I were to insert a handy null into C2
( 0, a, b, c, d, e, f ) = C3
then we'd like to think that we could do some things between C2 and C3, but every time you go to do something you have to introduce another unique set description because these things are so primitive that they deny ordinary numerical semantics. They are number-like, and they are enumerable in their finite form, and let's face it, the ones in use in modernity go like:
( 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 ) = Cten
but you see in another galaxy they are probably using
( 1, 2, 3, 4, 5, 6, 0 ) = Cten
and of course this Cten form carries a return zero at the end which is the place of the w, but this value forms a breakpoint in the progression. We are engaged in a progression and to build atop the Ordinal seems fit enough. We can place a zero at the bottom as well and demand the form
0, a, b, c, d, e, f, 0
and we see that the zeros are essentially the same semantically as braces; they are just another form and a subtly differing meaning. Now we can encode our higher systems from a finite system, and you have no more need of your ordinal fantasy sir. Dismissed. All higher ordinals are representable by lower ordinal systems.

---------------------------------------------------------
| Please note this is not finalized. | (Sorry for the sucky google font)
---------------------------------------------------------

Proof: a high ordinal Cn demands a low ordinal Cten a<b<c<d<e<f<0 so we take a copy of Cn and begin overwriting
Cf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
when we land on Cn we mark this lowest ordinal in a register. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next ordinal appending the detail in the register. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next lower ordinal c. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCfCfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next lower ordinal d. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCfCfCfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next lower ordinal e. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCfCfCfCfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next lower ordinal f. We now copy out the zeros which have been marked in our 'false' non-ordinal intermediary and begin marking them
CfCfCfCfCfCfCf = a,b,c,d,e,f,0,a,b,c,d,e,f,0, ...
and when we land on the last zero we mark this next lower ordinal 0. This completes the first round of disentangling Cn onto Cten. If we are done, e.g. had we run out of zeros at stage c we simply have to halt, or generate an empty signal which will merely repeat endlessly. Thus we have a formal halting condition here. Our results under this stage c condition could read:
a a b
or any such combination of Cf ordinals. The most extreme or greatest ordinal is
0 0 0 0 0 0 0
which carries a nice semblance with infinity. The first ordinal is
a
which is appropriate enough. It certainly handles ordinals smaller than Cten. I'm really getting tired of all this viciously repetitive typing so I will stop here. I don't like the Cf^n repetition but I do believe it does have to stop for the moment where it is. The next course of this system grows out farther but we should not just keep going here. It is as if we are in a hardware implementation. You must build your next lathe with a lathe, but the first lathe has to be built from scratch; freehand style. I'm wondering where the element ten turns up...
1, 2, 3, 4, 5, 6, 7, 8, 9, 0 : a, b, c, d, e, f, 0
in Cf puts 10 at
0
so I guess I have zero problem, but this will get fixed up. I will prepend a note above to point out that this is not finalized. Still it looks as if in this style you simply append zero to your infinite ordinal, and because it breaks the progression it acts as an omega in this style of system. The zero itself is the aleph and this is my mistake above. Well, sort of; there is still a bit of disambiguation for me to do here, and in time the zero turns to 10, where the one in ten is the first element, and zero is the aleph, in this case of a non-zero originated system.

sage: help( "ordinal" )
No Python documentation found for 'ordinal'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.

sage: help( "Ordinal" )
No Python documentation found for 'Ordinal'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.

sage: help( Ordinal )
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-10-f1751ffbc554> in <module>()
----> 1 help( Ordinal )

NameError: name 'Ordinal' is not defined
sage: help( ordinal )
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-11-627559457903> in <module>()
----> 1 help( ordinal )

NameError: name 'ordinal' is not defined
sage:
Dan Christensen
2020-11-14 17:30:51 UTC
Permalink
Post by WM
Post by Dan Christensen
Post by WM
Post by Dan Christensen
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
It has no immediate predecessor.
Then there is a void as the immediate predecessor.
That would require a proof. Maybe you can try to prove the existence of such a "void" using the axioms of set theory (e.g. the ZFC axioms). You have already failed to prove the existence of your mysterious set of "dark" numbers, so I won't hold my breath, Mucke.


Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com
Me
2020-11-14 18:11:36 UTC
Permalink
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
Sergio
2020-11-20 15:23:48 UTC
Permalink
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
Gus Gassmann
2020-11-20 15:42:30 UTC
Permalink
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Sergio
2020-11-20 16:03:46 UTC
Permalink
Post by Gus Gassmann
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Ah yes! the limitations of language,

I have an excellent Book, "The Nature of Physical Theory" by P W
Bridgman, Nobel Prize winner, lecture notes from 1935. He talks about
thought, language, logic, math, applications, Relativity, Modeling, wave
mechanics. He discusses the inherent problems using language to
describe things. His use of English is superb. Probably free out there
on the internet now.
FredJeffries
2020-11-20 17:19:48 UTC
Permalink
Post by Sergio
Post by Gus Gassmann
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Ah yes! the limitations of language,
I beg to disagree. There is no 'limitation of language' involved. 'Next after' expresses the notion satisfactorily.

The problem is the poster's deliberate (mis)use of UNsatisfactory, ambiguous, and deceptive language.
Sergio
2020-11-21 01:58:39 UTC
Permalink
Post by FredJeffries
Post by Sergio
Post by Gus Gassmann
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Ah yes! the limitations of language,
I beg to disagree. There is no 'limitation of language' involved. 'Next after' expresses the notion satisfactorily.
The problem in the poster's deliberate (mis)use of UNsatisfactory, ambiguous, and deceptive language.
totally agree with you, on WM. It is fun for him, trying to convince
herds of people into believing "Dark Numbers"; bet he snickers a lot etc...

I created the ANT LIST to capture his vagueitivities, it was over 500,
BUT new version will be out soon!!
Python
2020-11-20 17:30:52 UTC
Permalink
Post by Sergio
Post by Gus Gassmann
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Ah yes! the limitations of language,
I have an excellent Book, "The Nature of Physical Theory" by P W
Bridgman, Nobel Prize winner, lecture notes from 1935. He talks about
thought, language, logic, math, applications, Relitivity, Modeling, wave
mechanics. He discusses the inherent problems using language to
describe things. His use of English is superb. probably free out there
on the internet now.
Thanks, it's there:
http://www.contrib.andrew.cmu.edu/~kk3n/80-300/bridgman1936.pdf
WM
2020-11-20 17:50:35 UTC
Permalink
Post by Python
Post by Sergio
Post by Gus Gassmann
Post by Sergio
Post by Me
Post by WM
What is the immediate predecessor of ω?
There is no "immediate predecessor of ω", you silly dumbass.
does "next to" work ?
You have to be careful. "Next to" "all integers" is ambiguous. "Next after" is much more informative.
Ah yes! the limitations of language,
I have an excellent Book, "The Nature of Physical Theory" by P W
Bridgman, Nobel Prize winner, lecture notes from 1935. He talks about
thought, language, logic, math, applications, Relitivity, Modeling, wave
mechanics. He discusses the inherent problems using language to
describe things. His use of English is superb. probably free out there
on the internet now.
http://www.contrib.andrew.cmu.edu/~kk3n/80-300/bridgman1936.pdf
Even better: "The ordinary diagonal Verfahren I believe to involve a patent confusion of the program and object aspects of the decimal fraction, which must be apparent to any who imagines himself actually carrying out the operations demanded in the proof. In fact, I find it difficult to understand how such a situation should have been capable of persisting in mathematics." [P.W. Bridgman: "A physicist's second reaction to Mengenlehre", Scripta Mathematica 2 (1934) p. 225ff] The essentials are here: https://www.hs-augsburg.de/~mueckenh/Transfinity/Transfinity/pdf

More of his splendid analysis of set theory can be found here: https://www.hs-augsburg.de/~mueckenh/Transfinity/KB/

Regards, WM
WM
2020-11-20 17:45:35 UTC
Permalink
Post by Sergio
I have an excellent Book, "The Nature of Physical Theory" by P W
Bridgman, Nobel Prize winner, lecture notes from 1935. He talks about
thought, language, logic, math, applications, Relitivity, Modeling, wave
mechanics. He discusses the inherent problems using language to
describe things. His use of English is superb. probably free out there
on the internet now.
His thoughts behind the language are superb too: "The ordinary diagonal Verfahren I believe to involve a patent confusion of the program and object aspects of the decimal fraction, which must be apparent to any who imagines himself actually carrying out the operations demanded in the proof. In fact, I find it difficult to understand how such a situation should have been capable of persisting in mathematics." [P.W. Bridgman: "A physicist's second reaction to Mengenlehre", Scripta Mathematica 2 (1934) p. 225ff]

Regards, WM
Gus Gassmann
2020-11-20 17:52:16 UTC
Permalink
Post by WM
Post by Sergio
I have an excellent Book, "The Nature of Physical Theory" by P W
Bridgman, Nobel Prize winner, lecture notes from 1935. He talks about
thought, language, logic, math, applications, Relitivity, Modeling, wave
mechanics. He discusses the inherent problems using language to
describe things. His use of English is superb. probably free out there
on the internet now.
His thoughts behind the language are superb too: "The ordinary diagonal Verfahren I believe to involve a patent confusion of the program and object aspects of the decimal fraction, which must be apparent to any who imagines himself actually carrying out the operations demanded in the proof. In fact, I find it difficult to understand how such a situation should have been capable of persisting in mathematics." [P.W. Bridgman: "A physicist's second reaction to Mengenlehre", Scripta Mathematica 2 (1934) p. 225ff]
Superb thoughts, indeed. "I believe to involve...", "... must be apparent to anyone...", "...find it difficult to understand...". Very convincing stuff, this.
Me
2020-11-20 19:20:12 UTC
Permalink
P W Bridgman, Nobel Prize winner
Yes, that's the guy who didn't understand Cantor's diagonal argument.

A crank (sort of), Nobel Prize winner or not.

"If a non-terminating decimal is to be handled or arranged in sequence like a thing it is sufficient to know how to handle and arrange a finite decimal of n digits, the number n being subject to no restriction as to magnitude. The theorem would now demand that it is impossible to set up any scheme for arranging all possible decimal fractions of n digits in a definite order, n being subject to no restriction as to magnitude. But such a theorem is obviously false, for there are 10^n possible decimals of n digits [...] What is done in the actual diagonal Verfahren when translated into this technique is this: it is shown that given a proposed array and any number n, no matter how large, it is then possible to set up a decimal the first n digits of which are different from the first n digits of any decimal to be found in the first n places of the proposed array. But this is clearly not what is required." [P.W. Bridgman: "A physicist's second reaction to Mengenlehre", Scripta Mathematica, Vol. II, 1934]

"The ordinary diagonal Verfahren I believe to involve a patent confusion of the program and object aspects of the decimal fraction, which must be apparent to any who imagines himself actually carrying out the operations demanded in the proof. In fact, I find it difficult to understand how such a situation should have been capable of persisting in mathematics. [...]"

etc. etc.
FromTheRafters
2020-11-20 15:43:11 UTC
Permalink
Post by WM
Post by Dan Christensen
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually
infinite; it has ℵo terms.
VERY good!
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
The least ordinal of the set has no predecessor. You are asking of the
transfinite, the equivalent of 'what precedes zero' in the naturals.
FredJeffries
2020-11-20 17:23:01 UTC
Permalink
Post by FromTheRafters
Post by WM
Post by Dan Christensen
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually
infinite; it has ℵo terms.
VERY good!
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
w has no immediate predecessor in N. So the 2nd term must be a number in N.
What is the immediate predecessor of ω?
The least ordinal of the set has no predecessor. You are asking of the
transfinte, the equivalent of 'what precedes zero' in the naturals.
https://en.wikipedia.org/wiki/Inaccessible_cardinal
WM
2020-11-20 17:41:56 UTC
Permalink
Post by FromTheRafters
Post by WM
What is the immediate predecessor of ω?
The least ordinal of the set has no predecessor. You are asking of the
transfinte, the equivalent of 'what precedes zero' in the naturals.
There is -1.

omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays? Further omega is immediately following upon all natnumbers.

Regards, WM
Gus Gassmann
2020-11-20 17:48:39 UTC
Permalink
Post by WM
Post by FromTheRafters
Post by WM
What is the immediate predecessor of ω?
The least ordinal of the set has no predecessor. You are asking of the
transfinte, the equivalent of 'what precedes zero' in the naturals.
There is -1.
How the fuck is -1 a natural number, you imbecile.
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Gibberish, like so much of what you write. What is "it"? And why might "it" not be "stay[ing] where it is"?
Post by WM
Further omega is immediately following upon all natnumbers.
With ambiguous statements such as this, it's no wonder that you get confused. Can you parse and translate, please?
FredJeffries
2020-11-20 17:53:56 UTC
Permalink
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Why does zero (or one) stay where IT is?

http://www.sorites.org/Issue_11/item08.htm
WM
2020-11-20 19:34:15 UTC
Permalink
Post by FredJeffries
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Why does zero (or one) stay where IT is?
They stay where they are because there is no free space in their neighbourhood where they could settle.

By the way, the reciprocals show that there is no free space before omega.

Regards, WM
Sergio
2020-11-21 17:49:27 UTC
Permalink
Post by WM
Post by FredJeffries
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Why does zero (or one) stay where IT is?
They stay where they are because there is no free space in its neighbourhood where they could settle.
oh, so you're part of the space patrol now.
Post by WM
By the way this shows by the reciprocals that there is no free space before omega.
Regards, WM
Timothy Golden
2020-11-21 13:26:21 UTC
Permalink
Post by FredJeffries
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Why does zero (or one) stay where IT is?
http://www.sorites.org/Issue_11/item08.htm
Pretty sure without the product that you can play around here. With the product the consequences are arguably damaging, but the geometrical nature of the product never made sense in the first place. Geometrical correspondence of the natural numbers with the number line possibly allows you discrete folks to accept that the ray is more fundamental than the line. Still, you are going to pay a dear price for it.

One of the oddities is the physical representation of coordinate systems, and you arguably would like to remain consistent here or at least know keenly the correspondences which hold. For instance inches; cut off the first two inches of your measuring tape when the end wears out. Now the benefits are accruing. Since your reel holds an endless quantity screw it; just measure and cut your tape and save yourself the trouble of reeling it back in. This exposes a unidirectional nature. This is a one-signed paradigm. Dimensionality projected up and of course we ordinarily presume that you were holding the thing straight... and on it goes.

Even the dimensional freedoms of the point are up for grabs. The single ray portrays the point but with a filled in segment back to the origin. Stretch it straight and hold it up to your eye. Line up the other end with this eyeward end. There you have it; back to the point again. This is how Descartes saw things. I believe this is his trick of the eye. Does it have a consequence on the observer? Is all that the observer ever sees a projection? A three dimensional projection? Why three?

Emergent spacetime ought to be the burden of somebody. Mathematicians will point their finger at the physicists. Physicists will point back to the Mathematicians. Philosophers look on with a broad gaze and scratch their heads, their balls, and their toes. Should a basis to do physics in be purely arithmetic? Should there be a clean foundation? Should we anticipate emergent spacetime from mathematics? In hindsight the answer is yes.
Jim Burns
2020-11-22 22:01:46 UTC
Permalink
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
Concerning the geometrical nature of the product:

If we start with some sort of standard unit length '1'
(something like the International Prototype Metre in a vault
in Paris), we can define the product and the quotient of
lengths x and y geometrically, by construction of similar
triangles.

If we have two similar triangles with sides of lengths
a,b,c and a',b',c', the corresponding sides have equal
ratios: a/a' = b/b' = c/c'

Construct two similar triangles such that
a = x, a' = 1, b' = y, and we will have x/1 = b/y
and thus b = x*y. b is our product length.

Construct two similar triangles such that
a = x, a' = y, b' = 1, and we will have x/y = b/1
and thus b = x/y. b is our quotient length.

(One way to construct similar triangles is for them
to share one angle and have one of the other angles be
a right angle. Two equal angles yield similar triangles,
which yield sides in the desired ratio.)
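(A numeric check of this similar-triangle product, not the construction itself: put the unit and x on one ray, y on a second ray, draw the line from the unit mark to y and the parallel through x; by the intercept theorem it cuts the second ray at x*y. All of the code names are mine.)

    import math

    def product_by_similar_triangles(x, y, angle=math.radians(60)):
        u = (1.0, 0.0)                         # direction of the first ray
        v = (math.cos(angle), math.sin(angle)) # direction of the second ray
        U1 = (u[0], u[1])                      # the unit mark on ray 1
        X = (x * u[0], x * u[1])               # the mark at length x on ray 1
        Y = (y * v[0], y * v[1])               # the mark at length y on ray 2
        D = (Y[0] - U1[0], Y[1] - U1[1])       # direction of the line U1 -> Y
        # solve X + t*D = s*v for s: the parallel through X meets ray 2 at s
        det = -D[0] * v[1] + D[1] * v[0]
        s = (X[0] * D[1] - X[1] * D[0]) / det
        return s

    print(product_by_similar_triangles(2.0, 3.0))   # -> 6.0 (up to rounding)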
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
What sort of damage did you have in mind?

----
Rene Descartes had the idea to describe geometric objects
by the use of real numbers. Our concept of real numbers
has been refined since then, notably by the requirement
that they be Dedekind complete.

I find it interesting? amusing? that we can also describe
real numbers and their operations by the use of geometric
objects.

The real numbers are the complete ordered field.
In order to describe them, we need addition, subtraction,
multiplication, division, and order. They should be the
usual +,-,*,/,< with the additional property that <
is Dedekind complete.

Addition, subtraction and order are defined in obvious ways.
Multiplications and division are defined by use of
similar triangles, as discussed above.

Dedekind completeness or something equivalent to it
has been described in many ways. One way that seems
especially apt here is
| Two continuous curves which cross meet at one or more
| points.

This is basically the Intermediate Value Theorem,
which is provably equivalent to Dedekind completeness.

I don't know if classical Greek mathematicians stated
it as a geometrical property. Sometimes something can
seem too obvious to need stating. Then, it could take
a long time to notice the hole in the argument.
I strongly suspect that, upon being shown the IVT,
they would accept it.

https://en.wikipedia.org/wiki/Intermediate_value_theorem
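(A small bisection sketch of the Intermediate Value Theorem just cited: a continuous f with f(a) < 0 < f(b) has a root between a and b, which is exactly where completeness does its work. The names are mine.)

    def bisect(f, a, b, steps=60):
        assert f(a) * f(b) < 0                 # the curve crosses zero
        for _ in range(steps):
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m                          # the sign change is in [a, m]
            else:
                a = m                          # the sign change is in [m, b]
        return (a + b) / 2

    print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.41421356, i.e. sqrt(2)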
Timothy Golden
2020-11-23 15:32:32 UTC
Permalink
Post by Jim Burns
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
If we start with some sort of standard unit length '1'
(something like the International Prototype Metre in a vault
in Paris), we can define the product and the quotient of
lengths x and y geometrically, by construction of similar
triangles.
If we have two similar triangles with sides of lengths
a,b,c and a',b',c', the corresponding sides have equal
ratios: a/a' = b/b' = c/c'
Construct two similar triangles such that
a = x, a' = 1, b' = y, and we will have x/1 = b/y
and thus b = x*y. b is our product length.
Construct two similar triangles such that
a = x, a' = y, b' = 1, and we will have x/y = b/1
and thus b = x/y. b is our quotient length.
(One way to construct similar triangles is for them
to share one angle and have one of the other angles be
a right angle. Two equal angles yield similar triangles,
which yield sides in the desired ratio.)
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
What sort of damage did you have in mind?
It's all very good content here JB. Thanks for your time and for holding a fine example for others here on usenet. I love that we are on an uncensored medium. I love interjecting rhetoric of any sort I like into the conversation, but without content it all goes flat. Here you have gone over the top on content. Your lack of skepticism I do find troubling, but your openness is excellent.

There is a closure requirement specified within the operators as defined in the ring formalism, which the real values are credited as taking. Indeed subtraction and division are not necessary operators under that formalism, but this is really just an aside. The critical aspect is that closure requires the result of the operator be in the same set as the sources. It is arguably true that your result as you have laid out the product is not at all in the same set and if anything lands in something closer to the square of that set, though I don't necessarily want to be held to account on this language. This of course is consistent with unit analysis of physics. Traversing onward to complex values we will see that the discrepancy grows in terms of geometrical correspondence and yet proves useful in physics and engineering in less tangible ways.

I don't believe that Descartes used real numbers; not the two-signed version. He worked from a concept of magnitude and that ought not be construed as the real value as we know it today.

Rules for the Direction of the Mind, Book II, Rule XV: "It is usually helpful, also, to draw these diagrams and observe them through the external senses, so that by this means our thought can more easily remain attentive.

The way these figures are to be drawn so that their images will be formed more clearly in our imagination when they are presented to our eyes is self-evident. For, first, we depict unity in three ways, namely, by a square, (square glyph), if we consider only length and width, or by a line, (line glyph), if we consider only length, or, finally, by a point, (point glyph), if we consider nothing else but that it is to form part of a quantity. But in whatever way it is depicted and conceived, we always understand that it is an object extended in every way and capable of an infinity of dimensions. In the same way, also, we exhibit the terms of a problem to our eyes, if we are to pay attention to two of their different magnitudes simultaneously, by the rectangle, two sides of which are the two magnitudes under consideration: in this way should they be commensurable(1) with unity, (long rectangle glyph), or this way, (larger rectangle glyph split into 2hx3w; or this way, (glyph of 2hx3w dots) if they should be commensurable; and nothing more is needed unless it is a question of a multitude of units. Finally, if we consider only one of those magnitudes, we depict the line either by a rectangle, one side of which is the magnitude in question and the other is unity, in this way, (glyph of rectangle), as we shall do whenever it is to be compared with some other surface; or by a length alone in this manner, (glyph of line), if it should be regarded only as an incommensurable length; or in this manner, (glyph of six dots), it is should be a quantity."

I see support for quite a lot of modern thinking here, but meanwhile this general dimensional thought is contaminated. For instance the very translation that I have has a footnote (1), where for Descartes' own 'commensurable' they substitute 'incommensurable', but I have returned it above to Descartes and find that it reads fine this way. These interdimensional concerns are still of interest, and I am amazed that in one paragraph Descartes could discuss so many apt details that can be construed to support quite a few perspectives.

It is apt that we are forced to discuss the Cartesian product and its various interpretations; none of which Descartes relies upon. I find that the Cartesian product is not a necessity. If it can be done without then it should be done without, and the many ways that it gets used are not just a traffic jam; at the epicenter there is a collision.
Post by Jim Burns
----
Rene Descartes had the idea to describe geometric objects
by the use of real numbers. Our concept of real numbers
has been refined since then, notably by the requirement
that they be Dedekind complete.
I find it interesting? amusing? that we can also describe
real numbers and their operations by the use of geometric
objects.
The real numbers are the complete ordered field.
In order to describe them, we need addition, subtraction,
multiplication, division, and order. They should be the
usual +.-,*,/,< with the additional property that <
is Dedekind complete.
Addition, subtraction and order are defined in obvious ways.
Multiplications and division are defined by use of
similar triangles, as discussed above.
Dedekind completeness or something equivalent to it
has been described in many ways. One way that seems
especially apt here is
| Two continuous curves which cross meet at one or more
| points.
This is basically the Intermediate Value Theorem,
which is provably equivalent to Dedekind completeness.
I don't know if classical Greek mathematicians stated
it as a geometrical property. Sometimes something can
seem too obvious to need stating. Then, it could take
a long time to notice the hole in the argument.
I strongly suspect that, upon being shown the IVT,
they would accept it.
https://en.wikipedia.org/wiki/Intermediate_value_theorem
Jim Burns
2020-11-23 18:43:13 UTC
Permalink
On Sunday, November 22, 2020 at 5:02:01 PM UTC-5,
Post by Jim Burns
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
If we start with some sort of standard unit length '1'
(something like the International Prototype Metre in a vault
in Paris), we can define the product and the quotient of
lengths x and y geometrically, by construction of similar
triangles.
If we have two similar triangles with sides of lengths
a,b,c and a',b',c', the corresponding sides have equal
ratios: a/a' = b/b' = c/c'
Construct two similar triangles such that
a = x, a' = 1, b' = y, and we will have x/1 = b/y
and thus b = x*y. b is our product length.
Construct two similar triangles such that
a = x, a' = y, b' = 1, and we will have x/y = b/1
and thus b = x/y. b is our quotient length.
(One way to construct similar triangles is for them
to share one angle and have one of the other angles be
a right angle. Two equal angles yield similar triangles,
which yield sides in the desired ratio.)
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
What sort of damage did you have in mind?
The critical aspect is that closure requires the result of
the operator be in the same set as the sources. It is
arguably true that your result as you have laid out the
product is not at all in the same set and if anything lands
in something closer to the square of that set, though I don't
necessarily want to be held to account on this language.
No, you are mistaken.
x, y, x*y, and x/y all are in the same units, the length '1'
which I compared to the International Prototype Metre
(previously the actual definition of the SI metre).

The standard unit could be any single length, but let's make it
concrete: let '1' be the SI metre.

What is the product of 2 times 3?

We construct two similar triangles of sides a,b,c and
a',b',c'. The sides are in the ratio a/a' = b/b' = c/c'.

a = 2 m
a' = 1 m
b' = 3 m

Because similar triangles, (2 m)/(1 m) = b/(3 m)
If we perform the indicated construction with perfect
exactness, b will be exactly 6 meters.

We could discuss the indicated construction, if you
like. I have specific ones in mind for product and
quotient, but the constructions are off the point of
what the units for the product and quotient are.
This of course is consistent with unit analysis of physics.
Traversing onward to complex values we will see that the
discrepancy grows in terms of geometrical correspondence and
yet proves useful in physics and engineering in less tangible
ways.
Post by Jim Burns
Post by Timothy Golden
On Friday, November 20, 2020 at 12:54:05 PM UTC-5,
On Friday, November 20, 2020 at 9:42:05 AM UTC-8,
Post by WM
omega is a point at the ordinal line.
If there is nothing immediately before omega,
why does it stay where it stays?
Why does zero (or one) stay where IT is?
http://www.sorites.org/Issue_11/item08.htm
Pretty sure without the product that you can play around
here. With the product the consequences are arguably
damaging, but the geometrical nature of the product never
made sense in the first place. Geometrical correspondence
of the natural numbers with the number line possibly allow
you discrete folks to accept that the ray is more
fundamental than the line. Still, you are going to pay a
dear price for it.
What sort of damage did you have in mind?

It can't be incorrect units.
My post with the allegedly incorrect units is after this.
I don't believe that Descartes used real numbers;
not the two-signed version. He worked from a concept of
magnitude and that ought not be construed as the real
value as we know it today.
It would not break my heart if I couldn't refer to
Rene Descartes, but there does seem to be a certain line
of descent from his work then to today's real numbers.
It wouldn't seem correct to ignore his contributions.

However, I am thinking of today's real numbers,
the complete ordered field, a very successful theory
of the points in a geometric line.
Timothy Golden
2020-11-24 13:16:25 UTC
Permalink
Post by Jim Burns
On Sunday, November 22, 2020 at 5:02:01 PM UTC-5,
Post by Jim Burns
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
If we start with some sort of standard unit length '1'
(something like the International Prototype Metre in a vault
in Paris), we can define the product and the quotient of
lengths x and y geometrically, by construction of similar
triangles.
If we have two similar triangles with sides of lengths
a,b,c and a',b',c', the corresponding sides have equal
ratios: a/a' = b/b' = c/c'
Construct two similar triangles such that
a = x, a' = 1, b' = y, and we will have x/1 = b/y
and thus b = x*y. b is our product length.
Construct two similar triangles such that
a = x, a' = y, b' = 1, and we will have x/y = b/1
and thus b = x/y. b is our quotient length.
(One way to construct similar triangles is for them
to share one angle and have one of the other angles be
a right angle. Two equal angles yield similar triangles,
which yield sides in the desired ratio.)
Post by Timothy Golden
With the product the consequences are arguably damaging,
but the geometrical nature of the product never made sense
in the first place.
What sort of damage did you have in mind?
The critical aspect is that closure requires the result of
the operator be in the same set as the sources. It is
arguably true that your result as you have laid out the
product is not at all in the same set and if anything lands
in something closer to the square of that set, though I don't
necessarily want to be held to account on this language.
No, you are mistaken.
x, y, x*y, and x/y all are in the same units, the length '1'
which I compared to the International Prototype Metre
(previously the actual definition of the SI metre).
The standard unit could be any single length, but let's make it
concrete: let '1' be the SI metre.
What is the product of 2 times 3?
We construct two similar triangles of sides a,b,c and
a',b',c'. The sides are in the ratio a/a' = b/b' = c/c'.
a = 2 m
a' = 1 m
b' = 3 m
Because similar triangles, (2 m)/(1 m) = b/(3 m)
If we perform the indicated construction with perfect
exactness, b will be exactly 6 meters.
We could discuss the indicated construction, if you
like. I have specific ones in mind for product and
quotient, but the constructions are off the point of
what the units for the product and quotient are.
This of course is consistent with unit analysis of physics.
Traversing onward to complex values we will see that the
discrepancy grows in terms of geometrical correspondence and
yet proves useful in physics and engineering in less tangible
ways.
Post by Jim Burns
Post by Timothy Golden
On Friday, November 20, 2020 at 12:54:05 PM UTC-5,
On Friday, November 20, 2020 at 9:42:05 AM UTC-8,
Post by WM
omega is a point at the ordinal line.
If there is nothing immediately before omega,
why does it stay where it stays?
Why does zero (or one) stay where IT is?
http://www.sorites.org/Issue_11/item08.htm
Pretty sure without the product that you can play around
here. With the product the consequences are arguably
damaging, but the geometrical nature of the product never
made sense in the first place. Geometrical correspondence
of the natural numbers with the number line possibly allow
you discrete folks to accept that the ray is more
fundamental than the line. Still, you are going to pay a
dear price for it.
What sort of damage did you have in mind?
It can't be incorrect units.
My post with the allegedly incorrect units is after this.
I don't believe that Descartes used real numbers;
not the two-signed version. He worked from a concept of
magnitude and that ought not be construed as the real
value as we know it today.
It would not break my heart if I couldn't refer to
Rene Descartes, but there does seem to be a certain line
of descent from his work then to today's real numbers.
It wouldn't seem correct to ignore his contributions.
However, I am thinking of today's real numbers,
the complete ordered field, a very successful theory
of the points in a geometric line.
OK, for the moment I will accept your scalar approach to the product, which incidentally reads rather differently than the familiar area style as if we put down unit squares in a 2x3 pattern to land with six. Under this more familiar version we land in square meters from meters, and we don't need any form of division to implement multiplication.

Strangely, modern versions of this arithmetic product do in fact invert the units within their formal requirements. Witness closure in modern language. ProofWiki.org currently lists 1,622 entries that use this term. https://proofwiki.org/w/index.php?search=closure. Of course we want to study the product. 87 results. Screw it. After fishing around I'm not getting where I want to go though they do look promising. It can be very difficult to find for some reason, and maybe it is being buried as a fraud funeral; the dead man still alive and a cadaver in his place in his grave. These are the ultimate class of operators. The ones that can't possibly work in the open any more. Like this page:
https://en.wikipedia.org/wiki/Ring_(mathematics)
where the first mention of closure is in the examples as in integral closure; something that I am not after.
They do however mention that the operators are binary operators, and I suppose you will accept this accounting for your own simple arithmetic instance. Now, going into binary operators. Ah, there it is in footnote c: https://en.wikipedia.org/wiki/Ring_(mathematics)#ref_c
"c: The closure axiom is already implied by the condition that +/• be a binary operation. Some authors therefore omit this axiom."

I am sorry to be so long-winded about it, but this is how icky modern mathematics has become. Here I trace a contorted path for a noob to such a simple principle:
"More precisely, a binary operation on a set S is a mapping of the elements of the Cartesian product S × S to S"
- https://en.wikipedia.org/wiki/Binary_operation

I think you understand the problem just fine. Which is it Jim? Two dimensions become one? The physicist will certainly have it the other way around won't he? This is just the beginning. The inversion is cleanly enough exposed already and I've taken far too long to get to it.
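(A minimal illustration of "binary operation on S" as a map S x S -> S, using Python's Fraction type as the stand-in for S; closure just means the result lands back in the same set the two arguments came from.)

    from fractions import Fraction as Frac

    def mul(a: Frac, b: Frac) -> Frac:         # S x S -> S
        return a * b

    z = mul(Frac(2, 3), Frac(9, 4))
    print(z, isinstance(z, Frac))              # 3/2 True: the product stays in S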

Are you going to be able to discuss RxR here Jim? is that a two dimensional space created out of two one dimensional spaces? Are they the same set? Or is that two unique copies of the same set? Which is the copy of the other? Where did the original go? If R is a subset of RxR then which R do we assign the singular R to?

Hah, tools for fools. The folding blade that does not lock is just the one for beginners, eh? Keep it handy in your pocket. What's that, you don't use it any more? Blade too dull. One day our mass production society will have throw away blades for pennies. They might be for quarters now. It's really all in the turn of the hand. But a sharp blade is integral to proficiency. Sharpening the blade is an art in its own that exposes the quality of the steel. These foreign concepts are easily recovered from naught. I do see our need to return to naught and recover some sense in a system that has accumulated far too many layers of indirection. The cause; well; I won't offend you here. I've got a potty mouth Jim, and it's frothing.
Jim Burns
2020-11-25 00:40:09 UTC
Permalink
On Monday, November 23, 2020 at 1:43:28 PM UTC-5,
Post by Jim Burns
What is the product of 2 times 3?
We construct two similar triangles of sides a,b,c and
a',b',c'. The sides are in the ratio a/a' = b/b' = c/c'.
a = 2 m
a' = 1 m
b' = 3 m
Because similar triangles, (2 m)/(1 m) = b/(3 m)
If we perform the indicated construction with perfect
exactness, b will be exactly 6 meters.
OK, for the moment I will accept your scalar approach
to the product, which incidentally reads rather differently
than the familiar area style as if we put down unit squares
in a 2x3 pattern to land with six. Under this more familiar
version we land in square meters from meters, and we don't
need any form of division to implement multiplication.
I think you understand the problem just fine.
Which is it Jim? Two dimensions become one?
The physicist will certainly have it the other way around
won't he? This is just the beginning.
The inversion is cleanly enough exposed already and
I've taken far too long to get to it.
6 can be the measure of either meters or meters squared,
or it can be dimensionless, just a pure number.

Physicists buzz around dimensionless quantities[2] the way
that bees buzz around flowers. So many of our most useful
conceptual tools, sin(), cos(), exp(), log(), ...
can only be used with dimensionless numbers.

Here's a puzzle for you:
What sort of unit is sqrt(meter)?

Let's go back to our two similar triangles
a/a' = b/b' = c/c'

We construct them such that
a = 6 meter
b' = 1 meter
a' = b

See [1].

Then a' = b = sqrt(6) meter

Okay, I lied. There is no sqrt(meter) here.

But isn't this similar to the problem you see above?
Two dimensions become one... how about one dimension
becomes half a dimension?

----
[1]
| Let's go back to our two similar triangles
| a/a' = b/b' = c/c'
|
| We construct[1] them such that
| a = 6 meter
| b' = 1 meter
| a' = b

I wanted to be sure that when I call triangles
from the vasty deep, they do come. Thus...

( It would probably be best to draw what I'm describing,
( as we go along.
( I'd be glad to add an illustration, if I could,
( but, you know, ASCII.

( Here is a useful fact:
( If the segment AB is a diameter of a circle
( (the midpoint O of AB is the center of the circle,
( points A and B are on the circle)
( C is any other point on the circle,
( then triangle ABC is a right triangle,
( with a right angle at C.

Draw a line segment 6 units long
label the ends '0' and '6',
or, if you like, use an existing number-line.

Mark the point on '0''6' one unit from '0'
and label it '1'.

Find the midpoint of segment '0''6'.
If you like, label it '3'.

Draw a circle centered at 3 with the radius
3 = d('0''3') = d('3''6')

Construct a perpendicular to segment '0''6'
through point '1'.

Find the point of intersection for the circle and
the perpendicular, and label it S.
We will see that S is sqrt(6) units from '0'.

The triangle '0'S'6' has a right angle at S,
because S is on the circle for which '0''6' is
a diameter.

The triangle '0''1'S has a right angle at '1',
because '1'S is perpendicular to '0''6'.

The angles S'0''6' and '1''0'S are equal, because
they are, in fact, the same angle, picked out with
different points.

Thus, the triangles '0'S'6' and '0''1'S are similar,
and d('0'S)/d('0''1') = d('0''6')/d('0'S)

d('0'S) = sqrt(d('0''6')*d('0''1')) = sqrt(6)
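
If you want a quick numerical check of this (a sketch, with '0' placed at
the origin and the segment '0''6' laid along the x-axis; those coordinates
are a choice I'm making here, not part of the construction):

    import math
    # circle of radius 3 centered at (3, 0); the perpendicular is x = 1
    x = 1.0
    y = math.sqrt(3.0**2 - (x - 3.0)**2)   # S = (1, y) lies on the circle
    print(math.hypot(x, y), math.sqrt(6))  # both ~2.449489742783178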

[2]
| Physicists buzz around dimensionless quantities...

It might not be entirely on topic, but I've been told that
the reason high-energy physicists and cosmologists think that
gravity and the other fundamental forces will unify in the
general neighborhood of 10 kTTeV (10 kiloTeraTeravolts,
a unit I invented for only this purpose: to say 10^28 eV)
comes down to dimensional analysis.

They have no theory of unified forces yet, not really.
But they do have the observation that physical theories
haven't had either huge or tiny numbers incorporated into
them.

Q. What range of energy has neither-huge-nor-tiny numbers
if the significant constants, the speed of light c,
the gravitational constant G, and Planck's constant
reduced h/2pi have neither-huge-nor-tiny numbers?

A. If we have units of mass, length and time M,L,T such that
c = 1 L/T, G = 1 L^3/T^2/M, and h/2pi = 1 L^2 M/T, then the
Planck energy, the unit of energy in these units, is
1 M*L^2/T^2 = 12.2 kTTeV.

To put that number in perspective, the current
state-of-the-art is the Large Hadron Collider outside
Geneva, Switzerland. The LHC is able to look at energies
around 13 TeV (13 teraelectronvolts), about 15 orders of magnitude below
10 kTTeV. The technical term for this is "really $%^& huge".
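
If anyone wants to check the 12.2 figure: the Planck energy is
sqrt(hbar*c^5/G). A sketch in Python with rounded constants, nothing exotic:

    import math
    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2
    eV   = 1.602176634e-19   # J per eV
    E_planck = math.sqrt(hbar * c**5 / G)   # in joules
    print(E_planck / eV)                    # ~1.22e28 eV, i.e. ~12.2 "kTTeV"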

----
Are you going to be able to discuss RxR here Jim?
is that a two dimensional space created out of two
one dimensional spaces? Are they the same set?
Or is that two unique copies of the same set?
Which is the copy of the other? Where did the original go?
If R is a subset of RxR then which R do we assign
the singular R to?
We can answer these sorts of questions better if, instead of
thinking of something *being* the set of real numbers
(or *being* a continuous function, etc), we think of
something *being described* as the set of real numbers, etc.

For example, the first components and the second components
of RxR can be *described* as real numbers. That tells us
a lot. Whatever they "really" are won't enter into whatever
we know because they can be *described* as real numbers.

This emphasis on description becomes especially valuable
if there is more than one way to describe whatever we're
interested in. I would argue that the unending _process_ of
succeeding, 0,1,2,3,..., gets *re-described* as a static
_relationship_ between certain pairs (0,1), (1,2), etc.
And, when we reason from the finite description of that
static relationship, we finitely "complete" the infinite
process 0,1,2,3,...

(Just kidding, we do not complete an infinite process.
We reason about it under another description.)
Timothy Golden
2020-11-25 14:34:40 UTC
Permalink
Post by Jim Burns
On Monday, November 23, 2020 at 1:43:28 PM UTC-5,
Post by Jim Burns
What is the product of 2 times 3?
We construct two similar triangles of sides a,b,c and
a',b',c'. The sides are in the ratio a/a' = b/b' = c/c'.
a = 2 m
a' = 1 m
b' = 3 m
Because similar triangles, (2 m)/(1 m) = b/(3 m)
If we perform the indicated construction with perfect
exactness, b will be exactly 6 meters.
OK, for the moment I will accept your scalar approach
to the product, which incidentally reads rather differently
than the familiar area style as if we put down unit squares
in a 2x3 pattern to land with six. Under this more familiar
version we land in square meters from meters, and we don't
need any form of division to implement multiplication.
I think you understand the problem just fine.
Which is it Jim? Two dimensions become one?
The physicist will certainly have it the other way around
won't he? This is just the beginning.
The inversion is cleanly enough exposed already and
I've taken far too long to get to it.
6 can be the measure of either meters or meters squared,
or it can be dimensionless, just a pure number.
Physicists buzz around dimensionless quantities[2] the way
that bees buzz around flowers. So many of our most useful
conceptual tools, sin(), cos(), exp(), log(), ...
can only be used with dimensionless numbers.
What sort of unit is sqrt(meter)?
Let's go back to our two similar triangles
a/a' = b/b' = c/c'
We construct them such that
a = 6 meter
b' = 1 meter
a' = b
See [1].
Then a' = b = sqrt(6) meter
Okay, I lied. There is no sqrt(meter) here.
But isn't this similar to the problem you see above?
Two dimensions become one... how about one dimension
becomes half a dimension?
----
[1]
| Let's go back to our two similar triangles
| a/a' = b/b' = c/c'
|
| We construct[1] them such that
| a = 6 meter
| b' = 1 meter
| a' = b
I wanted to be sure that when I call triangles
from the vasty deep, they do come. Thus...
( It would probably be best to draw what I'm describing,
( as we go along.
( I'd be glad to add an illustration, if I could,
( but, you know, ASCII.
( If the segment AB is a diameter of a circle
( (the midpoint O of AB is the center of the circle,
( points A and B are on the circle)
( C is any other point on the circle,
( then triangle ABC is a right triangle,
( with a right angle at C.
Draw a line segment 6 units long
label the ends '0' and '6',
or, if you like, use an existing number-line.
Mark the point on '0''6' one unit from '0'
and label it '1'.
Find the midpoint of segment '0''6'.
If you like, label it '3'.
Draw a circle centered at 3 with the radius
3 = d('0''3') = d('3''6')
Construct a perpendicular to segment '0''6'
through point '1'.
Find the point of intersection for the circle and
the perpendicular, and label it S.
We will see that S is sqrt(6) units from '0'.
The triangle '0'S'6' has a right angle at S,
because S is on the circle for which '0''6' is
a diameter.
The triangle '0''1'S has a right angle at '1',
because '1'S is perpendicular to '0''6'.
The angles S'0''6' and '1''0'S are equal, because
they are, in fact, the same angle, picked out with
different points.
Thus, the triangles '0'S'6' and '0''1'S are similar,
and d('0'S)/d('0''1') = d('0''6')/d('0'S)
d('0'S) = sqrt(d('0''6')*d('0''1')) = sqrt(6)
[2]
| Physicists buzz around dimensionless quantities...
It might not be entirely on topic, but I've been told that
the reason high-energy physicists and cosmologists think that
gravity and the other fundamental forces will unify in the
general neighborhood of 10 kTTeV (10 kiloTeraTeravolts,
a unit I invented for only this purpose: to say 10^28 eV)
comes down to dimensional analysis.
They have no theory of unified forces yet, not really.
But they do have the observation that physical theories
haven't had either huge or tiny numbers incorporated into
them.
Q. What range of energy has neither-huge-nor-tiny numbers
if the significant constants, the speed of light c,
the gravitational constant G, and Planck's constant
reduced h/2pi have neither-huge-nor-tiny numbers?
A. If we have units of mass, length and time M,L,T such that
c = 1 L/T, G = 1 L^3/T^2/M, and h/2pi = 1 L^2 M/T, then the
Planck energy, the unit of energy in these units, is
1 M*L^2/T^2 = 12.2 kTTeV.
To put that number in perspective, the current
state-of-the-art is the Large Hadron Collider outside
Geneva, Switzerland. The LHC is able to look at energies
around 13 TeV (13 teraelectronvolts), about 15 orders of magnitude below
10 kTTeV. The technical term for this is "really $%^& huge".
----
Are you going to be able to discuss RxR here Jim?
is that a two dimensional space created out of two
one dimensional spaces? Are they the same set?
Or is that two unique copies of the same set?
Which is the copy of the other? Where did the original go?
If R is a subset of RxR then which R do we assign
the singular R to?
We can answer these sorts of questions better if, instead of
thinking of something *being* the set of real numbers
(or *being* a continuous function, etc), we think of
something *being described* as the set of real numbers, etc.
For example, the first components and the second components
of RxR can be *described* as real numbers. That tells us
a lot. Whatever they "really" are won't enter into whatever
we know because they can be *described* as real numbers.
This emphasis on description becomes especially valuable
if there is more than one way to describe whatever we're
interested in. I would argue that the unending _process_ of
succeeding, 0,1,2,3,..., gets *re-described* as a static
_relationship_ between certain pairs (0,1), (1,2), etc.
And, when we reason from the finite description of that
static relationship, we finitely "complete" the infinite
process 0,1,2,3,...
(Just kidding, we do not complete an infinite process.
We reason about it under another description.)
I do appreciate all of this. I'd forgotten about the right angle in the circle, if I ever was taught it.
So used to trig I guess that avoids that right triangle; filters it out.

Really I should have kept it shorter on the binary operator. Isn't it sufficient arithmetically to simply state that for a,b, and c in S that
a * b = c
is a binary operation? Where then is the need of a cartesian product? It seems to me that the writers have injected functional analysis into operator theory, and this is a crude mistake that a programmer type gets. Functions are composed of operators; not the other way around. Again I'll quote the wiki binary operator page:
"More precisely, a binary operation on a set S is a mapping of the elements of the Cartesian product S × S to S:"
- https://en.wikipedia.org/wiki/Binary_operation#Terminology

This problem compounds in abstract algebra, where polynomials are in use which deny the operator any action in a product such as
1.23 X
where the coefficient is real and X is not real. The ring definition sort of acts as a proxy that allows the moderner to bow around this detail, only to find when they get up that they are impaled on the X having broken the ring behavior. Of course I am about the only one who cares to dwell here. Well, I come at these things from a rather different angle. Somewhat as your right angle is quite some find versus the right angle of the Cartesian cross. Oh dear, does it all come back to religion again? Wouldn't that be something. I've already taken back the cross once. To do it again I would be named a double crosser. I am simply sharpening the points of the X now in preparation for what comes next. I have several AA jesus types that have come forward already. On the one hand it seems Quixotic, but on the other, I am getting pretty good with the blade. As I see it the mathematical details that I am attempting to discuss are near to rock bottom in terms of fundamental nature. That we ought to land on a ledge at some point in mathematics is entirely believable and desirable. That physics and philosophy ought as well to take their departures from the same ledge is obvious, for the three are merely falsely divorced in the first place. Thus the hope of a semi-classical theory still holds here.
Jim Burns
2020-11-25 21:38:26 UTC
Permalink
On Tuesday, November 24, 2020 at 7:40:22 PM UTC-5,
Post by Jim Burns
We can answer these sorts of questions better if, instead
of thinking of something *being* the set of real numbers
(or *being* a continuous function, etc), we think of
something *being described* as the set of real numbers, etc.
Isn't it sufficient arithmetically to simply state that
for a,b, and c in S that
a * b = c
is a binary operation?
Often it is sufficient.
Often that is what is simply stated.
IIRC, you pointed this out upthread.

But "often" and "always" are not the same thing.
Where then is the need of a cartesian product?
Cartesian products are more general.

Suppose that, instead of answering the question
"What is multiplication?"
we were answering
"What is it about an operation that makes it an operation?"

Then we would need to be able to speak in more general terms,
about operators in general, whatever that would be.
Cartesian products (sets of ordered pairs) are good for
this purpose.
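
To make that concrete with a toy example (mine, not anything standard
being quoted here): a binary operation on S is nothing more than a map
assigning to each ordered pair from S x S an element of S.

    # S = {0,1,2} with addition mod 3, tabulated explicitly as a map on pairs
    S = {0, 1, 2}
    op = {(a, b): (a + b) % 3 for a in S for b in S}
    # "closure" is just the observation that every output lands back in S
    print(all(c in S for c in op.values()))   # True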
It seems to me that the writers have injected functional
analysis into operator theory, and this is a crude mistake
that a programmer type gets. Functions are composed of
operators; not the other way around.
We are _describing_ functions and operators and relations
and so on. I think that we want to distinguish this from
incrementing registers or something. This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.

The most general _description_ of a function is NOT
that it is composed of operators.
The most general _description_ of a function is that,
_for each combination of inputs, there is a unique output_
We can supplement that with more detail, but, if we do,
we will have narrowed our focus to only _some_ functions.

And maybe we want to narrow our focus, sometimes.
But maybe we don't, sometimes.

----
An example where we want to use as broad a notion of
function as we can would be the Schröder–Bernstein
theorem.
https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem

| In set theory, the Schröder–Bernstein theorem states
| that, if there exist injective functions f : A -> B and
| g : B -> A between the sets A and B, then there exists
| a bijective function h : A -> B.

For a _function_
for each combination of inputs, there is a unique output.

For an _injective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs.

For a _bijective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs,
AND all possible outputs are actual outputs for some input.

It doesn't matter what you're thinking of as "function"
this theorem will apply, given the pretty minimal
descriptions here.

----
An example of an injection f from the naturals to the
rationals would be f(k) = k/1
Different inputs have different outputs.
There are many rationals which are not outputs for
any input to f, so f is not a bijection.

Another example of an injection g from the positive
rationals to the naturals may be less obvious.
Suppose p/q is in lowest terms, that there is no j > 1
that divides both p and q.
Define g(p/q) = (p+q-1)*(p+q-2)/2 + p

Different pairs (p,q) and (r,s) map to different naturals.
Because some pairs, like (2p,2q), (3p,3q), ... are not
in lowest terms, no rational is mapped to their natural.
So, g is also not a bijection.

And, by the Schroeder-Bernstein theorem, the two _injections_
imply that a _bijection_ exists between naturals and
rationals.
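
A quick sanity check of that g on small lowest-terms fractions
(just a sketch; the bound 60 is an arbitrary choice of mine):

    from math import gcd
    def g(p, q):   # p/q positive and in lowest terms
        return (p + q - 1) * (p + q - 2) // 2 + p
    fracs = [(p, q) for p in range(1, 60) for q in range(1, 60) if gcd(p, q) == 1]
    vals = [g(p, q) for (p, q) in fracs]
    print(len(vals) == len(set(vals)))   # True: no two fractions collide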
Timothy Golden
2020-11-26 18:07:49 UTC
Permalink
Post by Jim Burns
On Tuesday, November 24, 2020 at 7:40:22 PM UTC-5,
Post by Jim Burns
We can answer these sorts of questions better if, instead
of thinking of something *being* the set of real numbers
(or *being* a continuous function, etc), we think of
something *being described* as the set of real numbers, etc.
Isn't it sufficient arithmetically to simply state that
for a,b, and c in S that
a * b = c
is a binary operation?
Often it is sufficient.
Often that is what is simply stated.
IIRC, you pointed this out upthread.
But "often" and "always" are not the same thing.
Where then is the need of a cartesian product?
Cartesian products are more general.
Suppose that, instead of answering the question
"What is multiplication?"
we were answering
"What is it about an operation that makes it an operation?"
Then we would need to be able speak in more general terms,
about operators in general, whatever that would be.
Cartesian products (sets of ordered pairs) are good for
this purpose.
It seems to me that the writers have injected functional
analysis into operator theory, and this is a crude mistake
that a programmer type gets. Functions are composed of
operators; not the other way around.
We are _describing_ functions and operators and relations
and so on. I think that we want to distinguish this from
incrementing registers or something. This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
The most general _description_ of a function is NOT
that it is composed of operators.
The most general _description_ of a function is that,
_for each combination of inputs, there is a unique output_
We can supplement that with more detail, but, if we do,
we will have narrowed our focus to only _some_ functions.
And maybe we want to narrow our focus, sometimes.
But maybe we don't, sometimes.
Wishy, Washy, Eh? I've heard of this sort of defense.
Structured thinking and compiler level integrity demand better than this Jim.
Let's just admit that the old mathematicians were operating with something like COBOL, and we've moved up a bit since those days.
As we discuss operators, and we believe that the ring definition offers a definition of operators, let's just witness that the mechanics of both addition and multiplication lie elsewhere. They are not even in the ring definition. That we are discussing operators here and not functions you have dismissed, yet you will have to admit that the operators are of a more fundamental nature. Especially so are the operators which obey
a b = b a = c
and now possibly we have a new claim in mathematics: "pure operators" possess this feature and this then legitimates the confusion say of two points on a line being selected without attention to order. This means once again we can forgo the Cartesian product in this particular realm of operators especially and ordinary mathematics will survive just fine. Occam likes this. Set theorists ought to like this as well. Philosophers and physicists might approve. Mathematicians, well, stepping back in time I have no doubt that this simple ordinary interpretation went just fine.

I understand that I am ignoring some options, but it seems arguably true that forgoing the order of a and b in the construction helps admit the wrong. Ahhh... Perhaps this will turn the AA'ers on to the new way: I am arguing that
1.23 X
is not ring behaved due to the offense to the closure requirement. Possibly they will see
X 1.23
as offensive? Possibly the logic goes that there is need of order within their order so that
( 0, 1.23, 0, 0, 0, 0... )
needs to be more carefully defined as
( 0, ( 0, 1.23, 0, 0, 0, ... ) , 0, 0, 0 ... )
and of course this as well could be more carefully defined.
This could keep them busy for a while.
Post by Jim Burns
----
An example where we want to use as broad a notion of
function as we can would be the Schröder–Bernstein
theorem.
https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem
| In set theory, the Schröder–Bernstein theorem states
| that, if there exist injective functions f : A -> B and
| g : B -> A between the sets A and B, then there exists
| a bijective function h : A -> B.
For a _function_
for each combination of inputs, there is a unique output.
For an _injective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs.
For a _bijective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs,
AND all possible outputs are actual outputs for some input.
It doesn't matter what you're thinking of as "function"
this theorem will apply, given the pretty minimal
descriptions here.
----
An example of an injection f from the naturals to the
rationals would be f(k) = k/1
Different inputs have different outputs.
There are many rationals which are not outputs for
any input to f, so f is not a bijection.
Another example of an injection g from the positive
rationals to the naturals may be less obvious.
Suppose p/q is in lowest terms, that there is no j > 1
that divides both p and q.
Define g(p/q) = (p+q-1)*(p+q-2)/2 + p
Different pairs (p,q) and (r,s) map to different naturals.
Because some pairs, like (2p,2q), (3p,3q), ... are not
in lowest terms, no rational is mapped to their natural.
So, g is also not a bijection.
And, by the Schroeder-Bernstein theorem, the two _injections_
imply that a _bijection_ exists between naturals and
rationals.
Timothy Golden
2020-11-26 18:52:47 UTC
Permalink
Post by Timothy Golden
Post by Jim Burns
On Tuesday, November 24, 2020 at 7:40:22 PM UTC-5,
Post by Jim Burns
We can answer these sorts of questions better if, instead
of thinking of something *being* the set of real numbers
(or *being* a continuous function, etc), we think of
something *being described* as the set of real numbers, etc.
Isn't it sufficient arithmetically to simply state that
for a,b, and c in S that
a * b = c
is a binary operation?
Often it is sufficient.
Often that is what is simply stated.
IIRC, you pointed this out upthread.
But "often" and "always" are not the same thing.
Where then is the need of a cartesian product?
Cartesian products are more general.
Suppose that, instead of answering the question
"What is multiplication?"
we were answering
"What is it about an operation that makes it an operation?"
Then we would need to be able speak in more general terms,
about operators in general, whatever that would be.
Cartesian products (sets of ordered pairs) are good for
this purpose.
It seems to me that the writers have injected functional
analysis into operator theory, and this is a crude mistake
that a programmer type gets. Functions are composed of
operators; not the other way around.
We are _describing_ functions and operators and relations
and so on. I think that we want to distinguish this from
incrementing registers or something. This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
The most general _description_ of a function is NOT
that it is composed of operators.
The most general _description_ of a function is that,
_for each combination of inputs, there is a unique output_
We can supplement that with more detail, but, if we do,
we will have narrowed our focus to only _some_ functions.
And maybe we want to narrow our focus, sometimes.
But maybe we don't, sometimes.
Wishy, Washy, Eh? I've heard of this sort of defense.
Structured thinking and compiler level integrity demand better than this Jim.
Let's just admit that the old mathematicians were operating with something like COBOL, and we've moved up a bit since those days.
As we discuss operators, and we believe that the ring definition offers a definition of operators, let's just witness that the mechanics of both addition and multiplication lay elsewhere. They are not even in the ring definition. That we are discussing operators here and not functions you have dismissed, yet you will have to admit that the operators are of a more fundamental nature. Especially are the operators which obey
a b = b a = c
and now possibly we have a new claim in mathematics: "pure operators" possess this feature and this then legitimates the confusion say of two points on a line being selected without attention to order. This means once again we can forgo the Cartesian product in this particular realm of operators especially and ordinary mathematics will survive just fine. Occam likes this. Set theorists ought to like this as well. Philosophers and physicists might approve. Mathematicians, well, stepping back in time I have no doubt that this simple ordinary interpretation went just fine.
I understand that I am ignoring some options, but it seems arguably true that forgoing the order of a and b in the construction helps admit the wrong. Ahhh... Perhaps this will turn the AA'ers on to the new way: I am arguing that
1.23 X
is not ring behaved due to the offense to the closure requirement. Possibly they will see
X 1.23
as offensive? Possibly the logic goes that there is need of order within their order so that
( 0, 1.23, 0, 0, 0, 0... )
needs to be more carefully defined as
( 0, ( 0, 1.23, 0, 0, 0, ... ) , 0, 0, 0 ... )
and of course this as well could be more carefully defined.
This could keep them busy for a while.
I also find it extremely annoying that the binary operator does not simply extend generally to
a, a b, a b c, a b c d, ...
and now with the pure form these forms simply follow with no possible trouble. Just as with all ordinary arithmetic. The binary form exists, but the human predilection for bipolar symmetry is gone. Clearly this is not a binary operator. It is a pure operator. These are symmetries pure and simple.
You probably know who already is doing it this way. It seems obvious and any claim that this is not fundamental mathematics cannot possibly hold up.

Now going back to units analysis we've still got openings for interpretation. This is adimensional analysis that is taking place here. Just arithmetic. No geometry. There is an entire band of interdimensional analysis that has to arrive. The sort that grants the zero dimensional point two dimensions on your piece of paper while that point portrays a ray flying straight into your eye no matter where your eye may be. To breathe life into the point is to bring unity onto the stage. All the while in modernity our origin could land anywhere and there could be as many of them as you like. Why shouldn't they dance? Who will be bothered? I know that is going too far for now, but relativity may hold constraints that can be found by setting things free. Unification is not so difficult to confuse with unity especially when there is no hope of pointing to the origin. As elements of reality we are constrained. We are caught guessing and pulling things from thin air. We are Shakespeare's monkeys with filters. The mimicry that goes on is abysmal. I am no doubt a primate.
Post by Timothy Golden
Post by Jim Burns
----
An example where we want to use as broad a notion of
function as we can would be the Schröder–Bernstein
theorem.
https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem
| In set theory, the Schröder–Bernstein theorem states
| that, if there exist injective functions f : A -> B and
| g : B -> A between the sets A and B, then there exists
| a bijective function h : A -> B.
For a _function_
for each combination of inputs, there is a unique output.
For an _injective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs.
For a _bijective function_
for each combination of inputs, there is a unique output,
AND different inputs have different outputs,
AND all possible outputs are actual outputs for some input.
It doesn't matter what you're thinking of as "function"
this theorem will apply, given the pretty minimal
descriptions here.
----
An example of an injection f from the naturals to the
rationals would be f(k) = k/1
Different inputs have different outputs.
There are many rationals which are not outputs for
any input to f, so f is not a bijection.
Another example of an injection g from the positive
rationals to the naturals may be less obvious.
Suppose p/q is in lowest terms, that there is no j > 1
that divides both p and q.
Define g(p/q) = (p+q-1)*(p+q-2)/2 + p
Different pairs (p,q) and (r,s) map to different naturals.
Because some pairs, like (2p,2q), (3p,3q), ... are not
in lowest terms, no rational is mapped to their natural.
So, g is also not a bijection.
And, by the Schroeder-Bernstein theorem, the two _injections_
imply that a _bijection_ exists between naturals and
rationals.
Jim Burns
2020-11-26 21:01:25 UTC
Permalink
On Wednesday, November 25, 2020 at 4:38:41 PM UTC-5,
Post by Jim Burns
This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
Wishy, Washy, Eh? I've heard of this sort of defense.
The note I came into this thread on was praise for
the mathematicians' variable.

If I were to offer you an incantation which could allow
you to do infinitely many things at once, I think that
you would be very impressed, unless you flatly refused
to believe me at all.

Unfortunately, I have no incantation, no magic to offer
you, only a way to describe infinitely many things at once:
variables, what I also call indefinite references.

I suppose that, since they're NOT magic, they're NOT
impressive. (That's how I read "Wishy washy", "not impressive".)
Not impressive, they just work. Really. Every day.
Timothy Golden
2020-11-27 14:18:17 UTC
Permalink
Post by Jim Burns
On Wednesday, November 25, 2020 at 4:38:41 PM UTC-5,
Post by Jim Burns
This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
Wishy, Washy, Eh? I've heard of this sort of defense.
The note I came into this thread on was praise for
the mathematicians' variable.
If I were to offer you an incantation which could allow
you to do infinitely many things at once, I think that
you would be very impressed, unless you flatly refused
to believe me at all.
Unfortunately, I have no incantation, no magic to offer
variables, what I also call indefinite references.
I suppose that, since they're NOT magic, they're NOT
impressive. (That's how I read "Wishy washy", "not impressive".)
Not impressive, they just work. Really. Every day.
Damn it Jim, you're the doctor.

I'm trying to get to the bottom of these things and you keep avoiding them.
That's not very scientific. If you don't know something I would rather hear that.
If you have to go on a five paragraph diatribe on binary operators I wouldn't mind it one bit.
Maybe just once here please. There is no need of perfection on USENET.
The maybes and might-bes are confusing sets with variables. You are confused, but I still trust your analysis.
Jim Burns
2020-11-27 19:26:17 UTC
Permalink
On Thursday, November 26, 2020 at 10:28:02 PM UTC-5,
Post by Jim Burns
On Wednesday, November 25, 2020 at 4:38:41 PM UTC-5,
Post by Jim Burns
This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
Wishy, Washy, Eh? I've heard of this sort of defense.
The note I came into this thread on was praise for
the mathematicians' variable.
If I were to offer you an incantation which could allow
you to do infinitely many things at once, I think that
you would be very impressed, unless you flatly refused
to believe me at all.
Unfortunately, I have no incantation, no magic to offer
variables, what I also call indefinite references.
I suppose that, since they're NOT magic, they're NOT
impressive. (That's how I read "Wishy washy", "not impressive".)
Not impressive, they just work. Really. Every day.
Damn it Jim, your the doctor.
I'm trying to get to the bottom of these things and
you keep avoiding them.
That's not very scientific. If you don't know something
I would rather hear that.
For one, I don't know what you mean by "Wishy, Washy, Eh?"

I responded to my best guess at what you mean,
that what a variable (an indefinite reference) refers to
is wishy-washy == too vague.

What do you mean by "Wishy, Washy, Eh?"?

My point above is that it is the vagueness of indefinite
reference, what might be called the wishywashyness of
variables, that makes them capable of dealing with
infinite things.

I'll probably keep returning to this point, if I continue
to respond to your posts. My best guess is that you're
trying to correct mathematics into something more familiar
to you, but which I argue is much less powerful than what
we have.

My philosophy of explaining things is that I try to see what
the _first_ wrong turn was, and throw a "Maybe you should've
taken that left turn at Albuquerque" into the mix, as needed.

Wanting to remove the vagueness from all this looks to me
like your first wrong turn.
There is no need of perfection on USENET.
Isn't there a need?
On the one side, there is
| Ah, but a man's reach should exceed his grasp,
| Or what's a heaven for?
On the other,
| Flood the zone with shit.

I think that there's a need.
The maybes and might be are confusing sets with variables.
You are confused, but I still trust your analysis.
My best guess is that you are trying to read "variable"
the way it is used in programming.
Call this "indirect reference".

That is not what I am calling your attention to.
Call what I am calling your attention to "indefinite
reference".

Consider
| A driver is required by law to have a valid license.

"A driver" is an indefinite reference.
It is not intended to refer to a particular driver.
The POINT of using an indefinite reference here
is that it NOT refer to a particular driver.
To treat it otherwise, as though there is, somewhere,
ONE DRIVER who is required to have a valid license,
is to misunderstand what is being said.
Timothy Golden
2020-11-28 16:09:51 UTC
Permalink
Post by Jim Burns
On Thursday, November 26, 2020 at 10:28:02 PM UTC-5,
Post by Jim Burns
On Wednesday, November 25, 2020 at 4:38:41 PM UTC-5,
Post by Jim Burns
This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
Wishy, Washy, Eh? I've heard of this sort of defense.
The note I came into this thread on was praise for
the mathematicians' variable.
If I were to offer you an incantation which could allow
you to do infinitely many things at once, I think that
you would be very impressed, unless you flatly refused
to believe me at all.
Unfortunately, I have no incantation, no magic to offer
variables, what I also call indefinite references.
I suppose that, since they're NOT magic, they're NOT
impressive. (That's how I read "Wishy washy", "not impressive".)
Not impressive, they just work. Really. Every day.
Damn it Jim, your the doctor.
I'm trying to get to the bottom of these things and
you keep avoiding them.
That's not very scientific. If you don't know something
I would rather hear that.
For one, I don't know what you mean by "Wishy, Washy, Eh?"
Alright, then; back to your text I call wishy, washy:

"The most general _description_ of a function is NOT
that it is composed of operators.
The most general _description_ of a function is that,
_for each combination of inputs, there is a unique output_
We can supplement that with more detail, but, if we do,
we will have narrowed our focus to only _some_ functions.

And maybe we want to narrow our focus, sometimes.
But maybe we don't, sometimes. "
(end of Jim's earlier statement I called wishy washy)

As I see it we are discussing operators versus functions, though variables are where we started out.
To me there is no doubt as to which is more fundamental; it is the operator.
Saying 'the operator' is terrible language and is as terrible as 'the function', so at least we are on equally bad footing, but should some instance be introduced then I think I can prove my point. Addition is such a fundamental concept that as we count, developing a set of values that we can name 'natural numbers', we witness the position of our first addition. You prefer successor? Well, sorry, but when we get to numerics we can simply increment. Without numerics you have very little. Anyway, let's get past ordinals here and up into the sort of numbers we worked with as grade schoolers at least.

Integration is in fact addition at its core. Vector behavior is addition at its core. Our numbers are ultimately addition at their core. If they lose this feature then there is a problem with their generality and it is this generality which even allows the discussion of functional analysis. I wish you could admit this detail or deny it. Possibly my language just does not translate through your own filters.
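
In the spirit of "integration is addition at its core", here is about the
most minimal sketch I can give (my own example, with the integrand and the
step count picked arbitrarily): a left Riemann sum for the integral of x^2
over [0,1], whose exact value is 1/3, really is just a long sum.

    n = 100000
    dx = 1.0 / n
    total = sum((k * dx)**2 * dx for k in range(n))   # left Riemann sum
    print(total)   # ~0.333328, approaching 1/3 as n grows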

To argue that generality means that sometimes something is true and sometimes it is not true, well, I will have to state clearly that this is an ambiguity, not a generality. Now we've folded operators, functions, and variables into the mix and I would argue that some of your usage of the term 'variable' is in fact set theoretic language. When you speak of every possible value being represented then you are at the set level. This need not be confused with the more humble and usable variable such as we used in grade school. Perhaps we will need new classes of variable. For instance the value which is constant but unknown ought to be readily distinguishable from a value which roams a range; a stepped value (something heavily in use in programming) possibly deserves its own class which will provide the discrete/continuous aspects without any possibility of ambiguity, though I suspect the details here when we include the possibility of shifting and even rescaling will get pretty deep.

There is actually nothing simple about computer arithmetic; just study IEEE implementations and even chip errata. Grab the source code and you'll see what might at first appear to be a pile of spaghetti. Precompile it for your processor and it should come cleaner, though I don't think I've actually tried that yet. You'd hope it would read clean. Not so sure it will though. By the time you are done with last-byte-first and your big endians and little endians you'll come out the other end worrying that somebody got something backwards along the way, and if they got it backwards twice you'll be OK. Perhaps this explains the human predilection for the binary signature. Possibly it is something deeper though.

Things which work in n, Jim, are more fundamental than things pre-destined to two. The building of n already required addition. This is unconditional. Nothing wishy-washy here.

So it follows that the pure operators will hold in n with no concern of precedence. Lo and behold all the ordinary numbers we work with hold up just fine. Whose voodoo numbers don't fit? It is the ones who claim that
a + b
and
b + a
are two different things. These are not vector based. They are non-geometric. No calculus will be done on them, or if it can the contortions that will be needed likely do not need to be included in a sensible numerical basis. In short such an operator does not even deserve to be called addition. Why confuse the situation like this? In the name of generality? Oh I think I can see how some intricate thinking minds felt they could raise the bar. Well I am about to drop that bar right over the X of the AA polynomial. Hmmm... that could be a pretty good tool....
Post by Jim Burns
I responded to my best guess at what you mean,
that what a variable (an indefinite reference) refers to
is wishy-washy == too vague.
What do you mean by "Wishy, Washy, Eh?"?
My point above is that it is the vagueness of indefinite
reference, what might be called the wishywashyness of
variables, that makes them capable of dealing with
infinite things.
I'll probably keep returning to this point, if I continue
to respond to your posts. My best guess is that you're
trying to correct mathematics into something more familiar
to you, but which I argue is much less powerful than what
we have.
My philosophy of explaining things is that I try to see what
the _first_ wrong turn was, and throw a "Maybe you should've
taken that left turn at Albuquerque" into the mix, as needed.
Wanting to remove the vagueness from all this looks to me
like your first wrong turn.
There is no need of perfection on USENET.
Isn't there a need?
On the one side, there is
| Ah, but a man's reach should exceed his grasp,
| Or what's a heaven for?
On the other,
| Flood the zone with shit.
I think that there's a need.
Well said. Yes it is frustrating finding decent content here, especially through google groups. Somebody like King Bassam posts to your thread and it won't likely get clicked on while his name implies the gutter of sci.math. Still, some of his stances I actually do agree with. I'm afraid that what we witness here on USENET is actually humanity. This is uncensored humanity. It is not a pretty thing, and I agree with your stand. You are a fine example to others here.
Post by Jim Burns
The maybes and might be are confusing sets with variables.
You are confused, but I still trust your analysis.
My best guess is that you are trying to read "variable"
the way it is used in programming.
Call this "indirect reference".
That is not what I am calling your attention to.
Call what I am calling your attention to "indefinite
reference".
Consider
| A driver is required by law to have a valid license.
"A driver" is an indefinite reference.
It is not intended to refer to a particular driver.
The POINT of using an indefinite reference here
is that it NOT refer to a particular driver.
To treat it otherwise, as though there is, somewhere,
ONE DRIVER who is required to have a valid license,
is to misunderstand what is being said.
The trouble with such analogies is that they can continue to be filled out and we can arrive with a rather long series of details. Anyway here you have named and possibly defined (indirectly) a class of variable "indefinite reference". I like it. We need to get down to a place where there is nothing underneath us. The basis. Nicely I think I have introduced a "natural axiom" which even doubles back on itself coherently. That which works in n is natural; is fundamental. I am forced for misunderstood reasons to state that this natural value is unsigned and is not to be confused with half of the signed integers. That is something else. These naturals have discrete correspondence with continuous magnitude which is likewise unsigned, which is different from one-signed, which is the predecessor in a family of number systems
N1, N2, N3, N4, ...
where N2 are the familiar integers with their two-signed characteristics. N3 carry natural correspondence with the complex plane and inherently have geometric correspondence with the plane just as N2 correspond to the line... just in their discrete form. I haven't tried to do anything special with these yet, and I think of the continuous form as being plenty general, yet for numberphiles such as yourself this may be an opening into some great options. Welcome to polysign in discrete x... in n...
s n
Sigma over s ( s n ) = 0
N2: - 1 + 1 = 0; - 2 + 2 = 0; - 3 + 3 = 0;... - n + n = 0.
N3: - 1 + 1 * 1 = 0 ... - n + n * n = 0
N4: - 1 + 1 * 1 # 1 = 0
N1: - 1 = 0

You see the one-signed numbers offer a devastating geometric effect; they render to naught while they still do algebra. These numbers fit the ordinary familiar numerical concepts that offer well behaved algebra.
Jim Burns
2020-11-28 23:57:28 UTC
Permalink
On Friday, November 27, 2020 at 2:26:32 PM UTC-5,
Post by Jim Burns
My best guess is that you are trying to read "variable"
the way it is used in programming.
Call this "indirect reference".
That is not what I am calling your attention to.
Call what I am calling your attention to "indefinite
reference".
Consider
| A driver is required by law to have a valid license.
"A driver" is an indefinite reference.
It is not intended to refer to a particular driver.
The POINT of using an indefinite reference here
is that it NOT refer to a particular driver.
To treat it otherwise, as though there is, somewhere,
ONE DRIVER who is required to have a valid license,
is to misunderstand what is being said.
The trouble with such analogies is that they can continue
to be filled out and we can arrive with a rather long
series of details.
This isn't an actual problem.
We could add the detail that the driver is a man.
Or a woman.
Or that their first name is Sidney.
Or that their last name is.
Whatever conclusions we are able to draw from our
knowledge that (i) they are a driver and (ii) they are
required to have a valid license, we should be able
to draw that same conclusion given additional
knowledge.

Consider
| A natural number k has a successor k+1.
| A natural number k has a finite linear set
| { 0,1,...,k-1 } of predecessors.

This is a very good description of k, and we can draw
lots of conclusions about k from it.
We can conclude, for example, that, if k > 0, then
no natural j exists such that 2*j*j = k*k. (That is,
that sqrt(2) is irrational).

We can add details.
k is prime.
k is composite.
k is 17.
k is 10^(10^(10^(10))).
None of these details change our conclusion that
no natural j exists such that 2*j*j = k*k.
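
A brute-force check of that conclusion over a small range, just to make it
tangible (the bound 1000 is an arbitrary choice of mine; of course no finite
search is a proof):

    hits = [(j, k) for k in range(1, 1000) for j in range(1, 1000) if 2*j*j == k*k]
    print(hits)   # [] -- no solutions with 0 < j, k < 1000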

----
Those details, k+1 and { 0,1,...,k-1 } are selected
to be details true of any natural number. We can fill
in more details about k. But these details should continue
to be true as we fill the others in, or we haven't
done our job properly. And the conclusions we draw
should still be valid with the new details, because
we still know the old details about k.
To argue that generality means that sometimes something
is true and sometimes it is not true, well, I will have
to state clearly that this is an ambiguity; not a
generality.
To be clear: What we are doing is the opposite of this.

Certainly, with an indefinite reference, there are
statements which are sometimes true and sometimes not true.
But these are not the statements we use in these
indefinite-reference-based arguments. We restrict
ourselves to always-true (AKA "valid") statements.

This is key to making this kind of argument work.
Now we've folded operators, functions, and variables into
the mix and I would argue that some of your usage of the
term 'variable' is in fact set theoretic language.
It actually works in the other direction.
We have indefinite-reference variables and associated
axioms and inference rules. Then all of that gets used
in set theory.
Jim Burns
2020-11-28 23:58:32 UTC
Permalink
On Friday, November 27, 2020 at 2:26:32 PM UTC-5,
Post by Jim Burns
On Thursday, November 26, 2020 at 10:28:02 PM UTC-5,
Post by Jim Burns
On Wednesday, November 25, 2020 at 4:38:41 PM UTC-5,
Post by Jim Burns
This is where (what seems
to be) the magic happens: we describe _one of_ some thing,
a thing which could be any of infinitely many things,
and we have, to some extent or other, described infinitely
many things. Even though _we ourselves_ are finite.
Wishy, Washy, Eh? I've heard of this sort of defense.
The note I came into this thread on was praise for
the mathematicians' variable.
If I were to offer you an incantation which could allow
you to do infinitely many things at once, I think that
you would be very impressed, unless you flatly refused
to believe me at all.
Unfortunately, I have no incantation, no magic to offer
variables, what I also call indefinite references.
I suppose that, since they're NOT magic, they're NOT
impressive.
(That's how I read "Wishy washy", "not impressive".)
Not impressive, they just work. Really. Every day.
Damn it Jim, your the doctor.
I'm trying to get to the bottom of these things and
you keep avoiding them.
That's not very scientific. If you don't know something
I would rather hear that.
For one, I don't know what you mean by "Wishy, Washy, Eh?"
Okay, then. You mean pretty much what I thought you meant.
"The most general _description_ of a function is NOT
that it is composed of operators.
The most general _description_ of a function is that,
_for each combination of inputs, there is a unique output_
We can supplement that with more detail, but, if we do,
we will have narrowed our focus to only _some_ functions.
And maybe we want to narrow our focus, sometimes.
But maybe we don't, sometimes. "
(end of Jim's earlier statement I called wishy washy)
As I see it we are discussing operators versus functions,
though variables are where we started out.
Variables are certainly where I started in this thread.
You made a comment about how mathematicians are so fond
of variables, which sounded dismissive of variables to me.
Looking back, it seems that I was right about your
dismissiveness.
To me there is no doubt as to which is more fundamental;
it is the operator.
I hope that you're not asking me to give you a pass on
this claim because to you there is no doubt.

----
All operators are functions, the for-each-input-a-unique-output
thingies.

Not all functions are operators and not all can be composed
from operators.

I'm pretty sure there is a diagonal argument that there are
more for-each-input-unique-output functions on an infinite
domain than there are finite compositions of finitely many
operators.

As I see it, that makes functions more fundamental than
operators, since we can always describe operators as
functions, but we can't always describe functions with
operators.

More later.
Jim Burns
2020-11-29 01:25:05 UTC
Permalink
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
This last suggestion about filters sounds perilously
close to making your not communicating somehow my fault.
You may want to reconsider that.

Integration uses addition and other things.

Vector behavior uses (a different kind of) addition
and scalar multiplication.

Our numbers are ultimately numbers and addition can be
a useful thing to do with them.

If a type of number that has addition loses that feature,
it will no longer be that type of number.

You seem to be using "generality" in some technical sense.
If you are concerned about your language not translating,
consider using more of it, to explain what you mean.

You also appear to have something in mind when you write
"functional analysis" that probably needs more words, too.
I think that you would like to dispute that what we call
"functions" are for-each-input-unique-output.

None of this response of mine looks to me particularly
useful to you. But you wished for an admission or a
denial.

My best guess is that what you are seeing addition as
essential to are primitive recursive functions.
https://en.wikipedia.org/wiki/Primitive_recursive_function

I would answer that addition is primitive recursive,
so it's unavoidable in talking about a primitive
recursive function (indefinitely). However, since addition
is not in the definition of primitive recursive function,
it doesn't look to me like addition is in its core.
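
To show what is meant by addition being primitive recursive, here is a
sketch of addition built from nothing but zero, successor, and recursion
(a toy model in Python, not a formal definition):

    def succ(n):
        return n + 1
    def add(a, b):               # add(a, 0) = a ; add(a, b+1) = succ(add(a, b))
        return a if b == 0 else succ(add(a, b - 1))
    print(add(2, 3))             # 5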
Timothy Golden
2020-11-29 13:50:20 UTC
Permalink
Post by Jim Burns
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
This last suggestion about filters sounds perilously
close to making your not communicating somehow your fault.
You may want to reconsider that.
Integration uses addition and other things.
Vector behavior uses (a different kind of) addition
and scalar multiplication.
Our numbers are ultimately numbers and addition can be
a useful thing to do with them.
If a type of number that has addition loses that feature,
it will no longer be that type of number.
You seem to be using "generality" in some technical sense.
If you are concerned about your language not translating,
consider using more of it, to explain what you mean.
You also appear to have something in mind when you write
"functional analysis" that probably needs more words, too.
I think that you would like to dispute that what we call
"functions" are for-each-input-unique-output.
None of this response of mine looks to me particularly
useful to you. But you wished for an admission or a
denial.
My best guess is that what you are seeing addition as
essential to are primitive recursive functions.
https://en.wikipedia.org/wiki/Primitive_recursive_function
I would answer that addition is primitive recursive,
so it's unavoidable in talking about a primitive
recursive function (indefinitely). However, since addition
is not in the definition of primitive recursive function,
it doesn't look to me like addition is in its core.
Well, I think you avoid expressing it, but it seems that you will refuse to admit that operators are more fundamental than functions. I've offered a way that does allow this thinking, and it does not make use of the binary operator, whose requirements are not n-ary. Addition is a pure operator and is n-ary; in other words
a + b + c
is as unambiguous language as
a + b
is.
a, a + b, a + b + c, a + b + c + d, ...
is the n-ary pure form. Functions have an input and an output. Sums just are. They can be evaluated, but that evaluation is not a matter to be tampered with. Functions take many forms and are modifiable. The sum is implicitly existent within the types which carry it cleanly. This is in the spirit of the ring definition, yet their allowance for non-commutative systems has run amuck.

The poor youth and even oldsters like me when reading something like this:
"Camille Jordan named abelian groups after Norwegian mathematician Niels Henrik Abel, because Abel found that the commutativity of the group of a polynomial implies that the roots of the polynomial can be calculated by using radicals."

are now caught wondering which polynomial Abel was working with. I've got a new style X for the AA'ers with sharp points...

Jim, I've had a new thought now. We know that operators require things of the same type. I know you think I'm all computer-centric, but this is embodied within the closure requirement of the binary operator, which AA'ers still cling to, though possibly not for much longer. This poses a problem when we allow a real value in as one element and a non-real X in as the other, as when we write the product
1.23 X
Clearly this product is meant not to operate. Clearly these elements are of differing types. Clearly closure is broken here. And so this cannot be a ring expression.

The product as we had spoken of above, which you managed to resolve via a ratio argument, but which most people tend to think of in other terms, such as
( 3 )( 4 ) = 12
and which are n-ary so that
( 3 )( 4 )( 5 )
is every bit as good an expression, and yet let's face it, if you have three oranges in one hand and four in the other you have seven oranges, not twelve. The product requires something more, and yet what if I were to say that the product is the sum? Certainly we see that
3 + 3 + 3 + 3 = 4 + 4 + 4 = 12
but we know that the mechanics don't work out great on the continuum. Neither does ordinary numerical representation, which sends humans off on concerns of 'irrational' numbers, while others like yourself take comfort in infinities and even complete neglect of the numerical form in your ordinal representation. I of course mean you no harm with these rhetorical jabs, but hope to put a bit of phlogiston into this thing.

Should we allow for the consideration of two unknown but unique entities as
a z
then in general we may as well have
a d z
and
a d z e
in our n-ary language of generality. Yes you can rotate them around and it won't matter a bit. I just happen to like the adze. The question though is what difference does it make whether this notation is the product or the sum? Either way they will not operate. They simply maintain their presence and as coupled as they are in our arbitrary construction we must allow this coupling to be validated. Here I too am guilty for this word 'couple' is the binary fail of human nature. It is the multiplistic form, as if these entities are factors; important factors which must each weigh in. Next then is the puzzle of how you go about combining terms with like entities. This becomes a bit of a puzzle game.

The trouble is that modern mathematics has gone astray from this conception. The real value is claimed to be a subset of the complex value. Higher forms of algebra in the well-behaved multiplistic sense become notions of CxCxR (P6) and so forth. And yet now we have a means to quibble over the Cartesian product. Tell me again, Jim: when you work in RxR, which R is the subset R in? There in P6 there are three of them. Which is it? Pin the tail on the donkey, I believe, is the answer; no different than pulling a three-eared rabbit out of a hat.

It is true that the modern computer and its compiled language brings integrity. The stricture which some shy away from is exactly what prevents bugs from accruing. Structure built of simplistic things as the basis should yield objects of integrity farther along. Complicating the basis is no service whatsoever.
Dan Christensen
2020-11-29 15:36:41 UTC
Permalink
Post by Timothy Golden
Well I think you avoid expressing it, but it seems that you will refuse to admit that operators are more fundamental than functions. I've offered a way that does allow this thinking, and it does not make usage of the binary operator, whose requirements are not n-ary. Addition is a pure operator and is n-ary; in other words
a + b + c
is as unambiguous language as
a + b
is.
a, a + b, a + b + c, a + b + c + d, ...
is the n-ary pure form. Functions have an input and an output. Sums just are. They can be evaluated, but that evaluation is not a matter to be tampered with. Functions take many forms and are modifiable. The sum is implicitly existent within the types which carry it cleanly. This is in the spirit of the ring definition, yet their allowance for non-commutative systems has run amuck.
Here's a sketch of how to construct, i.e. prove the existence of, the addition function on N using only basic set theory, starting from Peano's Axioms:

Construct the subset A of N^3 such that:

For all a, b, c: [(a,b,c) in A <=> (a,b,c) in N^3
& For all d C N^3: [For all e in N: [(e,0,e) in d]
& For all e, f, g: [(e,f,g) in d => (e, S(f), S(g)) in d]
=> (a,b,c) in d]]

where S is the usual successor function on N.

Then prove that A is the required binary function on N.
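A rough way to watch this construction do its work (a sketch only, in Python
rather than DC Proof, with an assumed finite cutoff LIMIT): build the least
subset of {0..LIMIT}^3 that contains every (e,0,e) and is closed under
(e,f,g) -> (e,S(f),S(g)), then check that it is exactly the graph of addition
on that fragment.

# Sketch with an arbitrary cutoff LIMIT; A is closed under the two rules above.
LIMIT = 10
A = {(e, 0, e) for e in range(LIMIT + 1)}
changed = True
while changed:
    changed = False
    for (e, f, g) in list(A):
        t = (e, f + 1, g + 1)
        if f + 1 <= LIMIT and g + 1 <= LIMIT and t not in A:
            A.add(t)
            changed = True

assert all(c == a + b for (a, b, c) in A)   # A is the graph of + on the fragment
print((3, 4, 7) in A)   # True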

I hope this helps.

Dan

Download my DC Proof 2.0 freeware at http://www.dcproof.com
Visit my Math Blog at http://www.dcproof.wordpress.com
Jim Burns
2020-11-29 21:56:25 UTC
Permalink
On Saturday, November 28, 2020 at 8:25:18 PM UTC-5,
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
Well I think you avoid expressing it, but it seems that
you will refuse to admit that operators are more
fundamental than functions.
Here: "Functions are more fundamental than operators."

I've tried to explain why I say that. Perhaps what I am
saying got buried under my explanation.

What is more fundamental or less fundamental depends upon
the order in which we explain things. This might involve
some choices on the part of the explainer, so I would
suggest that fundamentalicity is not an absolute property,
as much as it _sounds_ like it should be.

We often explain operators with for-each-input-unique-output
(FEIUO) functions.

We _can't_ explain FEIUO functions with finite compositions
of finitely many operators (FCOFMO), because there are more
FEIUO functions than FCOFMO compositions. (I'm pretty sure
of this, though I haven't worked out the proof yet.)

This is why I say functions are more fundamental than
operators. Choices are made in how we develop things,
but not every choice is possible.

What I suspect is that you see operators as more fundamental
than functions _because_ you are only thinking of functions
defined by finite composition of finitely many operators.
FCOFMO functions -- functions with code for definitions, to
continue your analogy with programming.
I've offered a way that does allow this thinking, and it
does not make usage of the binary operator, whose
requirements are not n-ary. Addition is a pure operator and
is n-ary; in other words
a + b + c
is as unambiguous language as
a + b
is.
I don't see why this or what you said in your previous
post are arguments for addition being fundamental.
a, a + b, a + b + c, a + b + c + d, ...
is the n-ary pure form.
Not on topic, but you might find it interesting that
arbitrarily long but finite sequences of natural numbers
can be represented by single natural numbers.

(A more-relevant note: there is no important difference
between binary sums and finite sums.)
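To illustrate that parenthetical note with a sketch (nothing beyond ordinary
Python folding): an n-ary sum is just the binary + applied repeatedly, and
associativity makes the grouping irrelevant.

from functools import reduce

# Sketch: the n-ary sum a1 + a2 + ... + an is the binary "+" folded over the terms.
terms = [3, 4, 5, 6]
nary_sum = reduce(lambda x, y: x + y, terms, 0)   # ((((0+3)+4)+5)+6)
print(nary_sum)      # 18
print(sum(terms))    # 18, the same value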

There's a _Chinese Remainder Theorem_ that shows what
to do for arbitrarily long sequences.

There's also a simpler way to set up a correspondence
between _pairs_ and single natural numbers. This method is
sufficient (if messy) for representing finite sequences,
if we use it over and over.

For example, we can write (i,j,k) as ((i,j),k),
map (i,j) to m, map (m,k) to n, and represent
(i,j,k) with n.

The basic plot is to _first_ arrange ordered pairs (i,j)
on the quarter-plane with i for the x-coordinate and j
for the y-coordinate...
(1,1) (2,1) (3,1) (4,1) (5,1) ...
(1,2) (2,2) (3,2) (4,2) ...
(1,3) (2,3) (3,3) ...
(1,4) (2,4) ...
(1,5) ...
...

...and _then_ line them up in a single row by
zigzagging diagonally across the quarter-plane.
1 3 6 10 15 ...
2 5 9 14 ...
4 8 13 ...
7 12 ...
11 ...
...

So,
(1,1) (1,2) (2,1) (1,3) (2,2) (3,1) ...
1 2 3 4 5 6 ...

The rule we would use to do this is to map the pair (i,j)
to the single number k = (i+j-1)*(i+j-2)/2 + i

It's invertible. For each k, there is a unique pair (i,j)
that maps to k by this rule.
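For what it's worth, both the rule and its inverse are easy to compute. Here
is a small Python sketch; the names pair and unpair are just labels I am
choosing, and unpair simply searches for the right diagonal.

# Sketch of the zigzag pairing above, for i, j >= 1.
def pair(i, j):
    return (i + j - 1) * (i + j - 2) // 2 + i

def unpair(k):
    # Find the diagonal d = i+j-1 with d*(d-1)/2 < k <= d*(d+1)/2,
    # then read i off the offset along that diagonal.
    d = 1
    while d * (d + 1) // 2 < k:
        d += 1
    i = k - d * (d - 1) // 2
    j = d - i + 1
    return (i, j)

print([pair(i, j) for (i, j) in [(1,1), (1,2), (2,1), (1,3), (2,2), (3,1)]])
# [1, 2, 3, 4, 5, 6]
print(unpair(5))   # (2, 2)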
Timothy Golden
2020-11-30 16:30:56 UTC
Permalink
Post by Jim Burns
On Saturday, November 28, 2020 at 8:25:18 PM UTC-5,
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
Well I think you avoid expressing it, but it seems that
you will refuse to admit that operators are more
fundamental than functions.
Here: "Functions are more fundamental than operators."
I've tried to explain why I say that. Perhaps what I am
saying got buried under my explanation.
What is more fundamental or less fundamental depends upon
the order in which we explain things. This might involve
some choices on the part of the explainer, so I would
suggest that fundamentalicity is not an absolute property,
as much as it _sounds_ like it should be.
We often explain operators with for-each-input-unique-output
(FEIUO) functions.
We _can't_ explain FEIUO functions with finite compositions
of finitely many operators (FCOFMO), because there are more
FEIUO functions than FCOFMO compositions. (I'm pretty sure
of this, though I haven't worked out the proof yet.)
This is why I say functions are more fundamental than
operators. Choices are made in how we develop things,
but not every choice is possible.
What I suspect is that you see operators as more fundamental
than functions _because_ you are only thinking of functions
defined by finite composition of finitely many operators.
FCOFMO functions -- functions with code for definitions, to
continue your analogy with programming.
I've offered a way that does allow this thinking, and it
does not make usage of the binary operator, whose
requirements are not n-ary. Addition is a pure operator and
is n-ary; in other words
a + b + c
is as unambiguous language as
a + b
is.
I don't see why this or what you said in your previous
post are arguments for addition being fundamental.
a, a + b, a + b + c, a + b + c + d, ...
is the n-ary pure form.
Not on topic, but you might find it interesting that
arbitrarily long but finite sequences of natural numbers
can be represented by single natural numbers.
(A more-relevant note: there is no important difference
between binary sums and finite sums.)
There's a _Chinese Remainder Theorem_ that shows what
to do for arbitrarily long sequences.
There's also a simpler way to set up a correspondence
between _pairs_ and single natural numbers. This method is
sufficient (if messy) for representing finite sequences,
if we use it over and over.
For example, we can write (i,j,k) as ((i,j),k),
map (i,j) to m, map (m,k) to n, and represent
(i,j,k) with n.
The basic plot is to _first_ arrange ordered pairs (i,j)
on the quarter-plane with i for the x-coordinate and j
for the y-coordinate...
(1,1) (2,1) (3,1) (4,1) (5,1) ...
(1,2) (2,2) (3,2) (4,2) ...
(1,3) (2,3) (3,3) ...
(1,4) (2,4) ...
(1,5) ...
...
...and _then_ line them up in a single row by
zigzagging diagonally across the quarter-plane.
1 3 6 10 15 ...
2 5 9 14 ...
4 8 13 ...
7 12 ...
11 ...
...
So,
(1,1) (1,2) (2,1) (1,3) (2,2) (3,1) ...
1 2 3 4 5 6 ...
The rule we would use to do this is to map the pair (i,j)
to the single number k = (i+j-1)*(i+j-2)/2 + i
"Here: "Functions are more fundamental than operators." "

Then how is it that operators are in use within your description of functions?

That's a bit short, but other than a lookup-table-style function, which arguably is ordinal and not numeral, I don't see much in the way of functions being offered here. How about a fine instance of your function that does not use operators? How interesting will it be? Must it be trivial? Pick an f(), please.

I can name a few decent functions like sin(), whose series expansion does wonderful work... and contains operators. We simply cannot have it both ways. One ought to be more fundamental, and sensibility says it is the operator. Proof is practically free-standing. Are you going to declare for me a function that performs addition? Obviously no. You are just asking me to do an exercise here. Woof, woof.
Post by Jim Burns
It's invertible. For each k, there is a unique pair (i,j)
that maps to k by this rule.
Jim Burns
2020-11-30 20:36:22 UTC
Permalink
On Sunday, November 29, 2020 at 4:56:38 PM UTC-5,
Post by Jim Burns
On Saturday, November 28, 2020 at 8:25:18 PM UTC-5,
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
Well I think you avoid expressing it, but it seems that
you will refuse to admit that operators are more
fundamental than functions.
Here: "Functions are more fundamental than operators."
[...]
Post by Jim Burns
The rule we would use to do this is to map the pair (i,j)
to the single number k = (i+j-1)*(i+j-2)/2 + i
"Here: "Functions are more fundamental than operators.""
Then how is it that operators are in use within your
description of functions?
My description of functions is FEIUO,
for-each-input-unique-output.
There is no "operator" in that.

AH! But I can use '+','-','*','/' to _describe_
this map of ordered pairs (i,j) to individual k.
k = (i+j-1)*(i+j-2)/2 + i

Maybe these different kinds of description are the source
of our miscommunication.

(i+j-1)*(i+j-2)/2 + i describes one function.
FEIUO describes any function.

Let us return again to the concept of indefinite reference
(variables).

-- Describe what we are discussing.
-- Reason from the description to new statements.

Our starting description needs to be general enough
to be true of each thing which we are talking about.
This is how we know that the new statements will also
be true of each thing which we are talking about,
because we only use inferences that preserve this
true-of-each-thing-which-we-are-talking-about property.

Again,
(i+j-1)*(i+j-2)/2 + i describes one function.
FEIUO describes any function.

We can use FEIUO as a starting place to reason about
functions-in-general. (i+j-1)*(i+j-2)/2 + i, not so much.

We could cast our net wider. Include '-','*','/'.
Include finitely many compositions of '+','-','*','/'.
Even so, those will only be enough to describe _some_
FEIUO functions, not _all_
How about a fine instance of your function that
does not use operators?
Let's start by listing all the functions that are
defined only by operators.

(I added "only". Otherwise, your request is silly.
Of course, whatever f(j) is, we can write it as
0+f(j). My best guess is that you mean "only".)

I'll suppose they all have finite-length definitions
written with a finite character set. We can translate
such a definition into a natural number.

Some operator-only functions have more than one
definition, so they're listed more than once.
That's okay. Even multiply listed, there are more
FEIUO functions than operator-only functions.

Some numbers don't have any definition that translates
into them. Maybe there's a character string, but it's
nonsense. Maybe there's not even a character string.
That's okay. Any number without a function, we'll
pretend its function is f(j) = 0. It won't matter.

So. For the i_th function, a finite-length operator-only
definition of f[i](j) is encoded into i.
We have a list L(i,j) = f[i](j)

Define a function different from each of those
functions: L(j,j)+1.
L(j,j)+1 does not use only operators.
We know that L(j,j)+1 does not use only operators
because we listed all the functions that use only operators,
and L(j,j)+1 differs from the i_th listed function at input i,
so it is none of them.

However, L(j,j)+1 *is* for-each-input-unique-output.
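A finite toy version of the same diagonal move, as a Python sketch (the four
listed functions are made up for illustration, and a finite list proves
nothing by itself): g(j) = L[j](j) + 1 disagrees with the j-th listed
function at input j, so g appears nowhere on the list.

# Toy diagonal sketch: L is a (here, finite) list of functions on the naturals.
L = [lambda j: 0,
     lambda j: j,
     lambda j: j * j,
     lambda j: 2 * j + 7]

def g(j):
    return L[j](j) + 1   # differs from the j-th listed function at input j

for j in range(len(L)):
    assert g(j) != L[j](j)
print([g(j) for j in range(len(L))])   # [1, 2, 5, 14]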
Are you going to declare for me a function that
performs addition? Obviously no.
There's a communication problem here, if what is
obviously-no to you is obviously-yes to me.

Addition is a function that performs addition.
It is FEIUO, for-each-input-unique-output.
So, when I reason about FEIUO functions,
I reason about addition. Obviously.
Chris M. Thomasson
2020-11-30 21:15:13 UTC
Permalink
Post by Jim Burns
On Sunday, November 29, 2020 at 4:56:38 PM UTC-5,
Post by Jim Burns
On Saturday, November 28, 2020 at 8:25:18 PM UTC-5,
Post by Timothy Golden
Integration is in fact addition at its core. Vector
behavior is addition at its core. Our numbers are ultimately
addition at their core. If they lose this feature then there
is a problem with their generality and it is this generality
which even allows the discussion of functional analysis. I
wish you could admit this detail or deny it. Possibly my
language just does not translate through your own filters.
Well I think you avoid expressing it, but it seems that
you will refuse to admit that operators are more
fundamental than functions.
Here: "Functions are more fundamental than operators."
[...]
Post by Jim Burns
The rule we would use to do this is to map the pair (i,j)
to the single number k = (i+j-1)*(i+j-2)/2 + i
"Here: "Functions are more fundamental than operators.""
Then how is it that operators are in use within your
description of functions?
My description of functions is FEIUO,
for-each-input-unique-output.
[...]

Why unique?

function f(x) = x * 0

Many different inputs give the same output...
Python
2020-11-30 21:25:18 UTC
Permalink
...
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".

"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) meaningful to start with.
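A tiny Python sketch of the distinction, with made-up relations f and
not_a_function: "a unique output for each input" is what makes a set of pairs
a function at all; "distinct outputs for distinct inputs" is the separate,
stronger property of being injective.

# Sketch: f is a function (each input appears once) but is not injective;
# not_a_function fails uniqueness because the input 0 has two outputs.
f = {(0, 0), (1, 0), (2, 0)}
not_a_function = {(0, 1), (0, 2)}

def is_function(rel):
    inputs = [x for (x, _) in rel]
    return len(inputs) == len(set(inputs))   # no input listed twice

print(is_function(f))                # True
print(is_function(not_a_function))   # False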
Jim Burns
2020-11-30 22:09:52 UTC
Permalink
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.

But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
Python
2020-11-30 22:13:14 UTC
Permalink
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
You expressed it right, Chris is just mentally deficient. His
reading abilities are as bad as his writing abilities.
Chris M. Thomasson
2020-11-30 22:26:47 UTC
Permalink
Post by Python
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
You expressed it right, Chris is just mentally deficient. His
reading abilities are as bad as his writing abilities.
Sorry for my mistake on this. Humm... You remind me of JG: Nasty. Oh
well, shi% happens.
Chris M. Thomasson
2020-12-01 00:08:51 UTC
Permalink
Post by Chris M. Thomasson
Post by Python
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
You expressed it right, Chris is just mentally deficient. His
reading abilities are as bad as his writing abilities.
Sorry for my mistake on this. Humm... You remind me of JG: Nasty. Oh
well, shi% happens.
Sorry for snapping at you. The comparison to JG was uncalled for. Damn.
Sergio
2020-12-01 01:11:25 UTC
Permalink
Post by Chris M. Thomasson
Post by Python
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
You expressed it right, Chris is just mentally deficient. His
reading abilities are as bad as his writing abilities.
Sorry for my mistake on this. Humm... You remind me of JG: Nasty. Oh
well, shi% happens.
when they talk about "writing" abilities, or "spelting" errors, you have
won, as they are mentally back in grade school being punished by their
Englrish Teacher!
Sergio
2020-12-01 01:53:30 UTC
Permalink
Post by Sergio
Post by Chris M. Thomasson
Post by Python
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
You expressed it right, Chris is just mentally deficient. His
reading abilities are as bad as his writing abilities.
Sorry for my mistake on this. Humm... You remind me of JG: Nasty. Oh
well, shi% happens.
when they talk about "writing" abilities, or "spelting" errors, you have
won, as they are mentally back in grade school being punished by their
Englrish Teacher!
the use of "facepalm" is another sign of Engrlish Teacher abuse...

also the demand for an admission or a denial...


sounds like the old black box problem, lots of inputs (separate
variables, or unique or distinct) with lots of outputs, or just one,
which is dependent upon the input variables in some way.

Took several courses in it, Control Theory. But I have been ignoring
this thread as it is too wordy, too shallow, only 1/k-th of an inch deep,
where k > 100.

Chris M. Thomasson
2020-11-30 22:24:35 UTC
Permalink
Post by Jim Burns
Post by Python
Post by Chris M. Thomasson
Post by Jim Burns
My description of functions is FEIUO,
for-each-input-unique-output.
[...]
Why unique?
function f(x) = x * 0
Many different inputs, gives the same output...
*facepalm* Chris, "unique" does not mean "distinct".
Yikes! Sorry.
Post by Jim Burns
Post by Python
"unique" means that if (x,y) \in f and (x,y') \in f then y=y'.
which makes y=f(x) to be meaningful to start with.
What you said.
But my goal in using words like "unique" is to make
my posts more easily accessible. (I hope that if I use
up less mindspace on things like notation, my actual
point will get transmitted.) If "unique" or "distinct"
are not helping, I wish to hear about it. (I wouldn't
go so far as saying I'm glad to hear about it, but
it is what it is.)
Timothy Golden
2020-11-21 16:21:52 UTC
Permalink
Post by FredJeffries
Post by WM
omega is a point at the ordinal line. If there is nothing immediately before omega, why does it stay where it stays?
Why does zero (or one) stay where IT is?
http://www.sorites.org/Issue_11/item08.htm
I simply see a barrier at zero. Quite a lot of them. In fact a ridiculously large quantity of them. Not sure what that little character is where 1/8 would go. Looks like an s with a tail at gravitational bottom. It is a fun problem as laid out with some physics in the mix. Quite odd except the quantum well isn't far off. More interesting would be to allow the ball to jump and land in those many bounds and predict its behavior. With no drag there would have to be quite a lot of action in there. Essentially an iterated problem. Could get to some decent dynamics.
Sergio
2020-11-13 14:37:03 UTC
Permalink
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
why did you stop at ω? that has only ω of numbers! it is not infinite.

does ℵo terms mean an infinite # of terms? Or ω of terms?
Post by WM
Every sequence of natural numbers descending from ω to 0 is finite.
and it has ω # of terms
Post by WM
This follows from the axiom of foundation.
"But above all it is dictated by the practical impossibility to define
individually actually infinitely many predecessors of ω.

"....practical impossibility to define individually actually infinitely
many predecessors..."

.........so it is Not Impossible.
Post by WM
Obviously ℵo numbers are existing, but are not available as destinations for the leap from ω.
?
Post by WM
Regards, WM
Mostowski Collapse
2020-11-13 20:37:52 UTC
Permalink
WM wrote:
"Every sequence of natural numbers descending from ω to 0 is finite."

That's how omega is constructed. It is the union of all finite ordinals.
It's a limit ordinal. It has infinite branching at the top; the lines are
the membership relation ∈:

ω
/ / | \ \ ....
| | | | |
0 1 2 3 4

But because the finite ordinals are also related to each other, the
more correct depiction would be:

ω
/ / | \ \ ....
| | | | 4
| | | | /
| | | 3
| | | /
| | 2
| | /
| 1
| /
0

You see that the above is well-founded: the descending
paths ω-0, ω-1-0, ω-2-1-0, etc. are all finite.

https://en.wikipedia.org/wiki/Limit_ordinal

Hope This Helps!
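A small sketch of that finiteness claim in Python (the starting value 20 and
the random choice of predecessor are arbitrary): a strictly descending
sequence of naturals that starts at n reaches 0 after at most n steps, so
every such descent is finite.

import random

# Sketch: each step drops to a strictly smaller natural, so the path ends at 0.
def random_descent(n):
    path = [n]
    while path[-1] > 0:
        path.append(random.randrange(path[-1]))   # any natural below the current one
    return path

p = random_descent(20)
print(p)              # e.g. [20, 13, 5, 2, 0]
print(len(p) <= 21)   # True: at most n+1 entries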
Post by WM
Descending Sequences of Natural Numbers
Every sequence of natural numbers ascending from 0 to ω is actually infinite; it has ℵo terms.
Every sequence of natural numbers descending from ω to 0 is finite. This follows from the axiom of foundation. But above all it is dictated by the practical impossibility to define individually actually infinitely many predecessors of ω.
Obviously ℵo numbers are existing, but are not available as destinations for the leap from ω.
Regards, WM
Gus Gassmann
2020-11-13 21:02:23 UTC
Permalink
Post by Mostowski Collapse
"Every sequence of natural numbers descending from ω to 0 is finite."
Thats how omega is constructed. It is the union of all finite ordinals.
Its a limit ordinal. It has infinte branching at the top, the lines are
ω
/ / | \ \ ....
| | | | |
0 1 2 3 4
But because the finite ordinals are also in relationship to each other the
ω
/ / | \ \ ....
| | | | 4
| | | | /
| | | 3
| | | /
| | 2
| | /
| 1
| /
0
You see that the above is well founded, the descending
paths ω-0, ω-1-0, ω-2-1-0, etc.. are all finite.
https://en.wikipedia.org/wiki/Limit_ordinal
Hope This Helps!
Hope springs eternal... :)
WM
2020-11-14 12:15:00 UTC
Permalink
Post by Mostowski Collapse
You see that the above is well founded, the descending
paths ω-0, ω-1-0, ω-2-1-0, etc.. are all finite.
How many natnumbers can you select if you don't tell them whether you will continue upwards or downwards?
Post by Mostowski Collapse
Hope This Helps!
Regards, WM