Discussion:
8-bit characters
Phillip Helbig (undress to reply)
2021-11-10 09:44:34 UTC
Having to write some Icelanding words in a DECterm (as one does), I
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case
thorn (Þ þ if those characters get through). If entered by COMPOSE,
both create the character, unless it is at the beginning of a line, in
which case one sees <XDE> or <XFE> (one character, displayed as
several). ASCII values are 222 and 254. Refreshing the screen also
causes the mnemonics to appear. Also, they are not displayed via
HELP FORTRAN CHAR DEC.

Any deeper reason or just flaky instrumentation?

I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work). Again,
deeper meaning or just flaky?
Jan-Erik Söderholm
2021-11-11 00:21:38 UTC
Post by Phillip Helbig (undress to reply)
Having to write some Icelanding words in a DECterm (as one does), I
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case thorn
(Þ þ if those characters get through).  If entered by COMPOSE, both create
the character, unless it is at the beginning of a line, in which case one
sees <XDE> or <XFE> (one character, displayed as several). ASCII values
are 222 and 254.  Refreshing the screen also causes the mnemonics to
appear.  Also, they are not displayed via HELP FORTRAN CHAR DEC.
Any deeper reason or just flaky instrumentation?
I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work).  Again,
deeper meaning or just flaky?
You're definitely not looking at ASCII, and AFAIK Þ and þ aren't in DEC
MCS, which likely means you're looking at inconsistent handling of or
inconsistent configuration of ISO 8859-1 among your apps and OS and
hardware; I'd guess some of this is MCS, and some 8859-1.
You've asked variations of this question over the years too, usually
involving trying to use EDT past ASCII or maybe past DEC MCS.
https://groups.google.com/g/comp.os.vms/c/QAQAyRo9BPM/m/IrmCw1UJBQAJ
https://groups.google.com/g/comp.os.vms/c/Yji2Tufvv7k/m/mhUy-zKXAAAJ
etc.
This is part of the (lack of) UTF-8 and Unicode support in OpenVMS and its
tooling that I've grumbled about. Not that adding UTF-8 and Unicode support is
ever going to be a small overhaul.
Now, UTF8 is just a "row of bytes", so if you use (as an example) Putty
in its default setup using UTF8, you can type (or copy/paste) any UTF8
character into Putty and it will be stored by whatever editor you
are using. It is just a row of bytes, so there is no specific need for
any "UTF8 support" for doing just that.

Later on, if you send the same text to some UTF8-compatible display (like
another Putty session using the default UTF8 setup, or a web browser using
UTF8 encoding) the Icelandic characters would be displayed just fine.

But if you are using some display tool that doesn't support UTF8, you
will get garbled text, of course. But that is not the fault of OpenVMS.
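That pass-through behaviour can be sketched in a few lines of Python (an illustrative example; the Icelandic sample string is made up):

```python
# UTF-8 text survives any tool that stores and forwards bytes unmodified.
text = "Þórður"                 # Icelandic sample with thorn and eth
raw = text.encode("utf-8")      # the "row of bytes" on disk or on the wire

# A byte-transparent editor or transport just copies the bytes around...
copied = bytes(raw)

# ...and a UTF-8-aware display at the far end recovers the text intact.
assert copied.decode("utf-8") == "Þórður"

# A display that assumes a single-byte character set shows mojibake instead.
assert copied.decode("latin-1") != "Þórður"
```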

It is unclear whether ISO/IEC 646 has (or had) support for Icelandic
characters; the Wikipedia page has an entry for "IS" in some tables but
no real data.
https://en.wikipedia.org/wiki/ISO/IEC_646

Then of course, it is a different matter entirely if you are talking about
UTF8 support for symbols/variables in compilers or in file/directory
names, but that is a totally different area from just storing and
displaying some "data" that happens to include UTF8 sequences.
But that isn't in the scope of the question asked.

I would not expect tools like DECterm or VT220 (really?) to handle
UTF8 or anything else outside the DEC-MCS range of characters. If you
need that, simply use a modern tool from the last 20 years or so.
Stephen Hoffman
2021-11-11 01:28:26 UTC
Post by Jan-Erik Söderholm
Now, UTF8 is just a "row of bytes", so if you use (as an example) Putty
in its default setup using UTF8, you can type (or copy/paste) any UTF8
character into Putty and it will be stored using whatever editor you
are using. It is just a row of bytes, so there is no specific need for
any "UTF8 support" for doing just that.
You're quite possibly headed for a few surprises within OpenVMS apps,
even if cutting-and-pasting wads of bytes around. Not the least of
which involves counting characters/code points/clusters (one byte is no
longer one character, so is the app looking for the buffer size or the
character/code point/cluster length, and what to do with the zero-width
stuff?), the fun that is directionality (there's a recent CVE related
to this), and identifying the string encoding and the string language
for each string, normalization, and the inherent language-sensitivity
of strings for purposes such as sorting. In aggregate, some baked-in
app and OpenVMS API assumptions—and developers' own assumptions—about
strings can and will break. Sure, UTF-8 is a "row of bytes", with some
caveats. Sort of. Mostly.
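The counting pitfalls above are easy to demonstrate in a short Python sketch (the sample string is hypothetical):

```python
# One "length" question, three different answers.
s = "Þjóð\u200b"                   # Icelandic word plus a zero-width space

byte_len = len(s.encode("utf-8"))  # what a buffer-size check sees
code_points = len(s)               # what a code-point count sees

assert byte_len == 10       # Þ, ó, ð take 2 bytes each; U+200B takes 3
assert code_points == 5
# The zero-width space counts as a code point but occupies no display
# column, so neither number matches what the user perceives on screen.
```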

There are a few corners of OpenVMS that have some UTF-8 support; one is
the XQP. Another is C, including the I18N bits. And Java. I'd expect that
pervasive support is at least a decade away.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2021-11-11 01:46:10 UTC
Post by Jan-Erik Söderholm
This is part of the (lack of) UTF-8 and Unicode support in OpenVMS and
its tooling that I've grumbled about. Not that adding UTF-8 and Unicode
support is ever going to be a small overhaul.
Now, UTF8 is just a "row of bytes", so if you use (as an example) Putty
in its default setup using UTF8, you can type (or copy/paste) any UTF8
character into Putty and it will be stored using whatever editor you
are using. It is just a row of bytes, so there is no specific need for
any "UTF8 support" for doing just that.
Later on, if you send the same text to some UTF8-compatible display (like
another Putty session using the default UTF8 setup, or a web browser using
UTF8 encoding) the Icelandic characters would be displayed just fine.
But if you are using some display tool that doesn't support UTF8, you
will get garbled text, of course. But that is not the fault of OpenVMS.
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length and that byte index i is not character
index i (and worse byte index i may not even point to a character at all
if it hits in the middle of a multi-byte sequence).
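For example, in Python (a sketch):

```python
s = "þorn"                  # lower-case thorn followed by ASCII
b = s.encode("utf-8")

assert len(s) == 4          # character (code point) length
assert len(b) == 5          # byte length: þ encodes as C3 BE, two bytes

# Byte index 1 falls in the middle of þ's two-byte sequence, so the
# tail is not valid UTF-8 on its own:
try:
    b[1:].decode("utf-8")
    hit_mid_sequence = False
except UnicodeDecodeError:
    hit_mid_sequence = True
assert hit_mid_sequence
```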
Post by Jan-Erik Söderholm
It is unclear whether ISO/IEC 646 has (or had) support for Icelandic
characters; the Wikipedia page has an entry for "IS" in some tables but
no real data.
https://en.wikipedia.org/wiki/ISO/IEC_646
http://www.kreativekorp.com/charset/encoding/ISO646IS/

https://www.freeutils.net/source/jcharset/ &
https://www.mvndoc.com/c/net.freeutils/jcharset/net/freeutils/charset/iso646/ISO646ISCharset.html

seem to indicate that it exists.

Arne
Lawrence D’Oliveiro
2021-11-11 04:48:39 UTC
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
Arne Vajhøj
2021-11-11 16:21:42 UTC
Post by Lawrence D’Oliveiro
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
No.

It is a practical problem in UTF-8 as everything not in ASCII is more
than 1 byte.

It is a theoretical problem in UTF-16 because there are defined Unicode
code points that become more than 2 bytes (they are just extremely
rare).

It is not a problem for UTF-32 as everything is 4 bytes.
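The three cases can be checked directly (Python sketch; byte lengths per encoding, without the BOM):

```python
ascii_s, latin_s, emoji_s = "abc", "þæö", "\U0001F600"

# UTF-8: variable width as soon as you leave ASCII.
assert len(ascii_s.encode("utf-8")) == 3
assert len(latin_s.encode("utf-8")) == 6        # two bytes per character

# UTF-16: two bytes per BMP code point, but code points above U+FFFF
# (such as emoji) need a surrogate pair, i.e. four bytes.
assert len(latin_s.encode("utf-16-le")) == 6
assert len(emoji_s.encode("utf-16-le")) == 4

# UTF-32: always four bytes per code point.
assert len(latin_s.encode("utf-32-le")) == 12
assert len(emoji_s.encode("utf-32-le")) == 4
```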

Arne
Craig A. Berry
2021-11-11 18:17:53 UTC
Post by Arne Vajhøj
Post by Lawrence D’Oliveiro
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
No.
It is a practical problem in UTF-8 as everything not in ASCII is more
than 1 byte.
It is a theoretical problem in UTF-16 because there are defined unicode
code points that become more than 2 bytes (they are just extremely
rare).
It is not a problem for UTF-32 as everything is 4 bytes.
Back when it was called UCS-4, I think that was true. But as far as I
know, all the ones with UTF in the name are varying width. I think
there are a couple of emojis that take more than 4 bytes and would need
two UTF-32 chunks to represent a single character. But even if the
encoding is not varying width, the number of characters displayed might
not match the number of code points because of things like combining
characters.
Craig A. Berry
2021-11-11 19:35:55 UTC
Post by Arne Vajhøj
Post by Lawrence D’Oliveiro
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
No.
It is a practical problem in UTF-8 as everything not in ASCII is more
than 1 byte.
It is a theoretical problem in UTF-16 because there are defined unicode
code points that become more than 2 bytes (they are just extremely
rare).
It is not a problem for UTF-32 as everything is 4 bytes.
Back when it was called UCS-4, I think that was true.  But as far as I
know, all the ones with UTF in the name are varying width.  I think
there are a couple of emojis that take more than 4 bytes and would need
two UTF-32 chunks to represent a single character.
<quote>
In the Unicode Standard, the codespace consists of the integers from 0
to 10FFFF₁₆, comprising 1,114,112 code points available for assigning
the repertoire of abstract characters.
</quote>
<quote>
Each Unicode code point is represented directly by a single 32-bit
code unit. Because of this, UTF-32 has a one-to-one relationship
between encoded character and code unit; it is a fixed-width character
encoding form.
</quote>
Hmm. You're right. For some reason I had thought they'd blown the
4-byte limit with emojis, but it doesn't seem UTF-32 has any provision
for surrogate pairs.
But even if the
encoding is not varying width, the number of characters displayed might
not match the number of code points because of things like combining
characters.
Display is another issue - a way more complex issue.
Arne
Lawrence D’Oliveiro
2021-11-11 22:45:38 UTC
<quote>
Each Unicode code point is represented directly by a single 32-bit
code unit. Because of this, UTF-32 has a one-to-one relationship
between encoded character and code unit; it is a fixed-width character
encoding form.
</quote>
Beware of terminology! What a normal person might call a “character”, they call a “text element”. This is represented by one or more of what they are calling an “encoded character”.

So they are able to call UTF-32/UCS-4 a “fixed-width” encoding, only with reference to “encoded characters”, not actually to “characters”.
Lawrence D’Oliveiro
2021-11-11 23:01:06 UTC
Post by Lawrence D’Oliveiro
<quote>
Each Unicode code point is represented directly by a single 32-bit
code unit. Because of this, UTF-32 has a one-to-one relationship
between encoded character and code unit; it is a fixed-width character
encoding form.
</quote>
Beware of terminology! What a normal person might call a “character”, they call a “text
element”. This is represented by one or more of what they are calling an “encoded character”.
Actually, the term “text element” is less specific than that. More accurate terms, according to <https://www.unicode.org/reports/tr29/tr29-39.html>, would be “user-perceived character” or “grapheme cluster”.
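The distinction is easy to demonstrate (Python sketch using a combining accent):

```python
import unicodedata

# One user-perceived character ("grapheme cluster"), two encodings of it:
single = "\u00e9"        # 'é' as one precomposed code point
combined = "e\u0301"     # 'e' followed by a combining acute accent

assert single != combined      # different code point sequences
assert len(single) == 1
assert len(combined) == 2      # so UTF-32 needs 1 vs 2 fixed-width units

# Normalization maps both to the same "character":
assert unicodedata.normalize("NFC", combined) == single
```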
Lawrence D’Oliveiro
2021-11-11 21:57:02 UTC
Post by Lawrence D’Oliveiro
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
No.
You didn’t know, then, that what Unicode codes define are not characters, but code points?
Arne Vajhøj
2021-11-12 00:21:38 UTC
Post by Lawrence D’Oliveiro
Post by Lawrence D’Oliveiro
Post by Arne Vajhøj
The biggest problems with UTF-8 are that the byte length is not
necessarily the character length ...
That would be true of any Unicode encoding, even UCS-4.
No.
You didn’t know, then, that what Unicode codes define are not characters, but code points?
Nonsense.

<quote>
The Unicode Standard specifies a numeric value (code point) and a name
for each of its characters.
...
Unicode characters are represented in one of three encoding forms: a
32-bit form (UTF-32), a 16-bit form (UTF-16), and an 8-bit form (UTF-8).
</quote>

Arne
Phillip Helbig (undress to reply)
2021-11-11 05:45:34 UTC
Post by Jan-Erik Söderholm
Now, UTF8 is just a "row of bytes", so if you use (as an example) Putty
in its default setup using UTF8, you can type (or copy/paste) any UTF8
character into Putty and it will be stored using whatever editor you
are using. It is just a row of bytes, so there is no specific need for
any "UTF8 support" for doing just that.
Later on, if you send the same text to some UTF8-compatible display (like
another Putty session using the default UTF8 setup, or a web browser using
UTF8 encoding) the Icelandic characters would be displayed just fine.
But if you are using some display tool that doesn't support UTF8, you
will get garbled text, of course. But that is not the fault of OpenVMS.
It is unclear whether ISO/IEC 646 has (or had) support for Icelandic
characters; the Wikipedia page has an entry for "IS" in some tables but
no real data.
https://en.wikipedia.org/wiki/ISO/IEC_646
Then of course, it is a different matter entirely if you are talking about
UTF8 support for symbols/variables in compilers or in file/directory
names, but that is a totally different area from just storing and
displaying some "data" that happens to include UTF8 sequences.
But that isn't in the scope of the question asked.
Right; just a text file.
Post by Jan-Erik Söderholm
I would not expect tools like DECterm or VT220 (really?)
Sorry, VT320. :-)
Post by Jan-Erik Söderholm
to handle
UTF8 or anything else outside the DEC-MCS range of characters. If you
need that, simply use a modern tool from the last 20 years or so.
I don't expect anything more than MCS. I'm just wondering why in a
DECterm it is sometimes displayed correctly and sometimes not.
Michael Moroney
2021-11-11 06:41:27 UTC
Post by Phillip Helbig (undress to reply)
Post by Jan-Erik Söderholm
to handle
UTF8 or anything else outside the DEC-MCS range of characters. If you
need that, simply use a modern tool from the last 20 years or so.
I don't expect anything more than MCS. I'm just wondering why in a
DECterm it is sometimes displayed correctly and sometimes not.
If you're talking about how EDT behaves, it's because EDT was
inconsistent about whether that character position was printable or not;
if not printable, it treated it much like a control code (displaying it
as <Xnn>). See my other reply.
Phillip Helbig (undress to reply)
2021-11-11 05:42:35 UTC
Post by Phillip Helbig (undress to reply)
Having to write some Icelanding words in a DECterm (as one does), I
"Icelandic" of course.
Post by Phillip Helbig (undress to reply)
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case
thorn (Þ þ if those characters get through). If entered by COMPOSE, both
create the character, unless it is at the beginning of a line, in which
case one sees <XDE> or <XFE> (one character, displayed as several).
ASCII values are 222 and 254. Refreshing the screen also causes the
mnemonics to appear. Also, they are not displayed via HELP FORTRAN
CHAR DEC.
Any deeper reason or just flaky instrumentation?
I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work). Again,
deeper meaning or just flaky?
You're definitely not looking at ASCII, and AFAIK Þ and þ aren't
in DEC
MCS,
At least HELP FORTRAN CHAR DEC doesn't show them.
which likely means you're looking at inconsistent handling of or
inconsistent configuration of ISO 8859-1 among your apps and OS and
hardware; I'd guess some here is MCS, and some 8859-1.
Only LK411, Alpha hardware and DECterm (under CDE, but that's probably
irrelevant). Maybe they are inconsistent. :-|
You've asked variations of this question over the years too, usually
involving trying to use EDT past ASCII or maybe past DEC MCS.
Yes. :-)
Michael Moroney
2021-11-11 06:18:17 UTC
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
Having to write some Icelanding words in a DECterm (as one does), I
"Icelandic" of course.
Post by Phillip Helbig (undress to reply)
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case
thorn (Þ þ if those characters get through). If entered by COMPOSE, both
create the character, unless it is at the beginning of a line, in which
case one sees <XDE> or <XFE> (one character, displayed as several).
ASCII values are 222 and 254. Refreshing the screen also causes the
mnemonics to appear. Also, they are not displayed via HELP FORTRAN
CHAR DEC.
Any deeper reason or just flaky instrumentation?
I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work). Again,
deeper meaning or just flaky?
You're definitely not looking at ASCII, and AFAIK Þ and þ aren't
in DEC
MCS,
At least HELP FORTRAN CHAR DEC doesn't show them.
which likely means you're looking at inconsistent handling of or
inconsistent configuration of ISO 8859-1 among your apps and OS and
hardware; I'd guess some here is MCS, and some 8859-1.
Only LK411, Alpha hardware and DECterm (under CDE, but that's probably
irrelevant). Maybe they are inconsistent. :-|
You've asked variations of this question over the years too, usually
involving trying to use EDT past ASCII or maybe past DEC MCS.
Yes. :-)
The character set ISO-8859-1 is almost the same as DEC-MCS with some of
the undefined DEC-MCS characters being defined in ISO-8859-1. The
exceptions are a few rarely used characters such as Œ and Ÿ.
Specifically, ISO-8859-1 has Icelandic Þ and þ; these positions are
undefined in DEC-MCS. 99% of the time one can use ISO-8859-1 instead of
DEC-MCS and get away with it.
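Since ISO-8859-1 maps one-to-one onto the first 256 Unicode code points, the thorn positions are easy to check (Python sketch):

```python
# ISO-8859-1 defines thorn at the positions DEC MCS leaves undefined.
assert b"\xde".decode("iso-8859-1") == "Þ"   # 0xDE = 222 decimal
assert b"\xfe".decode("iso-8859-1") == "þ"   # 0xFE = 254 decimal

# Latin-1 byte values equal the Unicode code point numbers:
assert ord("Þ") == 0xDE and ord("þ") == 0xFE
```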

There is an EDT patch which makes it more ISO-8859-1 friendly, actually
prompted by a customer who used EDT for strictly ASCII except for a
character at the 'þ' position (but not þ). EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Phillip Helbig (undress to reply)
2021-11-11 08:10:59 UTC
Post by Michael Moroney
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case
thorn (Þ þ if those characters get through). If entered by COMPOSE, both
create the character, unless it is at the beginning of a line, in which
case one sees <XDE> or <XFE> (one character, displayed as several).
ASCII values are 222 and 254. Refreshing the screen also causes the
mnemonics to appear. Also, they are not displayed via HELP FORTRAN
CHAR DEC.
Any deeper reason or just flaky instrumentation?
I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work). Again,
deeper meaning or just flaky?
You're definitely not looking at ASCII, and AFAIK Þ and þ aren't
in DEC
MCS,
At least HELP FORTRAN CHAR DEC doesn't show them.
which likely means you're looking at inconsistent handling of or
inconsistent configuration of ISO 8859-1 among your apps and OS and
hardware; I'd guess some here is MCS, and some 8859-1.
Only LK411, Alpha hardware and DECterm (under CDE, but that's probably
irrelevant). Maybe they are inconsistent. :-|
You've asked variations of this question over the years too, usually
involving trying to use EDT past ASCII or maybe past DEC MCS.
Yes. :-)
The character set ISO-8859-1 is almost the same as DEC-MCS with some of
the undefined DEC-MCS characters being defined in ISO-8859-1. The
exceptions are a few rarely used characters such as Œ and Ÿ.
Specifically, ISO-8859-1 has Icelandic Þ and þ, these positions are
undefined in DEC-MCS. 99% of the time one can use ISO-8859-1 instead of
DEC-MCS and get away with it.
Right. And ISO-8859-15 is also similar. I routinely write € in EDT to
get the Euro sign when most people read that text.
Post by Michael Moroney
There is an EDT patch which makes it more ISO-8859-1 friendly, actually
prompted by a customer who used EDT for strictly ASCII except for a
character at the 'þ' position (but not þ).
So the patch causes the wanted characters to be displayed? Of course,
one can enter any value in EDT.
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard? Not that I need a terminal with more
than 24 lines. :-)
Michael Moroney
2021-11-11 17:01:46 UTC
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
Post by Phillip Helbig (undress to reply)
Post by Phillip Helbig (undress to reply)
notice that COMPOSE-T-H and COMPOSE-t-h create upper and lower case
thorn (Þ þ if those characters get through). If entered by COMPOSE, both
create the character, unless it is at the beginning of a line, in which
case one sees <XDE> or <XFE> (one character, displayed as several).
ASCII values are 222 and 254. Refreshing the screen also causes the
mnemonics to appear. Also, they are not displayed via HELP FORTRAN
CHAR DEC.
Any deeper reason or just flaky instrumentation?
I also notice that × (COMPOSE-x-x) works fine in a DECterm but not on a
real VT220 (where most or all other composed characters work). Again,
deeper meaning or just flaky?
You're definitely not looking at ASCII, and AFAIK Þ and þ aren't
in DEC
MCS,
At least HELP FORTRAN CHAR DEC doesn't show them.
which likely means you're looking at inconsistent handling of or
inconsistent configuration of ISO 8859-1 among your apps and OS and
hardware; I'd guess some here is MCS, and some 8859-1.
Only LK411, Alpha hardware and DECterm (under CDE, but that's probably
irrelevant). Maybe they are inconsistent. :-|
You've asked variations of this question over the years too, usually
involving trying to use EDT past ASCII or maybe past DEC MCS.
Yes. :-)
The character set ISO-8859-1 is almost the same as DEC-MCS with some of
the undefined DEC-MCS characters being defined in ISO-8859-1. The
exceptions are a few rarely used characters such as Œ and Ÿ.
Specifically, ISO-8859-1 has Icelandic Þ and þ, these positions are
undefined in DEC-MCS. 99% of the time one can use ISO-8859-1 instead of
DEC-MCS and get away with it.
Right. And ISO-8859-15 is also similar. I routinely write € in EDT to
get the Euro sign when most people read that text.
Post by Michael Moroney
There is an EDT patch which makes it more ISO-8859-1 friendly, actually
prompted by a customer who used EDT for strictly ASCII except for a
character at the 'þ' position (but not þ).
So the patch causes the wanted characters to be displayed? Of course,
one can enter any value in EDT.
Yes, since all characters in the range 0xA0-0xFF are defined and printable.
In theory it's compatible with any ISO-8859-x character set since EDT
doesn't care what the actual 8 bit characters are. It's up to the user
and their program to interpret things correctly.
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard? Not that I need a terminal with more
than 24 lines. :-)
It's a regular patch for VSI V8.4-2x and is already part of 9.X.
Phillip Helbig (undress to reply)
2021-11-11 18:33:54 UTC
Post by Michael Moroney
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard? Not that I need a terminal with more
than 24 lines. :-)
It's a regular patch for VSI V8.4-2x and is already part of 9.X.
Nice to see that EDT is still under active maintenance and even
development at VSI. :-D
Dave Froble
2021-11-11 18:40:33 UTC
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard? Not that I need a terminal with more
than 24 lines. :-)
It's a regular patch for VSI V8.4-2x and is already part of 9.X.
Nice to see that EDT is still under active maintenance and even
development at VSI. :-D
Don't be too sure about that. I believe Michael mentioned in the past
that he did the mod on his own, i.e., not assigned work. I could misremember.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Michael Moroney
2021-11-11 20:57:54 UTC
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard?  Not that I need a terminal with more
than 24 lines.  :-)
It's a regular patch for VSI V8.4-2x and is already part of 9.X.
Nice to see that EDT is still under active maintenance and even
development at VSI.  :-D
It really isn't.
Don't be too sure about that.  I believe Michael mentioned in the past
that he did the mod on his own, ie; not assigned work.  I could
mis-remember.
You remember correctly. The hardwired '24 line terminal' assumption
pissed me off and a few times I looked at it I said no way I can fix
that spaghetti code. But one day the planets were aligned or something,
and I just did it.
Arne Vajhøj
2021-11-11 21:01:58 UTC
Post by Michael Moroney
Post by Phillip Helbig (undress to reply)
Post by Michael Moroney
Post by Michael Moroney
EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
Will the patch become standard?  Not that I need a terminal with more
than 24 lines.  :-)
It's a regular patch for VSI V8.4-2x and is already part of 9.X.
Nice to see that EDT is still under active maintenance and even
development at VSI.  :-D
It really isn't.
Don't be too sure about that.  I believe Michael mentioned in the past
that he did the mod on his own, ie; not assigned work.  I could
mis-remember.
You remember correctly. The hardwired '24 line terminal' assumption
pissed me off and a few times I looked at it I said no way I can fix
that spaghetti code. But one day the planets were aligned or something,
and I just did it.
Macro-32 ?

Arne
Robert A. Brooks
2021-11-11 21:11:37 UTC
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
--
-- Rob
Simon Clubley
2021-11-12 18:28:34 UTC
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)

On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.

Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Robert A. Brooks
2021-11-12 18:45:57 UTC
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
No, it's not. While it's true that one can write bad code in any language,
BLISS, due to its procedural nature, does not default to spaghetti code.

While the EDT code can be confusing (due to lack of comments), the code flow
isn't that bad.

What can be confusing in BLISS is the use of nested macros, where a single
BLISS "statement" can expand into several pages of code.
--
-- Rob
Michael Moroney
2021-11-12 20:30:59 UTC
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
No, it's not.  While it's true that one can write bad code in any language,
BLISS, due to its procedural nature, does not default to spaghetti code.
While the EDT code can be confusing (due to lack of comments), the code flow
isn't that bad.
What can be confusing in BLISS is the use of nested macros, where a single
BLISS "statement" can expand into several pages of code.
What made it difficult for me when first considering this were a few things:

1) I was relatively weak at BLISS. Almost everything I did in my VMS
career was C/MACRO-32 and some other languages.

2) I think the coding style threw me. Sometimes that does that. Even my
own! :-) Some of the code was trying to squeeze every byte out, as old
code almost always does.

3) There was no equivalent of C's .H file containing something like
'#define terminal_lines 24'. I went through ALL modules looking for
EVERY instance of the character string/constant '24'. EDT is split up
into many modules. I then had to look at EVERY instance of '23' and '22'
(number of lines of the edited file being displayed). I also looked at
'21' and '25'. There was much more but I forget.

My first pass was hardwiring everything to something like '40', that is,
all terminals had 40 lines, not 24. Then I tried things in a fixed
40-line terminal window. It mostly worked on the second attempt. Then
I went back to making things variable, based on terminal characteristics.

There is much more, but I forget; it was some time ago that I did this.
Stephen Hoffman
2021-11-12 23:59:07 UTC
Post by Robert A. Brooks
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
No, it's not. While it's true that one can write bad code in any
language, BLISS, due to its procedural nature, does not default to
spaghetti code.
Alas, I've looked at a whole lot of Bliss spaghetti code over the
years. More than I'd prefer, though less than the amount of Macro32
spaghetti.

In the past decades, Bliss and Macro32 both effectively became DSLs for
OpenVMS, for better or worse.

The Bliss compiler itself and the Bliss language could use an overhaul
with better diagnostics, with code-refactoring support, with IDE
support, and with other enhancements. But Bliss enhancement work is not
likely a priority for anybody.

Automatic source code refactoring has gotten substantially better too,
for those that haven't worked with it. Between that and source code
formatting tools, more than a little source code spaghetti can be
remediated.
Post by Robert A. Brooks
While the EDT code can be confusing (due to lack of comments), the code
flow isn't that bad.
Most of the Bliss code written by OpenVMS development was fairly well
done. The Bliss and Macro32 code with variant calling schemes was
always good for some puzzlement, though.
Post by Robert A. Brooks
What can be confusing in BLISS is the use of nested macros, where a
single BLISS "statement" can expand into several pages of code.
Some examples of Bliss macros were near-impenetrable, and most easily
read with the assistance of the macro expansion in the Bliss compiler
listings. Macro32 macro support suffered somewhat similarly. C's
macro preprocessor is comparatively simplistic. Not that I haven't used
the C macro preprocessor on Fortran and BASIC code. In some ways, Bliss
macros remind me of C++ macros and C++ operator overloading support.

I've been coming around toward how Zig, Swift, and other programming
languages are designed; with the abstractions in the language and the
compiler and the run-time, and without executable code embedded within
macros.
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2021-11-13 00:18:23 UTC
Permalink
Post by Stephen Hoffman
Post by Robert A. Brooks
What can be confusing in BLISS is the use of nested macros, where a
single BLISS "statement" can expand into several pages of code.
Some examples of Bliss macros were near-impenetrable, and most easily
read with the assistance of the Bliss compiler listings macro
expansion.  Macro32 macro support suffered somewhat similarly. C's macro
preprocessor is comparatively simplistic. Not that I haven't used the C
macro preprocessor on Fortran and BASIC code. In some ways. Bliss macros
remind me of C++ macros and C++ operator overloading support.
C++ macros? Templates?

Arne
Simon Clubley
2021-11-14 18:32:19 UTC
Permalink
Post by Stephen Hoffman
Some examples of Bliss macros were near-impenetrable, and most easily
read with the assistance of the Bliss compiler listings macro
expansion. Macro32 macro support suffered somewhat similarly. C's
macro preprocessor is comparatively simplistic. Not that I haven't used
the C macro preprocessor on Fortran and BASIC code. In some ways. Bliss
macros remind me of C++ macros and C++ operator overloading support.
When people write code like that, they may think they are being
"clever", but in fact they are being irresponsible, because
they are setting up major maintenance problems further
down the road.

I refer you to the terminal driver for another example of this; as a
result, we don't even have something as simple as the ability to edit
command lines that are longer than the terminal width.

This, BTW, is something that even the awful and primitive cmd shell
in Windows has absolutely no problem with.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Dave Froble
2021-11-14 19:38:57 UTC
Permalink
Post by Simon Clubley
Post by Stephen Hoffman
Some examples of Bliss macros were near-impenetrable, and most easily
read with the assistance of the Bliss compiler listings macro
expansion. Macro32 macro support suffered somewhat similarly. C's
macro preprocessor is comparatively simplistic. Not that I haven't used
the C macro preprocessor on Fortran and BASIC code. In some ways. Bliss
macros remind me of C++ macros and C++ operator overloading support.
When people write code like that, they may think they are being
"clever" but in fact they are just being irresponsible because
they are just setting up major maintenance problems for further
down the road.
I have no problem with clever code, but, dammit, explain it!

A short while back, some posts in reply to some of mine claimed that there is
such a thing as "too many comments". Isn't this an example justifying my
claim that there is no such thing as "too many comments"?

Perhaps a paragraph or two to explain the macros? Then the really tough part:
updating the comments to explain modifications. I prefer to do so with
additional comments, leaving the originals intact.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2021-11-14 18:41:02 UTC
Permalink
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
Sorry, but I live in a country where we don't yet have to issue
"trigger warnings" in everyday conversation. I don't know about
the rest of Europe however.

However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)

Was it comparing BLISS-32 to Macro-32 ?

Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?

Was it something else ? :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Dave Froble
2021-11-14 19:43:57 UTC
Permalink
Post by Simon Clubley
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
Sorry, but I live in a country where we don't yet have to issue
"trigger warnings" in everyday conversation. I don't know about
the rest of Europe however.
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it. No such thing as a language that can do everything.
Sometimes simpler is better.
Post by Simon Clubley
Was it something else ? :-)
Simon.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2021-11-14 23:50:03 UTC
Permalink
Post by Simon Clubley
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it.  No such thing as a language that can do
everything. Sometimes simpler is better.
Programming languages are definitely not "one size fits all".

I totally agree that simpler is better for programming languages.

And this is not just you and me - very complex languages tend not
to prosper in the industry.

But I don't think Simon's point was complex vs simple.

More like structured vs. goto style, and strong typing vs. weak
typing.

Because Ada is a complex language, but Pascal is actually a relatively
simple language (at least in its traditional form and in the VMS flavor;
Delphi has moved a bit in the complex direction).

Arne
Arne Vajhøj
2021-11-14 23:55:06 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it.  No such thing as a language that can do
everything. Sometimes simpler is better.
Programming languages are definitely not "one size fits all".
I totally agree that simpler is better for programming languages.
And this is not just you and me - very complex languages tend not
to prosper in the industry.
But I don't think Simon's point was complex vs simple.
More like structured vs goto style. and strong typing vs weak
typing.
Because Ada is a complex language, but Pascal is actually a relative
simple language (at least in traditional form and in VMS flavor - Delphi
has sort of moved a bit in the complex direction).
I like Pascal, but for something real I would probably prefer
Modula-2. I have always loved that language!

Arne
Norbert Schönartz
2021-11-15 13:26:56 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Simon Clubley
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it.  No such thing as a language that can do
everything. Sometimes simpler is better.
Programming languages are definitely not "one size fits all".
I totally agree that simpler is better for programming languages.
And this is not just you and me - very complex languages tend not
to prosper in the industry.
But I don't think Simon's point was complex vs simple.
More like structured vs goto style. and strong typing vs weak
typing.
Because Ada is a complex language, but Pascal is actually a relative
simple language (at least in traditional form and in VMS flavor - Delphi
has sort of moved a bit in the complex direction).
I like Pascal, but for something real I would probably prefer
Modula-2. I have always loved that language!
Arne
I totally agree. Modula-2 from ModulAware for OpenVMS VAX and Alpha was
great, and the support was excellent. Unfortunately it was not ported to
Itanium, so we had to port our code from Modula-2 to C when we moved
from Alpha to Itanium in 2006. It was not a pleasure. And I think there
will be no version for x86, unfortunately.
--
Norbert
Simon Clubley
2021-11-15 18:39:36 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it.  No such thing as a language that can do
everything. Sometimes simpler is better.
Programming languages are definitely not "one size fits all".
I totally agree that simpler is better for programming languages.
And this is not just you and me - very complex languages tend not
to prosper in the industry.
But I don't think Simon's point was complex vs simple.
More like structured vs goto style. and strong typing vs weak
typing.
Yes, very much so.
Post by Arne Vajhøj
Because Ada is a complex language, but Pascal is actually a relative
simple language (at least in traditional form and in VMS flavor - Delphi
has sort of moved a bit in the complex direction).
I can't help but wonder, however, if there's still a latent market for a
language that has all the basics and type safety of Ada, but without all
the complexity that has been bolted onto Ada (especially in recent versions).

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-15 19:37:59 UTC
Permalink
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
I'm figuring this is it.  No such thing as a language that can do
everything. Sometimes simpler is better.
Programming languages are definitely not "one size fits all".
I totally agree that simpler is better for programming languages.
And this is not just you and me - very complex languages tend not
to prosper in the industry.
But I don't think Simon's point was complex vs simple.
More like structured vs goto style. and strong typing vs weak
typing.
Yes, very much so.
Post by Arne Vajhøj
Because Ada is a complex language, but Pascal is actually a relative
simple language (at least in traditional form and in VMS flavor - Delphi
has sort of moved a bit in the complex direction).
I can't help but wonder however if there's still a latent market for a
language that has all the basics and type safety of Ada but without all
the complexity that has been bolted onto Ada (especially in recent versions).
Maybe there is.

But right now the only two languages in the "low level",
"safer than C/C++" category with some traction seem
to be Rust and Go.

And I don't think they are nearly as safe as Ada.

Arne
Arne Vajhøj
2021-11-16 00:33:18 UTC
Permalink
Post by Arne Vajhøj
More like structured vs goto style. and strong typing vs weak
typing.
Languages don't provide structure, programmers do that.
Perhaps not all programmers end up with good structure.
Actually, programming languages do provide features that
are commonly labeled "structured programming".

Which really covers most programming languages: the
Algol/Pascal/Modula-2/Ada family, the C/C++/Java/C# family,
etc.

Original Fortran and BASIC did not qualify; they were too dependent
on GOTO. But modern flavors of those languages qualify
as well.

Even though Python's block concept is, let us call it, "unusual"
(indentation-based), it also qualifies.
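A small Go sketch of that contrast (the function names here are illustrative): Go, like modern Fortran and BASIC, keeps a goto alongside its structured loops, so the same computation can be written both ways.

```go
package main

import "fmt"

// sumStructured sums 1..n with a structured for loop.
func sumStructured(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i
	}
	return total
}

// sumGoto sums 1..n in the style of original Fortran/BASIC,
// building the loop out of a label and a goto.
func sumGoto(n int) int {
	total, i := 0, 1
loop:
	if i > n {
		return total
	}
	total += i
	i++
	goto loop
}

func main() {
	fmt.Println(sumStructured(10), sumGoto(10)) // 55 55
}
```

Both produce the same result; the structured form is simply easier to follow and to modify.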

Arne
John Reagan
2021-11-15 14:26:47 UTC
Permalink
Post by Simon Clubley
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
Sorry, but I live in a country where we don't yet have to issue
"trigger warnings" in everyday conversation. I don't know about
the rest of Europe however.
However, you have been way too quick to issue a "trigger warning"
request without even telling me what "triggered" you. :-)
Was it comparing BLISS-32 to Macro-32 ?
Was it the desire to have a Pascal-like or Ada-like language as a
viable system implementation language ?
Was it something else ? :-)
Simon.
--
Walking destinations on a map are further away than they appear.
HaHa Both actually. :)

While BLISS has many shortcomings, I really can't compare it to assembly language, much less equate the two.

And I actually agree with you that many parts of modern OS's could/should be written in a type-safe language. Unfortunately, many of the algorithms inside of OpenVMS don't lend themselves to type-safe languages.
Dave Froble
2021-11-14 19:42:15 UTC
Permalink
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Simon.
--
Walking destinations on a map are further away than they appear.
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
What? You don't like "walking destinations" ? Neither do I.

I really disagree with Simon on this topic, the implementation language thing,
not the walking thing. Well, maybe both.

One really doesn't need a language or compiler to get in the way of what needs
to be done.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Simon Clubley
2021-11-14 22:05:42 UTC
Permalink
Post by Dave Froble
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Simon.
--
Walking destinations on a map are further away than they appear.
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
What? You don't like "walking destinations" ? Neither do I.
I really disagree with Simon on this topic, the implementation language thing,
not the walking thing. Well, maybe both.
I take it you are not a walker David. :-)

I wonder how many people around here who are walkers either agree with
me or at least understand what I am saying ?
Post by Dave Froble
One really doesn't need a language or compiler to get in the way of what needs
to be done.
There is a move towards more safe languages for systems programming.

The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.

When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Bill Gunshannon
2021-11-14 23:43:42 UTC
Permalink
Post by Simon Clubley
Post by Dave Froble
Post by Simon Clubley
Post by Robert A. Brooks
Post by Arne Vajhøj
You remember correctly. The hardwired '24 line terminal' assumption pissed me
off and a few times I looked at it I said no way I can fix that spaghetti
code. But one day the planets were aligned or something, and I just did it.
Macro-32 ?
BLISS-32
Same difference. :-)
On a more serious note, BLISS-32 was a nice idea, but it's at way too
low a level to make a real difference.
Now, if we had a Pascal-like or Ada-like language that could be used
as a system implementation language... :-)
Simon.
--
Walking destinations on a map are further away than they appear.
Next time you post that, please add a "<Trigger warning for John Reagan>" at the top please.
What? You don't like "walking destinations" ? Neither do I.
I really disagree with Simon on this topic, the implementation language thing,
not the walking thing. Well, maybe both.
I take it you are not a walker David. :-)
I wonder how many people around here who are walkers either agree with
me or at least understand what I am saying ?
Post by Dave Froble
One really doesn't need a language or compiler to get in the way of what needs
to be done.
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
I thought this was the problem Ada was created to fix? :-)

bill
Arne Vajhøj
2021-11-14 23:57:09 UTC
Permalink
Post by Simon Clubley
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
I thought this is the problem Ada was created to fix?  :-)
It was.

But Ada did fall out of fashion.

There are probably many explanations for that, but my guess
is that the complexity of the language turned out to be a
problem.

Arne
Arne Vajhøj
2021-11-15 19:45:36 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
I thought this is the problem Ada was created to fix?  :-)
It was.
But Ada did fall out of fashion.
There are probably many explanations for that, but my guess
is that the complexity of the language turned out to be a
problem.
There's also the problem that the Ada compiler situation overall is not
good and that Adacore's Community Edition version of GNAT is pure GPL
with no runtime exception. See https://www.adacore.com/community for
details.
I know about that restriction. It has been discussed before.

If they really wanted it, they would pay ACT for the commercial edition.
There's still the FSF distribution of GNAT (at least for the targets
it supports) however.
Unfortunately, most GCC distributions do not include GNAT.

Supposedly m2 is going to be in the standard GCC distribution going forward,
so maybe Modula-2 instead of Ada??

Arne
Arne Vajhøj
2021-11-15 21:07:04 UTC
Permalink
Post by Arne Vajhøj
Supposedly m2 is going to be in standard GCC dist going forward,
so maybe Modula-2 instead of Ada??
But, sadly, Modula was intended for applications and not systems
programming.  I seriously doubt you could write a functional OS
beyond the most simplistic in Modula.
Modula-2 was created with the intention of writing
OS code (for the Lilith workstation), and it does have a bunch of
machine-close constructs and, in most flavors, integration
with C and assembler.

Arne
Simon Clubley
2021-11-16 18:43:13 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
But Ada did fall out of fashion.
There are probably many explanations for that, but my guess
is that the complexity of the language turned out to be a
problem.
There's also the problem that the Ada compiler situation overall is not
good and that Adacore's Community Edition version of GNAT is pure GPL
with no runtime exception. See https://www.adacore.com/community for
details.
I know about that restriction. It has been discussed before.
If they really wanted it then they would pay ACT for the commercial edition.
Unfortunately, that's still an issue when you don't have that problem
with other languages such as C and C++.

People might want to explore other languages (including Ada) but there
are plenty of production-quality free language compilers available today.

Adacore's policy is a barrier to attracting those types of people.
Post by Arne Vajhøj
There's still the FSF distribution of GNAT (at least for the targets
it supports) however.
Unfortunately then most GCC dists does not include gnat.
Supposedly m2 is going to be in standard GCC dist going forward,
so maybe Modula-2 instead of Ada??
Interesting. I didn't know that, thanks. I wonder if uppercase
keywords will be an optional feature instead of being mandatory ?

I wonder if you will be able to do bare metal coding with it ?

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Bill Gunshannon
2021-11-16 18:59:23 UTC
Permalink
Post by Simon Clubley
Post by Arne Vajhøj
Post by Arne Vajhøj
But Ada did fall out of fashion.
There are probably many explanations for that, but my guess
is that the complexity of the language turned out to be a
problem.
There's also the problem that the Ada compiler situation overall is not
good and that Adacore's Community Edition version of GNAT is pure GPL
with no runtime exception. See https://www.adacore.com/community for
details.
I know about that restriction. It has been discussed before.
If they really wanted it then they would pay ACT for the commercial edition.
Unfortunately, that's still an issue when you don't have that problem
with other languages such as C and C++.
People might want to explore other languages (including Ada) but there
are plenty of production-quality free language compilers available today.
Adacore's policy is a barrier to attracting those types of people.
Post by Arne Vajhøj
There's still the FSF distribution of GNAT (at least for the targets
it supports) however.
Unfortunately then most GCC dists does not include gnat.
Supposedly m2 is going to be in standard GCC dist going forward,
so maybe Modula-2 instead of Ada??
Interesting. I didn't know that, thanks. I wonder if uppercase
keywords will be an optional feature instead of being mandatory ?
I wonder if you will be able to do bare metal coding with it ?
It would seem to me that bare metal coding isn't a compiler thing,
it's a library thing. Create the necessary libraries for your bare
metal and have at it.

bill
Simon Clubley
2021-11-16 19:19:39 UTC
Permalink
Post by Bill Gunshannon
It would seem to me that bare metal coding isn't a compiler thing,
it's a library thing. Create the necessary libraries for your bare
metal and have at it.
For one simple example, can you turn off garbage collection in a
language when running in bare metal mode ? (Go is a GC language)

How much of the language do you lose if you do that ?

Go is also reported not to support low-level features such as
pointer arithmetic.

There's also the fact that at the very lowest levels, you need to
be able to write code that does not need a runtime because there
isn't an operating system under that code to support that runtime.
It wasn't clear to me if you can actually do that with Go.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-16 19:31:18 UTC
Permalink
Post by Simon Clubley
Post by Bill Gunshannon
It would seem to me that bare metal coding isn't a compiler thing,
it's a library thing. Create the necessary libraries for your bare
metal and have at it.
For one simple example, can you turn off garbage collection in a
language when running in bare metal mode ? (Go is a GC language)
How much of the language do you lose if you can do that ?
Bare metal and GC is not a problem.

Hard real time and GC is a problem.

But if your bare metal is also hard real time, then ...

It was not what Go was designed for.

But there are options available.

You can disable the automatic collector and call GC manually when you want to.

There are external libraries available like:

https://github.com/teh-cmc/mmm
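A minimal sketch of the manual option (the function name is illustrative): `debug.SetGCPercent(-1)` disables the automatic collector, and `runtime.GC()` forces a collection at a point of the program's choosing.

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// allocateWithoutGC disables the automatic collector, does some
// allocation-heavy work, then triggers one collection manually.
// It returns the cumulative count of completed GC cycles.
func allocateWithoutGC() uint32 {
	old := debug.SetGCPercent(-1) // -1 disables automatic GC
	defer debug.SetGCPercent(old) // restore on the way out

	// Allocate with no GC pauses in between.
	data := make([][]byte, 0, 256)
	for i := 0; i < 256; i++ {
		data = append(data, make([]byte, 4096))
	}

	// Collect at a point we choose, not one the runtime chooses.
	runtime.GC()

	var stats runtime.MemStats
	runtime.ReadMemStats(&stats)
	return stats.NumGC
}

func main() {
	fmt.Println("completed GC cycles:", allocateWithoutGC())
}
```

This gives control over *when* collections happen, which is the hard-real-time concern; it does not remove the runtime itself.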
Post by Simon Clubley
Go is also reported not to support low-level features such as
pointer arithmetic.
They strongly recommend against doing it.

But there is an unsafe package if you want to shoot yourself in the
foot.
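For what it's worth, a short sketch of that foot-gun, assuming Go 1.17+ for `unsafe.Add` (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"unsafe"
)

// thirdElement reads arr[2] via raw pointer arithmetic instead of
// indexing: take the address of element 0 and step forward by
// 2 * sizeof(int32) bytes.
func thirdElement(arr *[4]int32) int32 {
	p := unsafe.Pointer(&arr[0])
	p = unsafe.Add(p, 2*unsafe.Sizeof(arr[0]))
	return *(*int32)(p)
}

func main() {
	arr := [4]int32{10, 20, 30, 40}
	fmt.Println(thirdElement(&arr)) // prints 30
}
```

None of the bounds checking or type safety of normal indexing applies here, which is exactly why the package is called `unsafe`.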
Post by Simon Clubley
There's also the fact that at the very lowest levels, you need to
be able to write code that does not need a runtime because there
isn't an operating system under that code to support that runtime.
It wasn't clear to me if you can actually do that with Go.
It is not designed for that.

But see one of the links I sent in the other post.

It looks doable.

Recommended? Questionable!

Arne
Arne Vajhøj
2021-11-14 23:53:50 UTC
Permalink
Post by Simon Clubley
Post by Dave Froble
One really doesn't need a language or compiler to get in the way of what needs
to be done.
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
Rust seems to be getting some traction.

With Mozilla, Microsoft, and the Linux kernel adding Rust code, and
with Google and Amazon also backing it, it may be difficult for it
to fall out of fashion.

Arne
Arne Vajhøj
2021-11-15 19:54:57 UTC
Permalink
Post by Arne Vajhøj
Post by Simon Clubley
Post by Dave Froble
One really doesn't need a language or compiler to get in the way of what needs
to be done.
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
Rust seems to be getting some traction.
Unfortunately so. I like the concepts and desire to move to safer
languages but I find the Rust syntax itself ugly. People seem to
have forgotten that you write the code once (hopefully!) but then
read it many times.
What do you think about Go?

It does not have the same interest from OS people as Rust, but
it has a lot of interest from the container people. So maybe ...
Post by Arne Vajhøj
With Mozilla, Microsoft and Linux kernel adding Rust code and
with Google and Amazon also backing it, then it may be difficult
to fall out of fashion.
I wonder how many of them were major fans of Ruby and Ruby on Rails
before that environment and language were suddenly no longer fashionable ?
The popularity of RoR/Ruby has declined a bit over the last decade, but it is
still an order of magnitude more popular than both Ada and Rust.

But it is probably not a relevant comparison. The expected lifetime of a web
application is significantly shorter than that of an OS kernel module.

Arne
Arne Vajhøj
2021-11-16 19:22:34 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Simon Clubley
Post by Dave Froble
One really doesn't need a language or compiler to get in the way of what needs
to be done.
There is a move towards more safe languages for systems programming.
The current fashion, Rust, has horrible syntax, and I have no confidence
that code written in it today will still compile on the Rust compilers
of 5 to 10 years from now, but its use is being driven by the desire
for using safer languages.
When Rust falls out of fashion, it would be nice if whatever follows
Rust would address both of those problems.
Rust seems to be getting some traction.
Unfortunately so. I like the concepts and desire to move to safer
languages but I find the Rust syntax itself ugly. People seem to
have forgotten that you write the code once (hopefully!) but then
read it many times.
What do you think about Go?
It doesn't seem to be a language designed for writing bare metal code
or operating systems in general so I have not really used it.
It was designed for servers.

But people are using it for small stuff.

Examples:

https://tinygo.org/

https://golangexample.com/a-go-unikernel-running-on-x86-bare-metal/
On a non-technical level, given how Google treats its projects, I would
also worry that suddenly one day Go would no longer be supported by Google.
Not likely. It has gotten so much traction that it would
be very difficult to stop it.

Docker, Kubernetes and OpenShift are all written in Go. In 5 years,
75% of the world's server computing will run in Kubernetes clusters.

Amazon, Microsoft, Google themselves, IBM/Red Hat, VMware etc. all
need that stuff.

Arne
Simon Clubley
2021-11-16 19:33:18 UTC
Post by Arne Vajhøj
https://tinygo.org/
https://golangexample.com/a-go-unikernel-running-on-x86-bare-metal/
Interesting thanks.

I've just done a bit of reading about Go to refresh my memory and I
discovered this once again:

https://stackoverflow.com/questions/17153838/why-does-golang-enforce-curly-bracket-to-not-be-on-the-next-line

Go is the only language I know of that forces this style.

Great if you like this brace style, not so great otherwise... :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2021-11-16 19:42:45 UTC
Post by Simon Clubley
Post by Arne Vajhøj
https://tinygo.org/
https://golangexample.com/a-go-unikernel-running-on-x86-bare-metal/
Interesting thanks.
I've just done a bit of reading about Go to refresh my memory
You can pick and choose among lots of languages if you are looking
for high level languages for business applications.

But for low level stuff the choices are way more limited. If you want
something newer than C/C++ with some industry support and you do
not like Rust then Go becomes pretty obvious.

Alternatively you could learn to love Rust.

Arne
Arne Vajhøj
2022-05-13 01:03:53 UTC
Post by Arne Vajhøj
Post by Arne Vajhøj
What do you think about Go?
It doesn't seem to be a language designed for writing bare metal code
or operating systems in general so I have not really used it.
It was designed for servers.
But people are using it for small stuff.
https://stackoverflow.blog/2022/04/04/comparing-go-vs-c-in-embedded-applications/

Arne
Simon Clubley
2022-05-13 12:39:52 UTC
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Arne Vajhøj
What do you think about Go?
It doesn't seem to be a language designed for writing bare metal code
or operating systems in general so I have not really used it.
It was designed for servers.
But people are using it for small stuff.
https://stackoverflow.blog/2022/04/04/comparing-go-vs-c-in-embedded-applications/
Arne
From that article:

|However, we want to stress that Go cannot be considered a replacement for C
|as there are many places where C is and likely will be needed, such as in
|the development of real time operating systems or device drivers.

That kills even considering Go for most embedded systems.

It sounds to me like they are writing normal applications that are
running under Linux, but just on embedded hardware.

Let me know when they are using Go to, for example, directly fly UAVs
and directly control the UAV hardware (and where Go isn't just being
used as a shim on top of C code that does the actual real-time work).

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Arne Vajhøj
2022-05-14 01:24:12 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Arne Vajhøj
What do you think about Go?
It doesn't seem to be a language designed for writing bare metal code
or operating systems in general so I have not really used it.
It was designed for servers.
But people are using it for small stuff.
https://stackoverflow.blog/2022/04/04/comparing-go-vs-c-in-embedded-applications/
|However, we want to stress that Go cannot be considered a replacement for C
|as there are many places where C is and likely will be needed, such as in
|the development of real time operating systems or device drivers.
That kills even considering Go for most embedded systems.
It sounds to me like they are writing normal applications that are
running under Linux, but just on embedded hardware.
Let me know when they are using Go to, for example, directly fly UAVs
and directly control the UAV hardware (and where Go isn't just been
used as a shim on top of C code that does the actual real-time work).
You are not an easy customer to sell a programming language to.

:-)

Rust and Go are what is new, with some serious momentum/backing.

If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.

Arne
Simon Clubley
2022-05-16 18:02:20 UTC
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
https://stackoverflow.blog/2022/04/04/comparing-go-vs-c-in-embedded-applications/
|However, we want to stress that Go cannot be considered a replacement for C
|as there are many places where C is and likely will be needed, such as in
|the development of real time operating systems or device drivers.
That kills even considering Go for most embedded systems.
It sounds to me like they are writing normal applications that are
running under Linux, but just on embedded hardware.
Let me know when they are using Go to, for example, directly fly UAVs
and directly control the UAV hardware (and where Go isn't just been
used as a shim on top of C code that does the actual real-time work).
You are not an easy customer to sell a programming language to.
:-)
:-)

That's because I know exactly what I want from a programming language
and these days, it seems like most of the time you are using the
least-worst option or the most viable option instead of the best option.
(Most viable option does not mean _best_ option BTW).
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.

No 32-bit ARM support however (or 32-bit support in general).

It does appear to have some package/module based support.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Stephen Hoffman
2022-05-16 18:21:40 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe Hare will
fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
Zig has (some) support for ARMv7a, and ties into C quite well.

ARMv7a support and x86-32 support are available in the most recent 0.9.1
version, though not presently in Master.

For porting, Zig is self-hosting, though LLVM support was around as an
option when last I checked.
--
Pure Personal Opinion | HoffmanLabs LLC
Bill Gunshannon
2022-05-19 19:17:17 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
https://stackoverflow.blog/2022/04/04/comparing-go-vs-c-in-embedded-applications/
|However, we want to stress that Go cannot be considered a replacement for C
|as there are many places where C is and likely will be needed, such as in
|the development of real time operating systems or device drivers.
That kills even considering Go for most embedded systems.
It sounds to me like they are writing normal applications that are
running under Linux, but just on embedded hardware.
Let me know when they are using Go to, for example, directly fly UAVs
and directly control the UAV hardware (and where Go isn't just been
used as a shim on top of C code that does the actual real-time work).
You are not an easy customer to sell a programming language to.
:-)
:-)
That's because I know exactly what I want from a programming language
and these days, it seems like most of the time you are using the
least-worst option or the most viable option instead of the best option.
(Most viable option does not mean _best_ option BTW).
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs. A few more ego languages.

bill
Arne Vajhøj
2022-05-19 23:52:22 UTC
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages are very much a matter of evolution.

New languages show up all the time. Those that are good replace
older languages. Those that are not good die.

What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything, so there was no need for
any "ego languages"?

Arne
Bill Gunshannon
2022-05-20 01:05:03 UTC
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
None of those languages do what the Wirth and K&R languages do.
Ignoring the fact that people took Pascal and C and proceeded
to use them for things they were not designed for and for which
there already were perfectly good languages.

What does Java do that other languages don't do (Other than the OOP
concept which I didn't drink the KoolAid for either)? Python? PHP?
Need I go on?

bill
Arne Vajhøj
2022-05-20 01:49:02 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
None of those languages do what the Wirth and K&R languages do.
They wrote Multics in PL/I, and operating systems are supposed to be
a core competency of C.
Post by Bill Gunshannon
Ignoring the fact that people took Pascal and C and proceeded
to use them for things they were not designed for and for which
there already were perfectly good languages.
What does Java do that other languages don't do (Other than the OOP
concept which I didn't drink the KoolAid for either)?  Python? PHP?
Need I go on?
You may not like OOP but most do. Then there is generic programming.
Functional programming. Automatic garbage collection. Reflection.
Well defined types.

I have probably forgotten a few things, but ...

Arne
Bill Gunshannon
2022-05-20 12:40:16 UTC
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
None of those languages do what the Wirth and K&R languages do.
The wrote Multics in PL/I and OS is supposed to be C core
competency.
They could have written it in PL/M. Even Ada or Pascal. No task
is limited to just one language. Part of Software Engineering is
supposed to be picking the right tool for the job. Never having
played with Multics source I can't say why they chose the language
they did but there may have been a good reason. Who knows, that
early in the game maybe there was limited C experience. Was Multics
heavily loaded with IBMers? That could easily influence the use
of PL/I over C.
Post by Arne Vajhøj
Post by Bill Gunshannon
Ignoring the fact that people took Pascal and C and proceeded
to use them for things they were not designed for and for which
there already were perfectly good languages.
What does Java do that other languages don't do (Other than the OOP
concept which I didn't drink the KoolAid for either)?  Python? PHP?
Need I go on?
You may not like OOP but most do.
I blame academia for that. :-)
I remember reading an article a number of years ago about a meeting of
all the big names in OOP at an international OOP conference, where they
admitted that the whole OOP thing had gotten out of hand and what
resulted was not what was intended.
Post by Arne Vajhøj
Then there is generic programming.
Functional programming. Automatic garbage collection. Reflection.
Well defined types.
Hmmm.   Garbage collection is done in different ways on different systems.
It wasn't a new idea, and other languages did have it, like Lisp.
Unless you have a different meaning for functional programming than
what I learned, lots of languages had that. Generic programming, same
thing, really. Pascal (in its misused manner), Modula, even C are all
capable of generic programming. Well defined types? Pascal, Modula,
Ada and even C, where the built-in types are very well defined.  You just
have the ability to define more.
Post by Arne Vajhøj
I have probably forgot a few things, but ...
And, I think we look at a lot of this in a very different manner.
But that would be expected as we both come from very different
backgrounds, educations and experiences.

bill
Arne Vajhøj
2022-05-20 16:04:45 UTC
Post by Bill Gunshannon
Post by Arne Vajhøj
Post by Bill Gunshannon
Ignoring the fact that people took Pascal and C and proceeded
to use them for things they were not designed for and for which
there already were perfectly good languages.
What does Java do that other languages don't do (Other than the OOP
concept which I didn't drink the KoolAid for either)?  Python? PHP?
Need I go on?
You may not like OOP but most do.
I blame academia for that. :-)
I remember reading article a number of years ago on a meeting of all
the big names in OOP at an international OOP conference where they
admitted that the whole OOP thing had gotten out of hand and what
resulted was not what was intended.
You could not write the software of today without it.
Post by Bill Gunshannon
Post by Arne Vajhøj
                                  Then there is generic programming.
Functional programming. Automatic garbage collection. Reflection.
Well defined types.
Hmmm.   Garbage collection is done different ways on different systems.
It wasn't a new idea and other languages did have it.  Like Lisp.
Java did not invent GC, but it was the first really widely used
language using it.
Post by Bill Gunshannon
Unless you have a different meaning for functional programming than
what I learned, lots of languages had that.
FP today is pretty well defined: methods expecting functions, callers
specifying lambdas, and optionally currying and partially applied functions.
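
In Go terms that style looks roughly like this (the function names below are my own illustration, not anything from the thread):

```go
package main

import "fmt"

// apply is a function expecting a function; callers pass lambdas
// (function literals, in Go terminology).
func apply(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// adder returns a partially applied addition: adder(a)(b) == a + b,
// i.e. a curried form of two-argument addition via a closure.
func adder(a int) func(int) int {
	return func(b int) int { return a + b }
}

func main() {
	fmt.Println(apply([]int{1, 2, 3}, adder(10)))                         // [11 12 13]
	fmt.Println(apply([]int{1, 2, 3}, func(x int) int { return x * x })) // [1 4 9]
}
```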
Post by Bill Gunshannon
  Generic programming, same
thing, really.  Pascal (in its misused manner), Modula, even C are all
capable of generic programming.
????

Except for newer Delphi, I have not seen that.
Post by Bill Gunshannon
  Well defined types?  Pascal, Modula,
Ada and even C where the builtin types are very well defined.  You just
have ability to define more.
Neither Pascal nor C has well defined types in the standard.

The size of an integer in bits, and whether one's or two's complement
is used, are implementation specific.

Arne
Dave Froble
2022-05-20 02:22:03 UTC
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs. A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
Arne
Are you saying existing languages cannot evolve, and get new and better
capabilities?

A while back this happened to DEC Basic. Lots of enhancements.

Afraid I have to agree with the "ego" thing ...
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2022-05-20 12:16:05 UTC
Post by Dave Froble
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
Are you saying existing languages cannot evolve, and get new and better
capabilities?
No, because that happens.

Fortran 66 to 77 added the block IF and the CHARACTER type (and changed
the semantics of the DO loop).

C 89 to 99 added declarations anywhere and // comments.

Java 1.4 to 1.5 added enum, annotations and generics.
Java 1.6 to 1.7 added try with resources.
Java 1.7 to 1.8 added lambdas.
Java 9 to 10 added type inference.

C# 1.2 to 2.0 added generics.
C# 2.0 to 3.0 added type inference and lambdas.
C# 4.0 to 5.0 added async and await.
C# 7.3 to 8.0 added optional non-nullable reference types.

C++ got a bunch of big changes that I will not try to list.

But languages are somewhat restricted by their original design and
compatibility requirements.

Minor enhancements are usually not a problem.

But major enhancements tend to either break existing
code or end up with some ugly solutions.

From the above list there are a couple of examples
of ugly solutions. Java generics do not look as
they would have if they had been done in 1.0, but by
the time of 1.5 there was so much existing code
that a compromise was needed, and as a result the
performance of generics with simple data types suffers.
Same with Java lambdas: calling with a lambda looks fine,
but defining something to be callable with a lambda
looks ridiculous in the code.

So languages can evolve, but there are limitations.
Limitations that new languages do not have.
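
For contrast, a language designed later could build generics around value types from the start. A Go 1.18+ sketch of my own (the constraint and function names are mine): the type parameter is satisfied directly by int and float64, with no wrapper types in the source.

```go
package main

import "fmt"

// Number constrains the type parameter to plain value types; an int
// slice is summed as ints, with no wrapper class in the source code.
type Number interface {
	~int | ~int64 | ~float64
}

// sum works for any Number-constrained element type.
func sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(sum([]int{1, 2, 3}))      // 6
	fmt.Println(sum([]float64{1.5, 2.5})) // 4
}
```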

If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable-name-suffix implicit typing
mechanism? I don't think so.

Arne
Chris Townley
2022-05-20 12:22:47 UTC
Post by Arne Vajhøj
Post by Dave Froble
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
Are you saying existing languages cannot evolve, and get new and
better capabilities?
No, because that happens.
Fortran 66 to 77 added if block and character type (and changed
semantics of do loop).
C 89 to 99 added declaration anywhere and // comments.
Java 1.4 to 1.5 added enum, annotations and generics.
Java 1.6 to 1.7 added try with resources.
Java 1.7 to 1.8 added lambdas.
Java 9 to 10 added type inference.
C# 1.2 to 2.0 added generics.
C# 2.0 to 3.0 added type inference and lambdas.
C# 4.0 to 5.0 added async and await.
C# 7.3 to 8.0 added optional nun-nullable reference types.
C++ got a bunch of big changes that I will not try and list.
But languages are somewhat restricted by their original design and
compatibility requirements.
Minor enhancements are usually not a problem.
But major enhancements tend to either break existing
code or end up with some ugly solutions.
From the above list there are a couple of examples
of ugly solutions. Java generics does not look as
they would have if they had been done in 1.0 but at
the time for 1.5 there were so much existing code
that a compromise was needed and as result performance
of generics with simple data types sucks. Same with
Java lambdas - calling with lambda looks fine but
defining something to be callable with lambda
looks ridiculous in the code.
So languages can evolve but there are limitations.
Limitation that new languages does not have.
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Arne
Easy - just enforce OPTION TYPE = EXPLICIT
--
Chris
Arne Vajhøj
2022-05-20 12:29:18 UTC
Post by Chris Townley
Post by Arne Vajhøj
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Easy - just enforce OPTION TYPE = EXPLICIT
It is not a technical problem to implement but a customer
problem.

I am sure John Reagan could make the change in 10 minutes.

But VSI would need to get him bodyguards for the rest
of his life to protect him against angry VMS Basic users.

:-)

Arne
Bill Gunshannon
2022-05-20 12:44:06 UTC
Post by Arne Vajhøj
Post by Chris Townley
Post by Arne Vajhøj
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Easy - just enforce OPTION TYPE = EXPLICIT
It is not a technical problem to implement but a customer
problem.
I am sure John Reagan could make the change in 10 minutes.
But VSI would need to get him bodyguards for the rest
of his life to protect him against angry VMS Basic users.
:-)
That is what I meant about background, education and experience being
great influencers. I, personally, do not believe in IMPLICIT anything.
Even in Fortran I always explicitly declared all of my variables.
Trust no one, especially compilers. :-)

bill
Dave Froble
2022-05-20 14:13:00 UTC
Post by Chris Townley
Post by Arne Vajhøj
Post by Dave Froble
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs. A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
Are you saying existing languages cannot evolve, and get new and better
capabilities?
No, because that happens.
Fortran 66 to 77 added if block and character type (and changed
semantics of do loop).
C 89 to 99 added declaration anywhere and // comments.
Java 1.4 to 1.5 added enum, annotations and generics.
Java 1.6 to 1.7 added try with resources.
Java 1.7 to 1.8 added lambdas.
Java 9 to 10 added type inference.
C# 1.2 to 2.0 added generics.
C# 2.0 to 3.0 added type inference and lambdas.
C# 4.0 to 5.0 added async and await.
C# 7.3 to 8.0 added optional nun-nullable reference types.
C++ got a bunch of big changes that I will not try and list.
But languages are somewhat restricted by their original design and
compatibility requirements.
Minor enhancements are usually not a problem.
But major enhancements tend to either break existing
code or end up with some ugly solutions.
From the above list there are a couple of examples
of ugly solutions. Java generics does not look as
they would have if they had been done in 1.0 but at
the time for 1.5 there were so much existing code
that a compromise was needed and as result performance
of generics with simple data types sucks. Same with
Java lambdas - calling with lambda looks fine but
defining something to be callable with lambda
looks ridiculous in the code.
So languages can evolve but there are limitations.
Limitation that new languages does not have.
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Arne
Easy - just enforce OPTION TYPE = EXPLICIT
See, already an option. But, I'm not too fond of the concept of "enforce".
I'll do my own enforcing.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Chris Townley
2022-05-20 14:40:53 UTC
Post by Chris Townley
Easy - just enforce OPTION TYPE = EXPLICIT
See, already an option.  But, I'm not too fond of the concept of
"enforce". I'll do my own enforcing.
In our team, it was required for any VAX/DEC Basic modules.
I also frowned on assuming default initialisation of variables, and
would pick it up in any code review.
--
Chris
Bill Gunshannon
2022-05-20 14:49:50 UTC
Post by Chris Townley
Post by Chris Townley
Easy - just enforce OPTION TYPE = EXPLICIT
See, already an option.  But, I'm not too fond of the concept of
"enforce". I'll do my own enforcing.
In our team, it as required for any Vax/Dec Basic modules.
I also frowned on assuming default initialisation of variables, and
would pick it up in any code review
So, tell me, what year did you start in the IT biz?

bill
Chris Townley
2022-05-20 15:42:50 UTC
Post by Bill Gunshannon
Post by Chris Townley
Post by Chris Townley
Easy - just enforce OPTION TYPE = EXPLICIT
See, already an option.  But, I'm not too fond of the concept of
"enforce". I'll do my own enforcing.
In our team, it as required for any Vax/Dec Basic modules.
I also frowned on assuming default initialisation of variables, and
would pick it up in any code review
So, tell me, what year did you start in the IT biz?
bill
80s
--
Chris
Bill Gunshannon
2022-05-20 16:05:19 UTC
Post by Bill Gunshannon
Post by Chris Townley
Post by Chris Townley
Easy - just enforce OPTION TYPE = EXPLICIT
See, already an option.  But, I'm not too fond of the concept of
"enforce". I'll do my own enforcing.
In our team, it as required for any Vax/Dec Basic modules.
I also frowned on assuming default initialisation of variables, and
would pick it up in any code review
So, tell me, what year did you start in the IT biz?
bill
80s
Thank you. That's what I figured. It shows.
Oh, and if you don't get it, that is a compliment.

It was the 1980s when I really got into IT, even though
I actually started in the very early 70's.  We learned
a lot of things differently (and I think better) back then.

bill

Dave Froble
2022-05-20 14:10:47 UTC
Post by Arne Vajhøj
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Arne
Why would you want to remove something that perhaps some users want to continue
to use? Leaving it doesn't hurt, and users can choose to not use it.

Starting with something existing is easier than starting from scratch. There is
nothing, other than omissions and restrictions, in any of the ego languages that
cannot be implemented in say, Basic. Since much of the compiler exists, that
work is already done, and only modifications need to be implemented.

It's ego. And perhaps so is coming up with meaningless objections, such as
yours above.
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2022-05-20 15:53:10 UTC
Post by Dave Froble
Post by Arne Vajhøj
If we go over to VMS Basic, then VSI can easily add new
types and new functions. No problem. But could VSI
remove the variable name suffix does implicit typing
mechanism? I don't think so.
Why would you want to remove something that perhaps some users want to
continue to use?  Leaving it doesn't hurt, and users can choose to not
use it.
Starting with something existing is easier than starting from scratch.
There is nothing, other than omissions and restrictions, in any of the
ego languages that cannot be implemented in say, Basic.  Since much of
the compiler exists, that work is already done, and only modifications
need to be implemented.
It's ego.  And perhaps so is coming up with meaningless objections, such
as yours above.
The point is that backwards compatibility requirements impact
what languages can do.

And that sometimes makes it better to start from scratch.

Arne
Bill Gunshannon
2022-05-20 12:19:09 UTC
Post by Dave Froble
Post by Arne Vajhøj
Post by Simon Clubley
Post by Arne Vajhøj
Rust and Go is what is new with some serious momentum/backing.
If you are willing to go for more exotic options, then maybe
Hare will fit your requirements.
Interesting. I wasn't aware of that one.
No 32-bit ARM support however (or 32-bit support in general).
It does appear to have some package/module based support.
Just what the industry needs.  A few more ego languages.
Programming languages is very much a matter of evolution.
New languages show up all the time. Those that are good replace
older languages. Those that are not good dies.
What if Wirth and K&R had decided in the late 60's that
Fortran, Cobol and PL/I had everything so no need for
any "ego languages".
Arne
Are you saying existing languages cannot evolve, and get new and better
capabilities?
No, existing languages can and do evolve. Turbo Pascal was greatly
enhanced from the Pascal described in the Jensen & Wirth book. Many
enhancements, like an actual string type. Sadly, because of abuse of the
original language he had to create another. But it certainly wasn't
for ego.
Post by Dave Froble
A while back this happened to DEC Basic.  Lots of enhancements.
Yes, another good example. But it also could have been someone creating
B++ with a totally different syntax that was unnecessary.
Post by Dave Froble
Afraid I have to agree with the "ego" thing ...
When people create languages that basically mimic other languages
but have totally different syntax and coding nuances, what else
could it be?

bill
Bill Gunshannon
2021-11-15 21:01:19 UTC
Permalink
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.

Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada? Device Driver? Anything?

bill
Arne Vajhøj
2021-11-15 21:09:04 UTC
Permalink
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
When Ada on VMS was a thing, I believe the only supported
language for device drivers was Macro-32.

But they could have written something else. Per the old
story about using every language for at least one small
piece of VMS, there should be an Ada piece as well.
No idea whether there actually is or was.

Arne
Robert A. Brooks
2021-11-15 21:18:13 UTC
Permalink
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.

Both are being rewritten in C.
--
-- Rob
Bill Gunshannon
2021-11-15 23:28:31 UTC
Permalink
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?

bill
Robert A. Brooks
2021-11-15 23:59:33 UTC
Permalink
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Probably not; since there is no Ada compiler for x86, it
needed to be rewritten.
--
-- Rob
Arne Vajhøj
2021-11-16 00:35:46 UTC
Permalink
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.

But I cannot imagine it having any significance on modern hardware.

They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.

Arne
Bill Gunshannon
2021-11-16 01:44:40 UTC
Permalink
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
I realize all that. I would just like to see some comparisons.
I don't know that any were actually done. It all goes back to
a comment I got from someone from the Ada Users Group about 30
years ago. I mentioned an interest in a version of Unix rewritten
in Ada and was quickly informed that while it could be done it
would result in a useless operating system because the Ada version
would be very inefficient. Needless to say, I never tried it.
Might be fun to dig up some benchmarks and try it, but I always
prefer real world examples to contrived benchmarks.

bill
Arne Vajhøj
2021-11-16 02:23:27 UTC
Permalink
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
I realize all that.  I would just like to see some comparisons.
I don't know that any were actually done.  It all goes back to
a comment I got from someone from the Ada Users Group about 30
years ago.  I mentioned an interest in a version of Unix rewritten
in Ada and was quickly informed that while it could be done it
would result in a useless operating system because the Ada version
would be very inefficient.  Needless to say, I never tried it.
Might be fun to dig up some benchmarks and try it, but I always
prefer real world examples to contrived benchmarks.
But there are two very different questions here.

Would Ada vs C for OS mean something 30 years ago (VAX 6000 and 3000)?

Would Ada vs C for OS mean something today (16/24/32 core x86-64)?

Arne
Arne Vajhøj
2021-11-16 14:47:33 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
I realize all that.  I would just like to see some comparisons.
I don't know that any were actually done.  It all goes back to
a comment I got from someone from the Ada Users Group about 30
years ago.  I mentioned an interest in a version of Unix rewritten
in Ada and was quickly informed that while it could be done it
would result in a useless operating system because the Ada version
would be very inefficient.  Needless to say, I never tried it.
Might be fun to dig up some benchmarks and try it, but I always
prefer real world examples to contrived benchmarks.
But there are two very different questions here.
Would Ada vs C for OS mean something 30 years ago (VAX 6000 and 3000)?
Would Ada vs C for OS mean something today (16/24/32 core x86-64)?
Thus the reason we have so much bloatware today.  If the program
runs badly, throw more cores at it. I am not interested in whether
or not something ran faster or slower on today's machines vs. yesterday's.
I am interested in whether or not ADD A TO B GIVING C is faster, slower,
or the same between Ada and C (and other languages as well!). Throwing
more cores at the above will not result in faster performance.
When I first started with programming we cared about programming and
efficiency.  We profiled our programs in order to find the bad parts
and we fixed them.  It is sad that efficiency is no longer considered
important to software development today.  And they call it engineering
while we just called it programming.
I would say that there is a lot of focus on efficiency today.

But there are two types of such focus.

There is the hacker/nerd crowd that focuses on micro-benchmarks
of all sorts of things. ADD A TO B GIVING C will fit fine in
that.

And then there is the engineering/professional crowd that
focuses on actual solution/system performance. But which measurements
in this area prove relevant has changed over the last 30 years.
Unless one is in a specialized area like HPC, ADD A TO B GIVING C
is not relevant for solution/system performance today. They are
looking at round trips between tiers, interpreted vs. compiled,
data models, etc.

Arne
Dave Froble
2021-11-16 15:08:14 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
I realize all that. I would just like to see some comparisons.
I don't know that any were actually done. It all goes back to
a comment I got from someone from the Ada Users Group about 30
years ago. I mentioned an interest in a version of Unix rewritten
in Ada and was quickly informed that while it could be done it
would result in a useless operating system because the Ada version
would be very inefficient. Needless to say, I never tried it.
Might be fun to dig up some benchmarks and try it, but I always
prefer real world examples to contrived benchmarks.
But there are two very different questions here.
Would Ada vs C for OS mean something 30 years ago (VAX 6000 and 3000)?
Would Ada vs C for OS mean something today (16/24/32 core x86-64)?
Thus the reason we have so much bloatware today. If the program
runs badly, throw more cores at it. I am not interested in whether
or not something ran faster or slower on today's machines vs. yesterday's.
I am interested in whether or not ADD A TO B GIVING C is faster, slower,
or the same between Ada and C (and other languages as well!). Throwing
more cores at the above will not result in faster performance.
When I first started with programming we cared about programming and
efficiency. We profiled our programs in order to find the bad parts
and we fixed them. It is sad that efficiency is no longer considered
important to software development today. And they call it engineering
while we just called it programming.
I would say that there is a lot of focus on efficiency today.
But there are two types of such focus.
There is the hacker/nerd crowd that focuses on micro-benchmarks
of all sorts of things. ADD A TO B GIVING C will fit fine in
that.
And then there is the engineering/professional crowd that
focuses on actual solution/system performance. But which measurements
in this area prove relevant has changed over the last 30 years.
Unless one is in a specialized area like HPC, ADD A TO B GIVING C
is not relevant for solution/system performance today. They are
looking at round trips between tiers, interpreted vs. compiled,
data models, etc.
Lipstick on a pig, huh ???
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: ***@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486
Arne Vajhøj
2021-11-16 16:20:41 UTC
Permalink
Post by Dave Froble
Post by Arne Vajhøj
I would say that there is a lot of focus on efficiency today.
But there are two types of such focus.
There is the hacker/nerd crowd that focus on micro-benchmarks
of all sorts of things. ADD A TO B GIVING C will fit fine in
that.
And then there is the engineering/professional crowd that
focus on actual solution/system performance. But what measurement
in this area prove to be relevant has changed over the last 30 years.
Unless one is in a specialized area like HPC then ADD A TO B GIVING C
is not relevant for solution/system performance today. They are
looking for round trips between tiers, interpreted vs compiled,
data models etc..
Lipstick on a pig, huh ???
Not sure what you mean.

There are lots of performance disasters.

But typically it is the high-level design that is wrong, not
the low-level code.

Example 1: a system where reports were painfully slow. It turned out
that one report made 6000 database queries to get its data.

Example 2: a web page was painfully slow. It turned out that the
client-side JavaScript fetched data using 2000 AJAX requests.

Arne
Stephen Hoffman
2021-11-17 17:17:56 UTC
Permalink
Post by Arne Vajhøj
Thus the reason we have so much bloatware today.  If the program runs
badly, throw more cores at it.
Can't say I see much wrong with more cores and more resources, and with
big.LITTLE and other heterogeneous processor designs.
Post by Arne Vajhøj
When I first started with programming we cared about programming and
efficiency. 
Because you had constraints that required it: very limited and slow hardware.
Post by Arne Vajhøj
We profiled our programs in order to find the bad parts and we fixed them. 
That still happens. But again, you had limited and slow hardware, and
comparatively weak tooling, and you had to look at adds and multiplies.
Now, not so much.
Post by Arne Vajhøj
It is sad that efficiency is no longer considered important to software
development today. 
A full VAX-11/780 server configuration cost around a hundred thousand
dollars US, back in the mid 1980s. That's closer to a quarter-million
2021 dollars. Which buys a lot of computer.

A full Mac mini M1 costs a ~hundredth of that 1980s price, is massively
faster, and massively more capable. And is dwarfed by the massive size
of the LSI-11 console, and might well consume less power than did the
RX01 console floppy.

Do I need to optimize adds and multiplies on an M1? Probably not. And
if I do, I'm probably looking at using SIMD or such via the Accelerate
framework.

Do Accelerate or other frameworks add bloat? Absolutely. But
hopefully they avoid costs and hassles when porting to new hardware, and
the costs of adjusting existing code when newer hardware becomes available.

https://developer.apple.com/documentation/accelerate
Post by Arne Vajhøj
And they call it engineering while we just called it programming.
Economics, mostly. Hardware has gotten cheaper faster than programmer
investments have gotten cheaper.

Folks in the 1980s grumbled about changes in computing economics and
tooling, too. Tooling that many didn't understand and didn't want to
learn about. There were massive squabbles about 2GL and 3GL back then;
about assembler and the shift to compiled languages. Adding bloat, as
was widely reported back then.

I had more than a few discussions decades ago with a very experienced
and skilled developer who was then just having to mentally shift away
from punched-card app designs and limits, and was aghast at an app
having a five-thousand-longword array. That add-heavy app's run went
from ~overnight to ~ten minutes, given new hardware and new software.
I didn't bother to profile the add and the multiply
times for that app.
Post by Arne Vajhøj
I would say that there is a lot of focus on efficiency today.
But there are two types of such focus.
There is the hacker/nerd crowd that focuses on micro-benchmarks of all
sorts of things. ADD A TO B GIVING C will fit fine in that.
That instruction-level focus was wicked popular back in the VAX era,
too. Various instruction-timing tables were published. Folks optimized
individual instructions. There were folks that implemented instructions
(e.g. XFC) to get microcode speeds, too. Optimization work which mostly
vaporized when DEC released a new VAX CPU design with different
timings. And with no XFC.

If doing billions or trillions of adds as the core of the app, folks
are interested in the performance of adds and multiplies. Otherwise,
not so much.

Even back in the VAX era, folks were still throwing hardware at this
problem, too. Back then, with the FP780 floating point accelerator. The
FP780 sped up both floating point and integer multiply. Or
corrupted floating point and integer multiply, if the FP780 was
recalcitrant.

For those interested in these sorts of benchmarking and performance
problems and the closely-related "instruction bumming" challenges, have
a look around for "code golf", among other discussions.

There's a performance variation awaiting here too, with algorithms now
ill-performing for the scale of the current data. I've seen these cases
arise in more than a few existing OpenVMS apps.
Post by Arne Vajhøj
And then there is the engineering/professional crowd that focuses on
actual solution/system performance. But which measurements in this area
prove relevant has changed over the last 30 years. Unless one is
in a specialized area like HPC, ADD A TO B GIVING C is not relevant
for solution/system performance today. They are looking at round trips
between tiers, interpreted vs. compiled, data models, etc.
Absent a program executing billions or trillions of adds or multiplies,
~nobody cares about the speed of an individual add or multiply across
differing programming languages. Other cost factors are involved.

And for apps that are executing billions or trillions of adds or
multiplies, we're now looking at migrating out of iterative code and
into SIMD / AVX / Accelerate where feasible, or migrating the math
entirely off the (traditional) CPU.

For many years now, other app activities have utterly overwhelmed the
speed of integer math for most apps. DEC learned about that when the
VAX market drifted out of scientific and HPC apps, and drifted into
commercial apps. For hardware performance, Apple M1 Max will reportedly
run ~ten teraflops, give or take. And last-millennium HDD storage I/O
is glacial, as compared with NVMe and newer I/O. And in practical 2021
terms, VAX had ~no storage and ~no memory, even the few VAX models
with extended physical addressing.

And economically, ill-chosen algorithms, API compatibility,
under-maintained and un-refactored existing app code, abstraction
layers, and the rest of "bloat" will continue, as will the hardware
upgrades. Because the stuff involved still has to sell at a profit, or
management has to cover the costs of the app work from profits and
salaries. Trade-offs can and do and will shift, too. As will
gatekeeping.
--
Pure Personal Opinion | HoffmanLabs LLC
Simon Clubley
2021-11-16 19:07:49 UTC
Permalink
Post by Bill Gunshannon
I realize all that. I would just like to see some comparisons.
I don't know that any were actually done. It all goes back to
a comment I got from someone from the Ada Users Group about 30
years ago. I mentioned an interest in a version of Unix rewritten
in Ada and was quickly informed that while it could be done it
would result in a useless operating system because the Ada version
would be very inefficient. Needless to say, I never tried it.
Might be fun to dig up some benchmarks and try it, but I always
prefer real world examples to contrived benchmarks.
I think that person was talking (mostly) nonsense.

For example, the original version of RTEMS was written in Ada before
it was ported to C (IIRC, because C was more common in the open source
world and supported more architectures).

Also, look at the sheer amount of boundary checking and descriptor
decoding that VMS does in its APIs, for example.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.
Craig A. Berry
2021-11-19 13:08:13 UTC
Permalink
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
Right, but that doesn't mean the performance of the Ada version is good
or that bad performance doesn't matter. In fact, ACME server performance
is pretty lousy. SET SERVER ACME/RESTART takes 36 seconds on an rx2660.
That's 36 seconds of downtime during which no one can log in to the
system, which might not matter much in some environments, but is still a
very long time to restart a service.
Arne Vajhøj
2021-11-19 14:38:12 UTC
Permalink
Post by Craig A. Berry
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
Right, but that doesn't mean the performance of the Ada version is good
or that bad performance doesn't matter. In fact, ACME server performance
is pretty lousy.  SET SERVER ACME/RESTART takes 36 seconds on an rx2660.
 That's 36 seconds of downtime during which no one can log in to the
system, which might not matter much in some environments, but is still a
very long time to restart a service.
It is.

So if we say that the right time is 1 second, then those extra
35 seconds: what are they spent on?

Doing Ada runtime checks??

I doubt it.

I suspect there is some bad logic that needs to be redesigned.

Arne
Craig A. Berry
2021-11-19 23:41:55 UTC
Permalink
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
Post by Bill Gunshannon
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
Were there ever any internal benchmarks run against them so that
a comparison of performance when the C conversion is done could
be looked at?
Depending on how many checks were disabled in the Ada version,
the C version may be a little or a lot faster.
But I cannot imagine it having any significance on modern hardware.
They did not rewrite in C to save CPU cycles but because they did
not have an Ada compiler for the new platform.
Right, but that doesn't mean the performance of the Ada version is good
or that bad performance doesn't matter. In fact, ACME server performance
is pretty lousy.  SET SERVER ACME/RESTART takes 36 seconds on an rx2660.
  That's 36 seconds of downtime during which no one can log in to the
system, which might not matter much in some environments, but is still a
very long time to restart a service.
It is.
So if we say that the right time is 1 second, then those extra
35 seconds: what are they spent on?
Doing Ada runtime checks??
I doubt it.
I suspect there is some bad logic that needs to be redesigned.
Abysmal performance of a service written in Ada may or may not have
anything to do with Ada directly or indirectly. Or it may. ACME has no
reason to do a large quantity of network I/O or disk I/O, so it has no
reason to be considered a victim of the slow network stack or the slow
disk/filesystem infrastructure endemic to VMS. So the only thing I know
is that the slowest service I've ever seen on VMS in a few decades
happens to be one (uncharacteristically) written in Ada,
so Ada is suspect. I do not have the time, interest, or access to the
code that would be required to prove the actual performance bottleneck.
Robert A. Brooks
2021-11-15 23:58:35 UTC
Permalink
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
OK! Could that explain why our ACMELDAP makes the ACME_SERVER crash
multiple times each day, stalling our roll-out of the switch from
local UAF to AD/LDAP login authentication?
And does "being rewritten in C" imply that there is a new version
in the works?
Yes, but I don't know if it'll be available for IA64 and Alpha.
It could be, but I'm not at all involved in that project.
And that our current ACME_SERVER that crashes all
the time is *currently* written in Ada? Interesting...
The one you are running is definitely written in ADA.
--
-- Rob
Mark Daniel
2021-11-16 02:05:34 UTC
Permalink
Post by Robert A. Brooks
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada?  Device Driver? Anything?
ACME_SERVER and SECURITY_SERVER are written in ADA.
Both are being rewritten in C.
V9.1-A has a SECURITY_SERVER but no ACME_SERVER.

This suggests SECURITY_SERVER is currently in field-test.
--
Anyone, who using social-media, forms an opinion regarding anything
other than the relative cuteness of this or that puppy-dog, needs
seriously to examine their critical thinking.
V***@SendSpamHere.ORG
2021-11-15 21:22:39 UTC
Permalink
Post by Bill Gunshannon
This talk of Ada, VMS and Systems programming has raised a new
question in my mind.
Given that Ada got its start on VMS (one of the first validated
Ada Compilers was on VMS) has any attempt ever been made to write
any part of VMS using Ada? Device Driver? Anything?
The SECURITY SERVER comes to mind. ;)
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.
Lawrence D’Oliveiro
2021-11-11 08:16:25 UTC
Permalink
Specifically, ISO-8859-1 has Icelandic Þ and þ ...
Don’t they also use Ð and ð?
Michael Moroney
2021-11-11 16:51:34 UTC
Permalink
Post by Lawrence D’Oliveiro
Specifically, ISO-8859-1 has Icelandic Þ and þ ...
Don’t they also use Ð and ð?
Yes. ISO-8859-1 has many defined characters which are not defined in
DEC-MCS, including Ð and ð. I only mentioned þ because of the original
post and the EDT patch background.
Jon Pinkley
2021-11-11 18:45:34 UTC
Permalink
Post by Michael Moroney
There is an EDT patch which makes it more ISO-8859-1 friendly, actually
prompted by a customer who used EDT for strictly ASCII except for a
character at the 'þ' position (but not þ). EDT fans may want the patch
for its ability to understand terminals with more than 24 lines.
The patch breaks some features when used with KEA!420
(I don't have a real DEC terminal to test with).

If you have KEA!420 and use GOLD-7 show buf, it will not display the buffers

Here is what EDT on Alpha OpenVMS V8.3 looks like.


Command: show buf
=T 1 lines
MAIN No lines
PASTE No lines
Press return to continue

Same on IA64 OpenVMS V8.4-2L1


Command: show buf
Press return to continue

The workaround is to use ^Z to drop into command mode
*show buf
=T 1 lines
MAIN No lines
PASTE No lines
*
Then use c to return to screen (change?) mode.

Oddly, it does work correctly with PuTTY.

I have always found KEA!420 to be a pretty good emulator, but this is a
case where PuTTY works better, and it does support more lines.

If someone has a real VT terminal, it would be interesting to see if it
works correctly (correctly meaning the way it has always worked before).