Discussion:
How to create a 128 bit type in C
a***@gmail.com
2017-03-28 15:26:51 UTC
Hi all,

Today a friend of mine asked this to me
create a 128-bit type; you are given two 64-bit integer types, one sint64_t and one uint64_t

I said I can create it like this,

struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b, return -1

I could not think of how to do it, because I couldn't work out how to handle negative numbers. How will I do it for, say,
a.sint = -21;
a.uint = 245;
In this case the resulting number will be a 128-bit number combined, but what the number will be is not easy to deduce. So I gave up thinking.

Anyone can think of any way?
BartC
2017-03-28 15:42:47 UTC
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
I said I can create it like this,
struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b return -1
I could not think of how to do it, because I couldn't work out how to handle negative numbers. How will I do it for, say,
a.sint = -21;
a.uint = 245;
In this case resulting number will be a 128 bit number combined, but what the number will be is not easy to deduce. So I gave up thinking.
Anyone can think of any way?
Your struct is a bit funny with its member naming. Here's how I might start off:

typedef struct {
int64_t high;
uint64_t low;
} int128;

int128 a={0,0},b={0,0};

Now, an expression to compare for equality is easy:

a.high==b.high && a.low==b.low


Doing a relative compare is trickier, but if the top halves differ, then

if (a.high<b.high) // low parts not relevant, so a<b
else if (a.high>b.high) // a>b
else // ....
...
At the else, the top 64 bits are the same; it depends on the lower 64
bits. I think an unsigned compare will do it, but I'm not sure:

... if (a.low<b.low) // etc

That's a start anyway. But you also need a way of initialising a and b
for testing:

int128 a = {0x1111222233334444, 0x5555666677778888};

This sets a to 0x11112222333344445555666677778888. (For this purpose,
the high part must occur first in the struct.)
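Putting those pieces together, here is a minimal sketch of the three-way comparison (the struct layout with a signed high half first, and the name cmp128, are assumptions for illustration):

```c
#include <stdint.h>

typedef struct {
    int64_t high;    /* signed: carries the sign of the whole 128-bit value */
    uint64_t low;    /* unsigned: plain 64-bit magnitude */
} int128;

/* Returns -1, 0 or 1 as a < b, a == b, a > b. */
int cmp128(int128 a, int128 b) {
    if (a.high < b.high) return -1;  /* signed compare decides when high halves differ */
    if (a.high > b.high) return 1;
    if (a.low < b.low) return -1;    /* high halves equal: unsigned compare of low halves */
    if (a.low > b.low) return 1;
    return 0;
}
```

The unsigned compare of the low halves is correct even for negative values, because for equal high halves a larger low half always means a larger overall value.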
--
bartc
a***@gmail.com
2017-03-28 16:15:08 UTC
Post by BartC
Post by a***@gmail.com
[...]
typedef struct {
int64_t high;
uint64_t low;
} int128;
int128 a={0,0},b={0,0};
a.high==b.high && a.low==b.low
Doing a relative compare is trickier, but if the top halves differ, then
if (a.high<b.high) // low parts not relevant, so a<b
else if (a.high>b.high) // a>b
else // ....
...
At the else, the top 64 bits are the same; it depends on the lower 64
... if (a.low<b.low) // etc
That's a start anyway. But you need a way also of initialising a and b
int128 a = {0x1111222233334444, 0x5555666677778888};
This sets a to 0x11112222333344445555666677778888. (For this purpose,
the high part must occur first in the struct.)
--
bartc
By luck I wrote the same thing.
I compared the top halves
and put in some ifs and elses.
What confused me was this:
----------------------------------
|      sint      |      uint      |
----------------------------------

-2 and 3

----------------------------------
| -2 | 3 |
----------------------------------

 2             = 62 zeros, then 10
1's complement = 62 ones, then 01
2's complement = 62 ones, then 10   (this is -2)

so the final number would be

        -2        |        3
 62 ones, then 10 | 62 zeros, then 11

So now I have to arrange all these 128 bits to see what 128-bit number I have actually entered. Isn't it a little bad that I don't know what number I actually entered?

Another case I was not sure of: what if an unsigned int is bigger than 64 bits? Is that case covered?
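To see what number the {sint, uint} pair actually denotes: it is sint * 2^64 + uint, so {-2, 3} is -2 * 2^64 + 3 = -36893488147419103229. A sketch that checks this using the __int128 compiler extension (a gcc/clang extension, mentioned elsewhere in this thread; the function name is made up):

```c
#include <stdint.h>

/* The pair {high, low} denotes high * 2^64 + low.
   __int128 (gcc/clang extension) is used only to check the value;
   the multiply avoids left-shifting a negative number, which is UB. */
__int128 combine128(int64_t high, uint64_t low) {
    const __int128 two64 = (__int128)1 << 64;
    return (__int128)high * two64 + (__int128)low;
}
```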
BartC
2017-03-28 18:15:13 UTC
Post by a***@gmail.com
Post by BartC
typedef struct {
int64_t high;
uint64_t low;
} int128;
By luck I wrote same thing.
I compared top halves
and put some ifs and elses
what I was confused was in this
----------------------------------
|      sint      |      uint      |
----------------------------------
So now I have to arrange all these 128 bits to see what 128-bit number I have actually entered. Isn't it a little bad that I don't know what number I actually entered?
I don't know the answers; I've never coded 128 bits (I have done 8->16,
16->32 and so on, but I've long since forgotten how; it would have been
in assembly).

I started playing with different approaches, and some code I tried is
shown below, although this goes beyond just doing a compare and getting
-1, 0 and 1.

I don't know if the routines here are right either. One problem is that
while you can construct a 128-bit binary number from decimal (atoi_128),
it's not so easy to print as decimal, so I've used hex (and unsigned hex
too).

#include <stdio.h>
#include <stdint.h>

typedef struct {
int64_t high;
uint64_t low;
} int128;

int128 add128(int128 a, int128 b){
int128 x;
x.low=a.low+b.low;
x.high=a.high+b.high;
if (x.low<a.low && x.low<b.low)  /* low sum wrapped round: carry into high */
++x.high;
return x;
}

int128 sub128(int128 a, int128 b){
int128 x;
x.low=a.low-b.low;
x.high=a.high-b.high;
if (a.low<b.low)
--x.high;
return x;
}

int128 neg128(int128 a){
static int128 zero={0,0};
return sub128(zero,a);
}

int128 shl128(int128 a, int n){
if (n<=0) return a;
while (n--) {
a.high=a.high<<1;
if (a.low & 0x8000000000000000) a.high |= 1;
a.low=a.low<<1;
}
return a;
}

int128 mul10(int128 a){
return add128(shl128(a,3),shl128(a,1));
}

int128 atoi_128(char* s) {
int128 x={0,0},y={0,0};
int neg=0;
if (*s=='-') {neg=1; ++s;}

while (*s) {
y.low=*s-'0';
x=add128(mul10(x),y);
++s;
}
if (neg) return neg128(x);
return x;
}

void print128_hex(char* caption,int128 a) {
/* cast to unsigned long long so %llX is portable */
printf("%s: %016llX%016llX\n",caption,(unsigned long long)a.high,(unsigned long long)a.low);
}

int main(void) {
int128 a={0};
int128 b={0};
int128 x={0};

a=atoi_128("-1234");
b=atoi_128("1235");
x=add128(a,b);
print128_hex("X",x);

a=atoi_128("-1");
b=atoi_128("1");
x=add128(a,b);
print128_hex("A ",a);
print128_hex("B ",b);
print128_hex("A+B",x);
print128_hex("B<<96",shl128(b,96));

a=atoi_128("18446744073709551616");
print128_hex("A",a);

}
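Printing in decimal, which the post above notes is harder, can be done by repeated division by 10. A sketch that leans on the unsigned __int128 gcc/clang extension for the arithmetic (a from-scratch version would additionally need a 128-bit divide-by-10 routine; the function name is made up):

```c
/* Convert an unsigned 128-bit value to decimal: peel off the last
   digit with % 10, then reverse the digits into the output buffer. */
void u128_to_dec(unsigned __int128 v, char *out) {
    char buf[40];                 /* 2^128 - 1 has 39 decimal digits */
    int i = 0, j = 0;
    do {
        buf[i++] = (char)('0' + (int)(v % 10));
        v /= 10;
    } while (v != 0);
    while (i > 0) out[j++] = buf[--i];   /* digits came out backwards */
    out[j] = '\0';
}
```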
--
bartc
Norbert_Paul
2017-03-28 15:44:24 UTC
With respect to signed/unsigned you may be interested in the Two's Complement
representation of binary numbers:

https://en.wikipedia.org/wiki/Two%27s_complement
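The page's punchline in one line: negating a two's complement value is invert-the-bits-then-add-one. A tiny sketch on a single byte (the cast back to int8_t assumes the near-universal two's-complement representation; the function name is made up):

```c
#include <stdint.h>

/* Two's-complement negation of a bit pattern: -x == ~x + 1. */
uint8_t twos_complement(uint8_t x) {
    return (uint8_t)(~x + 1u);
}
```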

Cheers
Post by a***@gmail.com
[...]
Scott Lurndal
2017-03-28 16:29:37 UTC
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
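For reference, gcc and clang spell the extension __int128 / unsigned __int128 (int128_t is not predefined), and with it the original three-way compare is a one-liner. A sketch (cmp_builtin is a made-up name):

```c
/* Three-way compare using the compiler's native 128-bit type
   (__int128 is a gcc/clang extension, not standard C). */
int cmp_builtin(__int128 a, __int128 b) {
    return (a > b) - (a < b);   /* -1, 0 or 1 */
}
```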
Malcolm McLean
2017-03-28 17:52:07 UTC
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
It takes maybe a day to build your own 128-bit type on top of smaller
C built-in types.
It also takes about a day to locate, download, build and integrate
a 3rd party free library. Then you've got a dependency to manage.

That's why programmers frequently do re-implement (not reinvent) the
wheel.
Ike Naar
2017-03-28 17:55:33 UTC
Post by Malcolm McLean
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
It takes maybe a day to build your own 128-bit type on top of smaller
C built in types.
Plus a month to take the bugs out.
Post by Malcolm McLean
It also takes about a day to locate, download, build and integrate
a 3rd party free library. Then you've got a dependency to manage.
That's why programmers frequently do re-implement (not reinvent) the
wheel.
jacobnavia
2017-03-28 18:17:39 UTC
Post by Ike Naar
Post by Malcolm McLean
It takes maybe a day to build your own 128-bit type on top of smaller
C built in types.
Plus a month to take the bugs out.
More than that.

The full implementation of the 128 bit type in C took me months of work.

Bugs? I do not think so.

I do not have bugs... until I discover them. Lurking around ready to eat
me alive.

:-)

But that is precisely what makes programming fun isn't it?

To discover the bugs and make your software do what it should. With the
minimum of effort, in assembly.

I ported C to assembly and incorporated that into the language. The
compiler generates calls to run time procs, written in asm.

That's what compilers do anyway.

This implementation goes unseen, as my messages in this group. I am
invisible, not being gnu nor windows nor mac, whatever, I am invisible.
David Brown
2017-03-28 19:17:44 UTC
Post by jacobnavia
Post by Ike Naar
Post by Malcolm McLean
It takes maybe a day to build your own 128-bit type on top of smaller
C built in types.
Plus a month to take the bugs out.
More than that.
The full implementation of the 128 bit type in C took me months of work.
Yes, but that was a /full/ implementation of it as a type in C, with all
the mathematical operators, fast multiplication and division,
conversions, printf support, etc. A small 128 bit type with a few
operations should be a great deal simpler.
Scott Lurndal
2017-03-28 18:09:56 UTC
Post by Malcolm McLean
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
It takes maybe a day to build your own 128-bit type on top of smaller
C built in types.
As a learning exercise, cool. If you're stuck with Visual Studio, c'est la vie.

For production code, it requires engineering talent to verify the
implementation of the 128-bit math ops and to support it for the life
of the product. I would hesitate to allow such a thing in production software
given the alternatives (gcc, clang, PGI and ICC compilers all support
int128_t/uint128_t), which just leaves Visual Studio as the outlier.
That isn't a problem for the software we develop using uint128_t, since
Windows support isn't a requirement. YMMV.
Post by Malcolm McLean
It also takes about a day to locate, download, build and integrate
a 3rd party free library. Then you've got a dependency to manage.
Most production-quality linux distributions (redhat, suse, canonical)
include libgmp out of the box. One may need to install the development
package to get header files and the archive library to build software,
but to simply execute, libgmp.so is already present.

But yes, for windows developers, your concern is valid.
jacobnavia
2017-03-28 17:52:43 UTC
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
128 bit integers are not a gcc extension. Lcc-win has that too.

But the advantage of reinventing wheels is that you learn how wheels are
made and how they work...
Scott Lurndal
2017-03-28 18:11:01 UTC
Post by jacobnavia
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
128 bit integers are not a gcc extension. Lcc-win has that too.
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
BartC
2017-03-28 18:21:18 UTC
Post by Scott Lurndal
Post by jacobnavia
Post by Scott Lurndal
Post by a***@gmail.com
[...]
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
128 bit integers are not a gcc extension. Lcc-win has that too.
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
--
bartc
jacobnavia
2017-03-28 18:29:21 UTC
Post by BartC
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
Scott loves gcc, and he has to remind us of that in each message.
Scott Lurndal
2017-03-28 18:55:38 UTC
Post by jacobnavia
Post by BartC
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
Scott loves gcc, and he has to remind us of that in each message.
Actually, I don't particularly give a shit about any particular
compiler.

I simply pointed out that gcc started providing 128-bit integer
types, and others followed.
jacobnavia
2017-03-28 19:04:02 UTC
Post by Scott Lurndal
Post by jacobnavia
Post by BartC
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
Scott loves gcc, and he has to remind us of that in each message.
Actually, I don't particularly give a shit about any particular
compiler.
I simply pointed out that gcc started providing 128-bit integer
types, and others followed.
In another message, I pointed out to you that 128-bit types are a
consequence of the software/hardware combination moving to 64 bits, Scott.

That wasn't a gcc invention.

Read that message.
Malcolm McLean
2017-03-28 19:13:17 UTC
Post by BartC
Post by Scott Lurndal
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
What are you counting, that you need 128 bits to hold it?
Oh, it's a scaled integer. What can you measure to an accuracy of one
part in 2^128?
That seems to leave Mandelbrots and RSA encryption.
Robert Wessel
2017-03-28 21:14:08 UTC
On Tue, 28 Mar 2017 12:13:17 -0700 (PDT), Malcolm McLean
Post by Malcolm McLean
Post by BartC
Post by Scott Lurndal
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
So it needed a genius at gcc to figure out that the next step after
64-bits might be 128?!
What are you counting, that you need 128 bits to hold it?
Oh, it's a scaled integer. What can you measure to an accuracy of one
part in 2^128?
That seems to leave Mandelbrots and RSA encryption.
Currency. At least intermediate results of calculations involving
currency. And while a full 128 bits isn't usually required, 64 bits
is *not* enough. And yes, those folks do actually care about the
individual pennies. And they often have the support of humorless
people with guns who will punish you for not getting it right.
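A sketch of that point: with amounts held as scaled integers, the intermediate of a multiply can overflow 64 bits even when the final result fits. The 1e4 scale, the figures, and the function name are made up for illustration; unsigned __int128 is the gcc/clang extension:

```c
#include <stdint.h>

/* Multiply a scaled-integer amount by a rate scaled by 1e4.
   The intermediate amount * rate_1e4 can exceed 2^64 - 1, so it is
   widened to 128 bits before dividing the scale back out
   (result truncates toward zero; real money code would round). */
uint64_t scaled_mul(uint64_t amount, uint64_t rate_1e4) {
    unsigned __int128 wide = (unsigned __int128)amount * rate_1e4;
    return (uint64_t)(wide / 10000);
}
```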

Cobol has required ~64 bit types (actually 18 decimal digits*) for
storage since the early sixties, with longer intermediates. Recently
that's been changed to require 36 digit storage types.


*Mostly that's independent of the internal format of the type - if you
define your 18 decimal digit field as having a binary format, you'll
get a 64-bit format on most machines. Or if you specify a BCD-ish
format, you'll usually get 9 or 10 bytes, depending on the
implementation and whether or not the field is signed.
Scott Lurndal
2017-03-29 13:13:22 UTC
Post by Robert Wessel
*Mostly that's independent of the internal format of the type - if you
define your 18 decimal digit field as having a binary format, you'll
get a 64-bit format on most machines. Or if you specify a BCD-ish
format, you'll usually get 9 or 10 bytes, depending on the
implementation and whether or not the field is signed.
If you use this architecture <http://vseries.lurndal.org/doku.php>,
you'll find support for 100 digit decimal fields built-in (including
addressing to the digit/nibble) (and no concept of word size).
Robert Wessel
2017-03-30 05:12:26 UTC
Post by Scott Lurndal
Post by Robert Wessel
*Mostly that's independent of the internal format of the type - if you
define your 18 decimal digit field as having a binary format, you'll
get a 64-bit format on most machines. Or if you specify a BCD-ish
format, you'll usually get 9 or 10 bytes, depending on the
implementation and whether or not the field is signed.
If you use this architecture <http://vseries.lurndal.org/doku.php>,
you'll find support for 100 digit decimal fields built-in (including
addressing to the digit/nibble) (and no concept of word size).
Yes, which is why I said "most" and "usually". IBM's 1401 considerably
pre-dates the rubs-rough medium systems, and had some similar features
(and also had a Cobol compiler). The Burroughs systems have the
advantage of having existed (outside of emulation and museums) much
more recently than 1401s.
Scott Lurndal
2017-03-30 13:36:30 UTC
Post by Robert Wessel
Post by Scott Lurndal
Post by Robert Wessel
*Mostly that's independent of the internal format of the type - if you
define your 18 decimal digit field as having a binary format, you'll
get a 64-bit format on most machines. Or if you specify a BCD-ish
format, you'll usually get 9 or 10 bytes, depending on the
implementation and whether or not the field is signed.
If you use this architecture <http://vseries.lurndal.org/doku.php>,
you'll find support for 100 digit decimal fields built-in (including
addressing to the digit/nibble) (and no concept of word size).
Yes, which is why I said "most" and "usually". IBM's 1401 considerably
pre-dates the rubs-rough medium systems, and had some similar features
(and also had a Cobol compiler). The Burroughs systems have the
advantage of having existed (outside of emulation and museums) much
more recently than 1401s.
Actually the 1401 was first shipped in 1960, while the Electrodata
220 (the predecessor to the Burroughs B200 (1961) and Burroughs B2500 (1967))
was already shipping (and the 202 had been shipping for 6 years at
that point). They're all BCD machines.
Robert Wessel
2017-03-30 17:41:43 UTC
Post by Scott Lurndal
Post by Robert Wessel
Post by Scott Lurndal
[...]
If you use this architecture <http://vseries.lurndal.org/doku.php>,
you'll find support for 100 digit decimal fields built-in (including
addressing to the digit/nibble) (and no concept of word size).
Yes, which is why I said "most" and "usually". IBM's 1401 considerably
pre-dates the rubs-rough medium systems, and had some similar features
(and also had a Cobol compiler). The Burroughs systems have the
advantage of having existed (outside of emulation and museums) much
more recently than 1401s.
Actually the 1401 was first shipped in 1960, while the Electrodata
220 (the predecessor to the Burroughs B200 (1961) and Burroughs B2500 (1967))
was already shipping (and the 202 had been shipping for 6 years at
that point). They're all BCD machines.
While I can't speak to the Electrodata 220, the Burroughs B200 did not
support the ~100 digit decimal fields that the later Burroughs 2500
systems did. The B200 coded the length of both fields in arithmetic
instructions as 1-12 digits.
Scott Lurndal
2017-03-30 17:55:06 UTC
Post by Robert Wessel
[...]
While I can't speak to the Electrodata 220, the Burroughs B200 did not
support the ~100 digit decimal fields that the later Burroughs 2500
systems did. The B200 coded the length of both fields in arithmetic
instructions as 1-12 digits.
Scott Lurndal
2017-03-30 17:59:49 UTC
Post by Robert Wessel
Post by Scott Lurndal
Post by Robert Wessel
Yes, which is why I said "most" and "usually". IBM's 1401 considerably
pre-dates the rubs-rough medium systems, and had some similar features
(and also had a Cobol compiler). The Burroughs systems have the
advantage of having existed (outside of emulation and museums) much
more recently than 1401s.
Actually the 1401 was first shipped in 1960, while the Electrodata
220 (the predecessor to the Burroughs B200 (1961) and Burroughs B2500 (1967))
was already shipping (and the 202 had been shipping for 6 years at
that point). They're all BCD machines.
While I can't speak to the Electrodata 220, the Burroughs B200 did not
support the ~100 digit decimal fields that the later Burroughs 2500
systems did.
I don't believe that I claimed such, above, only that there was
a predecessor of the B2500 contemporaneously with the 1401.

FWIW, the penultimate member of the medium systems, the B4955,
is up and running at the LCC.

And, I'm not familiar with the phrase 'rubs-rough' - what does
that connote with respect to medium systems?
Robert Wessel
2017-04-01 05:42:16 UTC
Post by Scott Lurndal
Post by Robert Wessel
[...]
While I can't speak to the Electrodata 220, the Burroughs B200 did not
support the ~100 digit decimal fields that the later Burroughs 2500
systems did.
I don't believe that I claimed such, above, only that there was
a predecessor of the B2500 contemporaneously with the 1401.
While the B200 was a predecessor, it is not usually considered one of
the Burroughs "Medium" systems, and those are the ones with the
100-digit decimal arithmetic support.
Post by Scott Lurndal
FWIW, the penultimate member of the medium systems, the B4955,
is up and running at the LCC.
And, I'm not familiar with the phrase 'rubs-rough' - what does
that conote with respect to medium systems?
It's an affectionate anagram of Burroughs. Is the problem that I'm
old enough to actually remember that? :-(
Scott Lurndal
2017-04-03 13:37:48 UTC
Post by Robert Wessel
Post by Scott Lurndal
FWIW, the penultimate member of the medium systems, the B4955,
is up and running at the LCC.
And, I'm not familiar with the phrase 'rubs-rough' - what does
that conote with respect to medium systems?
It's an affectionate anagram of Burroughs. Is the problem that I'm
old enough to actually remember that? :-(
Hmm. I spent 14 years at Burroughs, seven in the Medium Systems MCP group,
and I don't recall ever hearing that one.
Robert Wessel
2017-04-03 22:38:43 UTC
Post by Scott Lurndal
Post by Robert Wessel
Post by Scott Lurndal
FWIW, the penultimate member of the medium systems, the B4955,
is up and running at the LCC.
And, I'm not familiar with the phrase 'rubs-rough' - what does
that conote with respect to medium systems?
It's an affectionate anagram of Burroughs. Is the problem that I'm
old enough to actually remember that? :-(
Hmm. I spent 14 years at Burroughs, seven in the Medium Systems MCP group,
and I don't recall ever hearing that one.
Hmmm. It may have been regional (US? Midwest? Chicago?), but it was a
reasonably well-known nickname for us, not that we used it that often;
most of the crowd I hung around with was IBM. I'm pretty sure I even
saw it in print at least a couple of times, probably in Computerworld
or Datamation or some-such.

I remember the first time I was talking to some (IBM) mainframe users
in Europe, and they were telling us something about their experience
with "kicks"? Eh? What now? For whatever reason, it's almost
universal to call CICS that outside the US, and almost universal to
call it cee-eye-cee-ess in the US.

Also, we didn't generally use the many less-than-respectful nicknames
for IBM in front of IBMers either, so you may have been sheltered on
that front.
Robert Wessel
2017-04-03 22:49:53 UTC
On Mon, 03 Apr 2017 17:38:43 -0500, Robert Wessel
Post by Robert Wessel
[...]
And I found at least one reference on the 'net:

https://issuu.com/maddrake/docs/expert_c_programming_deep_c_secrets/222

So I'm not imagining it.

And does that link restore topicality?
Keith Thompson
2017-04-04 00:43:40 UTC
Robert Wessel <***@yahoo.com> writes:
[...]
https://[DELETED]/expert_c_programming_deep_c_secrets/222
So I'm not imagining it.
And does that link restore topicality?
That link appears to be to a pirated copy of Peter Van Der Linden's book
"Expert C Programming, Deep C Secrets". I have a paper copy of the same
book, and I'm fairly sure it includes a copyright notice, which was
omitted from this online copy. I'm not aware of a legitimate free
electronic version. (It's available as a Kindle e-book for $19.24.)
--
Keith Thompson (The_Other_Keith) kst-***@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Robert Wessel
2017-04-04 01:39:40 UTC
Post by Keith Thompson
[...]
https://[DELETED]/expert_c_programming_deep_c_secrets/222
So I'm not imagining it.
And does that link restore topicality?
That link appears to be to a pirated copy of Peter Van Der Linden's book
"Expert C Programming, Deep C Secrets". I have a paper copy of the same
book, and I'm fairly sure it includes a copyright notice, which was
omitted from this online copy. I'm not aware of a legitimate free
electronic version. (It's available as a Kindle e-book for $19.24.)
Well, that's embarrassing. I only looked at the one page with the
quote I found with a search.
Richard Heathfield
2017-04-04 06:00:26 UTC
On 03/04/17 23:38, Robert Wessel wrote:
<snip>
Post by Robert Wessel
I remember the first time I was talking to some (IBM) mainframe users
in Europe, and they were telling us something about their experience
with "kicks"? Eh? What now? For whatever reason, it's almost
universal to call CICS that outside the US, and almost universal to
call it cee-eye-cee-ess in the US.
Concur. I never heard it pronounced in any other way than "kicks" by
British programmers, but US programmers (and you do occasionally get
them in Rightpondia) invariably pronounced each letter separately.
Post by Robert Wessel
Also, we didn't generally use the many less-than-respectful nicknames
for IBM in front of IBMers either, so you may have been sheltered on
that front.
In Rightpondia at least, employees of Andersen Consulting were often
called "androids". Whether it was because Andersen could identify and
reject people with personalities at the hiring stage, or whether their
employees just hid those personalities very, very carefully, I don't
know, but "android" was a common term of art. Or "droid" for short.
(Star Wars related nicknames were not particularly unusual at the time.)
They are typically of medium height, medium weight, medium build, and...
well, they're just generally medium, and about as colourful as a
pavement (sidewalk).

In the late 1990s, I was working at MumbleCo, alongside a few MumbleCo
employees, several other independent contractors, and a whole bunch of
"droids". I should point out here that I'm rather taller than average,
and also that I have never been a great fan of the barber, as a result
of which I had acquired my own SW nickname.

One day, there was a technical disagreement between myself and one of
the Andersen people, which was resolved in my favour by the project
manager. The Andersen chap complained that this was always the way these
arguments seemed to go.

One of my fellow indies immediately responded with: "That's because
droids don't rip people's arms out when they lose. Wookies have been
known to do that."
--
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
jacobnavia
2017-03-28 18:28:22 UTC
Permalink
Post by Scott Lurndal
Post by jacobnavia
128 bit integers are not a gcc extension. Lcc-win has that too.
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
Yes, and I think that the last entry of that list is important.

You wonder why?

Because it is a good implementation, nicely woven into the language.

And those compilers in the list, by the way, took 128 bit integers from
the hardware. 64 bit hardware can easily do 128 bit arithmetic. Much
more easily than 32 bit hardware, did you not notice?

128 bit types became feasible with 64 bit OS and software. Gcc
implemented that, I haven't looked into their software. I ported the 64
bit software to 128 from my implementation of long long in lcc-win 32
bits. The same code, running in registers twice as wide and with a 128
bit operation of mult, and div become feasible in x86 asm software.

Gcc implemented that first?

Maybe, I did not follow the history of that to /that/ level of detail.
Robert Wessel
2017-03-28 21:37:16 UTC
Permalink
On Tue, 28 Mar 2017 20:28:22 +0200, jacobnavia
Post by jacobnavia
Post by Scott Lurndal
Post by jacobnavia
128 bit integers are not a gcc extension. Lcc-win has that too.
Ok. They're a gcc extension that has been adopted by icc, pgi,
clang and lcc-win.
Yes, and I think that the last entry of that list is important.
You wonder why?
Because it is a good implementation, nicely woven into the language.
And those compilers in the list, by the way, took 128 bit integers from
the hardware. 64 bit hardware can easily do 128 bit arithmetic. Much
more easily than 32 bit hardware, did you not notice?
I have to disagree. Other than division, both double and quad
precision integer operations are pretty trivial to implement. Yes,
double precision is a bit easier, but not enough that it should bother
anyone. Multiple precision division is a PITA no matter what,
especially if you'd like it to perform well.

And while x86-64 can handle multiple precision code fairly well, a lot
of RISCs and other machines make it rather more tedious. Lack of a
carry flag or double width multiplication are common. Both can
obviously be worked around, but the difference between
(mov/add/mov/adc/mov/adc/mov/adc) to implement a quad add on x86 and
what you have to do on Alpha (without a carry flag) is pretty
substantial. And to not pick on RISCs excessively, S/360 and its
descendents did have a carry as a resulting condition from the
unsigned add instruction (Add Logical - AL), but it isn't a
conventional carry flag, and there was no add-with-carry instruction
until fairly recently - so it was a right pain there too. But the
difference between implementing double and quad precision is not that
big.
Post by jacobnavia
128 bit types became feasible with 64 bit OS and software. Gcc
implemented that, I haven't looked into their software. I ported the 64
bit software to 128 from my implementation of long long in lcc-win 32
bits. The same code, running in registers twice as wide and with a 128
bit operation of mult, and div become feasible in x86 asm software.
Gcc implemented that first?
No, I suspect 128 bit types, at least for intermediates, were first
implemented by Cobol compilers in the early sixties, or before.
Multiple precision integer arithmetic is far older than that, but I
suspect the early Cobol compilers were the first to build them into a
language in some fashion, although I would not be surprised if
something like FLOW-MATIC did it earlier.

I am curious, however: I believe you have both a 32-bit and 64-bit
compiler for x86, do you implement the 128 bit type for both? Same
question for the ARM port(s) I believe you're working on.
jacobnavia
2017-03-28 21:56:32 UTC
Permalink
Post by Robert Wessel
I am curious, however: I believe you have both a 32-bit and 64-bit
compiler for x86, do you implement the 128 bit type for both?
I did an implementation of 128 bit arithmetic on 32 bit hardware, but it
is very slow of course... I did not pursue that in 32 bits. For the same
reasons that I do not implement 256 bit integers with 64 bit software.


Post by Robert Wessel
Same question for the ARM port(s) I believe you're working on.
ARM64 has no 128 bit mult/div. It has UMULH/SMULH, that returns the high
word of the multiplication though, so it is quite feasible. But I have
only managed to compile the source of the lcc compiler with itself
yesterday. It is a big work. 128 bit math can wait for a while

:-)
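In the meantime, what UMULH computes can be written in portable C by splitting each operand into 32-bit halves and summing the four partial products, which is roughly what a compiler must emit on targets without a high-multiply instruction. A minimal sketch (the function name umulh_portable is illustrative, not from any of the compilers discussed):

```c
#include <assert.h>
#include <stdint.h>

/* High 64 bits of a 64x64 -> 128 unsigned multiply, using four
   32x32 -> 64 partial products. The middle-column carries are
   accumulated in mid1/mid2, which cannot wrap: each is at most
   (2^32-1)^2 + (2^32-1) < 2^64. */
static uint64_t umulh_portable(uint64_t a, uint64_t b)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;            /* bits   0..63  */
    uint64_t p1 = a_lo * b_hi;            /* bits  32..95  */
    uint64_t p2 = a_hi * b_lo;            /* bits  32..95  */
    uint64_t p3 = a_hi * b_hi;            /* bits  64..127 */

    uint64_t mid1 = p1 + (p0 >> 32);
    uint64_t mid2 = p2 + (uint32_t)mid1;

    return p3 + (mid1 >> 32) + (mid2 >> 32);
}
```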
Robert Wessel
2017-03-28 23:36:56 UTC
Permalink
On Tue, 28 Mar 2017 23:56:32 +0200, jacobnavia
Post by jacobnavia
Post by Robert Wessel
I am curious, however: I believe you have both a 32-bit and 64-bit
compiler for x86, do you implement the 128 bit type for both?
I did an implementation of 128 bit arithmetic on 32 bit hardware, but it
is very slow of course... I did not pursue that in 32 bits. For the same
reasons that I do not implement 256 bit integers with 64 bit software.
I would generally expect that, relative to a double implementation, a
quad implementation would do twice the amount of work for things like
addition, and four times for multiplication or division. Things like
base conversions (strtoX/printf) would fall somewhere between those
limits. That hardly seems objectionable given the increase in the
sizes.

But the utility of those would be high, at least for some
applications, and doubtless built-in support would be faster than
having to implement it via a bignum-like library.
Post by jacobnavia
Post by Robert Wessel
Same question for the ARM port(s) I believe you're working on.
ARM64 has no 128 bit mult/div. It has UMULH/SMULH, that returns the high
word of the multiplication though, so it is quite feasible. But I have
only managed to compile the source of the lcc compiler with itself
yesterday. It is a big work. 128 bit math can wait for a while
Are you only doing an ARM64 port?
jacobnavia
2017-03-28 23:43:19 UTC
Permalink
Post by Robert Wessel
Are you only doing an ARM64 port?
Yes, of course, and that leaves no room for doing anything else for the
time being. I was very happy I could compile the compiler with itself
and generate a running executable... 75 kloc, quite a big deal for the
code generator... And it worked.

Now I am turning the optimizer on (register variables) what dramatically
improves the naive code. But it is more tricky, and produces code that is
8645 times faster in a 64 bit arithmetic benchmark.

In the matrix multiplication benchmark, I am getting quite a speed
boost, arriving at 56 seconds, with gcc -O2 arriving at 45.

But it is not working yet for all programs, there are quite a few
problems to solve before I get high quality code.
Noob
2017-03-31 19:24:35 UTC
Permalink
I am turning the optimizer on, what improves the naive code.
***which*** improves ...

http://www.quickanddirtytips.com/education/grammar/which-versus-that-0

"what" is NEVER correct in that situation, except in
the *informal* idiom "what with".

https://dictionary.cambridge.org/dictionary/english/what-with
jacobnavia
2017-03-31 19:51:21 UTC
Permalink
Post by Noob
I am turning the optimizer on, what improves the naive code.
***which*** improves ...
Thanks, I wasn't aware of that.
Post by Noob
http://www.quickanddirtytips.com/education/grammar/which-versus-that-0
"what" is NEVER correct in that situation, except in
the *informal* idiom "what with".
https://dictionary.cambridge.org/dictionary/english/what-with
Good explanations. I should use which when the information can be left
out without changing the meaning of the sentence.

I am turning the optimizer on

the rest can be left out; "which" is necessary, not "that".

Thanks again.
Ben Bacarisse
2017-04-01 11:34:40 UTC
Permalink
Post by jacobnavia
Post by Noob
I am turning the optimizer on, what improves the naive code.
***which*** improves ...
Thanks, I wasn't aware of that.
But it's a "good" error in that it has entered the language when used
for comic effect. Look up "the play what I wrote" to find out more.

<snip>
--
Ben.
s***@casperkitty.com
2017-03-28 22:11:43 UTC
Permalink
Post by Robert Wessel
And while x86-64 can handle multiple precision code fairly well, a lot
of RISCs and other machines make it rather more tedious. Lack of a
carry flag or double width multiplication are common. Both can
obviously be worked around, but the difference between
(mov/add/mov/adc/mov/adc/mov/adc) to implement a quad add on x86 and
what you have to do on Alpha (without a carry flag) is pretty
substantial.
On systems that lack add-with-carry, it's tough to get a correct carry
out of the second word in both cases where the second word of an input
operand has all bits set and there's a carry out of the first word.

For applications which need to yield an arithmetically correct result
if no overflow occurred, or report that an overflow occurred if the
result would not be arithmetically correct, but which are not required
to report overflows that don't end up affecting the results, it may be
helpful to have a type which could, at the compiler's leisure, either
keep some precision beyond a normal type or truncate such precision and
set an error flag if doing so would change the value. If a compiler
defined its semantics in such fashion and code did something like:

int x=(y+z)/2;

having a compiler do an add followed by a rotate right through carry may
be cheaper than checking whether the addition overflowed and trapping if
so. Likewise, when adding a bunch of numbers it may be easier to do an
extended-precision add and then check whether the result is in range of
the target type, than to check for overflow after every step. Some
platforms are good at overflow checking and bad at multi-precision math,
while others handle multi-precision math well but aren't as good at
overflow checking. Letting a compiler pick which approach would be better
in any given situation would allow the required semantics to be achieved
more quickly than if the programmer had to force it.
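For the specific (y+z)/2 case there is also a portable trick that needs no extra precision at all, at least for unsigned operands: the bits the operands share contribute to the sum in full, and the bits where they differ contribute half. A sketch under that assumption (unsigned only; avg_u64 is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* Floor of (y + z) / 2 without risk of overflow: shared bits count
   whole, differing bits count half. */
static uint64_t avg_u64(uint64_t y, uint64_t z)
{
    return (y & z) + ((y ^ z) >> 1);
}
```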
Robert Wessel
2017-03-29 00:30:09 UTC
Permalink
Post by s***@casperkitty.com
Post by Robert Wessel
And while x86-64 can handle multiple precision code fairly well, a lot
of RISCs and other machines make it rather more tedious. Lack of a
carry flag or double width multiplication are common. Both can
obviously be worked around, but the difference between
(mov/add/mov/adc/mov/adc/mov/adc) to implement a quad add on x86 and
what you have to do on Alpha (without a carry flag) is pretty
substantial.
On systems that lack add-with-carry, it's tough to get a correct carry
out of the second word in both cases where the second word of an input
operand has all bits set and there's a carry out of the first word.
Clumsy and with excessive overhead, but not actually tough. On Alpha,
for example, you'd do something like:

;$1=carry in/out
;$2=A (in)
;$3=B (in)
;$4=sum (out)
;$5=work

addq $2,$3,$4
cmplt $4,$3,$5
addq $1,$4,$4
cmplt $4,$1,$1
bis $1,$5,$5


On S/360 it was really painful, since the only way you could really do
this was to do a conditional branch after each add:

;R1=carry in/out
;R2=A (in)
;R3=B (in)
;R4=sum (out)
;R5=work
;R6=1 (constant)

XR R5,R5 ;zero R5
LR R4,R2
ALR R4,R3
BC 6, NC1 ;branch no carry
LR R5,R6
NC1:
ALR R4,R1
BC 6, NC2 ;branch no carry
OR R5,R6
NC2:
LR R1,R5

Not actually difficult, just really painful.

The alternative of retrieving the condition code value and processing
that was even worse, because the only way to do that was to issue a
subroutine call instruction, which returned it in the high byte of the
return address register. (That would have looked like the following IPM
based sample, but with the IPMs replaced with a subroutine call to the
following instruction - IOW "BALR Rx,0".)

Somewhere at or just before the start of the 31-bit (XA) era a
specific instruction to do that was added (insert program mask). At
least theoretically you could do a shift and mask on that, and end up
with something like:

;R1=carry in/out
;R2=A (in)
;R3=B (in)
;R4=sum (out)
;R5=work
;R6=1 (constant)

LR R4,R2
AR R4,R3
IPM R5 ;it's either bit 2 or 3 that ends up indicating carry
SRL R5,28 ;a shift count of 28 is correct for bit 3
AR R4,R1
IPM R1
SRL R1,28
OR R1,R5
NR R1,R6

That's still ugly as sin, but at least it avoids *two* conditional
branches.

It was always tempting to do this with halfword sized limbs instead.
On S/370 and later, 24-bit limbs were plausible too.

Now, of course, you can do it in a single instruction.
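The compare-for-carry idiom in the Alpha sequence above is also exactly what one writes in portable C, where unsigned addition wraps and a wrapped sum is smaller than either operand. A sketch of a 128-bit add built from two 64-bit halves (the struct and names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* 128-bit unsigned value as two 64-bit halves. */
typedef struct { uint64_t lo, hi; } u128;

static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low half */
    return r;
}
```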
Post by s***@casperkitty.com
For applications which need to yield an arithmetically correct result
if no overflow occurred, or report that an overflow occurred if the
result would not be arithmetically correct, but which are not required
to report overflows that don't end up affecting the results, it may be
helpful to have a type which could, at the compiler's leisure, either
keep some precision beyond a normal type or truncate such precision and
set an error flag if doing so would change the value. If a compiler
int x=(y+z)/2;
having a compiler do an add followed by a rotate right through carry may
be cheaper than checking whether the addition overflowed and trapping if
so.
Assuming there *is* a rotate through carry.
Post by s***@casperkitty.com
Likewise, when adding a bunch of numbers it may be easier to do an
extended-precision add and then check whether the result is in range of
the target type, than to check for overflow after every step. Some
platforms are good at overflow checking and bad at multi-precision math,
while others handle multi-precision math well but aren't as good at
overflow checking. Letting a compiler pick which approach would be better
in any given situation would allow the required semantics to be achieved
more quickly than if the programmer had to force it.
In many respects Cobol did that approximately right. You could
generally ask for overflow (really out-of-range) checking on any
computation, and it was up to the compiler to figure out how to do it:

compute x = (y+z)/2
on size error
...error handling code...

As a general concept, it would trigger the on size error clause when
the result would not fit in the destination. And in general the
assumption was that the result *would* be computed correctly (subject
to the as-if rule, of course). So if you defined X as a two (decimal)
digit field, and y and z as 18 digits, you'd expect the result to be
equivalent to computing a 19 digit sum, dividing that by two, and then
checking if that results fits in two digits.

You could omit the on size error clause, and then you'd usually get
some sort of odd truncation, often depending on the formats of the
types being used (for example if x and the intermediate result were
binary, you might just end up with the low 16 bits of the results in
x, despite that being rather out of range). Note that the original
Cobol spec allowed only decimal truncation, except that was ignored by
basically everyone as the overhead was too high - newer versions
explicitly allow other truncations modes.
Scott Lurndal
2017-03-29 13:19:46 UTC
Permalink
Post by Robert Wessel
On Tue, 28 Mar 2017 20:28:22 +0200, jacobnavia
Post by jacobnavia
Gcc implemented that first?
No, I suspect 128 bit types, at least for intermediates, were first
implemented by Cobol compilers in the early sixties, or before.
Those systems (1401, 360/xx, Electrodata 220, Burroughs B2500)
used BCD representation, so no, they didn't use 128-bit types.
The B2500 could operate on up to 100 digit fields (400 bits)
using a single arithmetic instruction.

The 360 did support 128-bit binary floating point, but that
wouldn't have been used by COBOL.
Ben Bacarisse
2017-03-29 13:42:23 UTC
Permalink
***@slp53.sl.home (Scott Lurndal) writes:
<snip>
Post by Scott Lurndal
The 360 did support 128-bit binary floating point, but that
wouldn't have been used by COBOL.
I can't find any reference to it in the "IBM System/360 Principles of
Operation". Maybe some of the later models?

(In fact, the original 360 did not even have truly binary floating
point. It had hexadecimal floating-point.)
--
Ben.
Scott Lurndal
2017-03-29 15:12:21 UTC
Permalink
Post by Ben Bacarisse
<snip>
Post by Scott Lurndal
The 360 did support 128-bit binary floating point, but that
wouldn't have been used by COBOL.
I can't find any reference to it in the "IBM System/360 Principles of
Operation". Maybe some of the later models?
IIRC, the higher end models like the /85, /195 did. It was
emulated in software on other models.
Robert Wessel
2017-03-30 05:26:33 UTC
Permalink
On Wed, 29 Mar 2017 14:42:23 +0100, Ben Bacarisse
Post by Ben Bacarisse
<snip>
Post by Scott Lurndal
The 360 did support 128-bit binary floating point, but that
wouldn't have been used by COBOL.
I can't find any reference to it in the "IBM System/360 Principles of
Operation". Maybe some of the later models?
(In fact, the original 360 did not even have truly binary floating
point. It had hexadecimal floating-point.)
Quad precision HFP was optional (but not officially part of the ISA)
on at least some S/360s, and became an optional part of the ISA with
S/370, and was implemented on many S/370 systems. Quad division was
not architected, nor supported on any machine I know of. I'm not sure
exactly when quad HFP became standard on machines (I think everything
in the post S/370 generation - 4300s, 3030s, implemented quad), but
with the transition to XA (the 31 bit architecture), quad precision
HFP, including quad divide, became a base part of the ISA.

Binary FP became a standard feature on the 9672 G5's, the single
biggest motivation was Java support. The 9672's are the predecessors
of the current 64-bit zArch systems (the first such system, the z900,
was effectively the G7, the most current system, z13, gets the "13" in
its name from being the 13th generation in that line - IOW the G13).
Robert Wessel
2017-03-30 05:42:01 UTC
Permalink
Post by Scott Lurndal
Post by Robert Wessel
On Tue, 28 Mar 2017 20:28:22 +0200, jacobnavia
Post by jacobnavia
Gcc implemented that first?
No, I suspect 128 bit types, at least for intermediates, were first
implemented by Cobol compilers in the early sixties, or before.
Those systems (1401, 360/xx, Electrodata 220, Burroughs B2500)
used BCD representation, so no, they didn't use 128-bit types.
The B2500 could operate on up to 100 digit fields (400 bits)
using a single arithmetic instruction.
The 360 did support 128-bit binary floating point, but that
wouldn't have been used by COBOL.
While the Cobol on S/360 certainly supported decimal, you could also
define a field as being binary ("USAGE COMP"). It still supported the
usual 18 decimal digit (scaled) numbers. While that required only a
64-bit storage format, the accuracy requirements demanded larger
intermediate results.

In fact they *also* required larger intermediates for pure decimal,
and the Cobol compiler had to implement a multiple-precision library
for that, even in decimal. While the decimal addition and subtraction
instructions can handle 31 digit numbers, multiplication and division
have restrictions on S/360 (and still do), that prevents single
instructions from handling 18 digit numbers in an unrestricted way.

For example, a multiplier cannot be longer than 15 digits, and the
value in the multiplicand is limited by the physical size of the
multiplier - if you had a 7 byte (15 digit) multiplier, the
multiplicand has to start with seven bytes of zeros, limiting it to 17
digits. The rules are even more peculiar for decimal division (not
least because both the quotient and remainder have to end up fitting
in a single 16 byte result field).
Keith Thompson
2017-03-28 18:40:00 UTC
Permalink
Post by jacobnavia
Post by Scott Lurndal
Post by a***@gmail.com
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is
sint64_t and one is uint64_t
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
128 bit integers are not a gcc extension. Lcc-win has that too.
Then they're also an lcc-win extension.

BTW, glibc's <stdint.h> doesn't define int128_t or uint128_t. gcc
predefines, *on some systems*, types `__int128` and `unsigned __int128`.
There are no integer constants of those types, and intmax_t and
uintmax_t are still 64 bits (which I consider unfortunate).

(Apparently there are some ABI issues that prevented them from treating
them as integer types and adjusting [u]intmax_t accordingly.)
Post by jacobnavia
But the advantage of reinventing wheels is that you learn how wheels are
done and how they work...
Yes, whether it makes sense to reinvent the wheel depends very much on
what your actual goal is.
--
Keith Thompson (The_Other_Keith) kst-***@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Keith Thompson
2017-03-28 18:40:58 UTC
Permalink
jacobnavia <***@jacob.remcomp.fr> writes:
[snip]
Content-Type: text/plain; charset=windows-1252; format=flowed
--
Keith Thompson (The_Other_Keith) kst-***@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
jacobnavia
2017-03-28 18:48:58 UTC
Permalink
Post by Keith Thompson
[snip]
Content-Type: text/plain; charset=windows-1252; format=flowed
You can't get rid of windows. I posted that in a mac, running OS X. I
told thunderbird to put utf8, but somehow, windows keeps coming back and
I do not feel like trying to debug this stuff.

Windows, like many other nasty things, keeps coming back to you through
I do not know what backdoors.

I can't change that right now.
Keith Thompson
2017-03-28 19:05:10 UTC
Permalink
Post by jacobnavia
Post by Keith Thompson
[snip]
Content-Type: text/plain; charset=windows-1252; format=flowed
You can't get rid of windows. I posted that in a mac, running OS X. I
told thunderbird to put utf8, but somehow, windows keeps coming back and
I do not feel like trying to debug this stuff.
Windows, like many other nasty things, keeps coming back to you through
I do not know what backdoors.
I can't change that right now.
Understood. I've gotten pretty tired of dealing with this myself.
I've just implemented a crude workaround that should at least let
me deal with accented characters correctly. (Universal UTF-8 is
the worst possible solution to all this, except for all the others.)
--
Keith Thompson (The_Other_Keith) kst-***@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Richard Heathfield
2017-03-28 19:13:13 UTC
Permalink
On 28/03/17 20:05, Keith Thompson wrote:
<snip>
Post by Keith Thompson
I've just implemented a crude workaround that should at least let
me deal with accented characters correctly. (Universal UTF-8 is
the worst possible solution to all this, except for all the others.)
You seem to have succeeded in switching to UTF-8.

I've told Thunderbird in no uncertain terms to use UTF-8, but it's still
using windows-1252. Sometimes I just want to take Thunderbird and
Firefox out to the bonfire, because they are the worst newsreader and
browser imaginable (except for all the others).
--
Richard Heathfield
Email: rjh at cpax dot org dot uk
"Usenet is a strange place" - dmr 29 July 1999
Sig line 4 vacant - apply within
jacobnavia
2017-03-28 19:43:59 UTC
Permalink
Post by Richard Heathfield
<snip>
Post by Keith Thompson
I've just implemented a crude workaround that should at least let
me deal with accented characters correctly. (Universal UTF-8 is
the worst possible solution to all this, except for all the others.)
You seem to have succeeded in switching to UTF-8.
I've told Thunderbird in no uncertain terms to use UTF-8, but it's still
using windows-1252. Sometimes I just want to take Thunderbird and
Firefox out to the bonfire, because they are the worst newsreader and
browser imaginable (except for all the others).
But isn't the mail in /var/spool/mail somewhere?

My service provider is running linux, and there are some hookups in the
linux mail software to add a filter. Then you can parse the headers and
force utf8 or whatever...

But that is a lot of work and doesn't patch thunderbird. Is there
anywhere the source code of thunderbird?

if (isWindowsCharset(message)) {
SetUtf8Charset(message);
}

somewhere?
Noob
2017-03-31 19:38:05 UTC
Permalink
Post by jacobnavia
But that is a lot of work and doesn't patch thunderbird. Is there
anywhere the source code of thunderbird?
if (isWindowsCharset(message)) {
SetUtf8Charset(message);
}
somewhere?
Possible candidates in about:config

1) intl.fallbackCharsetList.ISO-8859-1
2) mailnews.force_charset_override
3) mailnews.reply_in_default_charset
4) mailnews.send_default_charset

comment for 1)
fallback charset list for Unicode conversion (converting from Unicode) currently used for mail send only to handle symbol characters ...

comment for 2)
ignore specified MIME encoding and use the default encoding for display

So best option seems to be #3 and #4
I have them set to ISO-8859-15
David Brown
2017-03-28 19:19:59 UTC
Permalink
Post by Scott Lurndal
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
Personally, I'd either use the GCC extension int128_t/uint128_t or
use the gnu multiprecision library rather than reinventing the wheel.
The gnu gmp library is massively over the top for a single 128 bit type
- it will probably be a good deal more work trying to figure out how to
use it than to make your own struct of two 64-bit ints.

But if you can use a 64-bit gcc, and thus use __int128, your work is
done. The same applies with Jacob's compiler, which I believe has 128
bit types.
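Where __int128 is available, the comparison the original post asks for collapses to a couple of lines. A minimal sketch, guarded by the predefined macro __SIZEOF_INT128__ so it only compiles on targets that support the extension (cmp128 is an illustrative name):

```c
#include <assert.h>

#ifdef __SIZEOF_INT128__
/* Returns 0 for a == b, 1 for a > b, -1 for a < b,
   exactly as the original post specifies. */
static int cmp128(__int128 a, __int128 b)
{
    return (a > b) - (a < b);
}
#endif
```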
Ben Bacarisse
2017-03-28 17:15:18 UTC
Permalink
Today a friend of mine asked this to me create a 128 bit type and you
are given two 64 bit int type. One is sint64_t and one is uint64_t
I said I can create it like this,
struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b return -1
<snip>
Anyone can think of any way?
One option: treat the numbers as two unsigned 64-bit ints and compare as
normal, but, on return, if the top bits of the two high words differ,
invert the result.
--
Ben.
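The suggestion above can be sketched in C, assuming a two's-complement layout with a signed high half and an unsigned low half (the struct and names are illustrative): compare the halves as unsigned words, then invert the answer when the sign bits of the two high words differ.

```c
#include <assert.h>
#include <stdint.h>

/* 128-bit two's-complement value: signed high half, unsigned low half. */
typedef struct { int64_t hi; uint64_t lo; } s128;

static int s128_cmp(s128 a, s128 b)
{
    uint64_t ah = (uint64_t)a.hi, bh = (uint64_t)b.hi;
    int r;
    if (ah != bh)
        r = (ah > bh) ? 1 : -1;          /* unsigned compare of high words */
    else
        r = (a.lo > b.lo) - (a.lo < b.lo);
    if ((ah ^ bh) >> 63)                 /* sign bits differ: order reverses */
        r = -r;
    return r;
}
```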
Barry Schwarz
2017-03-28 17:27:51 UTC
Permalink
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
I said I can create it like this,
struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b return -1
I could not think of how to do it, because I couldn't think how negative numbers would work. How will I do it?
a.sint = -21;
a.uint = 245;
In this case the resulting number will be a 128 bit number combined, but what the number will be is not easy to deduce. So I gave up thinking.
Anyone can think of any way?
Before deciding how to represent the value, let's agree on what the
value is. As a starting point, let's simplify the problem to
representing a 16-bit signed my_int given an 8-bit schar and uchar.

If schar is non-negative, it seems pretty straightforward that the
value represented is ARITHMETICALLY schar*2^8+uchar (those are not C
operators). So (0,5) is 5 while (1,7) is 263.

Now, what value does (-1,1) represent? And that value cannot depend
on bit patterns. It is the same value whether you are 2-complement,
1-complement, signed magnitude, roman numeral, etc. An even simpler
question: how do you represent -1 or -257?

Maybe using an schar and a uchar in the representation is not the
solution. How about

typedef struct{
bool sign;
uchar high;
uchar low;} my_int;

Now you can represent every arithmetic value from -(2^16-1) to
(2^16-1). And you get to decide if the structure with (1,0,0)
represents -0, -(2^16), or 2^16.

You can also establish the convention that given a negative schar and
any uchar, the actual value represented is -((|schar|-1)*2^8+uchar).
This answers the above questions: (-1,1) is -1; -5 is represented as
(-1,5); and -257 is represented as (-2,1). And (-1,0) represents
whatever convention you establish.

And since initializing a negative value using normal C declaration
syntax would be a pain, you create a function
my_int set_my_int(schar, uchar);
to compute the three values needed and use it in a declaration or
statement with
my_int x = set_my_int(-2,1);
or
y = set_my_int(1,7);
--
Remove del for email
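The sign-and-magnitude convention above can be sketched directly in C, using <stdint.h> fixed-width types for the post's schar/uchar; my_int_value is an illustrative helper added only to check the examples from the post.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sign-and-magnitude representation from the post. */
typedef struct {
    bool sign;       /* true = negative */
    uint8_t high;
    uint8_t low;
} my_int;

/* Build a my_int from an (schar, uchar) pair: a negative schar s with
   uchar u encodes the value -((|s| - 1) * 2^8 + u). */
static my_int set_my_int(int8_t s, uint8_t u)
{
    my_int r;
    if (s >= 0) {
        r.sign = false;
        r.high = (uint8_t)s;
        r.low  = u;
    } else {
        uint16_t mag = (uint16_t)(((uint16_t)(-s - 1) << 8) + u);
        r.sign = true;
        r.high = (uint8_t)(mag >> 8);
        r.low  = (uint8_t)mag;
    }
    return r;
}

/* Decode back to a plain integer for checking. */
static int32_t my_int_value(my_int v)
{
    int32_t mag = ((int32_t)v.high << 8) + v.low;
    return v.sign ? -mag : mag;
}
```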
Chris M. Thomasson
2017-03-28 21:16:39 UTC
Permalink
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
I said I can create it like this,
struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b return -1
I could not think of how to do it, because I couldn't think how negative numbers would work. How will I do it?
a.sint = -21;
a.uint = 245;
In this case the resulting number will be a 128 bit number combined, but what the number will be is not easy to deduce. So I gave up thinking.
Anyone can think of any way?
Fwiw, check this out:

https://groups.google.com/d/topic/comp.lang.asm.x86/FScbTaQEYLc/discussion

Atomic 63 bit counter! ;^)
Rick C. Hodgin
2017-03-29 00:38:55 UTC
Permalink
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
I said I can create it like this,
struct int128_t{
sint64_t sint;
uint64_t uint;
}a, b;
now he said to compare two such types in this function
a == b, return 0
a > b, return 1
a < b return -1
I could not think of how to do it, because I couldn't think how negative numbers would work. How will I do it?
a.sint = -21;
a.uint = 245;
In this case the resulting number will be a 128 bit number combined, but what the number will be is not easy to deduce. So I gave up thinking.
Anyone can think of any way?
There's a quad-double library that uses the FPU hardware, giving 256 bits
of precision, including a mantissa that exceeds 128 bits, meaning you
could use it for both floating point and integer operations.

Double-double, and Quad-double:
http://crd-legacy.lbl.gov/~dhbailey/mpdist/

Direct link:
http://crd.lbl.gov/~dhbailey/mpdist/qd-2.3.17.tar.gz

It's C++, but you can create a thin wrapper for it if you can't use
a C++ compiler and link in your C code.

Thank you,
Rick C. Hodgin
Thiago Adams
2017-03-30 12:19:29 UTC
Permalink
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
Once I did some functions to work with a variable number of bits.
It's C++ but can be easily adapted to C.

http://thradams.com/mathlib.htm
Malcolm McLean
2017-03-30 12:32:50 UTC
Permalink
Post by Thiago Adams
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
Once I did some functions to work with a variable number of bits.
It's C++ but can be easily adapted to C.
http://thradams.com/mathlib.htm
Creating new numerical types is one place where C++ really makes sense.

Code is very hard to read if addition is

s128_add(&result, a, b);

and the same is true for the other four arithmetical operation.
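For context, a minimal sketch of what such an `s128_add` might look like in C. The struct layout and names are assumptions (a signed high half and an unsigned low half), not taken from any of the posts:

```c
#include <stdint.h>

/* Hypothetical 128-bit type: signed high half, unsigned low half. */
typedef struct {
    int64_t  high;
    uint64_t low;
} s128;

/* *result = a + b, with the carry propagated out of the low half. */
void s128_add(s128 *result, s128 a, s128 b)
{
    result->low  = a.low + b.low;          /* unsigned add may wrap around  */
    result->high = a.high + b.high
                 + (result->low < a.low);  /* wrap implies a carry of 1     */
}
```

The wraparound test `result->low < a.low` is the standard way to detect an unsigned carry in portable C; it is exactly this kind of plumbing that operator overloading hides in C++.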
Thiago Adams
2017-03-30 12:40:58 UTC
Permalink
Post by Malcolm McLean
Post by Thiago Adams
Post by a***@gmail.com
Hi all,
Today a friend of mine asked this to me
create a 128 bit type and you are given two 64 bit int type. One is sint64_t and one is uint64_t
Once I did some functions to work with variable number of bits.
It's is C++ but can be easily adapted to C.
http://thradams.com/mathlib.htm
Creating new numerical types is one place where C++ really makes sense.
Code is very hard to read if addition is
s128_add(&result, a, b);
and the same is true for the other arithmetic operations.
C++ helped me test the same template code with different types and bases.

To convert it to C, it's necessary to create "instances" with names like add_base10_int, or something like that.
Maybe the selection could also be done using _Generic.

I would not use C++ to overload operators +, -, etc.,
because that could be added later (in other code) if someone needs it.
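The _Generic selection mentioned above could look roughly like this. The routine names (`add_i32`, `add_i64`) are hypothetical stand-ins for the per-type "instances"; the point is only the C11 dispatch mechanism:

```c
#include <stdint.h>

/* Two hypothetical fixed-width addition "instances". */
static int32_t add_i32(int32_t a, int32_t b) { return a + b; }
static int64_t add_i64(int64_t a, int64_t b) { return a + b; }

/* C11 _Generic picks the right instance from the first operand's type,
   giving overload-like call syntax without C++. */
#define add(a, b) _Generic((a),       \
        int32_t: add_i32,             \
        int64_t: add_i64)(a, b)
```

Unlike C++ overloading, the dispatch happens at preprocessing plus translation time on the static type of the operand, so there is no runtime cost and no name mangling.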