Discussion:
Interesting time_t debacle
seasoned_geek
2017-02-12 00:53:00 UTC
All,

While working on a new book today I tried the following test program on Linux Lite 3.2 64-bit.

// max_time_test.cpp
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <stdio.h>

int main()
{
time_t maxTime = 0;
std::cout << "maxTime is " << maxTime << std::endl;
printf( "maxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);

maxTime = std::numeric_limits<time_t>::max();
std::cout << "maxTime is " << maxTime << std::endl;
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);

return 0;
}

***@roland-HP-Compaq-8100-Elite-SFF-PC:~/tst$ g++ max_time_test.cpp
***@roland-HP-Compaq-8100-Elite-SFF-PC:~/tst$ ./max_time_test
maxTime is 0
maxTime Wed Dec 31 18:00:00 1969

asctime() Thu Jan 1 00:00:00 1970

Year: 1970
maxTime is 9223372036854775807

maxTime (null)
asctime() (null)
Segmentation fault

Just for grins I fired up my trusty DS-10 running a hobbyist license and OpenVMS 8.3 (yeppers, I'm a loooong way from current on that box). I had to tweak the code slightly because the standards have moved on over the last decade.

$ type max_time_test.cpp
// max_time_test.cpp
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <stdio.h>

int main()
{
time_t maxTime = 0;
cout << "maxTime is " << maxTime << endl;
printf( "maxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900)
;

maxTime = std::numeric_limits<time_t>::max();
cout << "maxTime is " << maxTime << endl;
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900)
;

return 0;
}
$
$ cxx max_time_test.cpp
$ cxxlink max_time_test
$ run max_time_test
maxTime is 0
maxTime Wed Dec 31 18:00:00 1969

asctime() Thu Jan 1 00:00:00 1970

Year: 1970
maxTime is 4294967295

maxTime Sun Feb 7 00:28:15 2106

asctime() Sun Feb 7 06:28:15 2106

Year: 2106
$

10 years out of date and OpenVMS is still well ahead of that GNU stuff!

Since Linux Lite is a YABU (Yet Another uBUntu), this "feature" is lurking in a rash of Linux distros shipping under different names. It also means that many embedded systems being built and shipped today have this debacle lying in wait. If they have data entry screens using std::numeric_limits<time_t>::max() to obtain a value for date validation, it will first show up there. I didn't run detailed tests to see if the current GNU stuff fails one second after 3:14:07 AM (GMT) on January 19, 2038, but seriously, these toys aren't ready for prime time.
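
To make the worry concrete, here is a hypothetical sketch of the kind of date-validation helper described above (the function name and surrounding context are invented for illustration); on a glibc system with a 64-bit time_t the gmtime() call returns NULL, and without the check the code crashes:

// hypothetical sketch of the failure mode, not from any real product
#include <cstdio>
#include <ctime>
#include <limits>

void show_latest_supported_date()
{
    time_t latest = std::numeric_limits<time_t>::max();
    struct tm *parts = gmtime(&latest); // NULL on glibc when time_t is 64 bits
    if (parts == NULL) {
        std::puts("upper bound is not representable as a calendar date");
        return;
    }
    std::printf("dates accepted through year %d\n", parts->tm_year + 1900);
}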

Personally, I wonder just how long this time bomb has been in place. I also wonder just how many embedded control systems in factories, nuke plants, mills, and other has-to-always-work places have this bomb ticking away. Given how long nuke plants tend to stay in operation, I fret much over such things.

http://news.nationalgeographic.com/news/energy/2011/07/pictures/110720-10-oldest-nuclear-plants-in-the-us/

Was kind of shocked to learn about Oyster Creek, built in 1969 and still in operation. I _thought_ everything prior to 1980 had been decommissioned.

At any rate, passing this tidbit along to you all in case you are using GNU somewhere.
Arne Vajhøj
2017-02-12 03:08:59 UTC
Post by seasoned_geek
While working on a new book today I tried the following test program on Linux Lite 3.2 64-bit.
// max_time_test.cpp
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <stdio.h>
int main()
{
time_t maxTime = 0;
std::cout << "maxTime is " << maxTime << std::endl;
printf( "maxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);
maxTime = std::numeric_limits<time_t>::max();
std::cout << "maxTime is " << maxTime << std::endl;
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);
return 0;
}
maxTime is 0
maxTime Wed Dec 31 18:00:00 1969
asctime() Thu Jan 1 00:00:00 1970
Year: 1970
maxTime is 9223372036854775807
maxTime (null)
asctime() (null)
Segmentation fault
Since Linux Lite is a YABU (Yet Another uBUntu) this "feature" is in
a rash of Linux distros having different names. It also means that
many embedded systems being built and shipped today have this debacle
lying in wait. If they have data entry screens using
std::numeric_limits<time_t>::max(); to obtain a value for date
validation it will first show up there. I didn't run detailed tests
to see if the current GNU stuff fails one second after 3:14:08 AM
(GMT) on January 19, 2038, but seriously, these toys aren't ready for
prime time.
Personally I wonder just how long this time bomb has been in place? I
also wonder just how many embedded control systems in factories, nuke
plants, mills and other has-to-always-work places have this bomb
ticking away. Given how long nuke plants tend to be in operation, I
fret much over such things.
There is nothing wrong with your Linux system.

But C and C++ can be tricky languages to use correctly.

Your code has two serious bugs.

1) numeric_limits is not specialized for time_t, so
std::numeric_limits<time_t>::max() just returns the
maximum value of the underlying integer type. Nowhere in C or
POSIX is it stated that all values of the underlying
type will be valid times. So things can go wrong.

2) ctime and gmtime are documented to return NULL if they
get an invalid value. The code does not test for that
return value, so it does not cope when things
go wrong.

printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));

could have been coded as:

/* note: errno below needs #include <errno.h> */
if(ctime(&maxTime) != NULL)
printf( "\nmaxTime %s\n", ctime(&maxTime));
else
printf( "ctime can not handle value\n");
if(gmtime(&maxTime) != NULL)
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
else
printf( "gmtime can not handle value: %s\n", strerror(errno));

In which case you would have gotten a hint.

ctime can not handle value
gmtime can not handle value: Value too large for defined data type

Since the standard does not specify which values will be valid,
the only safe solution is to have the C code test for it.
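
For example, a reentrant variant of that test could look like this (a sketch only; gmtime_r is POSIX, not ISO C, and the helper name is invented):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static void print_utc(time_t t)
{
    struct tm parts;
    if (gmtime_r(&t, &parts) == NULL) {
        /* glibc sets errno to EOVERFLOW for out-of-range values */
        fprintf(stderr, "gmtime_r: %s\n", strerror(errno));
        return;
    }
    char buf[64];
    if (strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", &parts) > 0)
        puts(buf);
}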

If one is willing to use implementation-specific features, then
one needs to RTFM.

I have never heard of a 32 bit time_t not being valid for the full range
of the underlying type.

It is obvious that a 64 bit time_t cannot be valid for the full range
of the underlying type if int is 32 bit.

Microsoft has documented that their 64 bit time_t is only
valid until year 3000.

glibc/Linux does not seem to document where the cutoff is.

If VMS goes to a 64 bit time_t and keeps int 32 bit, then they will
need to do a cutoff. And I hope they document it.

Regarding nuclear power plants, I hope that they do not
code in C *and* make unfounded assumptions about valid ranges
*and* fail to properly check return values. But I don't know, so
I can only hope.

Arne
Arne Vajhøj
2017-02-13 00:13:17 UTC
Post by Arne Vajhøj
2) ctime and gmtime are documented to return NULL if they
get an invalid value. The code does not test for that
return value and handle it. So the code does not
handle it when it goes wrong.
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
if(ctime(&maxTime) != NULL)
printf( "\nmaxTime %s\n", ctime(&maxTime));
else
printf( "ctime can not handle value\n");
if(gmtime(&maxTime) != NULL)
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
else
printf( "gmtime can not handle value: %s\n", strerror(errno));
In which case you would have gotten a hint.
ctime can not handle value
gmtime can not handle value: Value too large for defined data type
Since the standard does not specify what values will be valid, then
the only safe solution is to have the C code test for it.
And this is really something any VMS guy who has programmed
on VMS versions older than 6.0 should know.

Arne
Stephen Hoffman
2017-02-12 03:18:20 UTC
Post by seasoned_geek
While working on a new book today I tried the following test program on
Linux Lite 3.2 64-bit.
gmtime() returns null on error, nulls don't make for very good
pointers, and null pointers tend to crash C programs; there's no error
checking in that C++ code.

As for the null, what's in errno after the calls?

In this case, I'd wonder if the trigger for the null is the hours added
to get to UTC blowing out a limit. Try setting everything to UTC
for local time, and see if that works better.

Here's what looks to be a related (Python and C) discussion:
http://stackoverflow.com/questions/32045725/get-the-highest-possible-gmtime-for-any-architecture
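
A sketch of the empirical approach taken in that thread: binary-search for the largest value gmtime() accepts (assuming, as on glibc, that gmtime() fails cleanly by returning NULL instead of crashing):

#include <stdio.h>
#include <time.h>
#include <limits>

int main()
{
    time_t lo = 0;                                   // known good
    time_t hi = std::numeric_limits<time_t>::max();  // known bad on glibc
    while (lo < hi) {
        time_t mid = lo + (hi - lo) / 2 + 1;  // bias up so the loop terminates
        if (gmtime(&mid) != NULL)
            lo = mid;        // mid still converts
        else
            hi = mid - 1;    // mid is past the cutoff
    }
    printf("largest convertible tm_year: %d (years since 1900)\n",
           gmtime(&lo)->tm_year);
    return 0;
}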
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-02-12 13:35:14 UTC
Post by Stephen Hoffman
Post by seasoned_geek
While working on a new book today I tried the following test program
on Linux Lite 3.2 64-bit.
gmtime() returns null on error, and nulls don't make for very good
pointers and null pointers tend to crash C programs; there's no error
checking in that C++ code.
As for the null, what's in errno after the calls?
Yes.
Post by Stephen Hoffman
In this case, I'd wonder if the trigger for the null is adding hours to
get to UTC that's blowing out a limit. Try setting everything to UTC
for local time, and see if that works better.
It is impossible to get the max value of a 64 bit time_t working on
a system with a 32 bit int.

Arne
Craig A. Berry
2017-02-12 14:32:23 UTC
Permalink
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
Arne Vajhøj
2017-02-12 16:27:28 UTC
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
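
For reference, the field list the C standard requires (the declaration below paraphrases the standard's description; the range comments are mine):

struct tm {
    int tm_sec;    /* seconds after the minute [0,60] */
    int tm_min;    /* minutes after the hour [0,59] */
    int tm_hour;   /* hours since midnight [0,23] */
    int tm_mday;   /* day of the month [1,31] */
    int tm_mon;    /* months since January [0,11] */
    int tm_year;   /* years since 1900 -- a plain int */
    int tm_wday;   /* days since Sunday [0,6] */
    int tm_yday;   /* days since January 1 [0,365] */
    int tm_isdst;  /* Daylight Saving Time flag */
};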

Arne
seasoned_geek
2017-02-12 17:11:55 UTC
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
Arne
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold.

Good discussion about that here:

http://stackoverflow.com/questions/589575/what-does-the-c-standard-state-the-size-of-int-long-type-to-be#589684
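
Those minimums can even be checked at compile time (an illustration only; needs -std=c++11 or later for static_assert, with the values coming from <climits>):

#include <climits>

static_assert(INT_MAX >= 32767, "int must span at least +/-32767");
static_assert(LONG_MAX >= 2147483647L, "long must span at least 32 bits' worth");
static_assert(LLONG_MAX >= 9223372036854775807LL, "long long spans at least 64 bits");

int main() { return 0; }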

As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed, vast amounts of code were broken. People grumbled, things broke, some software disappeared and other software was written.

We are now once again at the cusp of a change to the default integer size. It is either that or making serious standards changes to things which deal with time. By moving time_t to a 64-bit value without changing the supporting library routines and structures to accommodate it, GNU has introduced a serious bug. Process control systems which use any of the supporting routines to check for the start of scheduled processes, like the closing/opening of a water valve, will fail.
Arne Vajhøj
2017-02-12 17:22:49 UTC
Post by seasoned_geek
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
The C++ standard does not specify the size of integral types in
bytes, but it specifies minimum ranges they must be able to hold.
Absolutely true and also absolutely irrelevant.

The point was for systems where time_t happened to be 64 bit
and int happened to be 32 bit.
Post by seasoned_geek
We are now once again at the cusp of a change to the default integer size.
Maybe.
Post by seasoned_geek
It is either that or making serious standards changes to things which deal
with time. By moving time_t to a 64-bit value without changing the
supporting library routines and structures to accommodate it, GNU has
introduced a serious bug.

No.

Their code follows the standards.

It only breaks bad code.

Arne
seasoned_geek
2017-02-12 18:30:04 UTC
Post by Arne Vajhøj
Post by seasoned_geek
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
The C++ standard does not specify the size of integral types in
bytes,but it specifies minimum ranges they must be able to hold.
Absolutely true and also absolutely irrelevant.
Absolutely true and completely relevant.
Post by Arne Vajhøj
The point was for systems where time_t happened to be 64 bit
and int happened to be 32 bit.
No, that was the spin you tried to apply so it fit in your universe. The spin, however, had no relevance to reality.

When architecting solutions one cannot use the Agile method of looking 6 inches past the end of one's shoes, hacking something on the fly, and claiming the story complete. This is one of the many many many reasons Agile is a completely fraudulent methodology.

One can even find Ubuntu discussions claiming they have the 2038 bug "fixed" in Ubuntu.
http://askubuntu.com/questions/299475/will-the-linux-clock-fail-at-january-19-2038-31408

In true Agile fashion a little hack changed the datatype of time_t without changing _any_ of the standard library routines which operate on it to fully support its range. It's a bug.
Post by Arne Vajhøj
No.
Their code follows the standards.
It only breaks bad code.
No, their code IS bad code.
Arne Vajhøj
2017-02-12 19:20:54 UTC
Post by seasoned_geek
Post by Arne Vajhøj
The point was for systems where time_t happened to be 64 bit
and int happened to be 32 bit.
No, that was the spin you tried to apply so it fit in your universe.
The spin, however, had no relevance to reality.
It explains why your code produced the result it did.
Post by seasoned_geek
When architecting solutions one cannot use the Agile method of
looking 6 inches past the end of one's shoes, hacking something on the fly,
and claiming the story complete. This is one of the many many many reasons
Agile is a completely fraudulent methodology.
I don't think agile vs non-agile matters much for the topic.

I don't think you understand agile very well either.

"the Agile method of looking 6 inches past the end of ones shoes,
hacking something on the fly"

does not in any way match the agile manifesto principle #9:

"Continuous attention to technical excellence and good design enhances
agility."
Post by seasoned_geek
One can even find Ubuntu discussions claiming they have the 2038 bug "fixed" in Ubuntu.
http://askubuntu.com/questions/299475/will-the-linux-clock-fail-at-january-19-2038-31408
Sure.
Post by seasoned_geek
In true Agile fashion a little hack changed the datatype of time_t
without changing _any_ of the standard library routines which operate
on it to fully support its range. It's a bug.
No.

The standards do not require that.

Neither C nor POSIX/SUS.

Arne
seasoned_geek
2017-02-12 22:15:49 UTC
Post by Arne Vajhøj
Post by seasoned_geek
Post by Arne Vajhøj
The point was for systems where time_t happened to be 64 bit
and int happened to be 32 bit.
No, that was the spin you tried to apply so it fit in your universe.
The spin, however, had no relevance to reality.
It explains why your code produced the result it did.
No Arne. You tried to justify a HACK which slipped out into the wild without thought to the cascading effect.
Post by Arne Vajhøj
Post by seasoned_geek
When architecting solutions one cannot use the Agile method of
looking
6 inches past the end of ones shoes, hacking something on the fly and
claiming the story complete. This is one of the many many many reasons
Agile is a completely fraudulent methodology.
I don't think agile vs non-agile matters much for the topic.
It is completely relevant to the discussion because Agile is _exactly_ how HACKs like this make it into the wild. Looking 6" past the end of one's shoes while hacking on the fly to get your story done in the sprint is _exactly_ what happens in Agile shops. Oh yes, continuous integration with Jenkins, with automated test cases written by the programmer, and at no place along the line was an architect involved in the process.

Agile = the accounting fraud method paving the fastest way to failure.


It's a bug introduced by an Agile process making Y2038 a story to get "done" in a single sprint and now we have train wrecks coming down the mountain.
Arne Vajhøj
2017-02-13 00:07:21 UTC
Post by seasoned_geek
Post by Arne Vajhøj
Post by seasoned_geek
When architecting solutions one cannot use the Agile method of
looking 6 inches past the end of ones shoes, hacking something on
the fly and claiming the story complete. This is one of the many
many many reasons Agile is a completely fraudulent methodology.
I don't think agile vs non-agile matters much for the topic.
It is completely relevant to the discussion because Agile is
_exactly_ how HACKs like this make it into the wild. Looking 6" past
the end of one's shoes while hacking on the fly to get your story
done in the sprint is _exactly_ what happens in Agile shops.
Not if they are following agile principle #9.

And I don't think quick and dirty hacks were invented with agile. It
has happened with all methodologies.
Post by seasoned_geek
It's a bug introduced by an Agile process making Y2038 a story to get
"done" in a single sprint and now we have train wrecks coming down
the mountain.
In a few billion years.

Unless it get fixed before that.

Arne
seasoned_geek
2017-02-13 12:48:25 UTC
Post by Arne Vajhøj
And I don't think quick and dirty hacks were invented with agile. It
has happened with all methodologies.
They were institutionalized with Agile stories.
Johnny Billquist
2017-02-12 20:20:18 UTC
Post by seasoned_geek
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get a the max value of a 64 bit time_t working on
a systems with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
Arne
The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold.
http://stackoverflow.com/questions/589575/what-does-the-c-standard-state-the-size-of-int-long-type-to-be#589684
Yes. An int is supposedly whatever size makes good use of the native
instruction set. And Arne said that a system with a 32 bit int would not
be able to represent the year given by the max value of a 64-bit time_t.
Now, if you re-read this all, it should be clear that that statement is
correct, and remains so even though, on different architectures and with
different compilers, an int might be of different sizes.
Post by seasoned_geek
As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed vast amounts of code was broken. People grumbled, things broke, some software disappeared and other software was written.
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Post by seasoned_geek
We are now once again at the cusp of a change to the default integer size. It is either that or making serious standards changes to things which deal with time. By moving time_t to a 64-bit value without changing the supporting library routines and structures to accommodate it GNU has introduced a serious bug. Process control systems which use any of the supporting routines to check for the start of scheduled processes, like closing/opening of a water valve, will fail.
There shouldn't really be any problems with having time_t be whatever
size. The problem occurs because people have not actually been using
time_t, but a long, since they "know" that time_t was a long. When
time_t changes, their code breaks.
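
The anti-pattern in miniature (a sketch; the first assignment compiles without complaint only while time_t happens to fit in a long):

#include <stdio.h>
#include <time.h>

int main()
{
    long t1 = time(NULL);    /* "works" only while time_t fits in long */
    time_t t2 = time(NULL);  /* survives a change of the underlying type */
    printf("%ld %lld\n", t1, (long long)t2);
    return 0;
}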

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
seasoned_geek
2017-02-12 23:33:59 UTC
Post by Johnny Billquist
Post by seasoned_geek
As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed vast amounts of code was broken. People grumbled, things broke, some software disappeared and other software was written.
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Ahhh, you don't remember 8-bit hardware. Late CP/M, early DOS, and in the embedded world, the Z80: https://en.wikipedia.org/wiki/Zilog_Z80

The rule of thumb was that int was the same size as a register, before we moved into a world where registers could be many times larger than the processor could consume in one gulp.
Post by Johnny Billquist
Post by seasoned_geek
We are now once again at the cusp of a change to the default integer size. It is either that or making serious standards changes to things which deal with time. By moving time_t to a 64-bit value without changing the supporting library routines and structures to accommodate it GNU has introduced a serious bug. Process control systems which use any of the supporting routines to check for the start of scheduled processes, like closing/opening of a water valve, will fail.
There shouldn't really be any problems with having time_t be whatever
size. The problem occurs because people have not actually been using
time_t, but a long, since they "know" that time_t was a long. When
time_t change, their code breaks.
I will agree there "shouldn't" be any problem. The problem is Agile and stories which have to be completed within a sprint and don't involve an architect.

This HACK could have been done correctly. They could have changed time_t to 64-bit, THEN filled out a struct tm with the maximum possible value for each component, run it through mktime() to generate the new 64-bit time_t, then cut & pasted THAT value into numeric_limits with a big comment that until the default int size or the standard definition of struct tm changes, the value could get no larger.
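
Presumably something along these lines (a sketch of the proposed computation, not anything glibc actually does; tm_year counts years since 1900, so the sketch backs off by 1900 to avoid overflow when the library adds 1900 back):

#include <limits.h>
#include <stdio.h>
#include <time.h>

int main()
{
    struct tm biggest = {0};
    biggest.tm_year = INT_MAX - 1900;  /* stay clear of tm_year+1900 overflow */
    biggest.tm_mon  = 11;              /* December */
    biggest.tm_mday = 31;
    biggest.tm_hour = 23;
    biggest.tm_min  = 59;
    biggest.tm_sec  = 59;

    time_t ceiling = mktime(&biggest); /* (time_t)-1 when unrepresentable */
    if (ceiling == (time_t)-1)
        puts("struct tm's own maximum does not fit in time_t either");
    else
        printf("calendar ceiling: %lld seconds since the epoch\n",
               (long long)ceiling);
    return 0;
}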

Instead they chose to do a blatant hack: switch the data type and call it done.

Well, wrong is a form of done I guess.
Arne Vajhøj
2017-02-12 23:53:43 UTC
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
As architectures change so does the size of int. Are you old
enough to have worked with DOS C compilers when 20MEG hard drives
ruled the land? int and char were the same size. So much so that
vast mountains of code used them interchangeably. When the
standard changed vast amounts of code was broken. People
grumbled, things broke, some software disappeared and other
software was written.
I don't think it was ever equivalent to a char, but it was the same
size as a short, which is perfectly consistent with the standard.
But a lot of programs broke back in the day when you started
compiling code written for 16-bit-oriented machines on 32-bit
machines... It was way more obvious on Unix, moving from the PDP-11
to a VAX than ever on DOS. :-)
Ahhh, you don't remember 8-bit hardware. Late CP/M early DOS and in
the embedded world, the Z80 https://en.wikipedia.org/wiki/Zilog_Z80
Rule of thumb was int was the same size as a register before we moved
into a world where registers could be many times larger than the
processor could consume in one gulp.
You are aware that the link you provided does talk about 16 bit
registers????

Arne
Johnny Billquist
2017-02-13 10:15:45 UTC
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed vast amounts of code was broken. People grumbled, things broke, some software disappeared and other software was written.
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Ahhh, you don't remember 8-bit hardware. Late CP/M early DOS and in the embedded world, the Z80 https://en.wikipedia.org/wiki/Zilog_Z80
"Remember"? I still get paid for programming the Z00.
And it actually have 16-bit registers.
Post by seasoned_geek
Rule of thumb was int was the same size as a register before we moved into a world where registers could be many times larger than the processor could consume in one gulp.
And I'm not sure how well you actually know processors like the Z80... :-)
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
We are now once again at the cusp of a change to the default integer size. It is either that or making serious standards changes to things which deal with time. By moving time_t to a 64-bit value without changing the supporting library routines and structures to accommodate it GNU has introduced a serious bug. Process control systems which use any of the supporting routines to check for the start of scheduled processes, like closing/opening of a water valve, will fail.
There shouldn't really be any problems with having time_t be whatever
size. The problem occurs because people have not actually been using
time_t, but a long, since they "know" that time_t was a long. When
time_t change, their code breaks.
I will agree there "shouldn't" be any problem. The problem is Agile and stories which have to be completed within a sprint and don't involve an architect.
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point. The
time_t thing was worked out over quite some time, and the solution is
pretty much a no brainer. You just throw an unreasonable number at it,
and then complain when it didn't produce a result, and your code didn't
have any error checking.

Fess up, and move on.
Post by seasoned_geek
This HACK could have been done correctly. The could have changed time_t to 64-bit THEN filled out a struct tm with the maximum possible value for each component ran it through mktime() to generate the new 64-bit time_t then cut & pasted THAT value into numeric_limits with a big comment that until the default int size or the standard definition of struct tm changes the value could get no larger.
Instead they choose to do a blatant hack, switch the data type and call it done.
Well, wrong is a form of done I guess.
Now wait. You are the one who takes a time_t, and then assigns a maxint
to it. Already there you are making an error. maxint can be very
different on different machines, and there is nothing that guarantees
that mktime should work on any input. Try assigning the *same* values to
your time, and run it through mktime instead. You'll find that it works
fine both on VMS and any other system you try it on, up to the point
where VMS fails and the other systems keep working.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Simon Clubley
2017-02-13 14:32:45 UTC
Post by Johnny Billquist
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed vast amounts of code was broken. People grumbled, things broke, some software disappeared and other software was written.
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Ahhh, you don't remember 8-bit hardware. Late CP/M early DOS and in the embedded world, the Z80 https://en.wikipedia.org/wiki/Zilog_Z80
"Remember"? I still get paid for programming the Z00.
And it actually have 16-bit registers.
Are some of the registers sub-dividable into 8-bit registers?

For example, on x86 you originally had (for example) the AX register
which is 16 bits, but it can be divided into 2 8-bit registers (AL
and AH). Likewise, when 32-bits came along, you got EAX, but still
kept AX.

It's been a very long time since I looked at the Z80, but I thought
some of the 16-bit registers could be divided into 8-bit registers.

I wonder if the OP meant all registers on the Z80 were only 8-bit
(which is not the case) or whether some can be addressed as 8-bit
registers.

I'll let you decide if that means the Z80 had 8 or 16 bit registers. :-)

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
seasoned_geek
2017-02-13 21:28:09 UTC
Post by Simon Clubley
Post by Johnny Billquist
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Are some of the registers sub-dividable into 8-bit registers ?
For example, on x86 you originally had (for example) the AX register
which is 16 bits, but it can be divided into 2 8-bit registers (AL
and AH). Likewise, when 32-bits came along, you got EAX, but still
kept AX.
It's been a very long time since I looked at the Z80, but I thought
some of the 16-bit registers could be divided into 8-bit registers.
It was sooooo long ago. I do remember one or more C compilers where int and char were both the same size, but that was in the K&R days before any actual standards. Not that I would write any to try it, but does K&R-style code even compile anymore? I seem to remember it officially going away, but that may just have been every client simply banning it.

Under early DOS we also had
Compact Model near code and near data
Medium Model near code and far data
Large Model far code and far data
Johnny Billquist
2017-02-13 23:19:40 UTC
Post by seasoned_geek
Post by Simon Clubley
Post by Johnny Billquist
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Are some of the registers sub-dividable into 8-bit registers ?
For example, on x86 you originally had (for example) the AX register
which is 16 bits, but it can be divided into 2 8-bit registers (AL
and AH). Likewise, when 32-bits came along, you got EAX, but still
kept AX.
It's been a very long time since I looked at the Z80, but I thought
some of the 16-bit registers could be divided into 8-bit registers.
It was sooooo long ago. I do remember one or more C compilers where int and char were both the same size but that was the K&R days before any actual standards. Not that I would write any to try it, but does K&R style code even compile anymore? I seem to remember it officially going away but that may just have been every client simply banning it.
I think at least some compilers, like gcc, can still be made to accept
K&R style code. But you might need to disable some options on the
command line.
Post by seasoned_geek
Under early DOS we also had
Compact Model near code and near data
Medium Model near code and far data
Large Model far code and far data
Ah, yes. That ugly/weird thing about how pointers were represented in
relationship to segmentation.

I'll fess up right away and say that I never programmed much under DOS. :-)
(And for a good reason... :-) )

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
seasoned_geek
2017-02-16 14:12:31 UTC
Post by Johnny Billquist
Ah, yes. That ugly/weird thing about how pointers were represented in
relationship to segmentation.
I'll fess up right away and say that I never programmed much under DOS. :-)
(And for a good reason... :-) )
segment:offset

Note: I'm tired so didn't check if my hex values fit in 8 bits.

For one of the memory models on one of the machines/cpus (perhaps more) you used a single register. The low 8 bits was segment and the high offset (or vice versa). This led to the lovely series of hardware-specific pointer bugs which randomly happened based on where your executable started in memory. Many compilers of the day only compared the offset when comparing pointers. Been a long time, but segments weren't required to be linear, which is what led to the technique. What I mean by that is segment 0x0078 was not required to be physically next to segment 0x0079. Became even more true when EMS and XMS came along.

The concept of segment:offset wasn't too horrible. It "should" have made virtual memory a cake walk to implement. I haven't worked in VAX assembly in eons, but ultimately we locate data based on a virtual page and offset/location within the page.

Pointer math was really broken in C under DOS. Because many compilers didn't compare the segment when comparing pointers, many pointer math operations only operated on the offset. This meant that

char *tmp;

could be initialized to the beginning of a string/buffer which was near the end of the offset range, say 0xF8, and after a few increments it would wrap back to 00 but remain in the same segment. There were many tricks, but if you needed more than 64K in one data chunk, you had to span segments which did not have to be next to each other. This is why people were told not to use pointers to sequentially process a buffer but instead use an array subscript. The compiler/run-time knew where it put your data, but your code couldn't really. It could know where it _started_ but ...
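
For anyone who never had the pleasure, the real-mode 8086 forms an address as segment*16 + offset, which is why two different seg:off pairs can name the same byte (a portable-C sketch of just the arithmetic; the old far-pointer keywords were compiler-specific):

#include <stdio.h>

/* linear address = segment * 16 + offset */
static unsigned long linear(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;
}

int main(void)
{
    printf("%05lx\n", linear(0x0078, 0x0010)); /* 00790 */
    printf("%05lx\n", linear(0x0079, 0x0000)); /* 00790, the same byte */
    /* a compiler comparing only the offsets calls these two unequal */
    return 0;
}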

Hmmm... according to this AMD dropped segmentation from the architecture when it created the 64-bit CPU.

https://en.wikipedia.org/wiki/X86_memory_segmentation

I kind of wonder just how true that is since the processor can also run 32-bit OS and code. That would still be using segment:offset of some kind.

The x86 was always a poor choice. The only time you will ever hear me compliment that non-tech company Apple is right now. Their initial choice to use a Motorola chip with true linear memory was correct. Too bad someone at IBM got a fart crossways and believed IBM couldn't be seen as "following" Apple into the PC market, so they went with the x86. Of course then they compounded insult with injury by putting the add-in card address space above the 640K line instead of starting at 0K and just skipping a certain amount. Addressing memory above 1MB forced everyone to skip a hole.

Rather apt you brought up the memory addressing since IBM justified creating the memory hole above 640K with "That's more memory than our current mainframe has. No one would be able to afford to put that much in a PC."

Much the same mentality going on with the time_t hack.
-----
http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog
http://lesedi.us
Johnny Billquist
2017-02-16 21:12:44 UTC
Post by seasoned_geek
Post by Johnny Billquist
Ah, yes. That ugly/weird thing about how pointers were represented in
relationship to segmentation.
I'll fess up right away and say that I never programmed much under DOS. :-)
(And for a good reason... :-) )
segment:offset
Note: I'm tired so didn't check if my hex values fit in 8 bits.
For one of the memory models on one of the machines/cpus (perhaps more) you used a single register. The low 8 bits was segment and high offset (or vice versa). This lead/leads to the lovely series of hardware specific pointer bugs which randomly happened based on where your executable started in memory. Many compilers of the day only compared the offset when comparing pointers. Been a long time, but segments weren't required to be linear which is what lead to the technique. What I mean by that is 0x0078 was not required to be physically next to segment 0x0079. Became even more true when EMS and XMS came along.
Like I said. I have avoided that world, intentionally, and for good
reasons...
Post by seasoned_geek
The concept of segment:offset wasn't too horrible. It "should" have made virtual memory a cake walk to implement. I haven't worked in VAX assembly in eons, but ultimately we locate data based on a virtual page and offset/location within the page.
It does not make it easy. It's a rather bad model, which is why it's
fallen out of popularity. :-)

Pages do not have a separate register telling where the page starts. Big
difference from a segment register. :-)
A page, and an offset into the page, are a simple split of the full
address, and come very naturally. (Well, on the PDP-11 it's a bit more
convoluted than on the VAX, but still sane...)
Post by seasoned_geek
Pointer math was really broken in C under DOS. Because many compilers didn't compare the offset when comparing pointers many pointer math operations only operated on the offset. This meant that
Pointer math in general becomes hard when you have segments and offsets,
since you actually need to calculate the actual address before you
really can do the pointer math. And of course, you might then need to
reverse-calculate things in order to address your data again, which then
depends on what your segment register actually happens to contain.

Just thinking about this makes me queasy...
Post by seasoned_geek
char *tmp;
could be initialized the the beginning of a string/buffer which was near the end of the offset, say 0xF8 and after a few increments it would wrap back to 00 but remain in the same segment. There were many tricks, but if you needed more than 64K in one data chunk, you had to span segments which did not have to be next to each other. This is why people were told not to use pointers to sequentially process a buffer but instead use an array subscript. The compiler/run-time knew were it put your data, but your code couldn't really. It could know where it _started_ but ...
Like I said. Not enjoyable...
Post by seasoned_geek
Hmmm... according to this AMD dropped segmentation from the architecture when it created the 64-bit CPU.
https://en.wikipedia.org/wiki/X86_memory_segmentation
I kind of wonder just how true that is since the processor can also run 32-bit OS and code. That would still be using segment:offset of some kind.
Unless I'm confused, it only refers to when in 64-bit mode.
Post by seasoned_geek
The x86 was always a poor choice. The only time you will ever here me compliment that non-tech company Apple is right now. There initial choice to use a Motorola chip with true linear memory was correct. Too bad someone at IBM got a fart crossways and believed IBM couldn't be seen as "following" Apple into the PC market so they went with the x86. Of course then then compounded insult with injury by putting the add in card address space above the 640K line instead of starting at 0K and just skipping a certain amount. Addressing memory above 1MB forced everyone to skip a hole.
At the time IBM chose the x86, Apple was still using the 6502... Not
sure that is a processor I would praise. Talk about headaches in
addressing memory...

That said, I definitely agree that there must have been some brilliant
idiots who decided to put some hard addresses up below 1M.
Post by seasoned_geek
Rather apt you brought up the memory addressing since IBM justified creating the memory hole above 640K with "That's more memory than our current mainframe has. No one would be able to afford to put that much in a PC."
Yeah... Not clever.
Post by seasoned_geek
Much the same mentality going on with the time_t hack.
And this is where I totally disagree. As I've run through a couple of
times now, time_t is supposed to be an offset from epoch. There is *no*
other way to extend this than to add more bits.

You have a problem because you can't convert every possible time_t into
a struct tm, which isn't time_t's fault.
The easy solution to your problem would be to change the size of tm_year
in struct tm, but since this is a problem that is not really a problem
yet for a few billion years, I don't think anyone is addressing it yet.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Simon Clubley
2017-02-14 13:05:52 UTC
Post by seasoned_geek
Under early DOS we also had
Compact Model near code and near data
Medium Model near code and far data
Large Model far code and far data
The tiny memory model was also available for when everything could fit into
one 64 Kbyte segment.

There was also the Donald Trump memory model.

Simon.
--
Simon Clubley, ***@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
seasoned_geek
2017-02-16 12:26:25 UTC
Post by Simon Clubley
Post by seasoned_geek
Under early DOS we also had
Compact Model near code and near data
Medium Model near code and far data
Large Model far code and far data
The tiny memory was also available for when everything could fit into
one 64 Kbyte segment.
Yes, I pooched that.

I believe only Borland called it "tiny". MS called it compact because it compiled into a .com and yes, it was one 64k segment, addressing done by offset only. The compact model was mostly used for drivers and TSRs such as mouse.com. What I left out was the Small Model which, if Chardonnay hasn't pickled those particular brain cells, was one code segment with one data segment. Besides being near code and far data, the Medium model allowed for multiple data segments...I think...while large allowed for multiples of both segment types.
Johnny Billquist
2017-02-13 23:17:28 UTC
Post by Simon Clubley
Post by Johnny Billquist
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
As architectures change so does the size of int. Are you old enough to have worked with DOS C compilers when 20MEG hard drives ruled the land? int and char were the same size. So much so that vast mountains of code used them interchangeably. When the standard changed vast amounts of code was broken. People grumbled, things broke, some software disappeared and other software was written.
I don't think it was ever equivalent to a char, but it was the same size
as a short, which is perfectly consistent with the standard. But a lot
of programs broke back in the day when you started compiling code
written for 16-bit-oriented machines on 32-bit machines... It was way
more obvious on Unix, moving from the PDP-11 to a VAX than ever on DOS. :-)
Ahhh, you don't remember 8-bit hardware. Late CP/M early DOS and in the embedded world, the Z80 https://en.wikipedia.org/wiki/Zilog_Z80
"Remember"? I still get paid for programming the Z00.
And it actually have 16-bit registers.
Are some of the registers sub-dividable into 8-bit registers ?
Yes.
Post by Simon Clubley
For example, on x86 you originally had (for example) the AX register
which is 16 bits, but it can be divided into 2 8-bit registers (AL
and AH). Likewise, when 32-bits came along, you got EAX, but still
kept AX.
It's been a very long time since I looked at the Z80, but I thought
some of the 16-bit registers could be divided into 8-bit registers.
Definitely the case, yes.
Post by Simon Clubley
I wonder if the OP meant all registers on the Z80 were only 8-bit
(which is not the case) or whether some can be addressed as 8-bit
registers.
Who knows.
Post by Simon Clubley
I'll let you decide if that means the Z80 had 8 or 16 bit registers. :-)
You can address the registers on a PDP-11 or a VAX as 8 bit registers as
well. Just use the byte instruction variants. Not sure if that then
makes a VAX an 8-bit CPU...

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Bill Gunshannon
2017-02-13 16:22:09 UTC
Post by Johnny Billquist
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point.
Wow, Johnny, you're the second one in this thread to actually agree
with what I said about Agile 5 years ago after being forced into
taking a course on it by my then-current employer. It is nice
to see my understanding of Agile (and yours, apparently) vindicated
by no less than the ACM. :-) Good article on it in the CACM in
one of the last couple of months' editions.

bill
seasoned_geek
2017-02-13 21:37:20 UTC
Post by Bill Gunshannon
Post by Johnny Billquist
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point.
Wow, Johnny, your the second one in this thread to actually agree
with what I said about Agile 5 years ago after being forced in to
taking a course on it by my then, current, employer. It is nice
to see my understanding of Agile (and yours apparently) vindicated
by no less than the ACM. :-) Good article on it in the CACM in
one of the last couple months editions.
A link would be nice.

Agile exists in corporate America for a single reason: to legitimize accounting fraud. It was adopted because it completely skirts Dodd-Frank.
Arne Vajhøj
2017-02-13 22:14:14 UTC
Post by seasoned_geek
Post by Bill Gunshannon
Post by Johnny Billquist
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point.
Wow, Johnny, your the second one in this thread to actually agree
with what I said about Agile 5 years ago after being forced in to
taking a course on it by my then, current, employer. It is nice
to see my understanding of Agile (and yours apparently) vindicated
by no less than the ACM. :-) Good article on it in the CACM in
one of the last couple months editions.
A link would be nice.
Agile exists in corporate America for a single reason, to legitimize
accounting fraud. It was adopted because it completely skirts Dodd Frank
Agile was widely adopted many years before Dodd Frank.

And I predict that agile will not disappear if Dodd Frank gets
repealed this year.

Besides that I don't see any relation between agile and legitimizing
accounting fraud or between any of these and the regulation of US banks.

Arne
seasoned_geek
2017-02-13 20:50:12 UTC
Post by Johnny Billquist
Post by seasoned_geek
Rule of thumb was int was the same size as a register before we moved into a world where registers could be many times larger than the processor could consume in one gulp.
And I'm not sure how well you actually know processors like the Z80... :-)
Been a long, long time. My last exposure was replacing the Z80. A client had massively expensive custom Z80 boxes which were part of a landfill/transfer-station management system. I think the cheapest one was $30K back in the day. They had me come in and write a bunch of software for abandoned IBM XT computers with full-height 10Meg hard drives, sticking in Digiboards and PIO-48 cards as required. They basically started saving north of $30K for each new landfill/transfer station they set up anywhere in the world, not to mention no longer having to buy replacement boxes for existing locations. Things used to get taken out by lightning quite a bit.
Post by Johnny Billquist
Post by seasoned_geek
I will agree there "shouldn't" be any problem. The problem is Agile and stories which have to be completed within a sprint and don't involve an architect.
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point. The
time_t thing was worked out over quite some time, and the solution is
pretty much a no brainer. You just throw an unreasonable number at it,
and then complain when it didn't produce a result, and your code didn't
have any error checking.
Fess up, and move on.
The "solution" requires that it either fully cascade through the standard or that numeric_limits be specialized for the type. It has already been specialized for char16_t, char32_t, and wchar_t per the C++11 standard.
Post by Johnny Billquist
Post by seasoned_geek
This HACK could have been done correctly. The could have changed time_t to 64-bit THEN filled out a struct tm with the maximum possible value for each component ran it through mktime() to generate the new 64-bit time_t then cut & pasted THAT value into numeric_limits with a big comment that until the default int size or the standard definition of struct tm changes the value could get no larger.
Instead they choose to do a blatant hack, switch the data type and call it done.
Well, wrong is a form of done I guess.
Now wait. You are the one who takes a time_t, and then assign a maxint
to it. Already there you are making an error. maxint can be very
different of different machines, and there is nothing that guarantees
that mktime should work on any input. Try assigning the *same* values to
your time, and run it through mktime instead. You'll find that it works
fine both on VMS, and any other system you try it on, up to the point
where VMS fails and the other systems keep working.
I like you Johnny (since you sign Johnny I assume you wish to be called that). I find your responses well thought out, unlike some others.

I will pose this in response. I remember hearing much the same argument after C++98 over other language standard types. When C++11 came out, we had more _t names in the standard requiring specialization within numeric_limits. People made much the same argument about the then-current implementations being a hack.
Johnny Billquist
2017-02-13 23:46:16 UTC
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
Rule of thumb was int was the same size as a register before we moved into a world where registers could be many times larger than the processor could consume in one gulp.
And I'm not sure how well you actually know processors like the Z80... :-)
Been long long time. My last exposure was replacing the Z80. Client had massively expensive custom Z80 boxes which were part of a landfill/transfer station management system. I think the cheapest one was $30K back in the day. They had me come in and write a bunch of software for abandoned IBM XT computers with full height 10Meg hard drives, sticking in Digiboards and PIO-48 cards as required. They basically started saving north of $30K for each new landfill/transfer station they set up anywhere int the world not to mention no longer having to buy replacement boxes for existing locations. Things used to get taken out by lightning quite a bit.
I actually like the Z80, quirks and all. Even though I seriously dislike
the x86. Can't explain that one. :-)
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
I will agree there "shouldn't" be any problem. The problem is Agile and stories which have to be completed within a sprint and don't involve an architect.
No, I would say that this has nothing to do with Agile. I personally
think Agile is a lot of bad practice, but that is beside the point. The
time_t thing was worked out over quite some time, and the solution is
pretty much a no brainer. You just throw an unreasonable number at it,
and then complain when it didn't produce a result, and your code didn't
have any error checking.
Fess up, and move on.
The "solution" requires that it either fully cascade through the standard or that numeric_limits be specialized for the type. It has already been specialized for char16_t, char32_t, and wchar_t per the C++11 standard.
There are a lot of things that can be said about the native integer
types in C. But more on this below... :-)
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
This HACK could have been done correctly. The could have changed time_t to 64-bit THEN filled out a struct tm with the maximum possible value for each component ran it through mktime() to generate the new 64-bit time_t then cut & pasted THAT value into numeric_limits with a big comment that until the default int size or the standard definition of struct tm changes the value could get no larger.
Instead they choose to do a blatant hack, switch the data type and call it done.
Well, wrong is a form of done I guess.
Now wait. You are the one who takes a time_t, and then assign a maxint
to it. Already there you are making an error. maxint can be very
different of different machines, and there is nothing that guarantees
that mktime should work on any input. Try assigning the *same* values to
your time, and run it through mktime instead. You'll find that it works
fine both on VMS, and any other system you try it on, up to the point
where VMS fails and the other systems keep working.
I like you Johnny (since you sign Johnny I assume you wish to be called that). I find your responses well thought out, unlike some others.
Thanks. Not sure I deserve that, as I very often bang my head against
the table after posting something because I realize too late what I
actually should have said.

Oh, and by the way, I sign my posts with Johnny because it is my name.
My passport even says so. :-)
And I'm Swedish, and there everyone is on a first name basis with
everyone else.
And I say "there" because I don't actually live in Sweden anymore.

Life... :-)
Post by seasoned_geek
I will pose this in response. I remember hearing much the same argument after C++98 over other language standard types. When C++11 came out we had more _t named in the standard as requiring specialization within numeric_limits. People made made much the same argument about current implementations being a hack.
Let's take one step back here. Sorry if I'm not directly addressing your
response, but let me try and recap where (I think) we are.

Your problem is that time_t was expanded to 64 bits. But, you know, this
is kind of silly, because your problem is not with time_t at all.

What is time_t, actually? It is a value expressing the number of seconds
passed since epoch. And epoch is defined as January 1, 1970.
Now, maxint (be that 0xffff, 0xffffffff, or 0xffffffffffffffff) are all
actually just fine (assuming we're talking unsigned here). What do they
say? They actually just give a number of seconds since epoch.
They are all valid and "reasonable" values. There can't really be
anything ever going wrong with time_t, except when the value wraps, or
the number becomes too big to be represented in the number of bits allocated.

Can we agree on that one?

Ok. If so, then what is your problem, really?
Well, it is that your call to gmtime fails. And why does it fail?
Because gmtime tries to convert a time_t to a struct tm. And here is
your problem, and a rather broken design. And it is not time_t that is
the broken design, but struct tm. If you look at struct tm, you will see
(as I think Arne pointed out already in the first reply) that the
tm_year field in struct tm is actually defined as an int.

This is where the horrors of the vague definition of native integers in
C comes to bite you.

What this means is that, depending on the compiler, gmtime will fail at
some point, and the exact point depends on the size of an int.

Note how this has absolutely nothing to do with time_t. The problems
all lie within struct tm. And actually, the problem is more convoluted
than your program shows. Pretty much no matter what the size of an int
is, there will be values in a time_t that cannot be represented in a
struct tm, or values in a struct tm that cannot be represented in a time_t.

You just identified that on Ubuntu machines, with a 64 bit time_t, there
are time_t values which cannot be represented by a struct tm, since an
int on your system is 32 bits. So, you can never express a year beyond
about 2 billion (the max positive number of a 32-bit signed integer).
Anything beyond that simply cannot be stored in a struct tm.

On a VMS system, you actually have the opposite problem. Since a time_t
is only 32 bits, it can only represent offsets of about 4 billion seconds
from epoch, which translates to roughly the year 2106. So, how about you try
and set up a struct tm, with tm_year set to 2110, and then try the mktime
routine, which translates from a struct tm to a time_t. That should fail
miserably, because the time_t cannot represent such a time.
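On such a platform, that experiment might look like this minimal sketch (remember tm_year counts years since 1900, so calendar year 2110 is tm_year 210; a failed mktime() returns (time_t)-1):

// mktime_check.cpp -- sketch: a date beyond a 32-bit time_t's reach
#include <stdio.h>
#include <string.h>
#include <time.h>

int main()
{
    struct tm parts;
    memset(&parts, 0, sizeof(parts));
    parts.tm_year = 2110 - 1900;   // tm_year is years since 1900
    parts.tm_mon  = 0;             // January
    parts.tm_mday = 1;
    time_t t = mktime(&parts);
    if (t == (time_t)-1)           // mktime's only error indication
        printf("mktime failed: 2110 is not representable in this time_t\n");
    else
        printf("t = %lld\n", (long long)t);   // succeeds where time_t is 64 bits
    return 0;
}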

Coming back, time_t itself is, once more, just an offset from epoch.
Obviously, any size is actually fine. It is, after all, just a type used
to store a count of seconds, which is monotonically increasing. And it
will continue to increase as long as the Universe exists. The type is
simple, and the expansion when we get to the limit expressible by a number
of bits will always be the very obvious solution of adding more bits.
There is no other way to do this, and actually no other way that makes
any sense.

And no matter what the design of the struct tm looks like, you will
still have this same problem happen, since the tm_year field will be some
number of bits, no matter how many, which will not map cleanly onto the
number of years representable by the time_t type.

In short, you will never have a situation where every time_t value can
be translated into a struct tm, and every struct tm date can be
translated into a time_t. In one direction or the other, you will have
some values that are untranslatable.

And that is what you discovered.

The fact that you either assumed that this could not happen, or chose
to ignore it, and then blamed the library when your code did not
include any error checking, says more about your code than about the
library (I think).

So, at the end of the day, the time_t type does not have any problems.

I would say that I dislike that the struct tm type uses plain ints
for the fields, meaning that you have a rather implementation-defined
limit of a more obscure form in there. But just like time_t has an
upper limit on times possible to represent, so does struct tm.
And there is no way around this.

And, truth be told, VMS is no better. There you have this time which is
based on the modified Julian date, and then you have a 64-bit offset,
which is signed, and which expresses an offset in 100ns steps from there.
Whenever you have a library function that breaks this into separate
fields for year, month, day, and so on, you will face the same problem.
Either there is a risk that the internal time offset produces a year that
is too large to represent in the data type you have for the split-out
fields, or else you face the problem that combining your split-out fields
back into the VMS time offset can fail because the split-out fields have a
date that cannot be represented.

I hope I didn't forget anything in this post, forcing me to comment
myself again... :-)

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
seasoned_geek
2017-02-16 13:39:23 UTC
Permalink
Johnny,

First let me thank you for taking the time to put together such a detailed response. After this post to you I will be bowing out of this conversation and before the end of the weekend filing a bug report.

I must also apologize for the slow response, but I was busy getting a new novella all set up official like.

https://www.smashwords.com/books/view/704101

and hacking together a Web site for it, which now only needs the book title image for the top of the menu; hopefully my designer will get to it by the end of the weekend.

http://lesedi.us/
Post by Johnny Billquist
Let's take one step back here. Sorry if I'm not directly addressing your
response, but let me try and recap where (I think) we are.
Your problem is that time_t was expanded to 64 bits. But, you know, this
is kind of silly, because your problem is not with time_t at all.
Actually, my problem is with Agile and the time_t hack combined.

For more than a decade time_t could always be converted into a struct tm. This became part of coding lore. There are thousands of programs out there which relied on this learned behavior; rightly or wrongly, it became prior art.
Post by Johnny Billquist
Note how this have absolutely nothing to do with time_t.
On that we will disagree. The change of data type for time_t was ill conceived and poorly implemented.
Post by Johnny Billquist
On a VMS system, you actually have the opposite problem. Since a time_t
is only 32 bits, it can only represent offsets about 4 billion from
epoch, which translates to roughly the year 2106. So, how about you try
and setup a struct tm, with tm_year set to 2110, and then try the mktime
routine, which translates from a struct tm to a time_t. That should fail
miserably, because the time_t cannot represent such a time.
Prior art proved/relied upon ALL time_t values being representable by struct tm. The inverse was _never_ true.
Post by Johnny Billquist
In short, you will never have a situation where you can translate any
time_t value into a struct tm, and any struct tm date that can be
translatable into a time_t. In one direction or the other, you will have
some values that are untranslatable.
Prior to this hack ALL time_t values could be translated into struct tm. The inverse was never true from day one. This hack inverted prior art. With a non-scoped 64-bit integer as the underlying type of time_t all struct tm values will convert to time_t but time_t cannot convert to struct tm for a wide range of values.

Many programs in the field are now broken. Even more of them do not know they are broken because they are using third party things like Boost, which now appears to be broken.

http://www.boost.org/doc/libs/1_55_0/libs/spirit/optimization/high_resolution_timer.hpp

double elapsed_max() const   // return estimated maximum value for elapsed()
{
    return double((std::numeric_limits<time_t>::max)() - start_time.tv_sec);
}

Before academics go off into the weeds on the word "estimated" in the comment: it was there because they could not ensure this code did not catch the edge of a second, which would make the return value off by 1 second.
Post by Johnny Billquist
So, at the end of the day, the time_t type does not have any problems.
At the end of the day the change to time_t was an Agile induced hack without regard to prior art or the amount of code in the field which could be broken. It is also not the first time GNU has bludgeoned forward with an Agile induced hack on a numeric data type, completely ignoring numeric_limits, then had to go back and sweep it up.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40856

Not that it is completely germane to this discussion, but there is a pretty good discussion about numeric_limits specialization here:

https://bytes.com/topic/c/answers/127908-numeric_limits-specialization

This isn't a Y-2-billion bug. It is broken today. I personally don't use Boost, but I know of quite a few embedded and stock trading systems which do. Many of them use the high speed timer. I don't know how many other libraries, especially commercial ones which do not have their code posted on-line, relied on the max() method returning a value struct tm could deal with, but given it was true and relied upon for more than a decade, probably a lot.

This hack to time_t and the Boost snippet also bring forth another debacle which will, except in this snippet's case, start to break things in the future. A 64-bit int cannot fit in many "double" data types (a typical IEEE double has only 53 bits of mantissa). Not a bad discussion here.

http://stackoverflow.com/questions/2582032/find-max-integer-size-that-a-floating-point-type-can-handle-without-loss-of-prec
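For what it's worth, a minimal sketch of that precision loss (assumes IEEE-754 doubles, which is typical but not guaranteed by the standard):

// double_precision.cpp -- sketch: 64-bit integers above 2^53 lose precision in a double
#include <stdio.h>
#include <stdint.h>

int main()
{
    int64_t big = ((int64_t)1 << 53) + 1;  // 9007199254740993, first integer a double cannot hold
    double d = (double)big;
    printf("int64 : %lld\n", (long long)big);
    printf("double: %.0f\n", d);           // prints 9007199254740992 -- off by one
    return 0;
}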


At any rate, I am done with this particular conversation. Hopefully, by Monday I will get caught up on everything and be able to file the bug report.

I wish you well Johnny. It has been a pleasure chatting with you.

-----
http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog
http://lesedi.us
Arne Vajhøj
2017-02-16 16:21:55 UTC
Permalink
Post by seasoned_geek
Post by Johnny Billquist
Let's take one step back here. Sorry if I'm not directly addressing
your response, but let me try and recap where (I think) we are.
Your problem is that time_t was expanded to 64 bits. But, you know,
this is kind of silly, because your problem is not with time_t at
all.
Actually, my problem is with Agile and the time_t hack combined.
For more than a decade time_t could always be converted into a struct
tm. This became part of coding lore. There are thousands of programs
out there which relied on this learned behavior; rightly or wrongly,
it became prior art.
Prior art proved/relied upon ALL time_t values being representable by
struct tm. The inverse was _never_ true.
Prior to this hack ALL time_t values could be translated into struct
tm. The inverse was never true from day one. This hack inverted prior
art. With a non-scoped 64-bit integer as the underlying type of
time_t all struct tm values will convert to time_t but time_t cannot
convert to struct tm for a wide range of values.
You do not seem to understand the C language.

There is C code that is correct, meaning that it has to produce the
expected result per specs.

And there is C code that use implementation specific or undefined
behavior.

The latter may work for decades. But some day it may break.

Having worked for decades does not make it less implementation
specific or undefined.

Prior art is an important thing in patent law.

It does not impact what is implementation
specific or undefined.

There must be millions of C programs out in the world
making various invalid assumptions resulting in
implementation specific or undefined behavior.

Code that assumes:
* all integers are two's complement
* all floating points are IEEE floating points
* all integer sizes are multiples of 8 bits
* all disk files have lines separated by LFs
etc.

These assumptions are commonly true, but they are
not required by the standard and the code may not
work.
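Some of these assumptions can at least be made to fail loudly at compile time rather than silently at run time; a minimal C++11 sketch:

// assumption_checks.cpp -- sketch: turn silent portability assumptions into compile errors
#include <limits.h>
#include <limits>

static_assert(CHAR_BIT == 8, "code assumes 8-bit bytes");
static_assert(std::numeric_limits<double>::is_iec559, "code assumes IEEE-754 doubles");
static_assert((-1 & 3) == 3, "code assumes two's complement integers");

int main() { return 0; }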
Post by seasoned_geek
Post by Johnny Billquist
So, at the end of the day, the time_t type does not have any
problems.
At the end of the day the change to time_t was an Agile induced hack
without regard to prior art or the amount of code in the field which
could be broken.
Neither glibc nor Linux kernel projects are fully agile, so agile
can not really take the blame for whatever you want to blame those
two projects for.
Post by seasoned_geek
It is also not the first time GNU has bludgeoned
forward with an Agile induced hack on a numeric data type completely
ignoring numeric_limits then had to go back and sweep it up.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40856
They have also messed around there.

But not a bug since it is outside of standards.

And much easier to fix since it is all inside g++.
Post by seasoned_geek
Not that it is completely germane to this discussion, but there is a
https://bytes.com/topic/c/answers/127908-numeric_limits-specialization
I don't think it is applicable as I don't think you can do it
(even if you are willing to do it) on a typedef.
Post by seasoned_geek
This isn't a Y-2-billion bug. It is broken today. I personally don't
use Boost, but I know of quite a few embedded and stock trading
systems which do. Many of them use the high speed timer. I don't know
how many other libraries, especially commercial ones which do not
have their code posted on-line, which relied on the max() method
returning a value struct tm could deal with, but given it was true
and relied upon for more than a decade, probably a lot.
Possible.

There is a lot of bad code out in the world.
Post by seasoned_geek
This hack to time_t and the Boost snippet also bring forth another
debacle which will, except in this snippet's case, start to break
things in the future. A 64-bit int cannot fit in many "double" data
types.
Again bad C code making assumptions not guaranteed to always be true.

That kind of C code creates lots of problems.
Post by seasoned_geek
At any rate, I am done with this particular conversation. Hopefully,
by Monday I will get caught up on everything and be able to file the
bug report.
If you file a bug report it may be rejected because it is not a bug -
current functionality is compliant with the standards.

You can obviously file for an enhancement.

But I am somewhat skeptical about what they can do:
* they can not change the definition of struct tm as that is mandated
by C and SUS standards
* they can not force int to be 64 bit on all platforms
* they can not provide a numeric_limits specialization as they
don't control all C++ compilers and (as far as I can see) can not
specialize on a typedef

The only thing they can do is roll back time_t
from 64 bit to 32 bit.

Doing that with signed will reintroduce the 2038 problem.

Doing that with unsigned will convert a 2 billion year
problem to a 2106 problem.
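For reference, a quick sketch of where those two cutoffs come from (seconds since the 1970 epoch, using an average Gregorian year of roughly 31,556,952 seconds):

// rollover_years.cpp -- sketch: where the 32-bit time_t limits land on the calendar
#include <stdio.h>

int main()
{
    const double secs_per_year = 31556952.0;  // average Gregorian year
    printf("signed 32-bit   : ~%.0f\n", 1970 + 2147483647.0 / secs_per_year);  // ~2038
    printf("unsigned 32-bit : ~%.0f\n", 1970 + 4294967295.0 / secs_per_year);  // ~2106
    return 0;
}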

I think it will be a hard sell.

Arne
Johnny Billquist
2017-02-16 21:27:56 UTC
Permalink
Post by seasoned_geek
Johnny,
First let me thank you for taking the time to put together such a detailed response. After this post to you I will be bowing out of this conversation and before the end of the weekend filing a bug report.
Oh well. Maybe I'm just writing a meaningless response then. But I'll
make a couple of small comments nonetheless...
Post by seasoned_geek
Post by Johnny Billquist
Let's take one step back here. Sorry if I'm not directly addressing your
response, but let me try and recap where (I think) we are.
Your problem is that time_t was expanded to 64 bits. But, you know, this
is kind of silly, because your problem is not with time_t at all.
Actually, my problem is with Agile and the time_t hack combined.
I fail to see the connection to Agile here, and I fail to see that the
time_t change is either a hack, or broken.

time_t is a type for expressing the time since epoch. That is all it is.
And as such, it works, and continue to work. And there is no other way
to make it continue working beyond 2106.
Post by seasoned_geek
For more than a decade time_t could always be converted into a struct tm. This became part of coding lore. There are thousands of programs out there which relied on this learned behavior; rightly or wrongly, it became prior art.
That is definitely more lore than I have ever heard of in all of my
years writing code. I would never make such an assumption, and I would
say that anyone who makes such assumptions is just begging for
problems. In a way, a typical example of bad code making assumptions
because the coder "knew" some internal, implementation specific things.

This is about on par with old DOS code that used tight loops to pass
time, totally oblivious to the fact that with a faster CPU, their code
would not work as intended anymore. They knew too much about their
current environment, and did not understand the complication of having
their code possibly run under any other circumstances.
Post by seasoned_geek
Post by Johnny Billquist
Note how this have absolutely nothing to do with time_t.
On that we will disagree. The change of data type for time_t was ill conceived and poorly implemented.
I couldn't disagree more with you. time_t works, and it represents what
it is supposed to represent absolutely correctly. Your bug and crash
occur because of a conversion to a struct tm. There is nothing wrong
with your time_t value.
Post by seasoned_geek
Post by Johnny Billquist
On a VMS system, you actually have the opposite problem. Since a time_t
is only 32 bits, it can only represent offsets about 4 billion from
epoch, which translates to roughly the year 2106. So, how about you try
and setup a struct tm, with tm_year set to 2110, and then try the mktime
routine, which translates from a struct tm to a time_t. That should fail
miserably, because the time_t cannot represent such a time.
Prior art proved/relied upon ALL time_t values being representable by struct tm. The inverse was _never_ true.
I have never seen anyone, anywhere, make the claim that all time_t
values are representable by a struct tm. It might be true for the
foreseeable future, but I have never seen anyone post a guarantee about this.
And you just tried it with something a bit beyond the foreseeable future.
Post by seasoned_geek
Post by Johnny Billquist
In short, you will never have a situation where you can translate any
time_t value into a struct tm, and any struct tm date that can be
translatable into a time_t. In one direction or the other, you will have
some values that are untranslatable.
Prior to this hack ALL time_t values could be translated into struct tm. The inverse was never true from day one. This hack inverted prior art. With a non-scoped 64-bit integer as the underlying type of time_t all struct tm values will convert to time_t but time_t cannot convert to struct tm for a wide range of values.
Disagree. You might not have seen any, but god knows what might be
turned up if we start searching for odd machines and implementations
(you know, such machines as those where a NULL pointer is not actually a
value where all bits are zero, for example...)
Post by seasoned_geek
Many programs in the field are now broken. Even more of them do not know they are broken because they are using third party things like Boost, which now appears to be broken.
I would suspect many of those programs are broken in other ways as well,
including possibly calling mktime with values that were out of range for
a 32-bit time_t.
Post by seasoned_geek
http://www.boost.org/doc/libs/1_55_0/libs/spirit/optimization/high_resolution_timer.hpp
double elapsed_max() const   // return estimated maximum value for elapsed()
{
    return double((std::numeric_limits<time_t>::max)() - start_time.tv_sec);
}
Before academics go off into the weeds on the word "estimated" in the comment: it was there because they could not ensure this code did not catch the edge of a second, which would make the return value off by 1 second.
I must be missing something. What is wrong here?
Isn't this function just doing exactly what the comment says?
You just get a very big number when time_t is 64 bits. But there is
nothing wrong with that.
Even with a 32-bit time_t, you'll get a pretty big number here. But what
else is new?

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Craig A. Berry
2017-02-12 19:52:03 UTC
Permalink
Post by Arne Vajhøj
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get the max value of a 64 bit time_t working on
a system with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
struct tm field tm_year is defined by the standard to be int.
OK, by "get the max value" I thought you meant something like
incrementing a 64-bit value, repeatedly passing to gmtime() or
localtime() and checking for error; the last call that doesn't return an
error should be a reasonable guess at the maximum value that can be
represented in a tm struct.
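A minimal sketch of that probe, done as a binary search rather than a linear walk (it assumes only that gmtime() returns NULL for out-of-range input, and that the top value does fail, as it does with a 64-bit time_t and 32-bit int):

// max_gmtime.cpp -- sketch: find the largest time_t that gmtime() will still convert
#include <stdio.h>
#include <time.h>
#include <limits>

int main()
{
    time_t lo = 0;                                    // known-good value
    time_t hi = std::numeric_limits<time_t>::max();   // known-bad value (assumed)
    while (hi - lo > 1) {                             // binary search on the boundary
        time_t mid = lo + (hi - lo) / 2;
        if (gmtime(&mid) != NULL)
            lo = mid;
        else
            hi = mid;
    }
    printf("last convertible time_t: %lld\n", (long long)lo);
    printf("last convertible year  : %lld\n", (long long)gmtime(&lo)->tm_year + 1900);
    return 0;
}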

Even aside from how silly it is to discuss the validity of dates eons
after the heat death of the universe, it makes no sense at all simply to
set every bit in a time_t value and consider something broken that can't
handle that large a value. Which I think was your point that I
misinterpreted.

A more meaningful test might involve how far out the leap year tables
have been calculated, and there are probably other limiting factors. But
basically, going beyond 32 bits for time_t means the number of seconds
available is no longer the limiting component.
seasoned_geek
2017-02-12 22:22:50 UTC
Permalink
Post by Craig A. Berry
Even aside from how silly it is to discuss the validity of dates eons
after the heat death of the universe, it makes no sense at all simply to
set every bit in a time_t value and consider something broken that can't
handle that large a value. Which I think was your point that I
misinterpreted.
It has more to do with a developer most likely throwing out numeric_limits tests which now no longer passed so they could get Jenkins or whatever automated testing tool to "sign off" on their change.

They jumped out ahead of a standards decision and half-assedly did it.
Post by Craig A. Berry
A more meaningful test might involve how far out the leap year tables
have been calculated, and there are probably other limiting factors. But
basically, going beyond 32 bits for time_t means the number of seconds
available is no longer the limiting component.
I didn't dig into the leap year stuff yet. Given how most things get done in Agile I'm willing to bet absolutely nothing got done with leap year.
seasoned_geek
2017-02-12 16:57:16 UTC
Permalink
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get the max value of a 64 bit time_t working on
a system with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
Apparently Arne didn't RTFP. I was on a 64-bit Linux distro. This is definitely a bug and a bug OpenVMS has had fixed for over a decade. I was testing the Y2038 bug. You can read more about it here: https://en.wikipedia.org/wiki/Year_2038_problem

Or on Neil's wonderful page: http://www3.sympatico.ca/n.rieck/docs/calendar_time_y2k_etc.html

OpenVMS came up with the correct date in 2106. If Microsoft is claiming to be accurate out to the year 3000 then they are doing something completely non-standard.

time_t holds the number of seconds since midnight of the epoch. Since all it does is count 1-second clock ticks, ALL values of time_t are valid dates and times.

The entire point of the test was to determine just how far the Linux world had come in fixing a design flaw DEC has had fixed for over a decade.

On Linux Lite 3.2 64-bit

// tst_size.cpp
//
#include <iostream>
#include <time.h>
#include <limits>

int main()
{
    std::cout << "sizeof(int) " << sizeof(int) << std::endl;
    std::cout << "sizeof(long) " << sizeof(long) << std::endl;
    std::cout << "sizeof(time_t) " << sizeof(time_t) << std::endl;
    std::cout << "int max " << std::numeric_limits<int>::max() << std::endl;
    std::cout << "long max " << std::numeric_limits<long>::max() << std::endl;
    std::cout << "time_t max " << std::numeric_limits<time_t>::max() << std::endl;
    return 0;
}

***@roland-HP-Compaq-8100-Elite-SFF-PC:~/tst$ g++ tst_size.cpp
***@roland-HP-Compaq-8100-Elite-SFF-PC:~/tst$ ./a.out
sizeof(int) 4
sizeof(long) 8
sizeof(time_t) 8
int max 2147483647
long max 9223372036854775807
time_t max 9223372036854775807

On OpenVMS

$ type tst_size.cpp
// tst_size.cpp
//
#include <iostream>
#include <time.h>
#include <limits>

int main()
{
    cout << "sizeof(int) " << sizeof(int) << endl;
    cout << "sizeof(long) " << sizeof(long) << endl;
    cout << "sizeof(unsigned long) " << sizeof(unsigned long) << endl;
    cout << "sizeof(time_t) " << sizeof(time_t) << endl;
    cout << "int max " << std::numeric_limits<int>::max() << endl;
    cout << "long max " << std::numeric_limits<long>::max() << endl;
    cout << "time_t max " << std::numeric_limits<time_t>::max() << endl;
    cout << "unsigned long max " << std::numeric_limits<unsigned long>::max() << endl;
    return 0;
}

$ cxx tst_size.cpp
$ cxxlink tst_size
$ run tst_size
sizeof(int) 4
sizeof(long) 4
sizeof(unsigned long) 4
sizeof(time_t) 4
int max 2147483647
long max 2147483647
time_t max 4294967295
unsigned long max 4294967295

It appears that a decade ago OpenVMS implemented the first round of "fixes" by changing to an unsigned long, which produced the expected 2106 year. Having done that, negative values for times prior to the 1970 epoch would no longer produce the desired results. It meant time_t could no longer be used for birthdays, since many of us were born prior to 19700101.
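A quick sketch of that trade-off (this assumes a signed 64-bit time_t, as on 64-bit glibc; with an unsigned 32-bit time_t the same bit pattern would instead be a huge positive offset):

// pre_epoch.cpp -- sketch: a signed time_t can still express dates before 1970
#include <stdio.h>
#include <time.h>

int main()
{
    time_t birthday = -315619200;   // exactly 3653 days *before* the epoch
    struct tm *parts = gmtime(&birthday);
    if (parts != NULL)
        printf("Year: %d\n", parts->tm_year + 1900);   // 1960 with a signed time_t
    return 0;
}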

It also appears that GNU has pooched the move to 64-bit time_t. I don't have a "current" OpenVMS version but the code is here for those who wish to cut & paste it.
Arne Vajhøj
2017-02-12 17:34:03 UTC
Permalink
Post by seasoned_geek
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get the max value of a 64 bit time_t working on
a system with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
Apparently Arne didn't RTFP. I was on a 64-bit Linux distro. This is
definitely a bug and a bug OpenVMS has had fixed for over a decade.
No.

VMS had not even implemented 64 bit time_t.
Post by seasoned_geek
https://en.wikipedia.org/wiki/Year_2038_problem
Not really.

You were testing a year many billions problem.
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
Yes.

And so does your glibc/Linux system.

Which you would have seen if you have tried it.

But instead you tried with many billions of years.
Post by seasoned_geek
If Microsoft is claiming to be accurate out to the year 3000 then
they are doing something completely non-standard.

No. The standard does not specify which values need to be supported.
Post by seasoned_geek
time_t holds the number of seconds since midnight of the epoch. Since
all it does is contain 1 second clock ticks ALL values of time_t are
valid dates and times.

It can sure contain a value, but no guarantee that localtime and gmtime
can handle it.

For some sizes of time_t and int they are guaranteed not to work for
all values.

Arne
Arne Vajhøj
2017-02-12 18:07:33 UTC
Permalink
Post by Arne Vajhøj
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
Yes.
And so does your glibc/Linux system.
Which you would have seen if you have tried it.
But instead you tried with many billions of years.
And if you want to file a bug report to the GNU and Linux
people then the problem shows up at year 2147483647.

Shall we call it the year 2147483647 problem?

I am an optimist - I think they have sufficient time
to fix it.

Arne
seasoned_geek
2017-02-12 18:45:06 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
Yes.
And so does your glibc/Linux system.
Which you would have seen if you have tried it.
But instead you tried with many billions of years.
And if you want to file a bug report to the GNU and Linux
people then the problem shows up at year 2147483647.
Shall we call it the year 2147483647 problem?
You can if you want; however, their bug occurs much, much sooner than that.
Post by Arne Vajhøj
I am an optimist - I think they have sufficient time
to fix it.
Typical Agile "hack it out today without any real design and let it fail in the field when someone else will have to fix it" mentality.
Arne Vajhøj
2017-02-12 19:23:37 UTC
Permalink
Post by Arne Vajhøj
Post by Arne Vajhøj
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
Yes.
And so does your glibc/Linux system.
Which you would have seen if you have tried it.
But instead you tried with many billions of years.
And if you want to file a bug report to the GNU and Linux
people then the problem shows up at year 2147483647.
Shall we call it the year 2147483647 problem?
I am an optimist - I think they have sufficient time
to fix it.
And if someone wants to check their system:

#include <stdio.h>
#include <string.h>
#include <stddef.h>
#include <time.h>

/* Note: tm_year counts years since 1900, so these are raw tm_year values
   rather than calendar years; the last test probes tm_year == INT_MAX. */
void test(int y)
{
    time_t t;
    struct tm *tparts;
    t = time(NULL);
    tparts = gmtime(&t);
    tparts->tm_year = y;
    t = mktime(tparts);
    printf("t = %ld\n", t);
    tparts = gmtime(&t);
    printf("gmtime: %d %d\n", tparts != NULL, tparts != NULL ? tparts->tm_year : 0);
    t += (365*24*60*60);
    printf("t = %ld\n", t);
    tparts = gmtime(&t);
    printf("gmtime: %d %d\n", tparts != NULL, tparts != NULL ? tparts->tm_year : 0);
}

int main()
{
    test(2038);
    test(2106);
    test(0x7FFFFFFF);
    return 0;
}

[***@arne5 ~]$ uname -a
Linux arne5.vajhoej.dk 2.6.32-642.13.1.el6.x86_64 #1 SMP Wed Jan 11
20:56:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[***@arne5 ~]$ ./tfun
t = 62107777367
gmtime: 1 2038
t = 62139313367
gmtime: 1 2039
t = 64253694167
gmtime: 1 2106
t = 64285230167
gmtime: 1 2107
t = 67768036163857367
gmtime: 1 2147483647
t = 67768036195393367
gmtime: 0 0

Arne
seasoned_geek
2017-02-12 18:42:30 UTC
Permalink
Post by Arne Vajhøj
Post by seasoned_geek
Post by Craig A. Berry
Post by Arne Vajhøj
It is impossible to get the max value of a 64 bit time_t working on
a system with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
Apparently Arne didn't RTFP. I was on a 64-bit Linux distro. This is
definitely a bug and a bug OpenVMS has had fixed for over a decade.
No.
VMS had not even implemented 64 bit time_t.
Once again you need to RTFP. I didn't say they implemented 64-bit time. I said they "fixed" the 2038 bug with one of the widely used solutions around a decade ago. They changed time_t to be unsigned, thus abandoning support for dates prior to the epoch. GNU has violated the standard by creating a time_t data type the standard library methods cannot operate on over its range of values.
Post by Arne Vajhøj
Post by seasoned_geek
https://en.wikipedia.org/wiki/Year_2038_problem
Not really.
You were testing a year many billions problem.
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
Yes.
And so does your glibc/Linux system.
Which you would have seen if you have tried it.
But instead you tried with many billions of years.
No, I tested their "solution" which is broken. They now have a "solution" which fails over the range of now possible input values.
Stephen Hoffman
2017-02-12 18:59:07 UTC
Permalink
Post by seasoned_geek
Post by Craig A. Berry
It is impossible to get the max value of a 64 bit time_t working on a
system with 32 bit int.
Why? Can't you just use a long long or int64_t or whatever?
Apparently Arne didn't RTFP.
Your system uses a four-char int and an eight-char long long.
Probably with 8-bit char and thus 32- and 64-bit int and long, but
specific bit sizes for these types are not required by the C and C++
standards. I'm assuming eight-bit-bytes here.
Post by seasoned_geek
I was on a 64-bit Linux distro.
Unless you're using a compiler with 64-bit int — which you're not — you
can't fit an int64 value into an int32 field. Even on OpenVMS.
Post by seasoned_geek
This is definitely a bug and a bug OpenVMS has had fixed for over a decade.
Here's a draft of the C99 spec, as this is what OpenVMS follows (within
the C compiler, though not the C library or C headers):
http://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definition.pdf

Specifically, "The range and precision of times representable in
clock_t and time_t are implementation-defined.", and the broken-down
values within the time_t structure are defined as int. If you have
32-bit int — which you've shown is the case below — then you'll
correctly get failures returned — null returns from various calls or
such, and errno set — when sending any too-large values at the call.
Post by seasoned_geek
I was testing the Y2038 bug.
Then test with 2038, 2039, and 2106, 2107 or whatever your outer date
testing limit is. I'd expect OpenVMS will show some failures with
dates past 2038, too. That was the outer limit of the Y2K testing.
There's probably going to be some "fun" at 2057 for OpenVMS, for
instance. That's the current pivot year.
Post by seasoned_geek
https://en.wikipedia.org/wiki/Year_2038_problem
http://www3.sympatico.ca/n.rieck/docs/calendar_time_y2k_etc.html
Neither of which is relevant to the bug in the example code.
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
For this case, maybe. But again, I wouldn't assume that OpenVMS
correctly handles dates past 2038.
Post by seasoned_geek
If Microsoft is claiming to be accurate out to the year 3000 then they
are doing something completely non-standard.
"The range and precision of times representable in clock_t and time_t
are implementation-defined."
Post by seasoned_geek
time_t holds the number of seconds since midnight of the epoch. Since
all it does is contain 1 second clock ticks ALL values of time_t are
valid dates and times.
"The range and precision of times representable in clock_t and time_t
are implementation-defined."
Post by seasoned_geek
The entire point of the test was to determine just how far the Linux
world had come in fixing a design flaw DEC has had fixed for over a
decade.
Undefined and implementation-specific behavior can be a whole lot more
subtle than many folks realize, too. Variables declared as long can
be 32-bit or 64-bit for instance, depending on the platform.

http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
Post by seasoned_geek
On Linux Lite 3.2 64-bit
...
sizeof(int) 4
That can't int store a value as large as what was requested. Per the
C99 standard — sorry, not going to go rummage for the C++ specs — the
fields in the time_t structure are required to be int. Hence nulls
are returned from the calls in the provided C++ code. Test with the
same integer as was used with OpenVMS (unsigned long max 4294967295),
and see if that works.

OpenVMS has a somewhat bizarre implementation here too, due to the
32-bit heritage and the longstanding quest for future-complexity
through past-compatibility. Some of the default field sizes and
pointer sizes are not what would be expected with a 64-bit platform.

FWIW and as a suggestion for the next time one of these cases is
encountered, ask folks why the code does not work as expected.
--
Pure Personal Opinion | HoffmanLabs LLC
seasoned_geek
2017-02-12 22:04:32 UTC
Permalink
Post by Stephen Hoffman
Post by seasoned_geek
I was testing the Y2038 bug.
Then test with 2038, 2039, and 2106, 2107 or whatever your outer date
testing limit is. I'd expect OpenVMS will show some failures with
dates past 2038, too. That was the outer limit of the Y2K testing.
There's probably going to be some "fun" at 2057 for OpenVMS, for
instance. That's the current pivot year.
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
For this case, maybe. But again, I wouldn't assume that OpenVMS
correctly handles dates past 2038.
Hello Hoff,

Thanks for the response.

My outer date testing limit was defined by numeric_limits<>.

My circa 2006 version of OpenVMS did the correct thing. It appears they implemented the very early "solution" of abandoning negative date value support, changing to an unsigned value and pushing the time_t failure point out to some time in 2106. So far, everything tested is as expected with respect to time_t manipulation and the ability to display it.
Post by Stephen Hoffman
"The range and precision of times representable in clock_t and time_t
are implementation-defined."
Which really changes nothing. Over on various Ubuntu forums they are telling people they have "fixed" the 2038 bug when they have not. Changing the data type of time_t without changing the corresponding ability to display it in human form created a more vicious bug. An unsigned integer will wrap to epoch on overflow. While oddities may happen, there will not be exceptions or hard errors and nulls returned for strings.

GNU has introduced a bug and the Linux world is crowing about no longer having a problem. The bulk of my work over the past decade has been embedded systems which will have multi-decade service lifespans and could cost lives if badness happens.
Post by Stephen Hoffman
FWIW and as a suggestion for the next time one of these cases is
encountered, ask folks why the code does not work as expected.
Thank you for the suggestion. In this case the reason it doesn't work is that somewhere, someone involved with GNU took it upon themselves to change the time_t data type with nary a thought about consequences. It has been several years since I checked this particular situation. GNU did not used to have this problem. I have sent queries out to clients who could be at risk to make certain the version of g++ they use predates this bug.

Simply changing time_t and announcing the Linux Y2038 bug "fixed" was incorrect and outright dangerous. Fixing Y2038 requires either abandoning 32-bit platforms, like we did 8 and 16, by changing the default int ranges to be 64-bit in nature, OR significant changes to struct tm to support a larger time_t date type.

This HACK should have never made it out into the wild.
Arne Vajhøj
2017-02-13 00:03:42 UTC
Permalink
On Sunday, February 12, 2017 at 12:59:14 PM UTC-6, Stephen Hoffman
Post by Stephen Hoffman
"The range and precision of times representable in clock_t and
time_t are implementation-defined."
Which really changes nothing. Over on various Ubuntu forums they are
telling people they have "fixed" the 2038 but when they have not.
Sure they have.

2038, 2039, ... work fine. It first goes bad at 2147483647.
Changing the data type of time_t without changing the corresponding
ability to display it in human form created a more vicious bug. An
unsigned integer will wrap to epoch on overflow. While oddities may
happen there will not be exceptions or hard errors and nulls returned
for strings.
GNU has introduced a bug and the Linux world is crowing about no
longer having a problem.
Code that conforms to specs is by definition not buggy.

And the issue you talk about will:
* only hit code written by really bad C programmers (not checking
the return value of a function documented to be able to return NULL
is really bad programming)
* hit the code when they call localtime or gmtime for a time that
represents year 2147483647 or later
The bulk of my work over the past decade has
been embedded systems which will have multi-decade service lifespans
and could cost lives if badness happens.
I assume that no one working on such code write C/C++ code
like the one you posted.

If they do then I will be very concerned.
Thank you for the suggestion. In this case the reason it doesn't work
is somewhere someone involved with GNU took it upon themselves to
change the time_t data type with nary a thought about consequences.
It has been several years since I checked this particular situation.
GNU did not used to have this problem. I have sent queries out to
clients who could be at risk to make certain the version of g++ is
before this bug.
If they check the facts then I suspect they will be rolling on the
floor laughing.
Simply changing time_t and announcing the Linux Y2038 bug "fixed" was
incorrect and outright dangerous. Fixing Y2038 requires either
abandoning 32-bit platforms like we did 8 and 16 by changing the
default int ranges to be 64-bit in nature OR it requires significant
changes to struct tm to support a larger time_t date type.
This HACK should have never made it out into the wild.
It solved the year 2028 problem.

It did create a year 2 billion problem.

I consider the first a lot more urgent than the second.

Arne
seasoned_geek
2017-02-13 12:45:57 UTC
Permalink
Post by Arne Vajhøj
Post by seasoned_geek
Changing the data type of time_t without changing the corresponding
ability to display it in human form created a more vicious bug. An
unsigned integer will wrap to epoch on overflow. While oddities may
happen there will not be exceptions or hard errors and nulls returned
for strings.
GNU has introduced a bug and the Linux world is crowing about no
longer having a problem.
Code that conforms to specs is by definition not buggy.
Spoken like someone pushing Agile stories without high level architect.
Post by Arne Vajhøj
* only hit code written by really bad C programmers (not checking
the return value of a function documented to be able to return NULL
is really bad programming)
* hit the code when they call localtime or gmtime for a time that
represents year 2147483647 or later
Wow Arne, you are reminding me of the first time we exchanged posts on here all of those years ago. Back when academics published textbooks for 4th grade students teaching them meat naturally contained maggots and that flies had nothing to do with it.


You ASSUME that because time_t has an underlying 64-bit data type that numeric_limits<> should return the maximum 64-bit value instead of the maximum value the underlying system can support. I do not.

There is the bug. Your assumption breaks existing behavior which has been in place for more than a decade.

Being able to store a time_t value without the ability to display it is rather pointless. An architect would see that; an academic, not so much. Putting a native time data type in place requiring each and every developer using it to roll their own display code to support the full range of values is ludicrous.
Post by Arne Vajhøj
Post by seasoned_geek
The bulk of my work over the past decade has
been embedded systems which will have multi-decade service lifespans
and could cost lives if badness happens.
I assume that no one working on such code write C/C++ code
like the one you posted.
If they do then I will be very concerned.
For range testing and durability proofing of certain things it is required. One has to ensure that under the most adverse of software failures, things like blood pressure cuffs do not continue inflating and squeezing off a patient's arm. Devices have to perform at ALL limits, not just convenient happy-path ones.
Post by Arne Vajhøj
If they check the facts then I suspect they will be rolling on the
floor laughing.
I believe they are still on the floor laughing about those text books teaching meat naturally contained maggots.
Post by Arne Vajhøj
It solved the year 2028 problem.
The 2028 problem?

numeric_limits<> should be returning the maximum value the infrastructure (i.e. the rest of the standard C library) can support, not the maximum value the underlying data type can support.

I consider this a bug.
Arne Vajhøj
2017-02-13 15:00:10 UTC
Permalink
Post by seasoned_geek
And the issue you talk about will: * only hit code written by
really bad C programmers (not checking the return value of a
function documented to be able to return NULL is really bad
programming) * hit the code when they call localtime or gmtime for
a time that represents year 2147483647 or later
Wow Arne, you are reminding me of the first time we exchanged posts
on here all of those years ago. Back when academics published
textbooks for 4th grade students teaching them meat naturally
contained maggots and that flies had nothing to do with it.
I must admit that I don't see the relevance.

Not checking return values is bad programming.
Post by seasoned_geek
You ASSUME that because time_t has an underlying 64-bit data type
that numeric_limits<> should return the maximum 64-bit value instead
of the maximum value the underlying system can support. I do not.
No.

I do not assume.

I did what you should have done: checked.

numeric_limits does not have a specialization for time_t.
Post by seasoned_geek
There is the bug. Your assumption breaks existing behavior which has
been in place for more than a decade.
I don't think the assumption I did not make can have any impact
on your programs behavior.
Post by seasoned_geek
Being able to store a time_t value without the ability to display it
is rather pointless. An architect would see that, an academic, not
so much. Putting a native time data type in place requiring each and
ever developer using it to roll their own display code to support
the full range of values is ludicrous.
In the world of programming then specs matters more than what is
considered ludicrous by some.
Post by seasoned_geek
It solved the year 2028 problem.
The 2028 problem?
2038
Post by seasoned_geek
numeric_limits<> should be returning the maximum value the
infrastructure (i.e. the rest of the standard C library) can support
not the maximum value the underlying data type can support.
numeric_limits does not work by magic. It works by someone
putting in a specialization.

Such specialization was not put in for time_t. Not in the
standard and not in the glibc/Linux implementation. The first
can be verified in the spec and the second can be verified with
cat/grep.

Another issue is that I am not even sure that they could put
it in if they wanted to.

I am not sure that C++ template type parameter matching is
able to properly distinguish between a typedef name and
the underlying type.

But I will defer that to someone with more C++ template
experience than me.

If that indeed is the case, then the C standard specification
for time_t prevents it.
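A minimal sketch of why (assuming a typical 64-bit glibc system, where time_t is a typedef for long): a typedef is only an alias, not a distinct type, so there is nothing separate to specialize on:

// typedef_alias.cpp -- sketch: time_t is an alias, not a distinct type
#include <time.h>
#include <type_traits>
#include <iostream>

int main()
{
    // If this prints 1, time_t and long are the *same* type here, so a
    // numeric_limits<time_t> specialization would just be numeric_limits<long>.
    std::cout << std::is_same<time_t, long>::value << std::endl;
    return 0;
}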

Arne
John Reagan
2017-02-13 15:15:02 UTC
Permalink
Post by Arne Vajhøj
If that is indeed is the case then the C standard specification
for time_t prevents it.
Arne
I'll just point out that the C99 standard says "time_t" must be an arithmetic type. Arithmetic types are INTEGER and FLOAT. It would be perfectly legal for time_t to be a double for instance. The POSIX standards are a little more explicit that time_t can be either integer or floating. That said, I don't know any platform that has gone the floating route.
Arne Vajhøj
2017-02-13 15:49:39 UTC
Permalink
Post by John Reagan
If that is indeed is the case then the C standard specification for
time_t prevents it.
I'll just point out that the C99 standard says "time_t" must be an
arithmetic type. Arithmetic types are INTEGER and FLOAT. It would
be perfectly legal for time_t to be a double for instance. The POSIX
standards are a little more explicit that time_t can be either
integer or floating.
My point here was just that a C typedef and C++ template specialization
may not work that well together.

Arne
Craig A. Berry
2017-02-13 20:48:56 UTC
Permalink
Post by John Reagan
Post by Arne Vajhøj
If that is indeed is the case then the C standard specification
for time_t prevents it.
Arne
I'll just point out that the C99 standard says "time_t" must be an arithmetic type. Arithmetic types are INTEGER and FLOAT. It would be perfectly legal for time_t to be a double for instance. The POSIX standards are a little more explicit that time_t can be either integer or floating. That said, I don't know any platform that has gone the floating route.
----

That was true until the 2016 edition of POSIX, which now says "time_t shall be an integer type."[1] Gotta love standards :-).

[1] <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html>
John Reagan
2017-02-13 21:43:26 UTC
Permalink
Post by Craig A. Berry
Post by John Reagan
Post by Arne Vajhøj
If that is indeed is the case then the C standard specification
for time_t prevents it.
Arne
I'll just point out that the C99 standard says "time_t" must be an arithmetic type. Arithmetic types are INTEGER and FLOAT. It would be perfectly legal for time_t to be a double for instance. The POSIX standards are a little more explicit that time_t can be either integer or floating. That said, I don't know any platform that has gone the floating route.
----
That was true until the 2016 edition of POSIX, which now says "time_t shall be an integer type."[1] Gotta love standards :-).
[1] <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html>
Ah, somebody has been paying attention for Release 7. Their "shall be an integer type" is marked as an "extension" to the C standard, but it is actually a restriction on it (the C99 standard would allow an integer or floating time_t).

[CX] [Option Start] Extension to the ISO C standard [Option End]
The functionality described is an extension to the ISO C standard. Application developers may make use of an extension as it is supported on all POSIX.1-2008-conforming systems.
David Froble
2017-02-13 17:47:08 UTC
Permalink
Post by Arne Vajhøj
Post by seasoned_geek
And the issue you talk about will: * only hit code written by
really bad C programmers (not checking the return value of a
function documented to be able to return NULL is really bad
programming) * hit the code when they call localtime or gmtime for
a time that represents year 2147483647 or later
Wow Arne, you are reminding me of the first time we exchanged posts
on here all of those years ago. Back when academics published
textbooks for 4th grade students teaching them meat naturally
contained maggots and that flies had nothing to do with it.
I must admit that I don't see the relevance.
Not checking return values is bad programming.
In general, not interested in this thread. Not sure I'll see 2038.

But, I will comment that a large part of my code is checking completion status.
A rather large part.

For example, calling a system service. Gotta check the status of queueing the
call, and then gotta check the status in the IOSB. Lot of "extra" code, but
nothing else is reasonable.
seasoned_geek
2017-02-13 21:29:52 UTC
Permalink
Post by Arne Vajhøj
Post by seasoned_geek
And the issue you talk about will: * only hit code written by
really bad C programmers (not checking the return value of a
function documented to be able to return NULL is really bad
programming) * hit the code when they call localtime or gmtime for
a time that represents year 2147483647 or later
Wow Arne, you are reminding me of the first time we exchanged posts
on here all of those years ago. Back when academics published
textbooks for 4th grade students teaching them meat naturally
contained maggots and that flies had nothing to do with it.
I must admit that I don't see the relevance.
And now you have _your_ answer to life, the Universe and everything.
Stephen Hoffman
2017-02-13 00:31:16 UTC
Permalink
Post by seasoned_geek
Post by seasoned_geek
I was testing the Y2038 bug.
Then test with 2038, 2039, and 2106, 2107 or whatever your outer date
testing limit is. I'd expect OpenVMS will show some failures with
dates past 2038, too. That was the outer limit of the Y2K testing.
There's probably going to be some "fun" at 2057 for OpenVMS, for
instance. That's the current pivot year.
Post by seasoned_geek
OpenVMS came up with the correct date in 2106.
For this case, maybe. But again, I wouldn't assume that OpenVMS
correctly handles dates past 2038.
Hello Hoff,
Thanks for the response.
My outer date testing limit was defined by numeric_limits<>.
That much has been obvious. But it's not valid input in this context,
nor in general, as the errors returned by the code show.
Post by seasoned_geek
My circa 2006 version of OpenVMS did the correct thing.
Though with different input to the calls, different output is returned.
That much is shown in the output you've posted.
Post by seasoned_geek
It appears they implemented the very early "solution" of abandoning
negative date value support, changing to an unsigned value and pushing the
time_t failure point out to some time in 2106. So far, everything
tested is as expected with respect to time_t manipulation and the
ability to display it.
The posted code is deep in "nasal demons" territory; undefined or
implementation-specific behavior.
https://en.wikipedia.org/wiki/Undefined_behavior This and problems
with null pointers and suchlike are part of the reason why folks have
been looking at alternatives to C; at Rust or otherwise.
Post by seasoned_geek
"The range and precision of times representable in clock_t and time_t
are implementation-defined."
Which really changes nothing.
The posted example code is making an assumption — that max can be
specified as input to these time routines — that is not permissible
within C, and the example code is correctly getting an error. The
tm struct is limited to int, and int is 32-bits, and that year int
field cannot sustain anything approaching a ~0LL value.
Post by seasoned_geek
Over on various Ubuntu forums they are telling people they have
"fixed" the 2038 but when they have not. Changing the data type of
time_t without changing the corresponding ability to display it in
human form created a more vicious bug. An unsigned integer will wrap to
epoch on overflow. While oddities may happen there will not be
exceptions or hard errors and nulls returned for strings.
Have you tested with 2106 and 2107? Go try that. With pretty much
any date — any input time_t — that can be converted into a 32-bit int,
as is the declared storage in the tm struct? The 32-bit int field
limit within the tm structure is NOT the same 32-bit limit that 2038
involves. This means that the input is limited to producing a date of
around 2^31 — I'm not inclined to write some code to get the exact
value by reversing the tm back into a date to see where that is, but
that'll tell you the architectural limit of a standards-compliant tm
structure year value. This is the use of int for the fields inside
the tm structure, and not the 2038-related limit.
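For what it's worth, a minimal sketch of that architectural ceiling (tm_year is an int counting years since 1900, so the limit follows directly from INT_MAX):

// tm_year_limit.cpp -- sketch: the largest calendar year a standards-compliant struct tm can hold
#include <stdio.h>
#include <limits.h>

int main()
{
    long long max_year = (long long)INT_MAX + 1900;   // widen first to avoid int overflow
    printf("max struct tm year: %lld\n", max_year);   // 2147485547
    return 0;
}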

See page 337 in
http://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definition.pdf for
a discussion of the tm struct. OpenVMS isn't passing in anything
approaching ~0LL in what you've shown, either.

Note, in a previous reply, I'd incorrectly mentioned the time_t struct.
It's the tm struct here.
Post by seasoned_geek
GNU has introduced a bug and the Linux world is crowing about no longer
having a problem. The bulk of my work over the past decade has been
embedded systems which will have multi-decade service lifespans and
could cost lives if badness happens.
The source of the error is within the posted example code. It has
exceeded the range permissible for a 32-bit int, such as the one you're
using. An error was returned, as is appropriate.
Post by seasoned_geek
FWIW and as a suggestion for the next time one of these cases is
encountered, ask folks why the code does not work as expected.
Thank you for the suggestion. In this case the reason it doesn't work
is that somewhere, someone involved with GNU took it upon themselves to
change the time_t data type with nary a thought about consequences. It
has been several years since I checked this particular situation. GNU
did not used to have this problem. I have sent queries out to clients
who could be at risk to make certain the version of g++ they use
predates this bug.
If the code the posted example was derived from was trying to determine
the maximum supported date as it might appear, I'm not aware of a good
(standards-compliant) way to do that.
Post by seasoned_geek
Simply changing time_t and announcing the Linux Y2038 bug "fixed" was
incorrect and outright dangerous. Fixing Y2038 requires either
abandoning 32-bit platforms like we did 8 and 16 by changing the
default int ranges to be 64-bit in nature OR it requires significant
changes to struct tm to support a larger time_t date type.
This HACK should have never made it out into the wild.
Could you elaborate on what the "hack" is here? Because I don't see
anything that's obviously incorrect here beyond some issues with the
posted C++ code, particularly given the C standards around the fields
in the time_t structure. As for GNU or Linux involvement here, I see
exactly the same behavior — an access violation on the penultimate
printf, and nulls from various calls — using llvm/clang on macOS.
That access violation is also what I'd expect to happen with the posted
code, too.
Post by seasoned_geek
maxTime is 0
maxTime Wed Dec 31 19:00:00 1969
asctime() Thu Jan 1 00:00:00 1970
Year: 1970
maxTime is 9223372036854775807
maxTime (null)
(lldb)
The lldb debugger stops on the penultimate printf, as mentioned.

I'd expect that the posted code will fail on OpenVMS Alpha when built
for 64-bit, but then the C++ code doesn't seem to compile on the local
OpenVMS Alpha with the cited commands. There might be a DCL symbol
for CXX in use here?
Post by seasoned_geek
$ cxx/ver
HP C++ V7.3-009 for OpenVMS Alpha V8.4
$ cxx t.cxx
std::cout << "maxTime is " << maxTime << std::endl;
.............^
%CXX-E-NOTMEMBER, namespace "std" has no member "cout"
at line number 9 in file T.CXX;2
std::cout << "maxTime is " << maxTime << std::endl;
......................................................^
%CXX-E-NOTMEMBER, namespace "std" has no member "endl"
at line number 9 in file T.CXX;2
std::cout << "maxTime is " << maxTime << std::endl;
.............^
%CXX-E-NOTMEMBER, namespace "std" has no member "cout"
at line number 14 in file T.CXX;2
std::cout << "maxTime is " << maxTime << std::endl;
......................................................^
%CXX-E-NOTMEMBER, namespace "std" has no member "endl"
at line number 14 in file T.CXX;2
%CXX-I-MESSAGE, 4 errors detected in the compilation of "T.CXX;2".
$ type t.cxx
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <stdio.h>
int main(int argc, const char * argv[]) {
time_t maxTime = 0;
std::cout << "maxTime is " << maxTime << std::endl;
printf( "maxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);
maxTime = std::numeric_limits<time_t>::max();
std::cout << "maxTime is " << maxTime << std::endl;
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);
return 0;
}
TL;DR: time_t can be as big as you want, but the value must deconstruct
into the fields in the tm struct specified by the C standard. If the
input time_t doesn't fit into the int-length fields in the tm struct,
any reasonable gmtime call should return a null with errno set.
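
For reference, a null-checked sketch of the posted program (not the
original code) that reports the overflow instead of crashing:

// max_time_test_checked.cpp -- a defensive sketch of the posted
// test: every gmtime() result is checked before use, so an
// out-of-range time_t produces a diagnostic instead of a segfault.
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <errno.h>

static void show(time_t t)
{
    std::cout << "time_t value " << t << std::endl;
    errno = 0;
    struct tm *parts = gmtime(&t);
    if (parts == NULL)
    {
        std::cout << "gmtime failed: " << strerror(errno) << std::endl;
        return;
    }
    std::cout << "asctime() " << asctime(parts);
    std::cout << "Year: " << parts->tm_year + 1900 << std::endl;
}

int main()
{
    show(0);
    show(std::numeric_limits<time_t>::max());
    return 0;
}
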
--
Pure Personal Opinion | HoffmanLabs LLC
Arne Vajhøj
2017-02-13 01:06:25 UTC
Permalink
Post by Stephen Hoffman
Have you tested with 2106 and 2107? Go try that. With pretty much
any date (any input time_t) that can be converted into the 32-bit int
fields that are the declared storage in the tm struct? The 32-bit int
field limit within the tm structure is NOT the same 32-bit limit that
2038 involves. This means the input is limited to producing a year
somewhere around 2^31, the int limit; I'm not inclined to write code
to get the exact value by reversing a maxed-out tm back into a date to
see where that is, but that would tell you the architectural limit of
a standards-compliant tm structure year value. This is the use of int
for the fields inside the tm structure, and not the 2038-related limit.
I posted a test a few hours ago.

Rather unsurprising:

#include <stdio.h>
#include <time.h>

void test(int y)
{
time_t t;
struct tm *tparts;
t = time(NULL);
tparts = gmtime(&t);
/* tm_year counts years since 1900, so test(2038) actually probes
calendar year 3938; the int-field conclusion is the same either way */
tparts->tm_year = y;
t = mktime(tparts);
printf("t = %ld\n", (long)t);
tparts = gmtime(&t);
printf("gmtime: %d %d\n", tparts != NULL, tparts != NULL ?
tparts->tm_year : 0);
t += (365*24*60*60); /* push roughly one year further */
printf("t = %ld\n", (long)t);
tparts = gmtime(&t);
printf("gmtime: %d %d\n", tparts != NULL, tparts != NULL ?
tparts->tm_year : 0);
}

...

test(2038);
test(2106);
test(0x7FFFFFFF);

[***@arne5 ~]$ uname -a
Linux arne5.vajhoej.dk 2.6.32-642.13.1.el6.x86_64 #1 SMP Wed Jan 11
20:56:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[***@arne5 ~]$ ./tfun
t = 62107777367
gmtime: 1 2038
t = 62139313367
gmtime: 1 2039
t = 64253694167
gmtime: 1 2106
t = 64285230167
gmtime: 1 2107
t = 67768036163857367
gmtime: 1 2147483647
t = 67768036195393367
gmtime: 0 0

Arne
seasoned_geek
2017-02-13 20:35:13 UTC
Permalink
Hello Hoff,

Thank you for taking time out of your busy day.
Post by Stephen Hoffman
but then the C++ code doesn't seem compile on the local
OpenVMS Alpha with the cited commands. There might be a DCL symbol
for CXX in use here?
No, I posted 2 sets of code. The CXX compiler version was too old to support std::cout and std::endl; it required plain cout and endl instead.

$ type max_time_test.cpp
// max_time_test.cpp
#include <limits>
#include <iostream>
#include <time.h>
#include <string.h>
#include <stdio.h>

int main()
{
time_t maxTime = 0;
cout << "maxTime is " << maxTime << endl;
printf( "maxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);

maxTime = std::numeric_limits<time_t>::max();
cout << "maxTime is " << maxTime << endl;
printf( "\nmaxTime %s\n", ctime(&maxTime));
printf( "asctime() %s\n", asctime(gmtime(&maxTime)));
printf( "Year: %d\n", gmtime(&maxTime)->tm_year+1900);

return 0;
}
$ cxx/ver
HP C++ V7.1-015 for OpenVMS Alpha V8.3

$ type tst_size.cpp
// tst_size.cpp
//
#include <iostream>
#include <time.h>
#include <limits>
int main()
{
cout << "sizeof(int) " << sizeof(int) << endl;
cout << "sizeof(long) " << sizeof(long) << endl;
cout << "sizeof(unsigned long) " << sizeof(unsigned long) << endl;
cout << "sizeof(time_t) " << sizeof(time_t) << endl;
cout << "int max " << std::numeric_limits<int>::max() << endl;
cout << "long max " << std::numeric_limits<long>::max() << endl;
cout << "time_t max " << std::numeric_limits<time_t>::max() << endl;
cout << "unsigned long max " << std::numeric_limits<unsigned long>::max() << endl;
return 0;
}
Post by Stephen Hoffman
If the code the posted example was derived from was trying to determine
the maximum supported date as it might appear, I'm not aware of a good
(standards-compliant) way to do that.
The posted example code is making an assumption — that max can be
specified as input to these time routines — that is not permissible
within C, and the example code is correctly getting an error. The
TL;DR: time_t can be as big as you want, but the value must deconstruct
into the fields in the tm struct specified by the C standard. If the
input time_t doesn't fit into the int-length fields in the tm struct,
any reasonable gmtime call should return a null with errno set.
Therein lies our professional difference of opinion. Prior art treated the standards-compliant method of testing this as

std::numeric_limits<time_t>::max()

While it may not have been the original intention, since C++98 it has been in the field and exists as a suggestion and example in many places. It became institutionalized, and logically it made sense: when one requests the limits of a declared type, one really wants the limits of the declared type, not of the physical repository.

Were the C/C++ standards to create other declared types such as weekday_t and month_t, would returning a max() for their underlying implementation types be of any logical use?

Yes, numeric_limits is currently defined as a template which complicates things, but it has been modified before. In C++98 we only had wchar_t. In C++11 we got char16_t and char32_t.

Given that, it is not unreasonable to expect numeric_limits to be specialized for time_t until struct tm and everything which uses it are updated to handle the complete range. We can programmatically generate the current maximum struct-tm-supported value of time_t, so it never needs to be anything hard coded.
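
A minimal sketch of that generation, assuming nothing beyond gmtime()
returning NULL for a value it cannot convert:

// tm_max_probe.cpp -- binary-search for the largest time_t that
// still deconstructs into a struct tm on the local implementation.
#include <iostream>
#include <limits>
#include <time.h>

time_t max_tm_time()
{
    time_t lo = 0;
    time_t hi = std::numeric_limits<time_t>::max();
    if (gmtime(&hi) != NULL)
        return hi;                 // the whole range converts
    while (lo < hi)
    {
        time_t mid = lo + (hi - lo) / 2 + 1;
        if (gmtime(&mid) != NULL)
            lo = mid;              // mid converts; search higher
        else
            hi = mid - 1;          // mid overflows tm; search lower
    }
    return lo;
}

int main()
{
    std::cout << "max struct tm supported time_t: "
              << max_tm_time() << std::endl;
    return 0;
}
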
Arne Vajhøj
2017-02-13 22:28:05 UTC
Permalink
Post by seasoned_geek
Post by Stephen Hoffman
If the code the posted example was derived from was trying to determine
the maximum supported date as it might appear, I'm not aware of a good
(standards-compliant) way to do that.
The posted example code is making an assumption — that max can be
specified as input to these time routines — that is not permissible
within C, and the example code is correctly getting an error. The
TL;DR: time_t can be as big as you want, but the value must deconstruct
into the fields in the tm struct specified by the C standard. If the
input time_t doesn't fit into the int-length fields in the tm struct,
any reasonable gmtime call should return a null with errno set.
Therein lies our professional difference of opinion. Prior art
treated the standards-compliant method of testing this as
std::numeric_limits<time_t>::max()
It is not standards-compliant, as the standards do not guarantee it to
provide a time_t that can be used by localtime and gmtime.
Post by seasoned_geek
While it may not have been the original intention, since C++98 it has
been in the field and exists as a suggestion and example in many
places. It became institutionalized, and logically it made sense:
when one requests the limits of a declared type, one really wants the
limits of the declared type, not of the physical repository.
Were the C/C++ standards to create other declared types such as
weekday_t and month_t, would returning a max() for their underlying
implementation types be of any logical use?
The C++ standard specifies which types numeric_limits has to provide
specializations for.

No programmer should expect any other specializations unless
verified to be present in a specific implementation (and note
that in that case the code is not portable).
Post by seasoned_geek
Yes, numeric_limits is currently defined as a template which
complicates things, but it has been modified before. In C++98 we only
had wchar_t. In C++11 we got char16_t and char32_t.
The C++ standard specifies that numeric_limits specializations exist
for those.

That is easy because those are not typedefs in C++ (they may be
and frequently are typedefs in C).
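
A short illustration of the typedef point (this assumes glibc on
x86-64, where time_t is a typedef for long):

// Since time_t is a typedef here, numeric_limits<time_t> is not a
// distinct specialization; it simply resolves to numeric_limits<long>.
#include <iostream>
#include <limits>
#include <type_traits>
#include <time.h>

int main()
{
    std::cout << std::boolalpha
              << std::is_same<time_t, long>::value << std::endl   // true here
              << (std::numeric_limits<time_t>::max() ==
                  std::numeric_limits<long>::max()) << std::endl; // true here
    return 0;
}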

Arne
Stephen Hoffman
2017-02-13 23:06:27 UTC
Permalink
Post by seasoned_geek
Therein lies our professional difference of opinion. Prior art
treated the standards-compliant method of testing this as
std::numeric_limits<time_t>::max()
While it may not have been the original intention, since C++98 it has
been in the field and exists as a suggestion and example in many
places. It became institutionalized, and logically it made sense:
when one requests the limits of a declared type, one really wants the
limits of the declared type, not of the physical repository.
Were the C/C++ standards to create other declared types such as
weekday_t and month_t, would returning a max() for their underlying
implementation types be of any logical use?
Yes, numeric_limits is currently defined as a template which
complicates things, but it has been modified before. In C++98 we only
had wchar_t. In C++11 we got char16_t and char32_t.
Given that, it is not unreasonable to expect numeric_limits to be
specialized for time_t until struct tm and everything which uses it
are updated to handle the complete range. We can programmatically
generate the current maximum struct-tm-supported value of time_t, so
it never needs to be anything hard coded.
Your code will break if you aim the same input at the underlying C
routines on OpenVMS, too.

The example code isn't manifesting the misbehavior because of
implementation differences between the hybrid 32-bit time_t (on
OpenVMS) and the 64-bit one (elsewhere). (The code doesn't link with
ANSI enabled, either.)

Whether or not passing 64-bit time_t into these calls is reasonable
behavior or not is interesting, but it's also academic as the posted
code itself does not comply with the C standards.

Given the current design will continue to work for over a billion
years, I'm not sure whether anyone will be interested in introducing a
parallel API or breaking source compatibility with a tm structure
change, either.

I will hopefully not be continuing to program in C++ in a billion
years, nor do I hope to be tending to billion-year-old legacy code,
nor to be dealing with the existing 32-64-bit hybrid API design of
OpenVMS until then.
--
Pure Personal Opinion | HoffmanLabs LLC
seasoned_geek
2017-02-16 14:12:52 UTC
Permalink
Hoff,

First off I must apologize to you and the newsgroup for having responded to Arne. I have asked him before not to chime in on my threads because he _always_ pulls a conversation deep into the weeds, away from everything which actually matters. Such must be the purpose of academia.

This mortal sin will require a heartfelt act of contrition followed by 40 days of penance.

I wanted to respond to both yourself and Johnny, but after that I am done with this conversation and will file a bug report with GNU.
Post by Stephen Hoffman
Your code will break if you aim the same input at the underlying C
routines on OpenVMS, too.
Struggling a bit with this statement. On my VMS version time_t is defined to be 32 bits and the underlying C routines are prototyped to accept a time_t. I'm not sure a 64-bit value could even be fed in; it should produce a compiler warning, truncation, or both.
Post by Stephen Hoffman
Whether or not passing 64-bit time_t into these calls is reasonable
behavior or not is interesting, but it's also academic as the posted
code itself does not comply with the C standards.
I sooooo wish I had not responded to Arne. He pulled this off into VMS/ANSI C not checking blah blah blah, when this was a short test program that was never intended to be bulletproof. I've never compiled with /ANSI on a VMS box because no client has ever wanted it. They all want the non-ANSI VMS extensions used.

As a result it has taken what feels like several hundred messages to get to

====
Whether or not passing 64-bit time_t into these calls is reasonable behavior
====

Not academic. It's a reality.
Post by Stephen Hoffman
I will hopefully not be continuing to program in C++ in a billion
years, nor do I hope to be tending to billion-year-old legacy code, nor
with dealing with the existing 32-64-bit hybrid API design of OpenVMS
until then.
We will all be victimized by the bug shortly. It is not the first time GNU has bludgeoned forward utilizing the criminal concept of Agile only to have to go back and sweep up numeric_limits afterward, and as long as they are using the criminal methodology called Agile it won't be the last.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40856

Since the C++98 standard rolled out, ALL 32-bit time_t values could be converted into struct tm. Not all struct tm values could be converted into time_t, due to the limited range of time_t, but the time_t-to-struct tm direction worked perfectly and became part of computing lore.
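
A minimal check of that claim, assuming a signed 32-bit epoch value:

// the largest signed 32-bit time_t deconstructs without complaint
#include <stdio.h>
#include <time.h>

int main()
{
    time_t t = 2147483647;        // 2^31 - 1 seconds past the epoch
    struct tm *parts = gmtime(&t);
    if (parts != NULL)
        printf("Year: %d\n", parts->tm_year + 1900);   // prints 2038
    return 0;
}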

According to this link this hack even broke Boost.

http://www.boost.org/doc/libs/1_55_0/libs/spirit/optimization/high_resolution_timer.hpp

double elapsed_max() const // return estimated maximum value for elapsed()
{
// relies on numeric_limits<time_t>::max() being a usable time_t value
return double((std::numeric_limits<time_t>::max)() - start_time.tv_sec);
}
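
With a 64-bit time_t that subtraction is on the order of 2^63
seconds, so elapsed_max() silently reports an astronomically large,
meaningless ceiling instead of failing.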


There are thousands, possibly millions of programs out in the field relying on max() returning a value which is within the bounds of struct tm. Most of the developers involved, if they are still alive, won't even know about this hack let alone that they've been victimized by it. Most of the people using things like Boost never look at the underlying code, just at what each thing claims to do.

While Johnny calls the switch to unsigned a hack, it was at least a legitimate hack of good intent. The 64-bit time_t hack came without warning or even the tiniest bit of architecting.

Today we have absolutely zero computer programming magazines worth reading, let alone subscribing to. Back in the day we had "Programmers Journal", "Dr. Dobb's", "Computer Language", and "The C/C++ User's Journal" before P.J. Plauger sold his soul to Satan. For a while we even had Rex Jaeschke putting out that blue 2-staple pamphlet covering the C/C++ standards committee activities.

Back then we expected rather sweeping and poorly thought out changes. Remember Trigraphs? Yikes! At least they go away with C++ 17.

Anyway, there were ample methods of communication. Today even "Dr. Dobb's" isn't published. Yes, it supposedly got merged with "Information Week", but that magazine was never focused on coding or coding standards. More of a what-bs-can-we-sell-to-management type magazine.
Stephen Hoffman
2017-02-16 15:58:15 UTC
Permalink
Post by seasoned_geek
Not academic. It's a reality.
The posted example code is fundamentally broken.

Pass out-of-range data into a system call, get an error.

The GNU folks will likely (and correctly) close this bug with "user
error", too.
--
Pure Personal Opinion | HoffmanLabs LLC
Johnny Billquist
2017-02-16 21:30:27 UTC
Permalink
Post by Stephen Hoffman
Post by seasoned_geek
Not academic. It's a reality.
The posted example code is fundamentally broken.
Pass out-of-range data into a system call, get an error.
The GNU folks will likely (and correctly) close this bug with "user
error", too.
Well, even more to the point. If a programmer uses gmtime(), he should
read the documentation for gmtime(), and use it according to that
documentation.
And the documentation clearly states that if there is any kind of error,
gmtime() will return NULL, and set errno accordingly.

Anyone who writes code, and ignores information in the documentation for
a function, just gets what he deserves.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
Arne Vajhøj
2017-02-16 16:56:31 UTC
Permalink
Post by seasoned_geek
Post by Stephen Hoffman
Whether or not passing 64-bit time_t into these calls is reasonable
behavior or not is interesting, but it's also academic as the
posted code itself does not comply with the C standards.
I sooooo wish I had not responded to Arne. He pulled this off into
VMS/ANSI C not checking blah blah blah when this was a short test
program which had no intention of being bullet proof.
Lots of code gets posted to c.o.v which is far from bullet proof.

That is perfectly fine if not being bullet proof does not
have any significance for the topic.

But given that the original topic was code that crashed due
to a pointer being NULL, the missing NULL check of the return value
is extremely significant.
Post by seasoned_geek
Post by Stephen Hoffman
I will hopefully not be continuing to program in C++ in a billion
years, nor do I hope to be tending to billion-year-old legacy code,
nor with dealing with the existing 32-64-bit hybrid API design of
OpenVMS until then.
We will all be victimized by the bug shortly.
Why?

The change was made years ago. The world continued.

And a billion years takes a long time.
Post by seasoned_geek
It is not the first
time GNU has bludgeoned forward utilizing the criminal concept of
Agile
I don't think glibc uses agile at all.

Common open source methodologies share a lot with agile, but
it is practically impossible for them to follow all 12
agile principles.
Post by seasoned_geek
Since the C++98 standard rolled out, ALL 32-bit time_t values could
be converted into struct tm.
The C++ standard has very little to do with this. time_t and struct tm
are both C and POSIX/SUS not C++.

But a 32 bit time_t can be stored in struct tm if int has more than
11 bits (time_t being signed) or 12 bits (time_t being unsigned).

You claimed previously in this thread to have seen systems with
an 8-bit int.

(I have never seen such a thing, but there is a lot I have not seen
even though it exists.)
Post by seasoned_geek
There are thousands, possibly millions of programs out in the field
relying on max() returning a value which is within the bounds of
struct tm.
I am not so sure about that.

It is bad code.

And it is also a weird mix of C++ and C constructs that many would
try to avoid.
Post by seasoned_geek
Most of the developers involved, if they are still alive,
Most probably are. C++ numeric_limits are not that old.
Post by seasoned_geek
won't even know about this hack let alone that they've been
victimized by it.
Most developers writing bad code are not aware that it is bad
code - otherwise they would not have written it.
Post by seasoned_geek
While Johnny calls the switch to unsigned a hack, it was at least a
legitimate hack of good intent. The 64-bit time_t hack came without
warning or even the tiniest bit of architecting.
It is not really an architectural problem.

Arne
Arne Vajhøj
2017-02-16 18:59:12 UTC
Permalink
Post by Arne Vajhøj
But a 32 bit time_t can be stored in struct tm if int has more than
11 bits (time_t being signed) or 12 bits (time_t being unsigned).
Well - that was a totally crap analysis.

1900

:-(

Arne
Johnny Billquist
2017-02-13 10:21:16 UTC
Permalink
Post by seasoned_geek
Post by Stephen Hoffman
"The range and precision of times representable in clock_t and time_t
are implementation-defined."
Which really changes nothing. Over on various Ubuntu forums they are telling people they have "fixed" the 2038 bug when they have not. Changing the data type of time_t without changing the corresponding ability to display it in human form created a more vicious bug. An unsigned integer will wrap to the epoch on overflow; while oddities may happen, there will not be exceptions or hard errors and nulls returned for strings.
The 2038 problem has been fixed. There are still problems, yes, both
in Ubuntu and VMS. But while for VMS the problem is now at 2106,
Ubuntu pushed the problem out about 2 billion years. The funny thing
is that you are arguing that it was better to only push it out to
2106 than 2 billion years. It seems like you're the one arguing that
the quick and dirty fix is the better one.

I should probably point out that this is not Ubuntu specific, by the
way. This is pretty much anything except VMS.
Post by seasoned_geek
GNU has introduced a bug and the Linux world is crowing about no longer having a problem. The bulk of my work over the past decade has been embedded systems which will have multi-decade service lifespans and could cost lives if badness happens.
It was not GNU who did this. Blame Posix, if you want to blame someone.

Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: ***@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
seasoned_geek
2017-02-13 21:08:43 UTC
Permalink
Post by Johnny Billquist
Post by seasoned_geek
GNU has introduced a bug and the Linux world is crowing about no longer having a problem. The bulk of my work over the past decade has been embedded systems which will have multi-decade service lifespans and could cost lives if badness happens.
It was not GNU who did this. Blame Posix, if you want to blame someone.
I will blame GNU for implementation failure. Bumping time_t to 64-bit should have also had a numeric_limits specialization.
Arne Vajhøj
2017-02-13 22:09:59 UTC
Permalink
Post by seasoned_geek
Post by Johnny Billquist
Post by seasoned_geek
GNU has introduced a bug and the Linux world is crowing about no
longer having a problem. The bulk of my work over the past decade
has been embedded systems which will have multi-decade service
lifespans and could cost lives if badness happens.
It was not GNU who did this. Blame Posix, if you want to blame someone.
I will blame GNU for implementation failure. Bumping time_t to
64-bit should have also had a numeric_limits specialization.
Why?

time_t is a concept in the C and OS space.

The people working on that do not update C++ STL headers.

And as I have already mentioned a couple of times, I am not even sure
that it is possible to do a specialization on a typedef.

Arne