[...]
Post by Syren Baran
http://en.wikipedia.org/wiki/CLDC#Typical_requirements
Choose for yourself if you consider that limited or not.
See the (second) explanation below of what I was referring to (which
you have now completely ignored for the second time).
Post by Syren Baran
And I meant (still) 'less RAM than would be needed for the system
to work without OOM-caused failures'. The difference would be the
difference between 'enough' and 'not enough'. I was arguably
imprecise when selecting the initial wording, but since I have now
explained this to you twice, you can hardly claim that you still
misunderstand me by accident.
And there again there was no reason to check memory allocations (not
mallocs, obviously) since the total amount of RAM was well known and
the underlying OS wasn't even multi-tasking capable.
This doesn't refer to my text above anyhow.
[...]
Post by Syren Baran
Post by Rainer Weikusat
Post by Syren Baran
You argue that malloc has to be checked while allocating arrays in
stack space is fine.
This is wrong. I have argued that the caller should be
able to allocate buffers in some convenient location the callee
cannot possibly know about.
And my argument for not doing so has remained the same from the
start. I prefer to dynamically allocate the memory rather than check for
overruns every time.
There is no reference to the caller/callee issue I mentioned in your
text. Independently of this, it doesn't make sense: a size is needed to
dynamically allocate a block of memory, and computing this size to use
it for an allocation isn't any different from computing the same size
and using it to compare with some other size. The caller can, of
course, dynamically allocate the buffer somewhere, too, and interfaces
enabling this, e.g. snprintf, have already been mentioned in this text.
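To illustrate the point about sizes: computing a size for an allocation
is exactly the same work as computing it for a bounds check. A minimal
sketch using C99 snprintf, where the caller sizes and allocates the
buffer itself (format_pair is a hypothetical helper name, not anything
from this thread):

```c
#include <stdio.h>
#include <stdlib.h>

/* The caller computes the required size via snprintf(NULL, 0, ...)
   and then allocates the buffer wherever is convenient for it. */
char *format_pair(const char *key, int value)
{
    int n = snprintf(NULL, 0, "%s=%d", key, value);
    if (n < 0)
        return NULL;
    char *buf = malloc((size_t)n + 1);
    if (buf == NULL)
        return NULL;    /* allocation failure is reported, not ignored */
    snprintf(buf, (size_t)n + 1, "%s=%d", key, value);
    return buf;
}
```

The callee never needs to know where the buffer lives; it only needs
the size, which the caller had to compute in either scheme.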
Post by Syren Baran
and (completely independent of this), that the return values of
calls which can fail, specifically, malloc, need to be checked.
Ok, so let's assume a malloc(1) fails. What does this mean?
The OS cannot allocate a further page for the heap. Further
program flow is no longer well defined, since every function
call might require allocating a page for the stack.
Is the above TRUE or FALSE?
It's trash. E.g., on Linux 2.4, some mallocs fail when the maximum
amount of sbrk-able space has been exceeded, i.e. the break has been
moved up to the so-called 'unmapped base' of a task, despite the fact
that there is still plenty of free RAM available and that the stack,
sitting in a completely different part of the address space, can still
be expanded. The amount of stack space needed is bounded by the maximum
possible call-chain depth of a program, including all automatic
allocations needed along this call chain. The memory needed for this is
shared among all possible call chains. This means that once the maximum
call-chain depth has been reached once, the stack will not grow
anymore. Depending on the application, stacks possibly don't grow at
all, but are preallocated (multi-threading). The Linux kernel can, for
instance, be configured to use only a 4K kernel stack per thread and
doesn't have any stack-overrun checking (AFAIK), suggesting that the
requirements for stack memory are really modest, even in 'large'
programs. That they actually are can be verified, e.g., by running pmap
on a few long-running processes on some Linux systems.
Ignoring the factual errors in your statement: that operation X could
fail, too, is not an argument for ignoring possible failures of
operation Y. And we were discussing Y, not X.
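Whatever the kernel may or may not do about stack growth, a failed
malloc itself is a perfectly well-defined event the caller can observe
and report. A minimal sketch of the kind of check under discussion
(the function name is mine, chosen for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a string, reporting allocation failure to the caller
   instead of crashing later on a null pointer dereference. */
char *dup_or_null(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = malloc(len);

    if (copy == NULL) {
        /* malloc failed: program flow is still perfectly defined
           here, the error is simply passed up to the caller. */
        return NULL;
    }
    memcpy(copy, s, len);
    return copy;
}
```

Nothing about a possible later stack-growth failure makes this check
less meaningful; the two failure modes are independent.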
Post by Syren Baran
After all, you want the discussion to lead to the discovery of the truth.
And the truth is that you have no clue about what you are talking
about. Not that anybody could possibly have guessed that ...
Post by Syren Baran
Post by Rainer Weikusat
Your counterargument against the latter was 'this would
hardly be feasible in a large application'
" Checking every single malloc in a bigger application for possible
solutions to ENOMEM's is hardly feasible."
I stand by that.
Checking a large malloc, sure. Checking a small malloc,
bullshit. Rather catch SIGSEGV. That adds less overhead than a wrapper
function for malloc.
The behaviour of a program which has caught SIGSEGV and whose signal
handler has returned is undefined on UNIX(*). Restating that you would
rather rely on the availability of RAM and accept that the program may
'crash' when it isn't available doesn't make this a more sensible
strategy for coping with an actual resource shortage (remember the
scheduled retry) than it was last time.
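The 'scheduled retry' mentioned above could look like the sketch
below; the wrapper name, retry count, and one-second delay are my own
illustrative assumptions, not anything prescribed in this thread:

```c
#include <stdlib.h>
#include <unistd.h>

/* Retry a failed allocation a few times before giving up: one
   well-defined way to react to memory shortage, as opposed to
   catching SIGSEGV and hoping for the best. */
void *retry_malloc(size_t size, unsigned tries)
{
    void *p;

    while ((p = malloc(size)) == NULL && tries-- > 0)
        sleep(1);   /* wait in the hope that memory is freed elsewhere */
    return p;
}
```

On success the extra cost over a bare malloc is a single pointer
comparison, i.e. less overhead than handling a signal.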
Post by Syren Baran
Post by Rainer Weikusat
(depending on the person
using this adjective, this can mean anything between 100 and n * 10^6
lines of code) and I assume that the less-than-complimentary "I (here
referring to you) am much too lazy to write all this text" won't be
much 'off base' as an alternate description of the same phenomenon.
Since you couldn't possibly get away with that if there were a 'high'
chance of malloc actually failing, I further concluded that you have
never really been forced to deal with this possibility.
Ok, a high chance of malloc failing means allocating a "large" amount
of memory, right?
Why would it?
Post by Syren Baran
Post by Rainer Weikusat
Assuming this as given, your 'judgement' in this respect can hardly
be trusted, IOW 'there will always be an Eskimo knowing exactly
how the inhabitants of the Kongo should deal with heat waves'.
I am not aware of any Inuit who have written their doctoral thesis in
meteorology on that topic. But since I have no knowledge of that field
of study, I can neither refute nor confirm that.
Translating this into simpler language: Opinions are like
a*holes. Everybody has one.
Got it this time?