Discussion:
More on C++ stack arrays
bearophile
2013-10-20 14:25:35 UTC
Permalink
More discussions about variable-sized stack-allocated arrays in
C++; it seems there is not yet a consensus:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf

I'd like variable-sized stack-allocated arrays in D.

Bye,
bearophile
Adam D. Ruppe
2013-10-20 15:50:18 UTC
Permalink
Post by bearophile
I'd like variable-sized stack-allocated arrays in D.
I think I would too, though it'd be pretty important, at least
for @safe, to get scope working right.

Ideally, the stack allocated array would be a different type than
a normal array, but offer the slice operator, perhaps on alias
this, to give back a normal T[] in a scope storage class (the
return value could only be used in a context where references
cannot escape).

This way, the owner is clear and you won't be accidentally
storing it somewhere.
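A minimal sketch of what such a wrapper type could look like (the names
and details here are made up; the interesting part, statically stopping
the returned slice from escaping, is exactly what needs a working scope):

---
struct StackArray(T, size_t capacity)
{
    private T[capacity] storage;
    private size_t len;

    this(size_t n)
    {
        assert(n <= capacity);
        len = n;
    }

    // hands back an ordinary T[]; with scope working right, this
    // return value could be restricted to non-escaping uses
    T[] opSlice()
    {
        return storage[0 .. len];
    }
}

void example()
{
    auto buf = StackArray!(int, 64)(10);
    int[] view = buf[];   // usable locally, the owner stays obvious
}
---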



An alternative to a stack allocated array would be one made from
a thread-local region allocator, which returns a Unique!T or
similar, which frees it when it goes out of scope. Such an
allocator would be substantially similar to the system stack,
fast to allocate and free, although probably not done in
registers and perhaps not as likely to be in cpu cache. But that
might not matter much anyway, I don't actually know.
Lionello Lunesu
2013-10-20 16:18:08 UTC
Permalink
More discussions about variable-sized stack-allocated arrays in C++, it
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
Bye,
bearophile
Good read, but many of the problems don't apply to D ;)

The problem is that it'll probably be like using alloca, which doesn't
get cleaned up until after the function exits. Using it within a loop is
bound to cause a stack overflow. I wonder if there's something we can do
to 'fix' alloca in that respect.

L.
Walter Bright
2013-10-20 16:33:36 UTC
Permalink
More discussions about variable-sized stack-allocated arrays in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.

Just use:

auto a = new T[n];

Stack allocated arrays are far more trouble than they're worth. But what about
efficiency? Here's something along the lines of what I often do:

T[10] tmp;
T[] a;
if (n <= 10)
    a = tmp[0..n];
else
    a = new T[n];
scope (exit) if (a != tmp) delete a;

The size of the static array is selected so the dynamic allocation is almost
never necessary.
Andrei Alexandrescu
2013-10-20 16:57:03 UTC
Permalink
Post by Walter Bright
Stack allocated arrays are far more trouble than they're worth. But what
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic allocation is
almost never necessary.
Fallback allocators will make it easy to define an allocator on top of a
fixed array, backed by another allocator when capacity is exceeded. BTW
I'm scrambling to make std.allocator available for people to look at and
experiment with.
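A rough sketch of that idea with made-up names (not the actual
std.allocator API): try an in-place fixed buffer first and fall back to
another allocator, here simply the GC, when it runs out.

---
struct InSituWithFallback(size_t capacity)
{
    private ubyte[capacity] buffer;
    private size_t used;

    // no alignment handling; sketch only
    void[] allocate(size_t n)
    {
        if (used + n <= capacity)
        {
            auto p = buffer[used .. used + n];
            used += n;
            return p;
        }
        return new ubyte[n];   // capacity exceeded: use the backing allocator
    }
}
---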


Andrei
Joseph Rushton Wakeling
2013-10-20 19:15:00 UTC
Permalink
Fallback allocators will make it easy to define an allocator on top of a fixed
array, backed by another allocator when capacity is exceeded. BTW I'm scrambling
to make std.allocator available for people to look at and experiment with.
Great to hear, I'm looking forward to seeing that. :-)
Namespace
2013-10-20 16:56:49 UTC
Permalink
Post by Walter Bright
Post by bearophile
More discussions about variable-sized stack-allocated arrays
in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're worth.
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
But delete is deprecated. ;)
Walter Bright
2013-10-20 17:10:50 UTC
Permalink
Post by Namespace
But delete is deprecated. ;)
I know. But I wanted to show where to put the free, in the case where you're
doing manual allocation.
Adam D. Ruppe
2013-10-20 16:58:26 UTC
Permalink
Post by Walter Bright
Stack allocated arrays are far more trouble than they're worth.
But what about efficiency? Here's what I often do something
Aye, that's a pretty good solution too.
Post by Walter Bright
scope (exit) if (a != tmp) delete a;
but I think you meant if(a is tmp) :)

Though, even that isn't necessarily right since you might use a
to iterate through it (e.g. a = a[1 .. $]), so I'd use a separate
flag variable for it.
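A sketch of that variant: a separate reference (instead of a bool flag)
serves the same purpose and keeps the delete pointed at the original
block even if a gets re-sliced (delete is kept here only to mirror the
code above):

---
T[10] tmp;
T[] a;
T[] heapBlock;                 // remembers the heap allocation, if any
if (n <= 10)
    a = tmp[0 .. n];
else
    a = heapBlock = new T[n];
scope (exit) if (heapBlock !is null) delete heapBlock;
---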
bearophile
2013-10-20 17:46:12 UTC
Permalink
Post by Walter Bright
auto a = new T[n];
Sometimes I don't want to do that.
Post by Walter Bright
Stack allocated arrays are far more trouble than they're worth.
I don't believe that.
Post by Walter Bright
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
That's 7 lines of bug-prone code that uses deprecated
functionality and sometimes over-allocates on the stack. And I
think you have to compare just the .ptr of those arrays at the
end. And if you return one of those arrays you will produce
nothing good. And what if you need 2D arrays? The code becomes
even more complex. (You can of course create a matrix struct for
that).

Dynamically sized stack allocated arrays are meant to solve all
those problems: to offer a nice, compact, clean, easy to remember
and safe syntax; to be usable for 2D arrays too; and when you
pass or return one of them the data is copied by the compiler to
the heap (sometimes this doesn't happen, if the optimizing
compiler allocates the array in the stack frame of the caller, as
is sometimes done for structs).

D dynamic array usage should decrease, and D should much more
strongly encourage the use of small stack-allocated arrays. This
is what languages such as Ada and Rust teach us. Heap allocation
of arrays should be much less common, almost a special case.

----------------
Post by Andrei Alexandrescu
Fallback allocators will make it easy to define an allocator on
top of a fixed array,

This over-allocates on the stack, and sometimes needlessly
allocates on the heap or in an arena. Dynamic stack arrays avoid
those downsides.

Bye,
bearophile
Froglegs
2013-10-20 18:17:40 UTC
Permalink
One of my most anticipated C++14 features actually, hope they
don't dawdle too much with the TS it apparently got pushed back
into:(
Walter Bright
2013-10-20 18:42:05 UTC
Permalink
That's 7 lines of bug-prone code that uses a deprecated functionality and
sometimes over-allocates on the stack. And I think you have to compare just the
.ptr of those arrays at the end. And if you return one of such arrays you will
produce nothing good. And what if you need 2D arrays? The code becomes even more
complex. (You can of course create a matrix struct for that).
to offer a nice, compact, clean, easy to remember and safe syntax. To be usable
for 2D arrays too; and when you pass or return one of them the data is copied by
the compiler on the heap (sometimes this doesn't happen if the optimizing
compiler allocates the array in the stack frame of the caller, as sometimes done
for structs).
If your optimizing compiler is that good, it can optimize "new T[n]" to be on
the stack as well.

I'm not particularly enamored with the compiler inserting silent copying to the
heap - D programmers tend to not like such things.
D dynamic array usage should decrease and D should encourage much more the usage
of small stack-allocated arrays. This is what languages as Ada and Rust teach
us. Heap allocation of arrays should be much less common, almost a special case.
Rust is barely used at all, and constantly changes. I saw a Rust presentation
recently by one of its developers, and he said his own slides showing pointer
stuff were obsolete. I don't think there's enough experience with Rust to say it
teaches us how to do things.
This over-allocates on the stack,
I use this technique frequently. Allocating a few extra bytes on the stack
generally costs nothing unless you're in a recursive function. Of course, if
you're in a recursive function, stack allocated dynamic arrays can have
unpredictable stack overflow issues.
and sometimes needlessly allocates on the heap
or in an arena. Dynamic stack arrays avoid those downsides.
The technique I showed is also generally faster than dynamic stack allocation.
David Nadlinger
2013-10-20 19:15:19 UTC
Permalink
Post by Walter Bright
If your optimizing compiler is that good, it can optimize "new
T[n]" to be on the stack as well.
Just a side note: LDC actually does this if it can prove
statically that the size is bounded. Unfortunately, the range
detection is rather conservative (unless your allocation size
turns out to be a constant due to inlining, LLVM is unlikely to
get it).

One idea that might be interesting to think about is to insert a
run-time check for the size if an allocation is known not to be
escaped, but the size is not yet determined. As a GC allocation
is very expensive anyway, this probably wouldn't even be much of
a pessimization in the general case.
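Hand-written, the lowering described here might look roughly like this
(the threshold and names are purely illustrative):

---
void process(size_t n)
{
    int[256] buf = void;       // illustrative stack threshold
    int[] a;
    if (n <= buf.length)
        a = buf[0 .. n];       // proven non-escaping, so the stack is safe
    else
        a = new int[n];        // too big: fall back to the GC
    // ... use a only within this function ...
}
---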
Post by Walter Bright
I'm not particularly enamored with the compiler inserting
silent copying to the heap - D programmers tend to not like
such things.
Well, this is exactly what happens with closures, so one could
argue that there is precedent. In general, I agree with you,
though.
Post by Walter Bright
I use this technique frequently. Allocating a few extra bytes
on the stack generally costs nothing unless you're in a
recursive function. Of course, if you're in a recursive
function, stack allocated dynamic arrays can have unpredictable
stack overflow issues.
I also find this pattern to be very useful. The LLVM support
libraries even package it up into a nice llvm::SmallVector<T, n>
template that allocates space for n elements inside the object,
falling back to heap allocation only if that threshold has been
exceeded (a tunable small string optimization, if you want).

David
Walter Bright
2013-10-20 19:27:47 UTC
Permalink
Post by Walter Bright
I'm not particularly enamored with the compiler inserting silent copying to
the heap - D programmers tend to not like such things.
Well, this is exactly what happens with closures, so one could argue that there
is precedent.
Not at all. The closure code does not *copy* the data to the heap. It is
allocated on the heap to start with.
I also find this pattern to be very useful. The LLVM support libraries even
package it up into a nice llvm::SmallVector<T, n> template that allocates space
for n elements inside the object, falling back to heap allocation only if that
threshold has been exceeded (a tunable small string optimization, if you want).
Nice!
Iain Buclaw
2013-10-21 16:24:21 UTC
Permalink
Post by Walter Bright
If your optimizing compiler is that good, it can optimize "new T[n]" to be
on the stack as well.
Just a side note: LDC actually does this if it can prove statically that the
size is bounded. Unfortunately, the range detection is rather conservative
(unless your allocation size turns out to be a constant due to inlining,
LLVM is unlikely to get it).
One idea that might be interesting to think about is to insert a run-time
check for the size if an allocation is known not to be escaped, but the size
is not yet determined. As a GC allocation is very expensive anyway, this
probably wouldn't even be much of a pessimization in the general case.
David, can you check the code generation of:

http://dpaste.dzfl.pl/3e333df6


PS: Walter, looks like the above causes an ICE in DMD?
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Walter Bright
2013-10-21 17:42:19 UTC
Permalink
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
All ICE's should be filed in bugzilla:

http://d.puremagic.com/issues/show_bug.cgi?id=11315
Iain Buclaw
2013-10-21 17:53:47 UTC
Permalink
Post by Walter Bright
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My
intention wasn't to find a bug in DMD though when I pasted that link.
;-)

I was more curious what LDC does if it stack allocates array literals
assigned to static arrays in that program. My guess is that the
dynamic array will get the address of the stack allocated array
literal, and its values will be lost after calling fill().

If so, this is another bug that needs to be filed and fixed.
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Timon Gehr
2013-10-21 20:24:20 UTC
Permalink
Post by Iain Buclaw
Post by Walter Bright
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My
intention wasn't to find a bug in DMD though when I pasted that link.
;-)
I was more curious what LDC does if it stack allocates array literals
assigned to static arrays in that program. My guess is that the
dynamic array will get the address of the stack allocated array
literal, and its values will be lost after calling fill().
If so, this is another bug that needs to be filed and fixed.
Why? AFAICS it is the expected behaviour in any case.
Iain Buclaw
2013-10-21 20:32:26 UTC
Permalink
Post by Timon Gehr
Post by Iain Buclaw
Post by Walter Bright
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My
intention wasn't to find a bug in DMD though when I pasted that link.
;-)
I was more curious what LDC does if it stack allocates array literals
assigned to static arrays in that program. My guess is that the
dynamic array will get the address of the stack allocated array
literal, and its values will be lost after calling fill().
If so, this is another bug that needs to be filed and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and
do a _d_arraycopy.
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
Timon Gehr
2013-10-21 20:41:06 UTC
Permalink
Post by Iain Buclaw
Post by Timon Gehr
Post by Iain Buclaw
Post by Walter Bright
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My
intention wasn't to find a bug in DMD though when I pasted that link.
;-)
I was more curious what LDC does if it stack allocates array literals
assigned to static arrays in that program. My guess is that the
dynamic array will get the address of the stack allocated array
literal, and its values will be lost after calling fill().
If so, this is another bug that needs to be filed and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and
do a _d_arraycopy.
This code:

int[] x;
int[3] y;

x = y = [1,2,3];

Is equivalent to this code:

int[] x;
int[3] y;

y = [1,2,3];
x = y; // <-- here

Are you saying the line marked with "here" should perform an implicit
allocation and copy the contents of y to the heap?
Iain Buclaw
2013-10-21 21:07:37 UTC
Permalink
Post by Timon Gehr
Post by Iain Buclaw
Post by Timon Gehr
Post by Iain Buclaw
Post by Walter Bright
Post by Iain Buclaw
http://dpaste.dzfl.pl/3e333df6
PS: Walter, looks the above causes an ICE in DMD?
http://d.puremagic.com/issues/show_bug.cgi?id=11315
I've told enough people to raise bugs in GDC to know this. My
intention wasn't to find a bug in DMD though when I pasted that link.
;-)
I was more curious what LDC does if it stack allocates array literals
assigned to static arrays in that program. My guess is that the
dynamic array will get the address of the stack allocated array
literal, and its values will be lost after calling fill().
If so, this is another bug that needs to be filed and fixed.
Why? AFAICS it is the expected behaviour in any case.
It's an assignment to a dynamic array, so it should invoke the GC and
do a _d_arraycopy.
int[] x;
int[3] y;
x = y = [1,2,3];
int[] x;
int[3] y;
y = [1,2,3];
x = y; // <-- here
Are you saying the line marked with "here" should perform an implicit
allocation and copy the contents of y to the heap?
In GDC, the allocation currently is:

y = [1,2,3]; // <--- here

So it is safe to not copy.

But yes. I think a GC memcopy should be occurring, as dynamic arrays
aren't passed by value, so are expected to last the lifetime of the
reference to the address.
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
David Nadlinger
2013-10-21 21:24:22 UTC
Permalink
Post by Iain Buclaw
But yes. I think a GC memcopy should be occurring, as dynamic arrays
aren't passed by value, so are expected to last the lifetime of the
reference to the address.
This doesn't produce a heap copy (neither according to the spec
nor to actual DMD/LDC behaviour):

---
void foo() {
    int[3] a;
    int[] b = a;
}
---

Thus, your example will not copy any data either, as due to
associativity, it is equivalent to an assignment to y followed by
an assignment of y to x. x simply is a slice of the
stack-allocated static array.

David
Iain Buclaw
2013-10-21 21:41:16 UTC
Permalink
Post by Iain Buclaw
But yes. I think a GC memcopy should be occurring, as dynamic arrays
aren't passed by value, so are expected to last the lifetime of the
reference to the address.
This doesn't produce a heap copy (neither according to the spec nor to
---
void foo() {
int[3] a;
int[] b = a;
}
---
Thus, your example will not copy any data either, as due to associativity,
it is equivalent to an assignment to y followed by an assignment of y to x.
x simply is a slice of the stack-allocated static array.
I know this, but it does deter me from changing gdc over to
stack-allocating array literals. :-)

I'll mull on it overnight.
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
David Nadlinger
2013-10-21 21:48:08 UTC
Permalink
Post by Iain Buclaw
I know this, but it does deter me against changing gdc over to
stack
allocating array literals. :-)
I'll mull on it over night.
There is no change in behaviour due to stack-allocating the
literal in the static array assignment (or just not emitting it
at all), at least if GDC correctly implements slice <- sarray
assignment. The dynamic array never "sees" the literal at all, as
shown by Timon.

In the general case (i.e. when assigned to dynamic arrays), you
obviously can't stack-allocate literals, but I don't think we
disagree here.
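Concretely, the distinction being described is:

---
int[3] y = [1, 2, 3];   // the literal may live on the stack (or be elided):
                        // its contents are copied into y's storage
int[] x = [1, 2, 3];    // here the literal itself must be GC-allocated,
                        // because x is a slice referring to it
---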

David
Iain Buclaw
2013-10-21 21:54:18 UTC
Permalink
Post by Iain Buclaw
I know this, but it does deter me against changing gdc over to stack
allocating array literals. :-)
I'll mull on it over night.
There is no change in behaviour due to stack-allocating the literal in the
static array assignment (or just not emitting it at all), at least if GDC
correctly implements slice <- sarray assignment. The dynamic array never
"sees" the literal at all, as shown by Timon.
In the general case (i.e. when assigned to dynamic arrays), you obviously
can't stack-allocate literals, but I don't think we disagree here.
That we do not. :o)
--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';
bearophile
2013-10-20 19:23:05 UTC
Permalink
Post by Walter Bright
If your optimizing compiler is that good, it can optimize "new
T[n]" to be on the stack as well.
That's escape analysis, and it fails as soon as you return the
array, unless you also analyze the caller and allocate in the
caller's stack frame; but this can't be done if the length of the
array is computed in the middle of the called function.

From what I've seen escape analysis is not bringing Java close to
D performance when you use 3D vectors implemented as small class
instances. We need something that guarantees stack allocation if
there's enough space on the stack.
Post by Walter Bright
I'm not particularly enamored with the compiler inserting
silent copying to the heap - D programmers tend to not like
such things.
An alternative solution is to statically require a ".dup" if you
want to return one of those arrays (so it becomes a normal dynamic
array). This makes the heap allocation visible.
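As a sketch in the hypothetical syntax under discussion, the rule would
read like this:

---
int[] f(size_t n)
{
    int[n] a;          // variable-sized stack array (proposed syntax)
    // return a;       // would be rejected: the stack data would escape
    return a.dup;      // explicit, visible heap copy
}
---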

Another solution is to copy the data to the stack frame of the
caller. But if you do this there are some cases where such an
array can't be placed (as in the C++ proposals), though this is
not too bad.
Post by Walter Bright
Rust is barely used at all,
Right, there is only an experimental compiler written in it, and
little more, like most of the compiler.

On the other hand, Ada has been around for a long time. And in the
Ada 2005 standard library they have added bounded containers, so
you can even allocate max-sized associative arrays on the stack
:-) This shows how much they care about avoiding the heap. I think
that generally Ada code allocates on the heap much less often than
D code does.
Post by Walter Bright
Allocating a few extra bytes on the stack generally costs
nothing unless you're in a recursive function.
If you over-allocate you are using more stack space than
necessary; this means you are moving away from cache-warm parts
of the stack to parts that are outside the L1 or L2 cache. This
costs you time. Saving stack space saves some run time.

Another problem is that D newbies and normal usage of D tend to
stick to the simplest coding patterns. Your coding pattern is
bug-prone even for you, and it's not what programmers will use in
casual D code. Stack allocation of (variable-sized) arrays should
become much simpler, otherwise most people in most cases will use
heap allocation. Such allocation is not silent, but it's no
better than the "silent heap allocations" discussed above.
Post by Walter Bright
Of course, if you're in a recursive function, stack allocated
dynamic arrays can have unpredictable stack overflow issues.
Unless you are using a segmented stack, as Go and Rust do.
Post by Walter Bright
The technique I showed is also generally faster than dynamic
stack allocation.
Do you have links to benchmarks?

Bye,
bearophile
Walter Bright
2013-10-20 19:42:30 UTC
Permalink
Post by bearophile
Post by Walter Bright
If your optimizing compiler is that good, it can optimize "new T[n]" to be on
the stack as well.
That's escape analysis,
Yes, I know :-)
Post by bearophile
and it returns a failure as soon as you return the
array, unless you also analyze the caller, and allocate in the caller stack
frame, but this can't be done if the length of the array is computed in the
middle of the called function.
Yes. I know you don't believe me :-) but I am familiar with data flow analysis
and what it can achieve.
Post by bearophile
Another problem is that D newbies and normal usage of D tends to stick to the
simplest coding patterns. Your coding pattern is bug-prone even for you
I haven't had bugs with my usage of it.
Post by bearophile
and it's not what programmers will use in casual D code. Stack allocation of (variable
sized) arrays should become much simpler, otherwise most people in most cases
will use heap allocation. Such allocation is not silent, but it's not better
than the "silent heap allocations" discussed above.
Post by Walter Bright
Of course, if you're in a recursive function, stack allocated dynamic arrays
can have unpredictable stack overflow issues.
Unless you are using a segmented stack as Go or Rust.
Segmented stacks have performance problems and do not interface easily with C
functions. Go is not known for high performance execution, and we'll see about Rust.
Post by bearophile
Post by Walter Bright
The technique I showed is also generally faster than dynamic stack allocation.
Do you have links to benchmarks?
No. But I do know that alloca() causes pessimizations in the code generation,
and it costs many instructions to execute. Allocating fixed size things on the
stack executes zero instructions.
Tove
2013-10-20 20:39:12 UTC
Permalink
Post by Walter Bright
No. But I do know that alloca() causes pessimizations in the
code generation, and it costs many instructions to execute.
Allocating fixed size things on the stack executes zero
instructions.
1) Alloca allows allocating in the parent context, which is
guaranteed to elide copying, without relying on a "sufficiently
smart compiler".

ref E stalloc(E)(ref E mem = *(cast(E*)alloca(E.sizeof)))
{
    return mem;
}

2) If only accessing the previous function parameter were
supported (which is just an arbitrary restriction), it would be
sufficient to create a helper function to implement VLAs.

3) Your "fixed size stack allocation" could be combined with
alloca also, in which case it likely would be faster still.
Nick Treleaven
2013-10-22 11:34:08 UTC
Permalink
Post by Tove
ref E stalloc(E)(ref E mem = *(cast(E*)alloca(E.sizeof)))
{
return mem;
}
Another trick is to use a template alias parameter for array length:

T[] stackArray(T, alias N)(void* m = alloca(T.sizeof * N))
{
    return (cast(T*)m)[0 .. N];
}

void main(string[] args)
{
    auto n = args.length;
    int[] arr = stackArray!(int, n)();
}

Note: The built-in length property couldn't be aliased when I tested
this, hence 'n'. Reference:

http://forum.dlang.org/post/aepqtotvkjyausrlsmad at forum.dlang.org
Bruno Medeiros
2013-10-22 12:26:02 UTC
Permalink
From what I've seen escape analysis is not bringing Java close to D
performance when you use 3D vectors implemented as small class
instances. We need something that guarantees stack allocation if there's
enough space on the stack.
If my recollection and understanding are correct, that's not due to a
limitation in the algorithm of Java's escape analysis itself, but
because Java arrays are allocated using a native call (even within the
Java bytecode layer, that is), and the escape analysis does not see
beyond any native call, even if it originates from a Java operation with
well-known semantics (with regards to escape analysis).
Therefore it can't elide the allocations... :/
--
Bruno Medeiros - Software Engineer
Paulo Pinto
2013-10-22 15:07:54 UTC
Permalink
Post by Bruno Medeiros
From what I've seen escape analysis is not bringing Java close to D
performance when you use 3D vectors implemented as small class
instances. We need something that guarantees stack allocation if there's
enough space on the stack.
If my recollection and understanding are correct, that's not due to a
limitation in the algorithm itself of Java's escape analysis, but
because Java arrays are allocated using a native call (even within the
Java bytecode layer that is), and the escape analysis does not see
beyond any native call. Even if it originates from a Java operation with
well-known semantics (with regards to escape analysis).
Therefore it can't elide the allocations... :/
Just thinking out loud, I would say it is JVM specific how much the
implementors have improved escape analysis.

--
Paulo
deadalnix
2013-10-22 17:51:48 UTC
Permalink
Post by Paulo Pinto
Just thinking out loud, I would say it is JVM specific how much
the implementors have improved escape analysis.
Even better, some do it even when escape analysis isn't proven,
just observed at runtime. If it turns out the JVM is wrong, the
object is moved to the heap at the escape point.
Paulo Pinto
2013-10-22 18:02:57 UTC
Permalink
Post by Paulo Pinto
Just thinking out loud, I would say it is JVM specific how much the
implementors have improved escape analysis.
Even better, some do it even when escape analysis isn't proven, just
observed at runtime. If it turns out the JVM is wrong, the object is
moved to the heap at the escape point.
Yep, I must confess I keep jumping between both sides of the fence about
the whole JIT vs AOT compilation, depending on the use case and
deployment scenario.

For example, as a language geek it was quite interesting to discover how
OS/400 has a kernel JIT with a bytecode-based userspace. Or that there
were Native Oberon ports that used JIT on module load for the whole OS,
instead of AOT. Only the boot loader, some critical drivers and the
kernel module were AOT.

--
Paulo
Jonathan M Davis
2013-10-21 00:59:14 UTC
Permalink
Post by Walter Bright
More discussions about variable-sized stack-allocated arrays in C++, it
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're worth. But what
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic allocation is almost
never necessary.
If that paradigm is frequent enough, it might be worth wrapping it in a
struct. Then, you'd probably get something like

StaticArray!(int, 10) tmp(n);
int[] a = tmp[];

which used T[10] if n was 10 or less and allocated T[] otherwise. The
destructor could then deal with freeing the memory.
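A minimal sketch of such a wrapper (hypothetical names, with the freeing
strategy left open):

---
struct StaticArray(T, size_t capacity)
{
    private T[capacity] fixed;
    private T[] heap;          // used only when n > capacity
    private size_t len;

    this(size_t n)
    {
        len = n;
        if (n > capacity)
            heap = new T[n];
    }

    // a destructor could release heap here once manual freeing is settled

    T[] opSlice()
    {
        return heap !is null ? heap : fixed[0 .. len];
    }
}

// usage along the lines suggested above:
//   auto tmp = StaticArray!(int, 10)(n);
//   int[] a = tmp[];
---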

- Jonathan M Davis
Walter Bright
2013-10-21 01:48:58 UTC
Permalink
Post by Jonathan M Davis
If that paradigm is frequent enough, it might be worth wrapping it in a
struct. Then, you'd probably get something like
StaticArray!(int, 10) tmp(n);
int[] a = tmp[];
which used T[10] if n was 10 or less and allocated T[] otherwise. The
destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's nascent allocator
design.
Manu
2013-10-21 10:30:42 UTC
Permalink
Post by Walter Bright
Post by Jonathan M Davis
If that paradigm is frequent enough, it might be worth wrapping it in a
struct. Then, you'd probably get something like
StaticArray!(int, 10) tmp(n);
int[] a = tmp[];
which used T[10] if n was 10 or less and allocated T[] otherwise. The
destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's nascent
allocator design.
I use this pattern all over the place.
I don't love it though. It doesn't feel elegant at all and it wastes stack
space, but it's acceptable, and I'd really like to see this pattern
throughout phobos, especially where strings and paths are concerned.
System interface functions that pass zero-terminated strings through to the
OS are the primary offender, needless garbage, those should be on the stack.

I like to use alloca too where it's appropriate. I'd definitely like if D
had a variable-sized static array syntax for pretty-ing alloca.
I thought about something similar using alloca via a mixin template, but
that feels really hacky!
Denis Shelomovskij
2013-10-21 11:24:03 UTC
Permalink
Post by Manu
System interface functions that pass zero-terminated strings through to
the OS are the primary offender, needless garbage, those should be on
the stack.
I like to use alloca too where it's appropriate. I'd definitely like if
D had a variable-sized static array syntax for pretty-ing alloca.
I thought about something similar using alloca via a mixin template, but
that feels really hackey!
No hacks needed. See `unstd.c.string` module from previous post:
http://forum.dlang.org/thread/lqdktyndevxfcewgthcj at forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com
--
Denis V. Shelomovskij
Manu
2013-10-21 13:04:38 UTC
Permalink
Post by Manu
System interface functions that pass zero-terminated strings through to
Post by Manu
the OS are the primary offender, needless garbage, those should be on
the stack.
I like to use alloca too where it's appropriate. I'd definitely like if
D had a variable-sized static array syntax for pretty-ing alloca.
I thought about something similar using alloca via a mixin template, but
that feels really hackey!
http://forum.dlang.org/thread/lqdktyndevxfcewgthcj at forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com
Super awesome! Phobos devs should be encouraged to use these in
non-recursive functions (particularly OS pass-through's).
dennis luehring
2013-10-21 14:04:22 UTC
Permalink
Post by Manu
Post by Manu
System interface functions that pass zero-terminated strings through to
Post by Manu
the OS are the primary offender, needless garbage, those should be on
the stack.
I like to use alloca too where it's appropriate. I'd definitely like if
D had a variable-sized static array syntax for pretty-ing alloca.
I thought about something similar using alloca via a mixin template, but
that feels really hackey!
http://forum.dlang.org/thread/lqdktyndevxfcewgthcj at forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com
Super awesome! Phobos devs should be encouraged to use these in
non-recursive functions (particularly OS pass-through's).
looks like Walter's solution - but cleaner

"...Implementation note:
For small strings tempCString will use stack allocated buffer, for
large strings (approximately 1000 characters and more) it will allocate
temporary one from unstd.memory.allocation.threadHeap..."

does that mean that tempCString reserves a minimum of 1000 bytes on the
stack, otherwise using the heap?

If so, I would prefer a template-based version where I can pass in the size.
Denis Shelomovskij
2013-10-21 15:26:24 UTC
Permalink
Post by dennis luehring
For small strings tempCString will use stack allocated buffer, for
large strings (approximately 1000 characters and more) it will allocate
temporary one from unstd.memory.allocation.threadHeap..."
does that mean that tempCString reserves minimum 1000 bytes on stack
else using heap?
if so i would prefer a template based version where i can put in the size
Yes, `tempCString` allocates `1024 * To.sizeof` bytes on the stack. Note
that it doesn't initialize the data, so it is an O(1) operation which will
just do a ~1 KiB move of the stack pointer. As a function stack frame can
easily eat 50-100 bytes, that is like 10-20 function calls. IIRC a typical
stack size is ~1 MiB, and `tempCString` isn't expected to be used in some
deep recursion or be used ~1000 times in one function.

So I'd prefer to change the default stack allocation size if needed and not
confuse the user with a manual choice.
--
Denis V. Shelomovskij
Wyatt
2013-10-21 16:25:20 UTC
Permalink
On Monday, 21 October 2013 at 15:26:33 UTC, Denis Shelomovskij
Post by Denis Shelomovskij
So I'd prefer to change default stack allocation size if needed
and not confuse user with manual choice.
Wouldn't it work to make it optional then? Something like this,
I think:
auto tempCString(To = char, From, Length = 1024)(in From[] str)
if (isSomeChar!To && isSomeChar!From);

Choosing a sane default but allowing specialist users an easy way
to fine-tune it for their needs while keeping the basic usage
simple is something I'd advocate for. (Personally, I think 1K
sounds quite high; I'd probably make it 256 (one less than the
max length of filenames on a whole bunch of filesystems)).

-Wyatt
Lionello Lunesu
2013-10-22 21:05:36 UTC
Permalink
On 21 October 2013 21:24, Denis Shelomovskij
System interface functions that pass zero-terminated strings through to
the OS are the primary offender, needless garbage, those should be on
the stack.
I like to use alloca too where it's appropriate. I'd definitely like if
D had a variable-sized static array syntax for pretty-ing alloca.
I thought about something similar using alloca via a mixin template, but
that feels really hackey!
http://forum.dlang.org/thread/lqdktyndevxfcewgthcj at forum.dlang.org?page=2#post-l42evp:241ok7:241:40digitalmars.com
Super awesome! Phobos devs should be encouraged to use these in
non-recursive functions (particularly OS pass-through's).
Careful! Alloca doesn't get cleaned up when used in loops!

foreach(t; 0..1000) { int[t] stack_overflow; }
Denis Shelomovskij
2013-10-23 19:36:18 UTC
Permalink
Post by Lionello Lunesu
Careful! Alloca doesn't get cleaned up when used in loops!
And I don't use `alloca`.
--
Denis V. Shelomovskij
Lionello Lunesu
2013-10-24 01:33:24 UTC
Permalink
Post by Denis Shelomovskij
Post by Lionello Lunesu
Careful! Alloca doesn't get cleaned up when used in loops!
And I don't use `alloca`.
Ah, indeed. I got your post mixed up with the one using alloca.
John Colvin
2013-10-23 21:30:24 UTC
Permalink
On Tuesday, 22 October 2013 at 21:05:33 UTC, Lionello Lunesu
Post by Lionello Lunesu
Careful! Alloca doesn't get cleaned up when used in loops!
scope(exit) works in a loop, so you can automatically clean it up
like that.

Destructors are also called on each iteration so RAII is an
option.
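For instance (sketch; note that alloca memory itself cannot be released
this way, only resources you manage yourself):

---
import core.stdc.stdlib : malloc, free;

void work()
{
    foreach (i; 0 .. 1_000)
    {
        auto p = cast(int*) malloc(int.sizeof * 100);
        scope (exit) free(p);      // runs at the end of every iteration
        // ... use p[0 .. 100] ...
    }
}
---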
Lionello Lunesu
2013-10-24 12:07:13 UTC
Permalink
Post by Lionello Lunesu
Careful! Alloca doesn't get cleaned up when used in loops!
scope(exit) works in a loop, so you can automatically clean it up like
that.
Destructors are also called on each iteration so RAII is an option.
You can't clean up alloca'ed memory, AFAIK.
Tove
2013-10-21 22:00:38 UTC
Permalink
Post by Walter Bright
Post by Jonathan M Davis
If that paradigm is frequent enough, it might be worth
wrapping it in a
struct. Then, you'd probably get something like
StaticArray!(int, 10) tmp(n);
int[] a = tmp[];
which used T[10] if n was 10 or less and allocated T[]
otherwise. The
destructor could then deal with freeing the memory.
Sounds like a good idea - and it should fit in with Andrei's
nascent allocator design.
Hmmm, it gave me a weird idea...

void smalloc(T)(ushort n, void function(T[]) statement)
{
    if(n <= 256)
    {
        if(n <= 16)
        {
            T[16] buf = void;
            statement(buf[0..n]);
        }
        else
        {
            T[256] buf = void;
            statement(buf[0..n]);
        }
    }
    else
    {
        if(n <= 4096)
        {
            T[4096] buf = void;
            statement(buf[0..n]);
        }
        else
        {
            T[65536] buf = void;
            statement(buf[0..n]);
        }
    }
}

smalloc(256, (int[] buf)
{
});
PauloPinto
2013-10-21 11:40:00 UTC
Permalink
On Monday, 21 October 2013 at 00:59:38 UTC, Jonathan M Davis
Post by Jonathan M Davis
Post by Walter Bright
Post by bearophile
More discussions about variable-sized stack-allocated arrays
in C++, it
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're
worth. But what
about efficiency? Here's what I often do something along the
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost
never necessary.
If that paradigm is frequent enough, it might be worth wrapping
it in a
struct. Then, you'd probably get something like
StaticArray!(int, 10) tmp(n);
int[] a = tmp[];
which used T[10] if n was 10 or less and allocated T[]
otherwise. The
destructor could then deal with freeing the memory.
- Jonathan M Davis
Well that's the approach taken by std::array (C++11), if I am not
mistaken.

--
Paulo
Namespace
2013-10-23 13:59:45 UTC
Permalink
Post by Walter Bright
Post by bearophile
More discussions about variable-sized stack-allocated arrays
in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're worth.
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
Another idea would be to use something like this:
http://dpaste.dzfl.pl/8613c9be
It has a syntax similar to T[n] and is likely more efficient
because the memory is freed when it is no longer needed. :)
dennis luehring
2013-10-23 14:35:13 UTC
Permalink
Post by Namespace
Post by Walter Bright
Post by bearophile
More discussions about variable-sized stack-allocated arrays
in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're worth.
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
http://dpaste.dzfl.pl/8613c9be
It has a syntax similar to T[n] and is likely more efficient
because the memory is freed when it is no longer needed. :)
but it would still be nice to be able to change the 4096 size via a
template parameter, maybe defaulted to 4096 :)
Namespace
2013-10-23 14:41:09 UTC
Permalink
On Wednesday, 23 October 2013 at 14:35:12 UTC, dennis luehring
Post by dennis luehring
On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright
Post by Walter Bright
More discussions about variable-sized stack-allocated arrays in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're
worth.
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
http://dpaste.dzfl.pl/8613c9be
It has a syntax similar to T[n] and is likely more efficient
because the memory is freed when it is no longer needed. :)
but it would be still nice to change the 4096 size by template
parameter maybe defaulted to 4096 :)
That is true. ;) And can be easily done. :)
dennis luehring
2013-10-23 14:54:22 UTC
Permalink
Post by Namespace
On Wednesday, 23 October 2013 at 14:35:12 UTC, dennis luehring
Post by dennis luehring
On Sunday, 20 October 2013 at 16:33:35 UTC, Walter Bright
Post by Walter Bright
More discussions about variable-sized stack-allocated arrays in C++, it seems
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
They're far more trouble than they're worth.
auto a = new T[n];
Stack allocated arrays are far more trouble than they're
worth.
But what about efficiency? Here's what I often do something
T[10] tmp;
T[] a;
if (n <= 10)
a = tmp[0..n];
else
a = new T[n];
scope (exit) if (a != tmp) delete a;
The size of the static array is selected so the dynamic
allocation is almost never necessary.
http://dpaste.dzfl.pl/8613c9be
It has a syntax similar to T[n] and is likely more efficient
because the memory is freed when it is no longer needed. :)
but it would be still nice to change the 4096 size by template
parameter maybe defaulted to 4096 :)
That is true. ;) And can be easily done. :)
can't you remove the if(this.ptr is null) return; checks everywhere -
how could that happen without an exception at creation time?
Namespace
2013-10-23 15:19:44 UTC
Permalink
Post by dennis luehring
can't you remove the if(this.ptr is null) return; checks
everywhere - how should that happen - without exception at
creation time
Yes, that is somewhat true. Here is the adjusted version.
http://dpaste.dzfl.pl/e4dcc2ea
Namespace
2013-11-06 16:52:44 UTC
Permalink
Post by Namespace
Post by dennis luehring
can't you remove the if(this.ptr is null) return; checks
everywhere - how should that happen - without exception at
creation time
Yes, this is somehow true. Here, the adjusted version.
http://dpaste.dzfl.pl/e4dcc2ea
What if D supported variable-sized stack-allocated arrays
through syntactic sugar?
----
int n = 128;
int[n] arr;
----
would be rewritten with:
----
int n = 128;
int* __tmpptr = Type!int[n];
scope(exit) Type!int.deallocate(__tmpptr);
int[] arr = __tmpptr[0 .. n];
----

Where 'Type' is a struct like that:
----
struct Type(T) {
    static {
        enum Limit = 4096;

        void[Limit] _buffer = void;
        size_t _bufferLength;
    }

    static void deallocate(ref T* ptr) {
        .free(ptr);
        ptr = null;
    }

    static T* opIndex(size_t N) {
        if ((this._bufferLength + N) <= Limit) {
            scope(exit) this._bufferLength += N;

            return cast(T*)(&this._buffer[this._bufferLength]);
        }

        return cast(T*) .malloc(N * T.sizeof);
    }
}
----
which could be placed in std.typecons.

I think this should be easy to implement. What do you think?

deadalnix
2013-10-23 17:28:27 UTC
Permalink
On Wednesday, 23 October 2013 at 14:54:22 UTC, dennis luehring
Post by dennis luehring
can't you remove the if(this.ptr is null) return; checks
everywhere - how should that happen - without exception at
creation time
Struct.init must be a valid state according to D specs, and it is
pretty much unavoidable considering we have no default
constructor for structs.
Jonathan M Davis
2013-10-23 19:49:24 UTC
Permalink
Post by deadalnix
On Wednesday, 23 October 2013 at 14:54:22 UTC, dennis luehring
Post by dennis luehring
can't you remove the if(this.ptr is null) return; checks
everywhere - how should that happen - without exception at
creation time
Struct.init must be a valid state according to D specs, and it is
pretty much unavoidable considering we have no default
constructor for structs.
And what do you mean by valid? It's perfectly legal to have fields initialized
to void so that the init state is effectively garbage. That can cause problems
in some scenarios (particularly any case where something assumes that init is
useable without calling a function which would make the state valid), but it's
legal. And you can disable init if you want to - which also causes its own set
of problems, but technically, you don't even have to have an init value
(though it can certainly be restrictive if you don't - particularly when
arrays get involved).

You also have cases where the struct's init is in a completely valid and yet
unusable state. For instance, SysTime.init is useless unless you set its
timezone and will segfault if you try and use it (since the timezone is null),
but thanks to the limitations of CTFE, you _can't_ have a fully valid
SysTime.init (though that's not invalid in the sense that part of the struct
is garbage - just that it blows up when you use it).

I don't know why you think that the spec requires that a struct's init value
be valid. It just causes issues with some uses of the struct if its init value
isn't valid.

- Jonathan M Davis
Walter Bright
2013-10-24 05:40:55 UTC
Permalink
'void' initialization means uninitialized. This applies to fields, as well,
meaning that the .init value of an aggregate with void initializations will have
unreliable values in those locations.

This is why 'void' initializers don't belong in safe code, and reading 'void'
initialized data will get you implementation defined data.
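A small illustration of the point (the actual bits observed are
implementation defined):

---
struct S
{
    int x = void;   // left uninitialized: S.init carries unreliable bits here
    int y = 7;      // reliably 7 in S.init
}
---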
Jonathan M Davis
2013-10-24 18:22:38 UTC
Permalink
Post by Walter Bright
'void' initialization means uninitialized. This applies to fields, as well,
meaning that the .init value of an aggregate with void initializations will
have unreliable values in those locations.
This is why 'void' initializers don't belong in safe code, and reading
'void' initialized data will get you implementation defined data.
Agreed. But there's a significant difference between @system and illegal, and
deadalnix was claiming that such init values were illegal per the language
spec, which is what I was objecting to.

- Jonathan M Davis


P.S. Please quote at least _some_ of the message when replying. Without that,
if the threading gets screwed up, or if someone doesn't use a threaded view,
it's a guessing game as to which post you're replying to. Thanks.
deadalnix
2013-10-24 19:07:16 UTC
Permalink
On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
Post by Jonathan M Davis
and illegal, and
deadalnix was claiming that such init values were illegal per
the language
spec, which is what I was objecting to.
- Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it
is, must be considered as valid. This is only loosely coupled with
void as the init value.
Jonathan M Davis
2013-10-24 20:04:28 UTC
Permalink
Post by deadalnix
On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
Post by Jonathan M Davis
and illegal, and
deadalnix was claiming that such init values were illegal per
the language
spec, which is what I was objecting to.
- Jonathan M Davis
I never claimed that. I claimed that the init value, whatever it
is, must be considered as valid. This is only loosly coupled with
void as init value.
Then what do you mean by valid?

- Jonathan M Davis
deadalnix
2013-10-24 22:06:52 UTC
Permalink
On Thursday, 24 October 2013 at 20:04:38 UTC, Jonathan M Davis
Post by Jonathan M Davis
Post by deadalnix
On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
Post by Jonathan M Davis
and illegal, and
deadalnix was claiming that such init values were illegal per
the language
spec, which is what I was objecting to.
- Jonathan M Davis
I never claimed that. I claimed that the init value, whatever
it
is, must be considered as valid. This is only loosly coupled
with
void as init value.
Then what do you mean by valid?
Code operating on the struct must handle that case. It is a valid
state for the struct to be in.
Jonathan M Davis
2013-10-25 00:27:38 UTC
Permalink
Post by deadalnix
On Thursday, 24 October 2013 at 20:04:38 UTC, Jonathan M Davis
Post by Jonathan M Davis
Post by deadalnix
On Thursday, 24 October 2013 at 18:22:49 UTC, Jonathan M Davis
Post by Jonathan M Davis
and illegal, and
deadalnix was claiming that such init values were illegal per
the language
spec, which is what I was objecting to.
- Jonathan M Davis
I never claimed that. I claimed that the init value, whatever
it
is, must be considered as valid. This is only loosly coupled
with
void as init value.
Then what do you mean by valid?
Code operating on the struct must handle that case. It is a valid
state for the struct to be in.
As in all the functions will work on it without blowing up (e.g. segfault due
to a null pointer)? That's definitely desirable, but it's definitely not
required by the spec, and there are times that it can't be done without adding
overhead to the struct in general.

For instance, SysTime.init will blow up on many of its function calls due to a
null TimeZone. The only way that I could make that not blow up would be to put
a lot of null checks in the code and then give the TimeZone a default value.
The alternative is to do what I've done and make it so that you have to assign
it a new value (or assign it a TimeZone) if you want to actually use it.
Sometimes, that might be annoying, but no one has ever even reported it as a
bug, and SysTime.init isn't a particularly useful value anyway, even if it had
a valid TimeZone (since it's midnight January 1st, 1 A.D.). I could disable
the init value, but that would make SysTime useless in a bunch of settings
where it currently works just fine so long as you assign it a real value later.

I agree that ideally Foo.init wouldn't do things like segfault if you used it,
but that's not really possible with all types, and IMHO not having an init is
far worse than having a bad one in most cases, since there are so many things
than need an init value but don't necessarily call any functions on it (e.g.
allocating dynamic arrays). Regardless, the language spec makes no
requirements that Foo.init do anything useful. It's basically just a default
state that the compiler/runtime can assign to stuff when it needs a default
value. It doesn't have to actually work, just be a consistent set of bits that
don't include any pointers or references which refer to invalid memory.

- Jonathan M Davis
Denis Shelomovskij
2013-10-21 05:44:15 UTC
Permalink
More discussions about variable-sized stack-allocated arrays in C++, it
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
I'd say the most common case where one needs a stack-allocated array is a
temporary allocation which isn't going to survive the end of the scope.
What's more, in such cases, for data too large for the stack one wants to
allocate from the thread-local heap instead of the shared one to prevent
needless locking. `unstd.memory.allocation.tempAlloc` [1] will do the job.
As one of the most common subcases is temporary C string creation,
`unstd.c.string.tempCString` [2] will help here.

[1]
http://denis-sh.bitbucket.org/unstandard/unstd.memory.allocation.html#tempAlloc
[2] http://denis-sh.bitbucket.org/unstandard/unstd.c.string.html#tempCString
--
Denis V. Shelomovskij
deadalnix
2013-10-21 20:21:36 UTC
Permalink
Post by bearophile
More discussions about variable-sized stack-allocated arrays in
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3810.pdf
I'd like variable-sized stack-allocated arrays in D.
Bye,
bearophile
I think that is a job for the optimizer. Consider cases like:

auto foo() {
    return new Foo();
}

void bar() {
    auto f = foo();
    f.someMethod();
}

This is an incredibly common pattern, and it won't be possible to
optimize it via added language design without a dramatic increase
in language complexity.

However, once the inliner has run, you'll end up with something
like:
auto foo() {
    return new Foo();
}

void bar() {
    auto f = new Foo();
    f.someMethod();
}

And if the optimizer is aware of GC calls, it can then promote
such allocations to the stack (LDC is already aware of them; even
if it is only capable of limited optimizations, it is already a
good start and shows the feasibility of the idea).

Obviously Foo is a struct or a class here, but that is the exact
same problem as for arrays.