Discussion:
[boost] [next gen future-promise] What to call the monadic return type?
Niall Douglas
2015-05-25 09:37:26 UTC
Dear list,

As AFIO looks likely to be finally getting a community review soon,
I've made a start on a final non-allocating constexpr-collapsing next
generation future-promise such that the AFIO you review is "API
final". You may remember my experimentations on those from:

http://boost.2283326.n4.nabble.com/Non-allocating-future-promise-td4668339.html.

Essentially the win is that future-promise generates no code at all
on recent C++ 11 compilers unless it has to [1], and when it does it
generates an optimally minimal set with no memory allocation unless T
does so. This should make these future-promises several orders of
magnitude faster than the current ones in the C++ standard and solve
their scalability problems for use with things like ASIO. They also
have major wins for resumable functions which currently always
construct a promise every resumable function entry - these next gen
future-promises should completely vanish if the resumable function
never suspends, saving a few hundred cycles each call.

Anyway, my earlier experiments were all very promising, but they all
had one big problem: the effect on compile time. My final design is
therefore ridiculously simple: a future<T> can return only these
options:

* A T.
* An error_code (i.e. non-type erased error, optimally lightweight)
* An exception_ptr (i.e. type erased exception type, allocates
memory, you should avoid this if you want performance)
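The three-outcome design above can be sketched as a minimal either-or type. This is purely an illustration of the idea, not the proposed implementation (the names `result`, `ok()` and `fail()` are invented here, and T is assumed default-constructible to keep the sketch short):

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <system_error>
#include <utility>

// Hypothetical sketch of the fixed-function transport described above:
// exactly one of a T, an error_code, or an exception_ptr.
template <class T>
class result {
    enum class state { value, error, exception };
    state state_;
    T value_{};               // engaged when state_ == state::value
    std::error_code ec_;      // engaged when state_ == state::error
    std::exception_ptr e_;    // engaged when state_ == state::exception
public:
    result(T v) : state_(state::value), value_(std::move(v)) {}
    result(std::error_code ec) : state_(state::error), ec_(ec) {}
    result(std::exception_ptr e) : state_(state::exception), e_(std::move(e)) {}

    bool has_value() const { return state_ == state::value; }

    // get() mirrors std::future<T>::get(): return the value, or rethrow
    // the stored error/exception.
    T get() const {
        if (state_ == state::value) return value_;
        if (state_ == state::error) throw std::system_error(ec_);
        std::rethrow_exception(e_);
    }
};

inline result<int> ok()   { return result<int>(5); }
inline result<int> fail() {
    return result<int>(std::make_error_code(std::errc::invalid_argument));
}
```

Note how the error_code path stays non-allocating, while only the exception_ptr path pays for type erasure.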

In other words, it's a fixed function monad where the expected return
is T, and the unexpected return can be either exception_ptr or
error_code. The next gen future provides Haskell type monadic
operations similar to Boost.Thread + Boost.Expected, and thanks to
the constexpr collapse this:

future<int> test() {
    future<int> f(5);
    return f;
}
test().get();

... turns into a "mov $5, %eax", so future<T> is now also a
lightweight monadic return transport capable of being directly
constructed.

In case you might want to know why a monadic return transport might
be so useful as to be a whole new design idiom for C++ 11, try
reading
https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions.

However, future<T> doesn't seem named very "monadic", so I am
inclined to turn future<T> into a subclass of a type better named.
Options are:

* result<T>
* maybe<T>

Or anything else you guys can think of? future<T> is then a very
simple subclass of the monadic implementation type, and is simply
some type sugar for promise<T> to use to construct a future<T>.

Let the bike shedding begin! And my thanks in advance.

Niall


[1]: Compiler optimiser bugs notwithstanding. Currently only very
recent GCCs get this pattern right. clang isn't bad and spills a
dozen or so completely unnecessary opcodes, and I'll follow up with
Chandler with some bug reports to get clang fixed. Poor old MSVC
spits out about 3000 lines of assembler, it can't do constexpr
folding at all yet.

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Rob Stewart
2015-05-25 12:27:27 UTC
On May 25, 2015 5:37:26 AM EDT, Niall Douglas <***@nedprod.com> wrote:
>
> My final design is
> therefore ridiculously simple: a future<T> can return only these
> options:
>
> * A T.
> * An error_code (i.e. non-type erased error, optimally lightweight)
> * An exception_ptr (i.e. type erased exception type, allocates
> memory, you should avoid this if you want performance)
>
> In other words, it's a fixed function monad where the expected return
> is T, and the unexpected return can be either exception_ptr or
> error_code. The next gen future provides Haskell type monadic
> operations similar to Boost.Thread + Boost.Expected, and thanks to
> the constexpr collapse this:
>
> future<int> test() {
>     future<int> f(5);
>     return f;
> }
> test().get();
>
> ... turns into a "mov $5, %eax", so future<T> is now also a
> lightweight monadic return transport capable of being directly
> constructed.
>
> In case you might want to know why a monadic return transport might
> be so useful as to be a whole new design idiom for C++ 11, try
> reading
> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions.

Make the examples in that correct WRT shared_ptr. As written, the shared_ptrs will delete file handles rather than close them.

> However, future<T> doesn't seem named very "monadic", so I am
> inclined to turn future<T> into a subclass of a type better named.
> Options are:
>
> * result<T>
> * maybe<T>
>
> Or anything else you guys can think of? future<T> is then a very
> simple subclass of the monadic implementation type, and is simply
> some type sugar for promise<T> to use to construct a future<T>.

The idea has merit, but I have some concerns. The best practice you've linked states that there are problems with Boost.Threads' expected type, but you only mention compile times specifically. I'd like to better understand why expected cannot be improved for your case. That is, can't expected be specialized to be lighter for cases like yours?

Put another way, I'd rather one type be smart enough to handle common use cases with aplomb than to have many specialized types. I realize that doesn't solve your immediate need, but you have your own namespace so you can just create your own "expected" type until the other is improved. Then, if improvement isn't possible, you can explore another name for your expected type.

___
Rob

(Sent from my portable computation engine)

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Niall Douglas
2015-05-25 14:51:17 UTC
On 25 May 2015 at 8:27, Rob Stewart wrote:

> > In case you might want to know why a monadic return transport might
> > be so useful as to be a whole new design idiom for C++ 11, try
> > reading
> > https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions.
>
> Make the examples in that correct WRT shared_ptr. As written, the
> shared_ptrs will delete file handles rather than close them.

I would imagine destroying an open file handle would close it.
Anyway, that really isn't important to the code examples given the
topic, it was never mentioned what a handle_type is, I had an
afio::async_io_handle in mind but it doesn't matter.

> > However, future<T> doesn't seem named very "monadic", so I am
> > inclined to turn future<T> into a subclass of a type better named.
> > Options are:
> >
> > * result<T>
> > * maybe<T>
> >
> > Or anything else you guys can think of? future<T> is then a very
> > simple subclass of the monadic implementation type, and is simply
> > some type sugar for promise<T> to use to construct a future<T>.
>
>
> The idea has merit, but I have some concerns. The best practice you've
> linked states that there are problems with Boost.Threads' expected type,
> but you only mention compile times specifically. I'd like to better
> understand why expected cannot be improved for your case. That is, can't
> expected be specialized to be lighter for cases like yours?

I am one of few here who has seen Expected deployed onto a large code
base. The other guy is Lee, who has even more experience than I do.
Ours was not a positive experience in a real world large code base
use scenario.

Expected is a great library. Lovely piece of work. However, it is
also enormous. For a simple monadic return transport with traditional
unexpected type exception_ptr, it is enormously overkill. It does
category theory, huge quantities of constexpr machinery, and a fair
chunk of what a variant implementation would need. This is why it is
so slow on compile times.

When we were using expected, 98% of the time we didn't need any of
that. We just wanted a simple Either-Or object which imposed optimal
compile and runtime overhead. We just wanted to return some expected
type T, or some error, and the monadic operations to work as
described without any extra fluff like comparison operations,
hashing, container semantics, automatic conversion, variant semantics
etc.

Also, in terms of it now being 2015, we know WG21 are working on an
official variant type - it was the talk of C++ Now. Such a variant is
an obvious base class for any expected implementation rather than
reinventing the wheel separately.

Similarly, the future-ishy parts of expected would make more sense to
come from a base future-ishy type which is then composed with a
variant type to make an expected type which does all the singing and
dancing anyone could desire. That means rewriting expected from
scratch sometime later on.

> Put another way, I'd rather one type be smart enough to handle common
> use cases with aplomb than to have many specialized types. I realize
> that doesn't solve your immediate need, but you have your own namespace
> so you can just create your own "expected" type until the other is
> improved. Then, if improvement isn't possible, you can explore another
> name for your expected type.

Mine isn't an expected implementation. It doesn't even do optional
semantics, or even value semantics. It simply provides the simple
future API on a fixed variant wrapper type - three quarters of a
std::future - nothing more.

If you have ever done a lot of make_ready_future() in your code as a
quick and dirty Maybe implementation (I know I have), this use is
exactly what I am trying to formalise into an enormously more
efficient implementation.
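The "quick and dirty Maybe" pattern mentioned above can be sketched with today's std::future. Note that std::make_ready_future does not exist in the standard (it was proposed for the Concurrency TS), so a promise is used here to build the ready future, and `lookup()` is an invented example function:

```cpp
#include <cassert>
#include <future>
#include <stdexcept>
#include <string>

// A synchronous function returning an already-ready future, so callers
// get a uniform value-or-exception transport with no actual waiting.
std::future<std::string> lookup(int key) {
    std::promise<std::string> p;
    if (key == 42)
        p.set_value("answer");   // the "Just x" case
    else                         // the "Nothing" case, as an exception
        p.set_exception(
            std::make_exception_ptr(std::runtime_error("not found")));
    return p.get_future();       // already ready when returned
}
```

The proposal in this thread is to make exactly this usage pattern collapse to near-zero cost instead of paying for a full shared state allocation.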

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Rob Stewart
2015-05-25 19:29:38 UTC
On May 25, 2015 10:51:17 AM EDT, Niall Douglas <***@nedprod.com> wrote:
> On 25 May 2015 at 8:27, Rob Stewart wrote:
>
> > > In case you might want to know why a monadic return transport
> might
> > > be so useful as to be a whole new design idiom for C++ 11, try
> > > reading
> > >
> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions.
> >
> > Make the examples in that correct WRT shared_ptr. As written, the
> > shared_ptrs will delete file handles rather than close them.
>
> I would imagine destroying an open file handle would close it.
> Anyway, that really isn't important to the code examples given the
> topic, it was never mentioned what a handle_type is, I had an
> afio::async_io_handle in mind but it doesn't matter.

Your example calls ::open() and fd is an int.

___
Rob

(Sent from my portable computation engine)

Niall Douglas
2015-05-25 22:26:48 UTC
On 25 May 2015 at 15:29, Rob Stewart wrote:

> > I would imagine destroying an open file handle would close it.
> > Anyway, that really isn't important to the code examples given the
> > topic, it was never mentioned what a handle_type is, I had an
> > afio::async_io_handle in mind but it doesn't matter.
>
> Your example calls ::open() and fd is an int.

Sorry, I must be missing something really obvious here.

I was assuming that handle_type consumes a valid fd and takes
ownership of it. Are not the examples correct then, or if a
particular example (of the four) is wrong, can you say which one?

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Rob Stewart
2015-05-26 08:40:58 UTC
On May 25, 2015 6:26:48 PM EDT, Niall Douglas <***@nedprod.com> wrote:
> On 25 May 2015 at 15:29, Rob Stewart wrote:
>
> > > I would imagine destroying an open file handle would close it.
> > > Anyway, that really isn't important to the code examples given the
> > > topic, it was never mentioned what a handle_type is, I had an
> > > afio::async_io_handle in mind but it doesn't matter.
> >
> > Your example calls ::open() and fd is an int.
>
> Sorry, I must be missing something really obvious here.
>
> I was assuming that handle_type consumes a valid fd and takes
> ownership of it. Are not the examples correct then, or if a
> particular example (of the four) is wrong, can you say which one?

The problem was in my inferences when reading the code. I wasn't thinking of the make_shared() expression as creating a handle_type from an implicit constructor taking an int when I read the examples. I saw, in effect, make_shared<int>(fd), with handle_type as a typedef for int.

Since the reader isn't necessarily familiar with a class like your async_io_handle, it might be worth a sentence to indicate that handle_type takes ownership of the file descriptor to ensure people don't repeat my mistake.

___
Rob

(Sent from my portable computation engine)

Lee Clagett
2015-05-26 14:02:41 UTC
On Mon, May 25, 2015 at 10:51 AM, Niall Douglas <***@nedprod.com>
wrote:

> On 25 May 2015 at 8:27, Rob Stewart wrote:
> >
> > The idea has merit, but I have some concerns. The best practice you've
> > linked states that there are problems with Boost.Threads' expected type,
> > but you only mention compile times specifically. I'd like to better
> > understand why expected cannot be improved for your case. That is, can't
> > expected be specialized to be lighter for cases like yours?
>
> I am one of few here who has seen Expected deployed onto a large code
> base. The other guy is Lee, who has even more experience than I do.
> Ours was not a positive experience in a real world large code base
> use scenario.
>

My experience with Expected was positive in that codebase. However, I
really don't care about compile times, and would often switch between
machines after requesting a build (so I rarely knew how long it would
take). I did have a header-heavy portion (using ASIO stackless), and I know
that test file took a noticeable amount of time to compile. I don't know
how much was related to Expected. I recall other files, with less
header-heavy code, being much more reasonable. These files were also under
300 lines each.

The number of macro switches in Expected was the concern for me, which
could be reduced if C++14 only compilers were supported.

Also, in terms of it now being 2015, we know WG21 are working on an
> official variant type - it was the talk of C++ Now. Such a variant is
> an obvious base class for any expected implementation rather than
> reinventing the wheel separately.
>
> Similarly, the future-ishy parts of expected would make more sense to
> come from a base future-ishy type which is then composed with a
> variant type to make an expected type which does all the singing and
> dancing anyone could desire. That means rewriting expected from
> scratch sometime later on.
>

I've read through the remainder of this thread - are you hoping to replace
use-cases of Expected with this new future_result type? So that a codebase
would use this one type for future and immediate related results?
Synchronous functions could be made asynchronous without changing the API,
which is an interesting property.

But if the major complaint about Expected is the compile-time, what is
causing that compile-time, and how will this new type avoid that?

Lee

Niall Douglas
2015-05-26 14:47:37 UTC
On 26 May 2015 at 10:02, Lee Clagett wrote:

> But if the major complaint about Expected is the compile-time, what is
> causing that compile-time, and how will this new type avoid that?

This monad is ridiculously simple because it's fixed function
variant, and instantiates exactly two types per instance, and
currently two free function overloads. No partial specialisations at
all. There will be R& and void specialisations later, but they are
all hash lookups, not shortlisting lookups in the compiler.

Expected instantiates dozens of types and functions per instance.
Lots of partial specialisations too. All of that is very slow and
involves lots of generating shortlists and iterating them for closest
overload match. Slow.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-26 21:26:33 UTC
On 26/05/15 16:47, Niall Douglas wrote:
> On 26 May 2015 at 10:02, Lee Clagett wrote:
>
>> But if the major complaint about Expected is the compile-time, what is
>> causing that compile-time, and how will this new type avoid that?
> This monad is ridiculously simple because it's fixed function
> variant, and instantiates exactly two types per instance, and
> currently two free function overloads. No partial specialisations at
> all. There will be R& and void specialisations later, but they are
> all hash lookups, not shortlisting lookups in the compiler.
>
> Expected instantiates dozens of types and functions per instance.
> Lots of partial specialisations too. All of that is very slow and
> involves lots of generating shortlists and iterating them for closest
> overload match. Slow.
>
>
IMO what is important is not whether the implementation of Boost.Expected
slows compile time considerably (it is just a POC and can be improved
a lot) but whether the interface forces it. How does the compile time
compare to that of optional?

What I've observed is that making expected a literal type (constexpr
everywhere) resulted in much slower compile times.
Now that we have C++14, I'm sure there are others who have a better
implementation.

So the question is, do we have the proposed interface for expected, or
something else?
Would the 'to be named' type be a literal type?

Vicente

Niall Douglas
2015-05-26 23:41:27 UTC
On 26 May 2015 at 23:26, Vicente J. Botet Escriba wrote:

> So the question is, do we have the proposed interface for expected, or
> something else?
> Would the 'to be named' type be a literal type?

I would far prefer if Expected were layered with increasing amounts
of complexity, so you only pay for what you use.

I also think you need to await the WG21 variant design to become
close to completion. It makes no sense to have an Expected not using
that variant implementation, you're just duplicating work.

Other than that, I found the interface generally good. I only ever
needed about 10% of it personally, but it would be nice if the
remainder were available by switching it on maybe with an explicit
conversion to a more intricate subclass. The more intricate subclass
would of course live in a separate header. Pay only for what you use.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-27 05:02:55 UTC
On 27/05/15 01:41, Niall Douglas wrote:
> On 26 May 2015 at 23:26, Vicente J. Botet Escriba wrote:
>
>> So the question is, do we have the proposed interface for expected, or
>> something else?
>> Would the 'to be named' type be a literal type?
> I would far prefer if Expected were layered with increasing amounts
> of complexity, so you only pay for what you use.
>
> I also think you need to await the WG21 variant design to become
> close to completion. It makes no sense to have an Expected not using
> that variant implementation, you're just duplicating work.
My apologies. No, seriously, these are implementation details. Could you
describe the interface changes that expected needs?
> Other than that, I found the interface generally good. I only ever
> needed about 10% of it personally, but it would be nice if the
> remainder were available by switching it on maybe with an explicit
> conversion to a more intricate subclass. The more intricate subclass
> would of course live in a separate header. Pay only for what you use.
>
>
Could we define this interface?

Vicente

P.S. Expected is a open source project. Anyone can participate, either
by creating issues, sending pull request, ...

Niall Douglas
2015-05-27 10:08:10 UTC
On 27 May 2015 at 7:02, Vicente J. Botet Escriba wrote:

> >> So the question is, do we have the proposed interface for expected, or
> >> something else?
> >> Would the 'to be named' type be a literal type?
> > I would far prefer if Expected were layered with increasing amounts
> > of complexity, so you only pay for what you use.
> >
> > I also think you need to await the WG21 variant design to become
> > close to completion. It makes no sense to have an Expected not using
> > that variant implementation, you're just duplicating work.
>
> My apologies. No, seriously, these are implementation details. Could you
> describe the interface changes that expected needs?

That's a huge question Vicente. And it's very hard to answer in
detail, because I don't really know.

I don't think variant is an implementation detail. I suspect people
will want to explicitly convert from an ordered variant into an
expected and vice versa for example. They will also want to "repack"
a variant from one ordering of type options into another without
actually copying around any data.

I can also see a Hana heterogeneous sequence or std::tuple being
reduced into variant and/or into an expected. And so on.

Until all this ecosystem stuff becomes more final, it is hard to
imagine a new expected interface.

> > Other than that, I found the interface generally good. I only ever
> > needed about 10% of it personally, but it would be nice if the
> > remainder were available by switching it on maybe with an explicit
> > conversion to a more intricate subclass. The more intricate subclass
> > would of course live in a separate header. Pay only for what you use.
> >
> >
> Could we define this interface?

If I remember rightly, in your WG21 paper you had a table somewhere
where you compared futures to optional to expected with a tick list
of features shared and features not shared.

I think you need an orthogonal table of:

1. Lightweight monad.
2. Intermediate monad.
3. Featureful monad.

... and another table showing the progression from minimal monad to
maximum monad.

If you examine
https://github.com/ned14/boost.spinlock/blob/master/include/boost/spinlock/future.hpp
you should see that monad<>, from which future<>
derives, implements simple then(), bind() and map(). future<> only
implements then(), but I am planning for it to allow a then(F(monad))
which would allow future to also provide bind() and map().

My idea is you can eventually switch freely between asynchronous
monadic programming and synchronous monadic programming using a
simple cast, so I am implementing a "two layer" monad where the first
is a synchronous monad and the second is an asynchronous monad. But
I'm a long way away from any of that right now.
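The then()/bind()/map() distinction described above can be illustrated on a toy synchronous "monad" holding just a value. This follows the thread's naming but is not the boost.spinlock implementation; a real one would also carry the errored states:

```cpp
#include <cassert>

// Toy value-only monad to show the three continuation signatures.
template <class T>
struct monad {
    T value;

    // bind(): continuation receives the unwrapped T, returns a monad<U>.
    template <class F>
    auto bind(F f) -> decltype(f(value)) { return f(value); }

    // map(): continuation receives T, returns a plain U that is rewrapped.
    template <class F>
    auto map(F f) -> monad<decltype(f(value))> { return { f(value) }; }

    // then(): continuation receives the whole monad, so it could also
    // inspect an error state (the future-style signature).
    template <class F>
    auto then(F f) -> decltype(f(*this)) { return f(*this); }
};
```

The point of a then(F(monad)) overload is exactly this last signature: because the continuation sees the whole transport, bind() and map() can be built on top of it.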

> In other words, would the "to be named" type appear on the AFIO
> interface? Is it for this reason that you need to name it? Or is it really
> an implementation detail? This merits clarification.

Right now AFIO's synchronisation object is a struct async_io_op which
contains a shared_future<shared_ptr<async_io_handle>>.

I'm going to replace async_io_op with a custom afio::future<T> which
*always* carries a shared_ptr<async_io_handle> plus some T (or void).
Internally that converts into a future<tuple<async_io_handle, T>> but
that isn't important to know.

That custom afio::future<T> will subclass the lightweight future<T> I
am currently building.

Strictly speaking, there is nothing stopping me building the same
custom afio::future<T> right now using std::shared_future. However, I
would need to implement continuations via .then(), and if I am to
bother with that, I might as well do the whole lightweight future
because as you know from Boost.Thread, most of the tricky hard to not
deadlock work is in the continuations implementation.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-28 06:00:29 UTC
On 27/05/15 12:08, Niall Douglas wrote:
> On 27 May 2015 at 7:02, Vicente J. Botet Escriba wrote:
>
>>>> So the question is, do we have the proposed interface for expected, or
>>>> something else?
>>>> Would the 'to be named' type be a literal type?
>>> I would far prefer if Expected were layered with increasing amounts
>>> of complexity, so you only pay for what you use.
>>>
>>> I also think you need to await the WG21 variant design to become
>>> close to completion. It makes no sense to have an Expected not using
>>> that variant implementation, you're just duplicating work.
>> My apologies. No, seriously, these are implementation details. Could you
>> describe the interface changes that expected needs?
> That's a huge question Vicente. And it's very hard to answer in
> detail, because I don't really know.
In other words, besides the slow compile time of expected, what don't you
need from expected?
I see that you want to be able to store an error_code or an exception_ptr.
Wouldn't the interface of expected<T, variant<error_code,
exception_ptr>> be enough?
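That suggested shape can be sketched with C++17 facilities (which postdate this thread). This is purely illustrative; Boost.Expected's real interface is far richer, and `parse_five()` is an invented example function:

```cpp
#include <cassert>
#include <exception>
#include <string>
#include <system_error>
#include <utility>
#include <variant>

// The unexpected side is itself a variant of the two error forms.
using failure = std::variant<std::error_code, std::exception_ptr>;

template <class T>
class expected {
    std::variant<T, failure> v_;
public:
    expected(T t) : v_(std::move(t)) {}
    expected(failure f) : v_(std::move(f)) {}
    bool has_value() const { return v_.index() == 0; }
    const T& value() const { return std::get<0>(v_); }
    const failure& error() const { return std::get<1>(v_); }
};

// Invented example: returns either an int or an error_code.
inline expected<int> parse_five(const std::string& s) {
    if (s == "5") return expected<int>(5);
    return expected<int>(
        failure{std::make_error_code(std::errc::invalid_argument)});
}
```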
>
> I don't think variant is an implementation detail. I suspect people
> will want to explicitly convert from an ordered variant into an
> expected and vice versa for example. They will also want to "repack"
> a variant from one ordering of type options into another without
> actually copying around any data.
This is IMO a strong requirement; it should be enough to be able to
convert from one to the other.
Anyway, if people consider this a must, a concrete proposal of an
expected interface on top of variant supporting this cast will be
needed. And also for optional, I guess.
> I can also see a Hana heterogeneous sequence or std::tuple being
> reduced into variant and/or into an expected. And so on.
tuple -> product
variant -> sum

I don't see how you would like to convert one to the other.
>
> Until all this ecosystem stuff becomes more final, it is hard to
> imagine a new expected interface.
>
>>> Other than that, I found the interface generally good. I only ever
>>> needed about 10% of it personally, but it would be nice if the
>>> remainder were available by switching it on maybe with an explicit
>>> conversion to a more intricate subclass. The more intricate subclass
>>> would of course live in a separate header. Pay only for what you use.
>>>
>>>
>> Could we define this interface?
> If I remember rightly, in your WG21 paper you had a table somewhere
> where you compared futures to optional to expected with a tick list
> of features shared and features not shared.
This table was comparing type interfaces.
>
> I think you need an orthogonal table of:
>
> 1. Lightweight monad.
> 2. Intermediate monad.
> 3. Featureful monad.
>
> ... and another table showing the progression from minimal monad to
> maximum monad.

I believe that we should stop using the word monad in this way. I'm just
wondering what the operations of these monads would be.
I'm aware of the operations of a Monad and the operations of an Error
Monad. I'm aware of the operations of a Functor, an Applicative, ....
These operations have nothing to do with the operations the concrete
type provides.
>
> If you examine
> https://github.com/ned14/boost.spinlock/blob/master/include/boost/spinlock/future.hpp
> you should see that monad<>,
AFAIK monad is a concept (type class), not a type. Any type implementing
bind/unit becomes a monad.
Your concrete monad class template doesn't provide the monad
interface. It merits another name.
> from which future<>
> derives, implements simple then(), bind() and map().
I only see comments.

And it implements get() and all the getters, and swap, and assignments.
BTW, why don't the setters need to be redefined? These operations need
to be thread-safe, don't they? How would you manage to implement .then()
if you don't redefine the setter operations?

> future<> only
> implements then(), but I am planning for it to allow a then(F(monad))

I've implemented then() in Boost.Thread as it was in the C++ proposals.
But this doesn't make future<T> a monad. The continuation of the monadic
bind function takes a T as parameter, not a future<T>.

I proposed to the C++ standard a function future<T>::next (bind) that
takes a continuation having T as parameter (future<R>(T)), and
future<T>::catch_error that takes as parameter a continuation that
takes an error as parameter. The proposal was not accepted. I have not
added them to Boost.Thread, and will not add them as members, as both can
be implemented on top of then(). The alternative to then() is
bind()+catch_error(). Note that future is an Error Monad.

One advantage of a member implementation with respect to a non-member one
is the syntax

f.next(...).next(...).catch_error(...)

However, if uniform call syntax is adopted in the next C++ standard, the
non-member function will gain this syntactic advantage.

The other advantage is that a member function doesn't introduce a new
name at a more global scope. I'm looking for a way to introduce
non-member functions at a more restricted scope. I have not found any
yet without changing the language (see Explicit namespaces).

I'm all for having non-member functions for map/bind/... implemented on
top of the concrete classes' interfaces. The proposed expected contains
more than needed. The map/bind/catch_error/catch_exception member
functions could and should be non-members. The major question I have is
how these non-member functions should be customized. The C++ standard
committee is not a fan of non-member functions.
> which would allow future to also provide bind() and map().
>
> My idea is you can eventually switch freely between asynchronous
> monadic programming and synchronous monadic programming using a
> simple cast, so I am implementing a "two layer" monad where the first
> is a synchronous monad and the second is an asynchronous monad. But
> I'm a long way away from any of that right now.
IMO, we don't need to add more operations than necessary to these classes.
They already have too many.
We need to define the minimal interface (data) that allows defining the
other non-member functions (algorithms). This is where the monad, functor,
... abstractions make sense. Neither expected/optional/... nor your
synchronous monad class needs to have bind/map/... member functions.
These functions can be defined as non-members using the specific
expected/optional/... interface. This decoupling is essential,
otherwise we will end up including too many member functions.
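This decoupling can be sketched with std::optional standing in for the concrete monadic type (it postdates this thread in the standard). The names `monad_map` and `monad_bind` are invented here, chosen to avoid clashing with std::bind; both are defined purely on top of optional's public interface:

```cpp
#include <cassert>
#include <optional>

// Non-member map(): applies f to the contained value and rewraps it.
template <class T, class F>
auto monad_map(const std::optional<T>& o, F f)
    -> std::optional<decltype(f(*o))> {
    if (o) return f(*o);
    return std::nullopt;
}

// Non-member bind(): f itself returns an optional<U>.
template <class T, class F>
auto monad_bind(const std::optional<T>& o, F f) -> decltype(f(*o)) {
    if (o) return f(*o);
    return std::nullopt;
}
```

Nothing here needs access to optional's internals, which is the decoupling being argued for.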

future<T>, being asynchronous, needs a specific function to add a
continuation when the future becomes ready. This is not the case for the
synchronous classes, which can make use of the synchronous getter
interface. I now see future<T>::then() as an interface specific to
future rather than something that needs to be generalized.

>
>> In other words, would the "to be named" type appear on the AFIO
>> interface? Is it for this reason that you need to name it? Or is it really
>> an implementation detail? This merits clarification.
> Right now AFIO's synchronisation object is a struct async_io_op which
> contains a shared_future<shared_ptr<async_io_handle>>.
>
> I'm going to replace async_io_op with a custom afio::future<T> which
> *always* carries a shared_ptr<async_io_handle> plus some T (or void).
> Internally that converts into a future<tuple<async_io_handle, T>> but
> that isn't important to know.

My question was about the "to be named" type. Would this type be visible
on the AFIO interface?

I don't know anything at all about AFIO, but why is the following not
good for you, then?

template <class T>
using your_name = future<tuple<async_io_handle, T>>;

Is it only due to performance?
Is it because you need to store an error_code?
>
> That custom afio::future<T> will subclass the lightweight future<T> I
> am currently building.
>
> Strictly speaking, there is nothing stopping me building the same
> custom afio::future<T> right now using std::shared_future. However, I
> would need to implement continuations via .then(), and if I am to
> bother with that, I might as well do the whole lightweight future
> because as you know from Boost.Thread, most of the tricky hard to not
> deadlock work is in the continuations implementation.
>
Yes, I'm aware of the difficulty. Any help to make it more robust is
welcome. As I said in another thread (Do we want a non-backward-compatible
version with non-blocking futures?), I was working on a
branch for non-blocking futures and a branch for lightweight executors.
I must take the time to merge both together.

Any PR providing optimizations that improve performance is also welcome.

Vicente

P.S. Sorry to repeat myself everywhere.


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Niall Douglas
2015-05-28 10:29:07 UTC
Permalink
On 28 May 2015 at 8:00, Vicente J. Botet Escriba wrote:

> >> My apologies. No seriously, these are implementation details. Could you
> >> describe the interface changes that expected needs.
> > That's a huge question Vicente. And it's very hard to answer in
> > detail, because I don't really know.
> In other words, besides the slow compile time of expected, what don't you
> need from expected?

I would prefer to not have to think when writing code with expected.
I think a monadic return type should be so primitive as to require no
thought in usage even for the completely unfamiliar.

This is why I borrow the API and semantics from future for the monad.

> I see that you want to be able to store error_code or exception_ptr.
> Wouldn't the interface of expected<T, variant<error_code,
> exception_ptr>> be enough?

One of the big attractions for me to the fixed variant design is
implicit conversion from T, error_code and exception_ptr. No extra
typing. No extra thought. Just return what you mean from a function,
and let the compiler sort it out.

> > ... and another table showing the progression from minimal monad to
> > maximum monad.
>
> I believe that we should stop using the word monad in this way. Just
> wondering what would be the operations of these monads.
> I'm aware of the operations of a Monad and the operations of an Error
> Monad. I'm aware of the operations of a Functor, an Applicative, ....
> These operations have nothing to be with the operations the concrete
> type provides.

I do agree - a monad is a monad or else it is not.

Ok, I am being fuzzy now, but my feeling - rather than opinion -
about Expected was that it is too monolithic somehow. It's like this big
thing when I don't want a big thing, at least not usually. I want
Expected to be a programming primitive, and it never felt to me like it
was one.

I am sorry I am not much help here. Perhaps Lee can help? He had a
different opinion from me on using Expected in big code bases.

> My question was about the "to be named" type. Would this type be visible
> on the AFIO interface?

Yes. As a custom afio::future<T>. Users will notice they can't feed
those futures into normal when_all/when_any.

> I don't know anything at all about AFIO, but why is the following not
> good for you, then?
>
> template <class T>
> using your_name = future<tuple<async_io_handle, T>>;

I unfortunately need to carry more state than this. I suspect I am
not the first to have this problem with futures, hence my desire for
a lightweight future library easily extended with custom future
types.

> > That custom afio::future<T> will subclass the lightweight future<T> I
> > am currently building.
> >
> > Strictly speaking, there is nothing stopping me building the same
> > custom afio::future<T> right now using std::shared_future. However, I
> > would need to implement continuations via .then(), and if I am to
> > bother with that, I might as well do the whole lightweight future
> > because as you know from Boost.Thread, most of the tricky hard to not
> > deadlock work is in the continuations implementation.
> >
> Yes, I'm aware of the difficulty. Any help to make it more robust is
> welcome. As I said in another thread (Do we want a non-backward-compatible
> version with non-blocking futures?), I was working on a
> branch for non-blocking futures and a branch for lightweight executors.
> I must take the time to merge both together.

Up until now I have had the enormous advantage over you of being able
to assume that there is an ASIO available to push work onto. That
makes my implementation far easier, because when I need to tie break
I just defer work onto ASIO.

That said, I'm going to attempt an executor-less implementation this
time round. We'll see how it goes.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-28 11:38:29 UTC
Permalink
Vicente J. Botet Escriba wrote:

> I see that you want to be able to store error_code or exception_ptr.
> Wouldn't the interface of expected<T, variant<error_code, exception_ptr>>
> be enough?

future<T> already adds the ability to store exception_ptr. There is no need
to store another exception_ptr in the value. Niall's type is exactly
equivalent to future<expected<T, error_code>>, as I already said (twice).

> I proposed to the C++ standard a function future<T>::next (bind) that
> takes a continuation having T as parameter (future<R>(T)), and
> future<T>::catch_error that takes as parameter a continuation that takes
> an error as parameter. The proposal was not accepted.

future<T>::next( [](T){...} ) is actually pretty useful, catch_error() much
less so. You should have proposed just the former. (Sean Parent's future
implementation's ::then has ::next's semantics IIRC.)


Niall Douglas
2015-05-28 13:05:51 UTC
Permalink
On 28 May 2015 at 14:38, Peter Dimov wrote:

> > I see that you want to be able to store error_code or exception_ptr.
> > Wouldn't the interface of expected<T, variant<error_code, exception_ptr>>
> > be enough?
>
> future<T> already adds the ability to store exception_ptr. There is no need
> to store another exception_ptr in the value. Niall's type is exactly
> equivalent to future<expected<T, error_code>>, as I already said (twice).

exception_ptr unfortunately must always do an atomic write to memory.
That always forces code to be generated, and very substantially
reduces optimisation opportunities because state must be reloaded
around the exception_ptr.

This, and the mandatory memory allocation, is a big reason current
futures are not suitable for high performance ASIO.

I also want a big semantic change that error returns are not
exceptional. We hugely underuse std::error_code in STL C++
unfortunately. I am not one of those people who believe exceptions
are evil and ban them from the language as all the new systems
languages seem to. I also have no love for forcing everything through
the return code as with C and Rust, but I do think there is a middle
ground between good outcomes, bad outcomes, and exceptional outcomes
which is easy to program, easy to conceptualise, and easy on the
compiler.

I therefore think some mongrel impure future/monad/optional type
might just be the ticket. But I think I need to deliver real code
used in a real world library to demonstrate its effectiveness first.
If I don't persuade people of that simple effectiveness, then
somebody else's solution is better.

> > I proposed to the C++ standard a function future<T>::next (bind) that
> > takes a continuation having T as parameter (future<R>(T)), and
> > future<T>::catch_error that takes as parameter a continuation that takes
> > an error as parameter. The proposal was not accepted.
>
> future<T>::next( [](T){...} ) is actually pretty useful, catch_error() much
> less so. You should have proposed just the former. (Sean Parent's future
> implementation's ::then has ::next's semantics IIRC.)

I was going to have my continuations specialise on their parameter
type, so a consuming continuation might take T, error_code,
exception_ptr, monad<T> or future<T>. To make a continuation
non-consuming, simply take a const lvalue ref instead.

My only worry really on that feature is compile times, so I may
yet strip some of the automatedness. Also, if I'm not going to
regularly use a feature, then I'm removing it; I am aiming
for the least possible implementation. We'll see how it goes.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-28 13:25:29 UTC
Permalink
Niall Douglas wrote:

> > future<T> already adds the ability to store exception_ptr. There is no
> > need to store another exception_ptr in the value. Niall's type is
> > exactly equivalent to future<expected<T, error_code>>, as I already said
> > (twice).
>
> exception_ptr unfortunately must always do an atomic write to memory. That
> always forces code to be generated, and very substantially reduces
> optimisation opportunities because state must be reloaded around the
> exception_ptr.
>
> This, and the mandatory memory allocation, is a big reason current futures
> are not suitable for high performance ASIO.
>
> I also want a big semantic change that error returns are not exceptional.
> We hugely underuse std::error_code in STL C++ unfortunately. I am not one
> of those people who believes exceptions are evil, and ban them in the
> language as all the new system languages seem to. I also have no love for
> forcing everything through the return code as with C and Rust, but I do
> think there is a middle ground between good outcomes, bad outcomes, and
> exceptional outcomes which is easy to program, easy to conceptualise, and
> easy on the compiler.

There could be a misunderstanding. When I say that your type is
future<expected<T, error_code>>, I don't mean any specific future or
expected implementations such as std::future or boost::expected. What I mean
is a future-like type, having the interface of std/boost::future, and an
expected-like type, having (a subset of) the interface of
std/boost::expected.

You could still implement your own future<> and expected<> if the existing
ones are unfit. My point is purely that these are independent concepts and
there is no real need to couple them into one type, with a hardcoded
error_code to boot.

So far, you've stated that you like that your type is constructible from T,
error_code or exception_ptr. I suppose a future<expected<T, error_code>> may
not be as convenient. Are there other ways in which the interface of
future<expected<T, error_code>> is deficient?


Niall Douglas
2015-05-28 14:19:13 UTC
Permalink
On 28 May 2015 at 16:25, Peter Dimov wrote:

> > I also want a big semantic change that error returns are not exceptional.
> > We hugely underuse std::error_code in STL C++ unfortunately. I am not one
> > of those people who believes exceptions are evil, and ban them in the
> > language as all the new system languages seem to. I also have no love for
> > forcing everything through the return code as with C and Rust, but I do
> > think there is a middle ground between good outcomes, bad outcomes, and
> > exceptional outcomes which is easy to program, easy to conceptualise, and
> > easy on the compiler.
>
> There could be a misunderstanding. When I say that your type is
> future<expected<T, error_code>>, I don't mean any specific future or
> expected implementations such as std::future or boost::expected. What I mean
> is a future-like type, having the interface of std/boost::future, and an
> expected-like type, having (a subset of) the interface of
> std/boost::expected.

For reference, I originally began all of this nine months ago with a
future<expected<shared_ptr<async_io_handle>, error_code>>. I found it
obtrusive and irritating to work with because of the nested layers of
API e.g. if(f.has_value()) f.get().value_or(...) etc. I found myself
wanting to wrap future<expected<shared_ptr<async_io_handle>,
error_code>> into a class which implements a flat convenience API.

> You could still implement your own future<> and expected<> if the existing
> ones are unfit. My point is purely that these are independent concepts and
> there is no real need to couple them into one type, with a hardcoded
> error_code to boot.

error_code is pretty great in fact. I've tried to come up with an
error transmitting type which is better for the same light weight
spec, and I failed after several months of effort.

I actually now believe error_code has an optimal design which cannot
be improved upon without losing that its storage is just an int and a
pointer and is therefore a completely trivial type. That is a very
rare opinion for me to hold on any part of the STL, but I became
quite impressed with the design once I failed to improve on it.

My sole thing I would change if I could is that I really wish it
stored a ssize_t not an int. An additional void * in its storage
could also be useful.

> So far, you've stated that you like that your type is constructible from T,
> error_code or exception_ptr. I suppose a future<expected<T, error_code>> may
> not be as convenient.

It wasn't fun and convenient to use when I started out down this
path. I also didn't like how many opcodes were generated by the
compiler for very simple operations, especially on MSVC.

> Are there other ways in which the interface of future<expected<T,
> error_code>> is deficient?

I very much like the expected API and design. Why I choose not to use
it comes down to the lack of a flat API as mentioned above, and
compile times.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-28 17:56:04 UTC
Permalink
Le 28/05/15 13:38, Peter Dimov a écrit :
> Vicente J. Botet Escriba wrote:
>
>> I see that you want to be able to store error_code or exception_ptr.
>> Wouldn't the interface of expected<T, variant<error_code,
>> exception_ptr>> be enough?
>
> future<T> already adds the ability to store exception_ptr. There is no
> need to store another exception_ptr in the value. Niall's type is
> exactly equivalent to future<expected<T, error_code>>, as I already
> said (twice).
I was talking about the "to be named" type, from which Niall wants to
derive his future ;-)
>
>> I proposed to the C++ standard a function future<T>::next (bind) that
>> takes a continuation having T as parameter (future<R>(T)), and
>> future<T>::catch_error that takes as parameter a continuation that
>> takes an error as parameter. The proposal was not accepted.
>
> future<T>::next( [](T){...} ) is actually pretty useful, catch_error()
> much less so. You should have proposed just the former. (Sean Parent's
> future implementation's ::then has ::next's semantics IIRC.)
We never know what we should do. Without catch_error(), you still need
.then(), as you want to be able to recover from errors in the same way.

Vicente


Peter Dimov
2015-05-28 18:14:34 UTC
Permalink
Vicente J. Botet Escriba wrote:

> Without catch_error(), you still need .then(), as you want to be able to
> recover from errors in the same way.

I wasn't suggesting removing .then, just adding .next. It's useful when I
need to perform the same action regardless of whether I got a value or an
exception.


Niall Douglas
2015-05-28 18:27:56 UTC
Permalink
On 28 May 2015 at 21:14, Peter Dimov wrote:

> > Without catch_error(), you still need .then(), as you want to be able to
> > recover from errors in the same way.
>
> I wasn't suggesting removing .then, just adding .next. It's useful when I
> need to perform the same action regardless of whether I got a value or an
> exception.

I'm a bit confused. If you want some action to occur regardless, you
surely ignore the future you are passed in your continuation?

Or by next(), do you mean that the continuation must fire even if no
value nor exception is ever set, and is therefore fired on future
destruction?

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-28 18:34:47 UTC
Permalink
Niall Douglas wrote:

> On 28 May 2015 at 21:14, Peter Dimov wrote:
>
> > I wasn't suggesting removing .then, just adding .next. It's useful when
> > I need to perform the same action regardless of whether I got a value or
> > an exception.
>
> I'm a bit confused. If you want some action to occur regardless, you
> surely ignore the future you are passed in your continuation?
>
> Or by next(), do you mean that the continuation must fire even if no value
> nor exception is ever set, and is therefore fired on future destruction?

What I mean is:

.then( []( future<T> ft ) { /*...*/ } )

Used when I want to, for instance, send ft to someone via message
passing.

.next( []( T t ) { /* ... */ } )

Fires when ready() && has_value(). Used when I only care about results.

So, for example, when I do

auto r = async(f1).next(f2).next(f3).next(f4).get();

I don't care which of the four steps has failed with an exception.

That's as if I write

auto r = f4(f3(f2(f1())));

If one of these functions throws, I just get an exception, I don't care from
where. It's the same with .next.


Peter Dimov
2015-05-28 20:03:53 UTC
Permalink
> .next( []( T t ) { /* ... */ } )
> Fires when ready() && has_value(). Used when I only care about results.

And, to clarify, when ready() and has_exception(), returns a future having
the same exception, without invoking the function passed to .next.


Niall Douglas
2015-05-29 01:21:01 UTC
Permalink
On 28 May 2015 at 21:34, Peter Dimov wrote:

> > > I wasn't suggesting removing .then, just adding .next. It's useful when
> > > I need to perform the same action regardless of whether I got a value or
> > > an exception.
> >
> > I'm a bit confused. If you want some action to occur regardless, you
> > surely ignore the future you are passed in your continuation?
> >
> > Or by next(), do you mean that the continuation must fire even if no value
> > nor exception is ever set, and is therefore fired on future destruction?
>
> What I mean is:
>
> .then( []( future<T> ft ) { /*...*/ } )
>
> Used when I want to, for instance, send ft to someone via message
> passing
>
> .next( []( T t ) { /* ... */ } )
>
> Fires when ready() && has_value(). Used when I only care about results.
>
> So, for example, when I do
>
> auto r = async(f1).next(f2).next(f3).next(f4).get();
>
> I don't care which of the four steps has failed with an exception.

Ok, let me rewrite that so we know we are on the same page:

future<T>.next(R(T)) is the same effect as:

future<T>.then([](future<T> f){
  return R(f.get()); // f.get() rethrows any exception, doesn't execute R(T)
});

Your .next() is the same as Vicente's expected.bind(), right?

If so, I was going to have .then() cope with a R(T) callable, but
that requires metaprogramming to examine the input and determine the
overload. A .next(R(T)) might be easier on build times.

I think I saw you didn't like Vicente's catch_error(E)? Was there a
reason why? Is there anything wrong with a .next(R(E))?

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-29 08:05:23 UTC
Permalink
Niall Douglas wrote:

> Ok, let me rewrite that so we know we are on the same page:
>
> future<T>.next(R(T)) is the same effect as:
>
> future<T>.then([](future<T> f){
> return R(f.get()); // f.get() rethrows any exception, doesn't execute
> R(T)
> });

Something like this, if by R(T) you mean a function object taking T and
returning an R, and by R(f.get()) you mean executing this function object
with f.get() as an argument. But the implementation of .next will not need
to rethrow on exception and will not need to execute any user code in this
case, it just moves the exception_ptr into the future<T> returned by .next.

> Your .next() is the same as Vicente's expected.bind() right?

My idea of .next was the same as his .map, but now that you bring it up, I'm
not quite sure whether .bind won't be better.

> If so, I was going to have .then() cope with a R(T) callable, but that
> requires metaprogramming to examine the input and determine the overload.

You're not going to be able to tell the difference between .then and .next
when given [](auto x) { ... }.

> I think I saw you didn't like Vicente's catch_error(E)? Was there a reason
> why? Is there anything wrong with a .next(R(E))?

It's not that I don't like it, I don't see sufficient motivation for it. I'm
also not quite sure how it's to be used. Vicente's wording in N4048 is a bit
unclear.

future<X> f1 = async( ... );
future<Y> f2 = f1.catch_error( []( exception_ptr e ) { return Y(); } );

What happens when f1 has a value?


Niall Douglas
2015-05-29 21:46:06 UTC
Permalink
On 29 May 2015 at 11:05, Peter Dimov wrote:

> > If so, I was going to have .then() cope with a R(T) callable, but that
> > requires metaprogramming to examine the input and determine the overload.
>
> You're not going to be able to tell the difference between .then and .next
> when given [](auto x) { ... }.

Surely with Expression SFINAE you can?

But sure, it's sounding slower than a simple overload. And I'd like
to retain VS2015 compatibility.

> > I think I saw you didn't like Vicente's catch_error(E)? Was there a reason
> > why? Is there anything wrong with a .next(R(E))?
>
> It's not that I don't like it, I don't see sufficient motivation for it.

Oh okay. In my situation, T and error_code are considered equals,
only exception_ptr is exceptional.

> I'm
> also not quite sure how it's to be used. Vicente's wording in N4048 is a bit
> unclear.
>
> future<X> f1 = async( ... );
> future<Y> f2 = f1.catch_error( []( exception_ptr e ) { return Y(); } );
>
> What happens when f1 has a value?

Surely for catch_error() one must always return the same type of
future as the input? Otherwise it couldn't work when f1 has a value.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-29 22:26:00 UTC
Permalink
Niall Douglas wrote:

>> > It's not that I don't like it, I don't see sufficient motivation for
>> > it.

Actually I've changed my mind and now I don't like it. It's a catch(...)
without a rethrow, and while those are occasionally necessary, they are not
likable.

To be clear, I'm talking about the N4048's .catch_error that takes an
exception_ptr, not an error_code.

> > I'm also not quite sure how it's to be used. Vicente's wording in N4048
> > is a bit unclear.
> >
> > future<X> f1 = async( ... );
> > future<Y> f2 = f1.catch_error( []( exception_ptr e ) { return
> > Y(); } );
> >
> > What happens when f1 has a value?
>
> Surely for catch_error() one must always return the same type of future as
> the input? Otherwise it couldn't work when f1 has a value.

Vicente in N4048 says that it's not required to return the same type.


Niall Douglas
2015-05-30 12:43:45 UTC
Permalink
On 30 May 2015 at 1:26, Peter Dimov wrote:

> >> > It's not that I don't like it, I don't see sufficient motivation for
> >> > it.
>
> Actually I've changed my mind and now I don't like it. It's a catch(...)
> without a rethrow, and while those are occasionally necessary, they are not
> likable.
>
> To be clear, I'm talking about the N4048's .catch_error that takes an
> exception_ptr, not an error_code.

I am sympathetic to this line of thought. Once you allow an
error_code, it becomes hard to imagine where treating exception_ptr
outcomes as just another value returned can be wise.

Ok, so if .next() filters for T, would .next_error() filter for
error_code?

I can't say I like the name .next_error(). It suggests "the next
error", not the current one.

.on_error() maybe? But then .next() should be .on_value() instead
right?

> > > I'm also not quite sure how it's to be used. Vicente's wording in N4048
> > > is a bit unclear.
> > >
> > > future<X> f1 = async( ... );
> > > future<Y> f2 = f1.catch_error( []( exception_ptr e ) { return
> > > Y(); } );
> > >
> > > What happens when f1 has a value?
> >
> > Surely for catch_error() one must always return the same type of future as
> > the input? Otherwise it couldn't work when f1 has a value.
>
> Vicente in N4048 says that it's not required to return the same type.

Oh. I don't know how that can work then. Maybe Vicente can clarify?

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Avi Kivity
2015-05-25 12:44:36 UTC
Permalink
On 05/25/2015 12:37 PM, Niall Douglas wrote:
> Dear list,
>
> As AFIO looks likely to be finally getting a community review soon,
> I've made a start on a final non-allocating constexpr-collapsing next
> generation future-promise such that the AFIO you review is "API
> final". You may remember my experimentations on those from:
>
> http://boost.2283326.n4.nabble.com/Non-allocating-future-promise-td466
> 8339.html.
>
> Essentially the win is that future-promise generates no code at all
> on recent C++ 11 compilers unless it has to [1], and when it does it
> generates an optimally minimal set with no memory allocation unless T
> does so. This should make these future-promises several orders of
> magnitude faster than the current ones in the C++ standard and solve
> their scalability problems for use with things like ASIO. They also
> have major wins for resumable functions which currently always
> construct a promise every resumable function entry - these next gen
> future-promises should completely vanish if the resumable function
> never suspends, saving a few hundred cycles each call.
>
> Anyway, my earlier experiments were all very promising, but they all
> had one big problem: the effect on compile time. My final design is
> therefore ridiculously simple: a future<T> can return only these
> options:
>
> * A T.
> * An error_code (i.e. non-type erased error, optimally lightweight)
> * An exception_ptr (i.e. type erased exception type, allocates
> memory, you should avoid this if you want performance)

I believe error_code is unneeded. Exceptions are expected to be slow.
If you want another type of variant return, let the user encapsulate it
in T (could be optional<T>, or expected<T, E>, or whatever).

>
> In other words, it's a fixed function monad where the expected return
> is T, and the unexpected return can be either exception_ptr or
> error_code. The next gen future provides Haskell type monadic
> operations similar to Boost.Thread + Boost.Expected, and thanks to
> the constexpr collapse this:
>
> future<int> test() {
> future<int> f(5);
> return f;
> }
> test().get();
>
> ... turns into a "mov $5, %eax", so future<T> is now also a
> lightweight monadic return transport capable of being directly
> constructed.

Can you post the code? I'd be very interested in comparing it with
seastar's non-allocating futures.

>
> In case you might want to know why a monadic return transport might
> be so useful as to be a whole new design idiom for C++ 11, try
> reading
> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:S
> tronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstat
> esfromfunctions.
>
> However, future<T> doesn't seem named very "monadic", so I am
> inclined to turn future<T> into a subclass of a type better named.
> Options are:
>
> * result<T>
> * maybe<T>
expected<T, E> was proposed (where E = std::exception_ptr).

> Or anything else you guys can think of? future<T> is then a very
> simple subclass of the monadic implementation type, and is simply
> some type sugar for promise<T> to use to construct a future<T>.
>
> Let the bike shedding begin! And my thanks in advance.
>
> Niall
>
>
> [1]: Compiler optimiser bugs notwithstanding. Currently only very
> recent GCCs get this pattern right. clang isn't bad and spills a
> dozen or so completely unnecessary opcodes, and I'll follow up with
> Chandler with some bug reports to get clang fixed. Poor old MSVC
> spits out about 3000 lines of assembler, it can't do constexpr
> folding at all yet.
>


Niall Douglas
2015-05-25 14:57:30 UTC
Permalink
On 25 May 2015 at 15:44, Avi Kivity wrote:

> I believe error_code is unneeded. Exceptions are expected to be slow.
> If you want another type of variant return, let the user encapsulate it
> in T (could be optional<T>, or expected<T, E>, or whatever).

Please read the rationale at
https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:S
tronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstat
esfromfunctions as was requested.

In particular, error_code is fast, and unexpected returns are not
exceptional and must be as fast as expected returns.

Also, any monadic transport would in fact default construct to an
unexpected state holding a null error_code, which is constexpr. This
lets one more easily work around a number of exception safety
irritations where the move constructor of T is not noexcept.

> > ... turns into a "mov $5, %eax", so future<T> is now also a
> > lightweight monadic return transport capable of being directly
> > constructed.
>
> Can you post the code? I'd be very interested in comparing it with
> seastar's non-allocating futures.

I may do so once I've got the functioning constexpr reduction being
unit tested per commit.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Avi Kivity
2015-05-25 15:33:06 UTC
Permalink
On 05/25/2015 05:57 PM, Niall Douglas wrote:
> On 25 May 2015 at 15:44, Avi Kivity wrote:
>
>> I believe error_code is unneeded. Exceptions are expected to be slow.
>> If you want another type of variant return, let the user encapsulate it
>> in T (could be optional<T>, or expected<T, E>, or whatever).
> Please read the rationale at
> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:S
> tronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstat
> esfromfunctions as was requested.
>
> In particular, error_code is fast, and unexpected returns are not
> exceptional and must be as fast as expected returns.

As I mentioned, in this case the user can use expected<> or similar
themselves. Otherwise, what's the type of error_code? There could be an
infinite number of error_code types to choose from (starting with simple
'enum class'es and continuing with error codes that include more
information about the error: nonscalar objects).

> Also, any monadic transport would default construct to an unexpected
> state of a null error_code in fact, which is constexpr. This lets one
> work around a number of exception safety irritations where move
> constructor of T is not noexcept more easily.

I'm not sure how the default constructor of future<> and the move
constructor of T are related.

I'm not even sure why future<> would require a default constructor.
Seastar's doesn't have one.

>
>>> ... turns into a "mov $5, %eax", so future<T> is now also a
>>> lightweight monadic return transport capable of being directly
>>> constructed.
>> Can you post the code? I'd be very interested in comparing it with
>> seastar's non-allocating futures.
> I may do so once I've got the functioning constexpr reduction being
> unit tested per commit.
>
>

I'm looking forward to it! I've been bitten by the same compile time
explosion problems and I'm curious to see how you solved them.


_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Niall Douglas
2015-05-25 16:37:55 UTC
Permalink
On 25 May 2015 at 18:33, Avi Kivity wrote:

> > In particular, error_code is fast, and unexpected returns are not
> > exceptional and must be as fast as expected returns.
>
> As I mentioned, in this case the user can use expected<> or similar
> themselves.

As I mentioned, expected<> is too hard on compile times for large
code bases. It's also way overkill for what 98% of use cases need.

> Otherwise, what's the type of error_code? There could be an
> infinite number of error_code types to choose from, starting with simple
> 'enum class'es and continuing with error codes that include more
> information about the error (nonscalar objects).

It's std::error_code. Same as ASIO uses. If being compiled as part of
Boost, I expect boost::error_code and boost::exception_ptr will work
too as additional variant options.

> > Also, any monadic transport would default construct to an unexpected
> > state of a null error_code in fact, which is constexpr. This lets one
> > work around a number of exception safety irritations where move
> > constructor of T is not noexcept more easily.
>
> I'm not sure how the default constructor of future<> and the move
> constructor of T are related.

Well, let's assume we're really talking about a maybe<T>, and
future<T> subclasses maybe<T> with additional thread safety stuff.

In this situation a maybe<T> doesn't need a default constructor, but
because it's a fixed variant we always know that error_code is
available, and error_code (a) doesn't allocate memory and (b) is STL
container friendly, so it seems sensible to make maybe<T> also STL
container friendly by letting it default to error_code.

The problem, as with the WG21 proposed variant, is ensuring move
assignment doesn't leave the existing contents in an undefined state
if a throwing move constructor throws. Boost.Variant's copy
constructor dynamically allocates a temporary copy of itself
internally to give that strong guarantee - unacceptable overhead for
mine. So I need some well defined state to default to if, during move
assignment, my move constructor throws after I have destructed my
current state. Defaulting to an error_code saying the move
constructor threw is a reasonably well defined outcome.
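That fallback can be sketched as a fixed two-state variant. The following is a hypothetical illustration only (the type and member names are mine, not AFIO's): default construction selects a null error_code, and move assignment degrades to a well-defined error state when T's move constructor throws.

```cpp
#include <cassert>
#include <new>
#include <system_error>
#include <utility>

// Hypothetical sketch: a fixed variant holding either a T or a
// std::error_code, default-constructing to a null error_code.
template <class T>
class maybe {
    bool has_value_;
    union {
        T value_;
        std::error_code ec_;
    };
    void destroy() noexcept {
        if (has_value_) value_.~T();
        else ec_.~error_code();
    }
public:
    maybe() noexcept : has_value_(false), ec_() {}  // null error_code state
    maybe(T v) : has_value_(true), value_(std::move(v)) {}
    maybe(std::error_code ec) noexcept : has_value_(false), ec_(ec) {}
    maybe(maybe&& o) : has_value_(o.has_value_) {
        if (has_value_) ::new (&value_) T(std::move(o.value_));
        else ::new (&ec_) std::error_code(o.ec_);
    }
    ~maybe() { destroy(); }

    maybe& operator=(maybe&& o) {
        if (this == &o) return *this;
        destroy();  // the old state is gone; we must end in *some* valid state
        if (o.has_value_) {
            try {
                ::new (&value_) T(std::move(o.value_));
                has_value_ = true;
            } catch (...) {
                // T's move constructor threw: default to an error_code
                // saying so rather than leaving *this destructed.
                ::new (&ec_) std::error_code(
                    std::make_error_code(std::errc::state_not_recoverable));
                has_value_ = false;
            }
        } else {
            ::new (&ec_) std::error_code(o.ec_);
            has_value_ = false;
        }
        return *this;
    }

    bool has_value() const noexcept { return has_value_; }
    std::error_code error() const noexcept {
        return has_value_ ? std::error_code() : ec_;
    }
    T& get() { return value_; }  // precondition: has_value()
};
```

The key point is that the catch branch constructs a trivially-noexcept member, so the object is never left without an active union member.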

> I'm not even sure why future<> would require a default constructor.
> Seastar's doesn't have one.

My future promise is as close to a strict superset of the Concurrency
TS as is possible. It should be drop-in replaceable in 99% of use
cases, with the only point of failure being if you are trying to use
allocators with your futures.

My future promise is also intended to enter the Boost.Thread rewrite
as the next gen future promise, if it proves popular.

> >>> ... turns into a "mov $5, %eax", so future<T> is now also a
> >>> lightweight monadic return transport capable of being directly
> >>> constructed.
>
> I'm looking forward to it! I've been bitten by the same compile time
> explosion problems and I'm curious to see how you solved them.

With a great deal of concentrated study of compiler diagnostics and
trial and error!

Once they are working and they are being unit tested per commit, I'll
get a CI failure every time I break it. That should make things
enormously easier going. Lots of machinery and scripting to come
before that though.

I've got everything working except the sequence:

promise<int> p;
p.set_value(5);
return p.get_future().get();

This should reduce to a mov $5, %eax, but currently does not for an
unknown reason. I'm just about to go experiment and see why.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-25 18:04:26 UTC
Permalink
Niall Douglas wrote:

> I've got everything working except the sequence:
>
> promise<int> p;
> p.set_value(5);
> return p.get_future().get();
>
> This should reduce to a mov $5, %eax, but currently does not for an
> unknown reason. I'm just about to go experiment and see why.

I'm really struggling to see how all that could work. Where is the result
stored? In the promise? Wouldn't this require the promise to outlive the
future<>? This doesn't hold in general. How is it to be guaranteed?


Avi Kivity
2015-05-25 18:21:03 UTC
Permalink
On 05/25/2015 09:04 PM, Peter Dimov wrote:
> Niall Douglas wrote:
>
>> I've got everything working except the sequence:
>>
>> promise<int> p;
>> p.set_value(5);
>> return p.get_future().get();
>>
>> This should reduce to a mov $5, %eax, but currently does not for an
>> unknown reason. I'm just about to go experiment and see why.
>
> I'm really struggling to see how all that could work. Where is the
> result stored? In the promise? Wouldn't this require the promise to
> outlive the future<>? This doesn't hold in general. How is it to be
> guaranteed?
>


In seastar, we achieved this by reserving space for the result in both
future and promise, and by having future and promise track each other
(so they can survive moves).

https://github.com/cloudius-systems/seastar/blob/master/core/future.hh


Peter Dimov
2015-05-25 20:11:34 UTC
Permalink
Avi Kivity wrote:
> On 05/25/2015 09:04 PM, Peter Dimov wrote:
> > Niall Douglas wrote:
> >
> >> I've got everything working except the sequence:
> >>
> >> promise<int> p;
> >> p.set_value(5);
> >> return p.get_future().get();
> >>
> >> This should reduce to a mov $5, %eax, but currently does not for an
> >> unknown reason. I'm just about to go experiment and see why.
> >
> > I'm really struggling to see how all that could work. Where is the
> > result stored? In the promise? Wouldn't this require the promise to
> > outlive the future<>? This doesn't hold in general. How is it to be
> > guaranteed?
>
> In seastar, we achieved this by reserving space for the result in both
> future and promise, and by having future and promise track each other (so
> they can survive moves).

Doesn't this introduce a race between ~promise and ~future? ~future checks
if( promise_ ), ~promise checks if( future_ ), and things get ugly.

Fixing that requires synchronization, which makes mov $5, %eax impossible.


Avi Kivity
2015-05-25 20:45:26 UTC
Permalink
On 05/25/2015 11:11 PM, Peter Dimov wrote:
> Avi Kivity wrote:
>> On 05/25/2015 09:04 PM, Peter Dimov wrote:
>> > Niall Douglas wrote:
>> >
>> >> I've got everything working except the sequence:
>> >>
>> >> promise<int> p;
>> >> p.set_value(5);
>> >> return p.get_future().get();
>> >>
>> >> This should reduce to a mov $5, %eax, but currently does not for
>> an >> unknown reason. I'm just about to go experiment and see why.
>> >
>> > I'm really struggling to see how all that could work. Where is the
>> > result stored? In the promise? Wouldn't this require the promise to
>> > outlive the future<>? This doesn't hold in general. How is it to be
>> > guaranteed?
>>
>> In seastar, we achieved this by reserving space for the result in
>> both future and promise, and by having future and promise track each
>> other (so they can survive moves).
>
> Doesn't this introduce a race between ~promise and ~future? ~future
> checks if( promise_ ), ~promise checks if( future_ ), and things get
> ugly.
>
> Fixing that requires synchronization, which makes mov $5, %eax
> impossible.
>

Ah, seastar futures are thread-unsafe. All computation is core-local,
with multiprocessing achieved by explicit message passing (using a local
future to represent the remote computation).



Niall Douglas
2015-05-25 22:46:35 UTC
Permalink
On 25 May 2015 at 21:04, Peter Dimov wrote:

> > I've got everything working except the sequence:
> >
> > promise<int> p;
> > p.set_value(5);
> > return p.get_future().get();
> >
> > This should reduce to a mov $5, %eax, but currently does not for an
> > unknown reason. I'm just about to go experiment and see why.
>
> I'm really struggling to see how all that could work. Where is the result
> stored? In the promise? Wouldn't this require the promise to outlive the
> future<>? This doesn't hold in general. How is it to be guaranteed?

The promise storage is an unrestricted union holding any of the types
returnable by the future, or a future<T> *, or a shared_ptr<future<T>>
(for shared_future). Ideally you'd fold the internal spinlock into
the unrestricted union too, as you only need it when the active
storage is the future<T> *, but that complicates the source code very
significantly, so for now I've left it out.
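A rough sketch of what such an unrestricted union might look like (simplified and with hypothetical names; the discriminator, thread safety, and the spinlock are all omitted or reduced):

```cpp
#include <cassert>
#include <exception>
#include <memory>
#include <new>
#include <system_error>
#include <utility>

template <class T> struct future;  // forward declaration, used only via pointer

// Hypothetical sketch of the promise storage described above: an
// unrestricted union over everything the future can carry, plus a tag.
// Only the active member is ever constructed or destroyed.
template <class T>
struct promise_storage {
    enum class which { none, value, error, excepted, fut, shared_fut };
    which tag = which::none;
    union {
        T value_;
        std::error_code error_;
        std::exception_ptr excepted_;
        future<T>* future_;                         // once get_future() is called
        std::shared_ptr<future<T>> shared_future_;  // for shared_future
    };
    promise_storage() {}             // no union member active yet
    ~promise_storage() { reset(); }

    void set_value(T v) {
        reset();
        ::new (&value_) T(std::move(v));
        tag = which::value;
    }
    void set_error(std::error_code ec) {
        reset();
        ::new (&error_) std::error_code(ec);
        tag = which::error;
    }
    void reset() {
        switch (tag) {
        case which::value:      value_.~T(); break;
        case which::error:      error_.~error_code(); break;
        case which::excepted:   excepted_.~exception_ptr(); break;
        case which::shared_fut: shared_future_.~shared_ptr(); break;
        default: break;         // none and the raw pointer are trivial
        }
        tag = which::none;
    }
};
```

Because the union's members are non-trivial, the enclosing type must manage construction and destruction by hand, which is what keeps the footprint at max(sizeof(members)) with no allocation.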

Peter Dimov wrote:

> > In seastar, we achieved this by reserving space for the result in both
> > future and promise, and by having future and promise track each other (so
> > they can survive moves).
>
> Doesn't this introduce a race between ~promise and ~future? ~future checks
> if( promise_ ), ~promise checks if( future_ ), and things get ugly.
>
> Fixing that requires synchronization, which makes mov $5, %eax impossible.

In my implementation, if you never call promise.get_future() you
never get synchronisation. The constexpr folding has the compiler
elide all that. Even though clang spills a lot of ops, it still
spills no synchronisation. MSVC unfortunately spills everything, but
no synchronisation would ever be executed by the CPU.

Also if you call promise.set_value() before promise.get_future(), you
never get synchronisation as futures are single shot.

I took a very simple synchronisation solution - a spinlock in both
promise and future. Both are locked before any changes happen in
either, if synchronisation has been switched on.

Another optimisation I have yet to do is to detach the future-promise
after the value has been set to avoid unnecessary synchronisation.

Giovanni Piero Deretta wrote:

> So, the future/promise pair can be optimized out if the work can be
> completed synchronously (i.e. immediately or at get time). But then,
> why use a future at all? What is the use case you are trying to
> optimize for? do you have an example?

For me my main purpose is making AFIO allocate four malloc/frees per
op instead of eight. I also make heavy use of make_ready_future(),
and by definition that is now optimally fast.

Also, as I mentioned, the lion's share of the future implementation
is actually reusable as a monadic transport. That's currently a
monad<T, consuming> base class in my code, but I am asking for bike
shedding here on what to name a user facing specialisation.

> I believe that trying to design a future that can fulfill everybody's
> requirements is a lost cause. The c++ way is to define concepts and
> algorithms that work on concepts. The types we want to generalize are
> std::future, expected, possibly optional and all the other futures that have
> been cropping up in the meantime. The algorithms are of course those
> required for composition: then, when_all, when_any plus probably get and
> wait.

I think it might not be a lost cause in a world with concepts and
especially modules. Until then it's going to be unacceptably slow.
And that's years away, and I need this now.

Regarding heterogeneous future type wait composition, yes this is
hoped to be the foundation for such an eventual outcome. That's more
Vicente's boat than mine though.

> We could just take a page from Haskell and call the concept Monad, but
> maybe we want something more specific, like Result.

I'll take that as a vote for result<T>.

Bjorn suggested holder<T> and value<T> as well. I think the former is
too generic, and the latter suggests the thing is a value and can
decay into one without get(). Or at least, that some might think that.

> Then, in addition to the algorithms themselves, there is certainly
> space in boost for a library for helping build custom futures, a-la
> boost.iterator.

I originally intended that, my second round of experiments showed it
could be very exciting. But I choose to bow out in favour of
something I can deliver in weeks rather than months. I need all this
ready mid-June so I can start retrofitting AFIO.

Besides, simpler I think may well win the race. More complex means
more compiler quirks. I am dealing enough with those already!

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-25 23:36:18 UTC
Permalink
Niall Douglas wrote:
> In my implementation, if you never call promise.get_future() you never get
> synchronisation.

That's an interesting use case. When does it occur?

> The constexpr folding has the compiler elide all that.

I suspect that what you call "constexpr folding" has nothing to do with
constexpr, it's just inlining and ordinary non-constexpr optimization. KAI
C++ was famous for doing such folding miracles decades ago.

> Also if you call promise.set_value() before promise.get_future(), you
> never get synchronisation as futures are single shot.

Another interesting case for which I've trouble imagining a practical use.
:-)

> I took a very simple synchronisation solution - a spinlock in both promise
> and future. Both are locked before any changes happen in either, if
> synchronisation has been switched on.

Not quite sure how this would work. Seems to me that future and promise both
first need to take their spinlock and then the other's, which creates the
potential for deadlock. But I may be missing something.


Peter Dimov
2015-05-25 23:58:33 UTC
Permalink
> Seems to me that future and promise both first need to take their spinlock
> and then the other's, which creates the potential for deadlock.

You could fix that by preallocating a sufficiently big static array of
spinlocks, then generating an index when creating the promise and using
spinlock_[i] in both. Collisions are possible but harmless.
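A minimal sketch of that pool idea (sizes and names are illustrative, not from any actual implementation): the promise picks an index at construction, both halves of the pair store it, and both lock the same pool slot.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

// A fixed static array of spinlocks; no allocation ever happens.
// Unrelated promise/future pairs may collide on an index, which costs
// some contention but is otherwise harmless.
constexpr std::size_t pool_size = 256;
std::atomic<bool> spinlock_pool[pool_size] = {};  // all initially unlocked

// Generate an index when creating the promise.
std::size_t next_index() {
    static std::atomic<std::size_t> counter{0};
    return counter.fetch_add(1, std::memory_order_relaxed) % pool_size;
}

// RAII guard: both promise and future lock spinlock_pool[i].
struct pool_lock {
    std::atomic<bool>& l;
    explicit pool_lock(std::size_t i) : l(spinlock_pool[i]) {
        while (l.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }
    ~pool_lock() { l.store(false, std::memory_order_release); }
};
```

As Gottlob points out downthread, this particular scheme risks deadlock if T's constructor itself touches a future/promise that collides on the same slot, which is why the in-object lock designs come up next.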

This non-allocating implementation is an interesting argument in favor of
the current "unique future", which I've long disliked. I prefer futures to
be shared_futures. That may too be possible to implement without allocation,
but it's kind of complicated.


Gottlob Frege
2015-05-26 03:50:36 UTC
Permalink
Can't use a spinlock pool, because you might hold the spinlock while
moving/copying T, and T could be anything; in theory at least, T's
constructor might also use a future/promise, etc., such that a
collision could cause a deadlock.

An extremely rare deadlock, at that - one a user would never be able
to diagnose.

Alternatively, with a lock in each and pointers pointing to each other, you
avoid deadlock by first setting (via CAS) your own state to be "I'm moving"
then (if successful) setting your partner's flag to "see ya later", then
(if successful) moving.

No one moves without telling the other first. You can get a live lock, but
not a deadlock. The live lock can be dealt with (particularly easily, since
the relationship between promise and future is asymmetrical - just say the
future always goes first, for example).

At least that's one way:

http://2013.cppnow.org/session/non-allocating-stdfuturepromise/


Sent from my portable Analytical Engine

Niall Douglas
2015-05-26 09:29:57 UTC
Permalink
On 26 May 2015 at 3:50, Gottlob Frege wrote:

> Alternatively, with a lock in each and pointers pointing to each other, you
> avoid deadlock by first setting (via CAS) your own state to be "I'm moving"
> then (if successful) setting your partner's flag to "see ya later", then
> (if successful) moving.
>
> No one moves without telling the other first. You can get a live lock, but
> not a dead lock. The live Lock can be dealt with (particularly easily since
> the relationship (promise vs future) is asymmetrical - just say future
> always goes first, for example).

I found via empirical testing that the number of occasions when two
threads both access promise and future concurrently is extremely
rare. The stupidly simple method of locking both for every access I
didn't find was a problem. And it's easy to test, and verify with the
thread sanitiser.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Gottlob Frege
2015-05-27 16:21:52 UTC
Permalink
On Tue, May 26, 2015 at 5:29 AM, Niall Douglas
<***@nedprod.com> wrote:
> On 26 May 2015 at 3:50, Gottlob Frege wrote:
>
>> Alternatively, with a lock in each and pointers pointing to each other, you
>> avoid deadlock by first setting (via CAS) your own state to be "I'm moving"
>> then (if successful) setting your partner's flag to "see ya later", then
>> (if successful) moving.
>>
>> No one moves without telling the other first. You can get a live lock, but
>> not a dead lock. The live Lock can be dealt with (particularly easily since
>> the relationship (promise vs future) is asymmetrical - just say future
>> always goes first, for example).
>
> I found via empirical testing that the number of occasions when two
> threads both access promise and future concurrently is extremely
> rare. The stupidly simple method of locking both for every access I
> didn't find was a problem. And it's easy to test, and verify with the
> thread sanitiser.
>

Same algorithm though, right?

Lock yourself, then try to lock your partner, unlock yourself if you
can't lock your partner, repeat?
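That algorithm can be sketched as follows (a simplified illustration of the protocol being discussed, not anyone's actual code; the `side` type is hypothetical):

```cpp
#include <atomic>
#include <cassert>

// Each half of the promise/future pair owns one spinlock.
struct side {
    std::atomic<bool> locked{false};
    bool try_lock() { return !locked.exchange(true, std::memory_order_acquire); }
    void lock()     { while (!try_lock()) { /* spin */ } }
    void unlock()   { locked.store(false, std::memory_order_release); }
};

// Lock yourself, try-lock your partner, unlock yourself if that fails,
// repeat. Neither side ever blocks while holding a lock, so deadlock is
// impossible; the worst case is a (rare) live lock under contention.
void lock_both(side& mine, side& partner) {
    for (;;) {
        mine.lock();
        if (partner.try_lock()) return;  // got both, done
        mine.unlock();                   // back off so the partner can proceed
    }
}
```

The live lock can be broken by biasing one side, e.g. always letting the future win, as Gottlob suggests earlier in the thread.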

Niall Douglas
2015-05-28 10:07:55 UTC
Permalink
On 27 May 2015 at 12:21, Gottlob Frege wrote:

> >> Alternatively, with a lock in each and pointers pointing to each other, you
> >> avoid deadlock by first setting (via CAS) your own state to be "I'm moving"
> >> then (if successful) setting your partner's flag to "see ya later", then
> >> (if successful) moving.
> >>
> >> No one moves without telling the other first. You can get a live lock, but
> >> not a dead lock. The live Lock can be dealt with (particularly easily since
> >> the relationship (promise vs future) is asymmetrical - just say future
> >> always goes first, for example).
> >
> > I found via empirical testing that the number of occasions when two
> > threads both access promise and future concurrently is extremely
> > rare. The stupidly simple method of locking both for every access I
> > didn't find was a problem. And it's easy to test, and verify with the
> > thread sanitiser.
> >
>
> Same algorithm though, right?
>
> Lock yourself, then try to lock your partner, unlock yourself if you
> can't lock your partner, repeat?

I think yours has a bias towards the future? I found that, given the
rarity of live lock, the logic for handling it wasn't worth it.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-26 10:36:10 UTC
Permalink
Gottlob Frege wrote:

> Can't use a spinlock pool. Because you might hold the spin lock when
> moving/copying T, and T could be anything, then, in theory at least, T's
> constructor might also use a future/promise, etc, etc, such that a
> collision could cause a deadlock.

Yes, good point. Need recursive spinlocks.


Gottlob Frege
2015-05-26 18:27:23 UTC
Permalink
On Tue, May 26, 2015 at 6:36 AM, Peter Dimov <***@pdimov.com> wrote:
> Gottlob Frege wrote:
>
>> Can't use a spinlock pool. Because you might hold the spin lock when
>> moving/copying T, and T could be anything, then, in theory at least, T's
>> constructor might also use a future/promise, etc, etc, such that a collision
>> could cause a deadlock.
>
>
> Yes, good point. Need recursive spinlocks.
>

I don't think a pool of recursive spinlocks works, because T could
decide to wait on another thread, which is waiting on a future, which
could - behind the scenes - be using the same spinlock as the original
thread.

You could probably still use a pool somehow, with more management, but
it gets complicated. You probably need either at least as many locks
as threads, or a fallback of allocating a lock if none are available
- but the original point was to not allocate...

(Or a way to know that Thread A is waiting on Thread B...)

Tony

Giovanni Piero Deretta
2015-05-26 06:25:23 UTC
Permalink
On 26 May 2015 12:59 am, "Peter Dimov" <***@pdimov.com> wrote:
>>
[...]
>
> This non-allocating implementation is an interesting argument in favor of
the current "unique future", which I've long disliked. I prefer futures to
be shared_futures.

Interesting, why do you dislike the unique future design? A shared future
pretty much requires holding a shared pointer and needs heavyweight
synchronisation (a mutex+condvar or equivalent). On the other hand, a
unique future needs no internal mutual exclusion, and the only
synchronisation needed is for the handoff between producer and consumer;
the reference count is implicit (just have the consumer always deallocate
the object). The implementation can be significantly more lightweight.

-- gpd

Peter Dimov
2015-05-26 10:47:57 UTC
Permalink
Giovanni Piero Deretta wrote:
> On 26 May 2015 12:59 am, "Peter Dimov" <***@pdimov.com> wrote:
> > This non-allocating implementation is an interesting argument in favor
> > of the current "unique future", which I've long disliked. I prefer
> > futures to be shared_futures.
>
> Interesting, why do you dislike the unique future design?

I dislike the unique/shared future split, which requires all algorithms to
be duplicated/manyplicated.

auto f = when_any( f1, f2, f3, f4 ); // 2^4 options

I prefer a single future<T> type, and I want this single type to be shared
because I want the above when_any to not std::move my futures, leaving me
with empty shells.

> A shared future pretty much requires holding a shared pointer and needs
> heavy weight synchronisation (a mutex+condvar or equivalent). On the
> other hand a unique future need no internal mutual exclusion and the only
> synchronisation is needed for the handoff between producer and consumer;
> the reference count is implicit (just have the consumer always deallocate
> the object). The implementation can be significantly more light weight.

I already acknowledged that these implementations are an argument in favor
of unique future.

But I don't think I agree with what you're saying. I don't see how
mutex+condvar is required by shared future but not by unique future (how do
you implement wait() is not related to sharedness), neither do I see how
unique futures require no synchronization (they obviously do in Niall's
implementation.)


Giovanni Piero Deretta
2015-05-26 12:31:25 UTC
Permalink
On 26 May 2015 11:54 am, "Peter Dimov" <***@pdimov.com> wrote:
>
> Giovanni Piero Deretta wrote:
>>
>> On 26 May 2015 12:59 am, "Peter Dimov" <***@pdimov.com> wrote:
>> > This non-allocating implementation is an interesting argument in favor
> of the current "unique future", which I've long disliked. I prefer >
futures to be shared_futures.
>>
>> Interesting, why do you dislike the unique future design?
>
>
> I dislike the unique/shared future split, which requires all algorithms
to be duplicated/manyplicated.
>
> auto f = when_any( f1, f2, f3, f4 ); // 2^4 options
>

The algorithm can be generic of course. You fear the template instantiation
explosion? You'll have the same problem if you want to mix different
futures from separate libraries.

>
>> A shared future pretty much requires holding a shared pointer and needs
heavy weight synchronisation (a mutex+condvar or equivalent). On the other
hand a unique future need no internal mutual exclusion and the only
synchronisation is needed for the handoff between producer and consumer;
the reference count is implicit (just have the consumer always deallocate
the object). The implementation can be significantly more light weight.
>
>
> I already acknowledged that these implementations are an argument in
favor of unique future.
>
> But I don't think I agree with what you're saying. I don't see how
mutex+condvar is required by shared future but not by unique future (how do
you implement wait() is not related to sharedness), neither do I see how
unique futures require no synchronization (they obviously do in Niall's
implementation.)

You are right of course. The readiness notification is orthogonal, but in a
shared future you need a mutex to synchronise the various consumers' access
to the shared state, so a condition variable becomes the obvious choice,
while you can be more creative with plain futures.

Synchronisation with unique futures is of course necessary, but it is much
simpler: the producer will set the future to ready exactly once, then never
touch it again, while the consumer won't access the future state until it
is ready. Basically the producer needs a single strong CAS on an atomic
object to set readiness and check for waiters; similarly the waiter needs a
CAS to test and wait. No explicit mutual exclusion is necessary.
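The single-CAS handoff Giovanni describes can be sketched like this (a deliberately reduced illustration with hypothetical names; real code would register a waiter and block where noted, and carry an arbitrary T):

```cpp
#include <atomic>

// Three-state handoff between one producer and one consumer.
enum state : int { empty = 0, ready = 1, waiting = 2 };

struct handoff {
    std::atomic<int> st{empty};
    int value = 0;

    // Producer: store the value, then CAS empty -> ready exactly once.
    // Returns true if no waiter had registered (no wakeup needed).
    bool set_value(int v) {
        value = v;
        int expected = empty;
        if (st.compare_exchange_strong(expected, ready,
                                       std::memory_order_release))
            return true;
        // expected == waiting: a real implementation would wake the
        // registered waiter here.
        st.store(ready, std::memory_order_release);
        return false;
    }

    // Consumer: returns true if the value was already published.
    bool try_take(int& out) {
        if (st.load(std::memory_order_acquire) == ready) {
            out = value;
            return true;
        }
        return false;  // real code would CAS empty -> waiting, then block
    }
};
```

Note there is no mutex anywhere: the release CAS by the producer and the acquire load by the consumer are the entire synchronisation for the handoff.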

Peter Dimov
2015-05-26 12:42:49 UTC
Permalink
Giovanni Piero Deretta wrote:
> > auto f = when_any( f1, f2, f3, f4 ); // 2^4 options
>
> The algorithm can be generic of course.

Can it be generic? Do you have a generic implementation of when_any?

> You'll have the same problem if you want to mix different futures from
> separate libraries.

Yes, I know. Hence my preference for a single std::future that is _the_
future type.

> You are right of course. The readiness notification is orthogonal, but in
> a shared future you need a mutex to synchronise the various consumers
> access to the shared state, so a condition variable becomes the obvious
> choice, while you can be more creative with plain futures.

No, in fact I do not need a mutex to synchronize consumers, as far as I can
see. Where would I need it? Consumers don't interfere with one another.


Giovanni Piero Deretta
2015-05-26 13:15:10 UTC
Permalink
On 26 May 2015 1:44 pm, "Peter Dimov" <***@pdimov.com> wrote:
>
> Giovanni Piero Deretta wrote:
>>
>> > auto f = when_any( f1, f2, f3, f4 ); // 2^4 options
>>
>> The algorithm can be generic of course.
>
>
> Can it be generic? Do you have a generic implementation of when_any?
>

You do need to standardise a generic asynchronous wait protocol of course.

I have a generic 'wait_any' (it blocks, it doesn't return a future), and a
somewhat generic 'then' (the returned future is not generic, but it can wait
on multiple waitables). I do not have a generic when_any, but it should be a
matter of putting those two together.

Wait_any/all (toward the end of the file):
https://github.com/gpderetta/libtask/blob/master/event.hpp

Generic then (also at the end of the file):
https://github.com/gpderetta/libtask/blob/master/future.hpp

>
>> You'll have the same problem if you want to mix different futures from
separate libraries.
>
>
> Yes, I know. Hence my preference for a single std::future that is _the_
future type.
>

Good luck with that :)

>
>> You are right of course. The readiness notification is orthogonal, but
in a shared future you need a mutex to synchronise the various consumers
access to the shared state, so a condition variable becomes the obvious
choice, while you can be more creative with plain futures.
>
>
> No, in fact I do not need a mutex to synchronize consumers, as far as I can
see. Where would I need it? Consumers don't interfere with one another.
>

You need to protect concurrent calls to then, and also to wait if you allow
consumers to provide their own wait object (which you need at least for
wait_any). Possibly also concurrent when_any/all, but I haven't tried
implementing that. You might as well use it to protect the refcount.

I concede that it might not be required with the current limited
std::future interface.
Peter Dimov
2015-05-26 13:31:21 UTC
Permalink
Giovanni Piero Deretta wrote:

> You need to protect concurrent calls to then...

Hm, good point, I hadn't thought of that.

Note however that if we remove promise::get_future and create a
promise/future pair in one go, I can then in principle check that the
future is unique and omit the mutex lock.

> You might as well use it to protect the refcount.

Eh, no. Refcounts have been protecting themselves for decades. :-)


Giovanni Piero Deretta
2015-05-26 13:55:22 UTC
Permalink
On 26 May 2015 2:32 pm, "Peter Dimov" <***@pdimov.com> wrote:
>
> Giovanni Piero Deretta wrote:
>
>> You need to protect concurrent calls to then...
>
>
> Hm, good point, I hadn't thought of that.
>
> Note however that if we remove promise::get_future and create a
promise/future pair in one go, I can then in principle check that the
future is unique and omit the mutex lock.
>

Yes; it is also feasible to chain all callbacks in a lock-free list, as
there are no deletions until signal time.
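Such a lock-free callback list can be sketched with a CAS push (my illustration, not code from the thread): because nothing is removed until the promise is signalled, no locks or hazard pointers are needed on the push path.

```cpp
#include <atomic>
#include <functional>

struct node {
    std::function<void(int)> fn;
    node* next;
};

struct callback_list {
    std::atomic<node*> head{nullptr};

    void push(std::function<void(int)> fn) {
        node* n = new node{std::move(fn), head.load(std::memory_order_relaxed)};
        // On failure, compare_exchange_weak reloads the current head
        // into n->next, so we simply retry.
        while (!head.compare_exchange_weak(n->next, n,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
        }
    }

    // Called once, at signal time; the signaller now has exclusive
    // access, so a plain traversal (and deletion) is safe.
    void fire(int value) {
        node* n = head.exchange(nullptr, std::memory_order_acquire);
        while (n) {
            n->fn(value);
            node* dead = n;
            n = n->next;
            delete dead;
        }
    }
};

inline int demo_fire() {
    callback_list cbs;
    int sum = 0;
    cbs.push([&](int v) { sum += v; });      // runs with the signalled value
    cbs.push([&](int v) { sum += 2 * v; });
    cbs.fire(5);                             // 5 + 10
    return sum;
}
```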

That doesn't work for wait-any and timed waits, where waiters enter and
leave all the time. It might also be desirable to unify the waiter list
and the callback list. I have a few ideas (basically putting all the
overhead into retiring the wait in the multiple-waiter case), but I
haven't gotten around to implementing them.

>
>> You might as well use it to protect the refcount.
>
>
> Eh, no. Refcounts have been protecting themselves for decades. :-)

The idea is that you might be able to deduce the refcount from the shared
state itself, so you do not need an explicit counter.

Peter Dimov
2015-05-26 14:53:37 UTC
Permalink
Giovanni Piero Deretta wrote:
> > Can it be generic? Do you have a generic implementation of when_any?
>
> You do need to standardise a generic asynchronous wait protocol of course.

Or, I can make promises shared as well, and then use .then.

when_any( f1, f2, f3 )
{
future<size_t> f;
promise<size_t> p(f);

f1.then( [p](auto){ p.set_value(0); } );
f2.then( [p](auto){ p.set_value(1); } );
f3.then( [p](auto){ p.set_value(2); } );

    return f.then( [=]( future<size_t> index )
{
return make_pair(index.get(), make_tuple(f1, f2, f3));
});
}

Something like that.

(I still don't need a mutex for the multiple set_value calls, by the way -
if(rn_++ == 0) { set the value; publish/notify; }.)
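That "if(rn_++ == 0)" idea can be sketched with an atomic counter (my illustration; the names are hypothetical): with several producers racing to complete a when_any, the first fetch_add returning zero wins and publishes the result; the others become no-ops, and no mutex is required.

```cpp
#include <atomic>
#include <cstddef>

struct any_state {
    std::atomic<unsigned> rn{0};
    std::size_t index = 0;           // written only by the winner
    std::atomic<bool> ready{false};

    bool try_set(std::size_t i) {
        if (rn.fetch_add(1, std::memory_order_acq_rel) != 0)
            return false;            // someone already set the value
        index = i;                   // set the value...
        ready.store(true, std::memory_order_release);  // ...then publish
        return true;
    }
};

inline std::size_t demo_winner() {
    any_state st;
    bool first  = st.try_set(1);     // wins
    bool second = st.try_set(2);     // loses silently
    return (first && !second) ? st.index : static_cast<std::size_t>(-1);
}
```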


Giovanni Piero Deretta
2015-05-26 15:43:41 UTC
Permalink
On 26 May 2015 3:54 pm, "Peter Dimov" <***@pdimov.com> wrote:
>
> Giovanni Piero Deretta wrote:
>>
>> > Can it be generic? Do you have a generic implementation of when_any?
>>
>> You do need to standardise a generic asynchronous wait protocol of
course.
>
>
> Or, I can make promises shared as well, and then use .then.
>

Yeah, 'then' itself works fine as a wait protocol, although I prefer
something a bit lower level and dismissible (a 'cancel_then').

> when_any( f1, f2, f3 )
> {
> future<size_t> f;
> promise<size_t> p(f);
>
> f1.then( [p](auto){ p.set_value(0); } );
> f2.then( [p](auto){ p.set_value(1); } );
> f3.then( [p](auto){ p.set_value(2); } );
>
>     return f.then( [=]( future<size_t> index )
> {
> return make_pair(index.get(), make_tuple(f1, f2, f3));
> });
> }
>
> Something like that.
>
> (I still don't need a mutex for the multiple set_value calls, by the way
- if(rn_++ == 0) { set the value; publish/notify; }.)
>

Peter Dimov
2015-05-26 00:58:18 UTC
Permalink
> Not quite sure how this would work. Seems to me that future and promise
> both first need to take their spinlock and then the other's, which creates
> the potential for deadlock. But I may be missing something.

I saw the code, you try_lock and back off.


Niall Douglas
2015-05-26 09:20:01 UTC
Permalink
On 26 May 2015 at 2:36, Peter Dimov wrote:

> > In my implementation, if you never call promise.get_future() you never get
> > synchronisation.
>
> That's an interesting use case. When does it occur?

make_ready_future().

> > The constexpr folding has the compiler elide all that.
>
> I suspect that what you call "constexpr folding" has nothing to do with
> constexpr, it's just inlining and ordinary non-constexpr optimization. KAI
> C++ was famous for doing such folding miracles decades ago.

You're right it's not in the standard. Well, actually it is, but
indirectly.

Let me explain. If you read what is allowed for constexpr at
http://en.cppreference.com/w/cpp/language/constexpr, and then write
logic whose outcome paths do nothing that is not constexpr, the
compiler will elide the code entirely at compile time when those
paths are followed. If you examine my code closely, you'll see I
always predicate the construction of anything with a non-trivial
destructor (e.g. exception_ptr), because non-trivial destructors
appear to force code output in current compiler technology, though it
may also be the atomic writes that exception_ptr does.

This probably is not by accident. The machinery inside the compiler
to implement constexpr is probably reused for optimisation, or
rather, vice versa. Maybe a compiler vendor might chime in here to
tell us?

> > Also if you call promise.set_value() before promise.get_future(), you
> > never get synchronisation as futures are single shot.
>
> Another interesting case for which I've trouble imagining a practical use.
> :-)

Its main benefit for me is as a unit test smoke test. However
resumable functions, as currently proposed, should hugely benefit
from this pattern. I've emailed Gor about it.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Avi Kivity
2015-05-26 10:35:15 UTC
Permalink
On 05/26/2015 12:20 PM, Niall Douglas wrote:
>>> The constexpr folding has the compiler elide all that.
>> I suspect that what you call "constexpr folding" has nothing to do with
>> constexpr, it's just inlining and ordinary non-constexpr optimization. KAI
>> C++ was famous for doing such folding miracles decades ago.
> You're right it's not in the standard. Well, actually it is, but
> indirectly.
>
> Let me explain. If you read what is allowed for constexpr at
> http://en.cppreference.com/w/cpp/language/constexpr, and then write
> logic whose outcome paths do nothing that is not constexpr, the
> compiler will elide the code entirely at compile time when those
> paths are followed. If you examine my code closely, you'll see I
> always predicate the construction of anything with a non-trivial
> destructor (e.g. exception_ptr), because non-trivial destructors
> appear to force code output in current compiler technology, though it
> may also be the atomic writes that exception_ptr does.
>
> This probably is not by accident. The machinery inside the compiler
> to implement constexpr is probably reused for optimisation, or
> rather, vice versa. Maybe a compiler vendor might chime in here to
> tell us?

I believe it is an accident. While the compiler is required to fold a
constexpr expression in a constexpr context (like an array dimension or
non-type template argument), it isn't required to do so in non-constexpr
contexts, and won't do so if optimization is not enabled or if it makes
a bad inlining decision (say, because the function was too large).

The optimization machinery for folding constants is actually a superset
of constexpr, and as a result you will sometimes get a constexpr
expression not optimized, and sometimes a non-constexpr expression will
be folded to nothing.
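A minimal example of that distinction (mine, not from the thread): the language mandates folding only in contexts that require a constant expression, such as array bounds, template arguments, or static_assert; folding the same call in ordinary runtime code is merely an optimisation.

```cpp
constexpr int add(int a, int b) { return a + b; }

int arr[add(2, 3)];            // constexpr context: folding is required
static_assert(add(2, 3) == 5, "evaluated at compile time");

inline int runtime_add() {
    return add(2, 3);          // runtime context: folding is optional
}
```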

>
>>> Also if you call promise.set_value() before promise.get_future(), you
>>> never get synchronisation as futures are single shot.
>> Another interesting case for which I've trouble imagining a practical use.
>> :-)
> Its main benefit for me is as a unit test smoke test. However
> resumable functions, as currently proposed, should hugely benefit
> from this pattern. I've emailed Gor about it.
>

I am also waiting for resumables (and concepts, and modules, and
ranges...) with bated breath.


Peter Dimov
2015-05-26 10:50:58 UTC
Permalink
Niall Douglas wrote:

> On 26 May 2015 at 2:36, Peter Dimov wrote:
>
> > > In my implementation, if you never call promise.get_future() you never
> > > get synchronisation.
> >
> > That's an interesting use case. When does it occur?
>
> make_ready_future().

I'd expect make_ready_future to not need to create a promise at all. It just
creates a ready future.

The use case I was wondering above is that you create a promise, but never
get a future from it.


Niall Douglas
2015-05-26 14:34:54 UTC
Permalink
On 26 May 2015 at 13:50, Peter Dimov wrote:

> The use case I was wondering above is that you create a promise, but never
> get a future from it.

The present resumable functions proposal requires a promise to be
created on entry to every resumable function. The first time the
function is suspended promise.get_future() is called, and the future
returned immediately.

If the resumable function never suspends, promise.get_future() ought
to be called straight after the set_value() OR make_ready_future will
be used instead. At least, that will be the case if Gor agrees to
tweak the operation ordering in the TS as I have requested (I can't
see it being a problem). If that ordering is implemented, then with
these future-promises a resumable function which was never suspended
has identical runtime overheads to a non-resumable function, and that
is a HUGE win over the present situation which is much less
efficient.

All this assumes I understand the current TS before the committee
correctly and its exact boilerplate expansion from the await/yield
etc keywords.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-26 18:07:39 UTC
Permalink
Le 26/05/15 11:20, Niall Douglas a écrit :
> On 26 May 2015 at 2:36, Peter Dimov wrote:
>
>>> In my implementation, if you never call promise.get_future() you never get
>>> synchronisation.
>> That's an interesting use case. When does it occur?
> make_ready_future().
Really. This is an implementation detail. The library implementer can do
whatever is needed without using a promise here. This was the intent of
the function.


Vicente

Giovanni Piero Deretta
2015-05-26 10:40:56 UTC
Permalink
On 26 May 2015 12:09 am, "Niall Douglas" <***@nedprod.com> wrote:
>
> Giovanni Piero Deretta wrote:
>
> > So, the future/promise pair can be optimized out if the work can be
> > completed synchronously (i.e. immediately or at get time). But then,
> > why use a future at all? What is the use case you are trying to
> > optimize for? do you have an example?
>
> For me my main purpose is making AFIO allocate four malloc/frees per
> op instead of eight. I also make heavy use of make_ready_future(),
> and by definition that is now optimally fast.
>

Is the ready-future path taken unconditionally, or with a condition often
known at compile time? If yes, that seems strange to me and I would love to
see an example. If not, then you are probably optimising for the wrong case.

> Also, as I mentioned, the lion's share of the future implementation
> is actually reusable as a monadic transport. That's currently a
> monad<T, consuming> base class in my code, but I am asking for bike
> shedding here on what to name a user facing specialisation.
>

I think you are selling your code wrongly. What you have is a potentially
zero overhead expected-like result wrapper; this is something people would
want. What you are advertising is a zero overhead future in the
uninteresting case and people can't see the point of it.

The fact that your future is internally implemented using your result
wrapper is nice but not critical.

Also why does the future inherit from the result object? I would expect it
to contain something like variant<result<T>, result<T>*>.

> > I believe that trying to design a future that can fulfill everybody's
> > requirements is a lost cause. The c++ way is to define concepts and
> > algorithms that work on concepts. The types we want to generalize are
> > std::future, expected, possibly optional and all the other futures that
have
> > been cropping up in the meantime. The algorithms are of course those
> > required for composition: then, when_all, when_any plus probably get and
> > wait.
>
> I think it might not be a lost cause in a world with concepts and
> especially modules. Until then it's going to be unacceptably slow.

I'm not following. How do concepts and modules allow a catch-all
future?

> And that's years away, and I need this now.
>
> Regarding heterogeneous future type wait composition, yes this is
> hoped to be the foundation for such an eventual outcome. That's more
> Vicente's boat than mine though.
>
> > We could just take a page from Haskell and call the concept Monad, but
> > maybe we want something more specific, like Result.
>
> I'll take that as a vote for result<T>.
>

Eh, I meant it as a concept name, not actual class, but sure.

> Bjorn suggested holder<T> and value<T> as well. I think the former
> too generic, and the latter suggests the thing is a value and can
> decay into one without get(). Or maybe, that some might think that.
>

You could leave the return type unspecified and just list the type
requirements.

> > Then, in addition to the algorithms themselves, there is certainly
> > space in boost for a library for helping build custom futures, a-la
> > boost.iterator.
>
> I originally intended that, my second round of experiments showed it
> could be very exciting. But I choose to bow out in favour of
> something I can deliver in weeks rather than months. I need all this
> ready mid-June so I can start retrofitting AFIO.

If you need this in weeks, just use the Boost.Thread future and get AFIO
reviewed with that. Have the magic future as an optional, unstable,
preview-only interface.

Boost reviews usually focus on interfaces more than implementation. As long
as the reviewers believe that your interface can be implemented efficiently
you'll be fine.

In fact having yet another non interoperable future in afio might be a
negative point during review.

After afio is accepted, you can lobby to get boost.thread future do what
you want, or better add generalised future composition to boost.

>
> Besides, simpler I think may well win the race. More complex means
> more compiler quirks. I am dealing enough with those already!
>
> Niall
>
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/

Niall Douglas
2015-05-26 14:28:27 UTC
Permalink
On 26 May 2015 at 11:40, Giovanni Piero Deretta wrote:

> > For me my main purpose is making AFIO allocate four malloc/frees per
> > op instead of eight. I also make heavy use of make_ready_future(),
> > and by definition that is now optimally fast.
>
> Is the ready-future path taken unconditionally, or with a condition often
> known at compile time? If yes, that seems strange to me and I would love to
> see an example. If not, then you are probably optimising for the wrong case.

If an op is scheduled with a precondition which has already
completed, a make_ready_future is scheduled to be executed on
function exit that makes the outcome the same as if the precondition
were continued. That happens a good chunk of the time.

> > Also, as I mentioned, the lion's share of the future implementation
> > is actually reusable as a monadic transport. That's currently a
> > monad<T, consuming> base class in my code, but I am asking for bike
> > shedding here on what to name a user facing specialisation.
> >
>
> I think you are selling your code wrongly. What you have is a potentially
> zero overhead expected-like result wrapper; this is something people would
> want. What you are advertising is a zero overhead future in the
> uninteresting case and people can't see the point of it.

TBH I don't really care about what other people *think* they want.
Most of the time what people think they want before they *need* it is
misplaced. I've seen that already in some of the comments here,
people are bikeshedding what they think a future/expected/monad ought
to be without actually needing a better future/expected/monad for
their specific problem use case.

I *do* have a pressing use case, and better futures makes my
immediate pressing use case go away. I've been working on the correct
redesign since October 2014, so none of these design decisions were
taken without very ample reflection. I also waited to attend Thomas
Heller's C++ Now presentation on his replacement futures before
starting my own. I think my solution to my problems solves all the
major problems ASIO has with futures too, as my problems are those
also of ASIO. That could eventually help bridge the travesty the
Networking TS is currently being degraded into :(

That may mean this design could be useful in Boost.Thread too, and
therefore eventual standardisation as part of repairing the
Networking TS for the next standard release after they no doubt cock
up the first attempt. We'll see.

> The fact that your future is internally implemented using your result
> wrapper is nice but not critical.

It *is* critical if you want a future to efficiently convert into a
result, and vice versa.

> Also why does the future inherit from the result object? I would expect it
> to contain something like variant<result<T>, result<T>*>.

It won't inherit from result, but from the same base class (currently
"monad"). No variants at all in this design (too expensive).

> > > I believe that trying to design a future that can fulfill everybody's
> > > requirements is a lost cause. The c++ way is to define concepts and
> > > algorithms that work on concepts. The types we want to generalize are
> > > std::future, expected, possibly optional and all the other futures that
> have
> > > been cropping up in the meantime. The algorithms are of course those
> > > required for composition: then, when_all, when_any plus probably get and
> > > wait.
> >
> > I think it might not be a lost cause in a world with concepts and
> > especially modules. Until then it's going to be unacceptably slow.
>
> I'm not following. How do concepts and modules allow a catch-all
> future?

If we had C++ Modules and Concepts now, my guess would be that
Expected would be enormously faster to build. Of course, we'd simply
push out the complexity still further until even Modules wasn't
enough. GCC will be in the way of ubiquitous Modules though :(

> If you need this week just use boost.thread future and get afio reviewed
> with that. Have the magic future as an optional unstable preview only
> interface.

AFIO already uses either boost::future or std::future. Has done since
v0.1. Indeed Vicente patched extra APIs into boost::future for me to
improve performance :)

But the community here said they wanted a "final" API before review,
so I'll deliver exactly that. These futures will be custom subclassed
by AFIO into a special afio::future, and that makes a ton of API
cruft dealing with the limitations of std::future go away.

> Boost reviews usually focus on interfaces more than implementation. As long
> as the reviewers believe that your interface can be implemented efficiently
> you'll be fine.
>
> In fact having yet another non interoperable future in afio might be a
> negative point during review.
>
> After afio is accepted, you can lobby to get boost.thread future do what
> you want, or better add generalised future composition to boost.

c.f. earlier discussion here asking for community preferences. tl;dr;
the community prefers a "final" API to review. They shall get it.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Avi Kivity
2015-05-25 18:24:21 UTC
Permalink
On 05/25/2015 07:37 PM, Niall Douglas wrote:
> On 25 May 2015 at 18:33, Avi Kivity wrote:
>
>>> In particular, error_code is fast, and unexpected returns are not
>>> exceptional and must be as fast as expected returns.
>> As I mentioned, in this case the user can use expected<> or similar
>> themselves.
> As I mentioned, expected<> is too hard on compile times for large
> code bases. It's also way overkill for what 98% of use cases need.
>
>> Otherwise, what's the type of error_code? There could be an
>> infinite amount of error_code types to choose from (starting with simple
>> 'enum class'es and continuing with error codes that include more
>> information about the error (nonscalar objects).
> It's std::error_code. Same as ASIO uses. If being compiled as part of
> Boost, I expect boost::error_code and boost::exception_ptr will work
> too as additional variant options.
>
>>> Also, any monadic transport would default construct to an unexpected
>>> state of a null error_code in fact, which is constexpr. This lets one
>>> work around a number of exception safety irritations where move
>>> constructor of T is not noexcept more easily.
>> I'm not sure how the default constructor of future<> and the move
>> constructor of T are related.
> Well, let's assume we're really talking about a maybe<T>, and
> future<T> subclasses maybe<T> with additional thread safety stuff.
>
> In this situation a maybe<T> doesn't need a default constructor, but
> because it's a fixed variant we always know that error_code is
> available, and error_code (a) doesn't allocate memory and (b) is STL
> container friendly, so it seems sensible to make maybe<T> also STL
> container friendly by letting it default to error_code.
>
> The problem, as with the WG21 proposed variant, is getting move
> assignment not to leave the existing contents in an undefined state
> if the throwing move constructor throws. Boost.Variant's copy
> constructor dynamically allocates a temporary copy of itself
> internally to give that strong guarantee - this is unacceptable
> overhead for mine. So I need some well-defined state to default to
> if, during move assignment, my move constructor throws after I have
> destructed my current state. Defaulting to an error_code saying the
> move constructor threw is a reasonable, well-defined outcome.
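A sketch of that fallback rule (hypothetical names, not Niall's actual code): the holder is either a T or an error_code. If T's move constructor throws after the old contents were destroyed, the holder lands in a well-defined error_code state instead of undefined behaviour.

```cpp
#include <new>
#include <stdexcept>
#include <system_error>
#include <utility>

template <class T>
struct maybe {
    std::error_code ec;                      // default: null error_code
    alignas(T) unsigned char store[sizeof(T)];
    bool has_value = false;

    maybe() = default;
    ~maybe() { reset(); }
    maybe(const maybe&) = delete;
    maybe& operator=(const maybe&) = delete;

    void reset() {
        if (has_value) {
            reinterpret_cast<T*>(store)->~T();
            has_value = false;
        }
    }

    void assign(T&& t) {
        reset();                             // old state is gone now
        try {
            new (store) T(std::move(t));
            has_value = true;
            ec = {};
        } catch (...) {
            // Move constructor threw: fall back to a well-defined
            // "unrecoverable" error_code state.
            ec = std::make_error_code(std::errc::state_not_recoverable);
        }
    }
};

struct throwing_move {                       // a T with a throwing move
    int v;
    explicit throwing_move(int x) : v(x) {}
    throwing_move(throwing_move&& o) : v(o.v) {
        if (v < 0) throw std::runtime_error("move failed");
    }
};

inline bool demo_fallback() {
    maybe<throwing_move> m;
    throwing_move ok(5);
    m.assign(std::move(ok));                 // succeeds
    bool had = m.has_value;
    throwing_move bad(-1);
    m.assign(std::move(bad));                // move ctor throws
    return had && !m.has_value && static_cast<bool>(m.ec);
}
```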
>
>> I'm not even sure why future<> would require a default constructor.
>> Seastar's doesn't have one.
> My future promise is as close to a strict superset of the Concurrency
> TS as is possible. It should be drop in replaceable in 99% of use
> cases, with the only point of failure being if you are trying to use
> allocators with your futures.
>
> My future promise is also intended to enter the Boost.Thread rewrite
> as the next gen future promise, if it proves popular.
>
>>>>> ... turns into a "mov $5, %eax", so future<T> is now also a
>>>>> lightweight monadic return transport capable of being directly
>>>>> constructed.
>> I'm looking forward to it! I've been bitten by the same compile time
>> explosion problems and I'm curious to see how you solved them.
> With a great deal of concentrated study of compiler diagnostics and
> trial and error!
>
> Once they are working and they are being unit tested per commit, I'll
> get a CI failure every time I break it. That should make things
> enormously easier going. Lots of machinery and scripting to come
> before that though.
>
> I've got everything working except the sequence:
>
> promise<int> p;
> p.set_value(5);
> return p.get_future().get();
>
> This should reduce to a mov $5, %eax, but currently does not for an
> unknown reason. I'm just about to go experiment and see why.
>
>

I managed to get very close to this by sprinkling always_inline
attributes, mostly at destructors.

As soon as the compiler makes a bad inlining decision, it loses track of
the values it propagated and basically has to undo all optimization.

It's still not perfect (one extra instruction) though.

(using gcc 5).

Hartmut Kaiser
2015-05-26 01:48:51 UTC
Permalink
> I've got everything working except the sequence:
>
> promise<int> p;
> p.set_value(5);
> return p.get_future().get();
>
> This should reduce to a mov $5, %eax, but currently does not for an
> unknown reason. I'm just about to go experiment and see why.

I asked this question many times before: what is this good for in practice
except for demonstrating some impressive compiler optimization capabilities?
If I need to return the number '5' I'd usually write

return 5;

in the first place...

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu



Gavin Lambert
2015-05-26 02:19:25 UTC
Permalink
On 26/05/2015 13:48, Hartmut Kaiser wrote:
>
>> I've got everything working except the sequence:
>>
>> promise<int> p;
>> p.set_value(5);
>> return p.get_future().get();
>>
>> This should reduce to a mov $5, %eax, but currently does not for an
>> unknown reason. I'm just about to go experiment and see why.
>
> I asked this question many times before: what is this good for in practice
> except for demonstrating some impressive compiler optimization capabilities?
> If I need to return the number '5' I'd usually write
>
> return 5;
>
> in the first place...

Generally this sort of pattern comes up when implementing a generic
method that is constrained (by base class or concept) to return a
future<T>, but where the actual implementation can complete
synchronously without waiting (eg. a filesystem API that normally reads
data asynchronously, but in this particular case is being implemented by
an in-memory filesystem that can reply instantly).
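A sketch of that pattern, with hypothetical types (not a real library API): the interface is constrained to return a future-like type, but the in-memory backend can answer synchronously and hands back an already-ready future. This is exactly the path a "no code generated" future would reduce to a plain return.

```cpp
#include <string>

template <class T>
struct ready_future {        // trivially ready; get() never blocks
    T value;
    T get() const { return value; }
};

struct filesystem {
    virtual ready_future<std::string> read(const std::string& path) = 0;
    virtual ~filesystem() = default;
};

// Normally read() would be asynchronous; the in-memory backend is not.
struct in_memory_fs : filesystem {
    std::string contents = "hello";
    ready_future<std::string> read(const std::string&) override {
        return {contents};   // completes instantly
    }
};

inline std::string demo_read() {
    in_memory_fs fs;
    return fs.read("/any/path").get();
}
```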

I'm assuming that the code Niall posted isn't literally written like
that but is instead produced after inlining such generic method calls
for a particular test case.



Niall Douglas
2015-05-26 09:27:07 UTC
Permalink
On 26 May 2015 at 14:19, Gavin Lambert wrote:

> >> This should reduce to a mov $5, %eax, but currently does not for an
> >> unknown reason. I'm just about to go experiment and see why.
> >
> > I asked this question many times before: what is this good for in practice
> > except for demonstrating some impressive compiler optimization capabilities?
> > If I need to return the number '5' I'd usually write
> >
> > return 5;
> >
> > in the first place...
>
> Generally this sort of pattern comes up when implementing a generic
> method that is constrained (by base class or concept) to return a
> future<T>, but where the actual implementation can complete
> synchronously without waiting (eg. a filesystem API that normally reads
> data asynchronously, but in this particular case is being implemented by
> an in-memory filesystem that can reply instantly).
>
> I'm assuming that the code Niall posted isn't literally written like
> that but is instead produced after inlining such generic method calls
> for a particular test case.

Spot on.

The key part is that it is possible for the compiler to reduce it to
a mov $5, %eax if *and only if* the compiler knows that is safe.

In other words, no non-trivial destructors, or atomic writes, or
anything else forcing the compiler to emit unnecessary code even in
the most unrealistic use cases. My earlier bug was that I was
constructing an unnecessary exception_ptr, and that forced generation
of a few hundred unnecessary opcodes. Most of the time in real code
that will need to be generated anyway, but it doesn't absolve me from
eliminating it when it's possible to do so especially if it's a one
line fix (which it was).

The main purpose of the above is smoke unit testing: the CI tells me
when I've written code which interferes with maximum optimisation.
It should also help whatever poor fellow within the clang optimiser
team gets assigned to fix the optimiser - this code example is a good
thing to fix compiler bugs against.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Hartmut Kaiser
2015-05-26 17:24:56 UTC
Permalink
> On 26/05/2015 13:48, Hartmut Kaiser wrote:
> >
> >> I've got everything working except the sequence:
> >>
> >> promise<int> p;
> >> p.set_value(5);
> >> return p.get_future().get();
> >>
> >> This should reduce to a mov $5, %eax, but currently does not for an
> >> unknown reason. I'm just about to go experiment and see why.
> >
> > I asked this question many times before: what is this good for in
> practice
> > except for demonstrating some impressive compiler optimization
> capabilities?
> > If I need to return the number '5' I'd usually write
> >
> > return 5;
> >
> > in the first place...
>
> Generally this sort of pattern comes up when implementing a generic
> method that is constrained (by base class or concept) to return a
> future<T>, but where the actual implementation can complete
> synchronously without waiting (eg. a filesystem API that normally reads
> data asynchronously, but in this particular case is being implemented by
> an in-memory filesystem that can reply instantly).

Ok, so what? Just use make_ready_future - done. Do we even know how large
the overheads of this are? Or do you just 'assume' the overheads to be
unacceptable? Can I see some numbers from a real world application?

> I'm assuming that the code Niall posted isn't literally written like
> that but is instead produced after inlining such generic method calls
> for a particular test case.

I doubt any speedup based on all of these fancy-pants optimizations would be
even measurable in the context of file system operations. I'm still highly
doubtful of all of this.

As said, give me a real world use case with real world measurements showing
at least some speedup over 'conventional' futures. Otherwise all of this is
an empty exercise.

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu



Avi Kivity
2015-05-26 17:35:12 UTC
Permalink
On 05/26/2015 08:24 PM, Hartmut Kaiser wrote:
>
>> I'm assuming that the code Niall posted isn't literally written like
>> that but is instead produced after inlining such generic method calls
>> for a particular test case.
> I doubt any speedup based on all of this fancy-pants optimizations would be
> even measurable in the context of file system operations. I'm still highly
> doubtful of all of this.
>
> As said, give me a real world use case with real world measurements showing
> at least some speedup over 'conventional' futures. Otherwise all of this is
> an empty exercise.
>
>

It can be important for O_DIRECT AIO operations. I agree that for
buffered I/O, the filesystem overhead will dominate (and, on Linux, you
don't have a way to implement futures over buffered I/O without
resorting to threads, which will slow things down further).

Niall Douglas
2015-05-26 23:21:05 UTC
Permalink
On 26 May 2015 at 20:35, Avi Kivity wrote:

> It can be important for O_DIRECT AIO operations. I agree that for
> buffered I/O, the filesystem overhead will dominate (and, on Linux, you
> don't have a way to implement futures over buffered I/O without
> resorting to threads, which will slow things down further).

Actually, on recent Linuces with ext4 reads from page cached data are
now kaio wait-free. It makes a big difference for a warm-cache
filesystem when you're doing lots of small reads. Writes
unfortunately still lock, and moreover exclude readers. Linux has a
very long way to go to reach BSD and especially Windows for async
i/o.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Hartmut Kaiser
2015-05-27 00:29:15 UTC
Permalink
> > It can be important for O_DIRECT AIO operations. I agree that for
> > buffered I/O, the filesystem overhead will dominate (and, on Linux, you
> > don't have a way to implement futures over buffered I/O without
> > resorting to threads, which will slow things down further).
>
> Actually, on recent Linuces with ext4 reads from page cached data are
> now kaio wait free. It makes a big difference for warm cache
> filesystem when you're doing lots of small reads. Writes
> unfortunately still lock, and moreover exclude readers. Linux has a
> very long way to go to reach BSD and especially Windows for async
> i/o.

Optimizing away one allocation to create a promise/future pair (or just a
future for make_ready_future) will have no measurable impact in the context
of any I/O, be it wait free or asynchronous or both.

In general, all I'm hearing on this thread is 'it could be helpful', 'it
should be faster', 'it can be important', or 'makes a big difference', etc.
I was hoping that we as a Boost community could do better!

Nobody so far has shown the impact of this optimization technique on a real
world application (measurements). Or at least, measurement results from
artificial benchmarks under heavy concurrency conditions (using decent
multi-threaded allocators like jemalloc or tcmalloc). I'd venture to say
that there will be no measurable speedup (unless proven otherwise).

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu



Emil Dotchevski
2015-05-27 01:01:31 UTC
Permalink
On Tue, May 26, 2015 at 5:29 PM, Hartmut Kaiser <***@gmail.com>
wrote:

>
> > > It can be important for O_DIRECT AIO operations. I agree that for
> > > buffered I/O, the filesystem overhead will dominate (and, on Linux, you
> > > don't have a way to implement futures over buffered I/O without
> > > resorting to threads, which will slow things down further).
> >
> > Actually, on recent Linuces with ext4 reads from page cached data are
> > now kaio wait free. It makes a big difference for warm cache
> > filesystem when you're doing lots of small reads. Writes
> > unfortunately still lock, and moreover exclude readers. Linux has a
> > very long way to go to reach BSD and especially Windows for async
> > i/o.
>
> Optimizing away one allocation to create a promise/future pair (or just a
> future for make_ready_future) will have no measurable impact in the context
> of any I/O, be it wait free or asynchronous or both.
>
> In general, all I'm hearing on this thread is 'it could be helpful', 'it
> should be faster', 'it can be important', or 'makes a big difference', etc.
> I was hoping that we as a Boost community can do better!
>
> Nobody so far has shown the impact of this optimization technique on a real
> world applications (measurements). Or at least, measurement results from
> artificial benchmarks under heavy concurrency conditions (using decent
> multi-threaded allocators like jemalloc or tcmalloc). I'd venture to say
> that there will be no measurable speedup (unless proven otherwise).
>

+1

I'd add that not only there should be a measurable improvement, but it
should be a measurable improvement compared to other reasonable
optimizations, for example allocations can be optimized through custom
allocators. As an analogy, it's not sufficient to show that shared_ptr is
"too slow" or "allocates too much" compared to some other smart pointer
type -- it also must be shown that the slowness can't be trivially dealt
with by implementing a custom allocator for some shared_ptr instances, if
needed.

--
Emil Dotchevski
Reverge Studios, Inc.
http://www.revergestudios.com/reblog/index.php?n=ReCode

Niall Douglas
2015-05-27 01:41:30 UTC
Permalink
On 26 May 2015 at 19:29, Hartmut Kaiser wrote:

> Optimizing away one allocation to create a promise/future pair (or just a
> future for make_ready_future) will have no measurable impact in the context
> of any I/O, be it wait free or asynchronous or both.

No one here is claiming better futures make any effect on i/o
performance except you. You are reading only the parts of the thread
you want to in order to believe what you already believe (apparently
that I have some master plan of "taking over" Boost). You drew the
link here between i/o and futures, none of us claimed it. The thread
earlier was clearly about two entirely separate topics. You conflated
them to make your own personal point.

> In general, all I'm hearing on this thread is 'it could be helpful', 'it
> should be faster', 'it can be important', or 'makes a big difference', etc.
> I was hoping that we as a Boost community can do better!

Constant cherry-picking of thread topics just to naysay and put down
any discussion of alternative idioms and designs is neither positive
nor helpful.

> Nobody so far has shown the impact of this optimization technique on a real
> world applications (measurements). Or at least, measurement results from
> artificial benchmarks under heavy concurrency conditions (using decent
> multi-threaded allocators like jemalloc or tcmalloc). I'd venture to say
> that there will be no measurable speedup (unless proven otherwise).

Again nobody claimed that. I was quite clear I primarily want single
op code reduction as part of unit testing. That's its main purpose
for me as a per-commit CI test that I am writing perfectly optimal
code, not just mostly optimal code. A happy consequence is a
potential runtime cost optimal monadic transport, and that's what I
came here to bikeshed a name for, and see if there is interest in
such a development. Feedback on both has been both positive, and
useful, so I will proceed.

I have also been very clear that this new design solves my major
problems, not *the* major problems with existing futures. That's what
I designed it to do. I believe it also solves the same problems as
face futures in ASIO. Once finished and deployed, if others find it
solves their problems too then it has a great chance on becoming a
next gen Boost future. If it doesn't, then it won't.

I have been working on this replacement future design since October,
with multiple presentations of my code experiments here to gain
feedback. Others have presented their code experiments here too. We
have all reviewed each other's design ideas and code, and evolved our
own designs and code in response. If this exchange of code
experiments between people all with similar problems with existing
futures isn't what Boost is exactly all about, then I don't know what
negative and cynical vision of Boost you have. Multiple people here
have problems with futures, and multiple people are experimenting
with improvements. This is something that should be welcomed, not
constantly put down with negativity.

I would expect you'll see me present benchmarks here in due course
once the implementation is drop in replaceable. I am expecting about
a 5% performance improvement in AFIO as a drop in, and a 20%
improvement once I replace AFIO's continuations infrastructure with
.then() and remove the central spinlocked unordered_map. This should
help further close the gap between AFIO and ASIO which is currently
between 15% and 32%. That gain is what I am developing these futures
for after all - to solve my problems, and maybe as a happy
consequence solve other people's problems too.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Hartmut Kaiser
2015-05-28 22:11:31 UTC
Permalink
> > Optimizing away one allocation to create a promise/future pair (or just
> a
> > future for make_ready_future) will have no measurable impact in the
> context
> > of any I/O, be it wait free or asynchronous or both.
>
> No one here is claiming better futures make any effect on i/o
> performance except you.

From the AFIO docs:

AFIO is 'A C++ library which lets you schedule an ordered dependency graph
of file input/output operations...'
(http://rrsd.com/blincubator.com/bi_library/afio/?gform_post_id=938). And
you said more than once that a) the new 'futures' are 'much' faster and b)
that you want to use those for AFIO. Go figure.

You have mentioned wait free algorithms in the ext4 file system caches which
would make file system IO so fast that using your 'futures' would be
beneficial (see your mail:
http://thread.gmane.org/gmane.comp.lib.boost.devel/260307/focus=260404). You
even mention it further down in the mail I'm answering to. Go figure.

Others have drawn direct connections on this thread between futures and IO
as well (see for instance Gavin Lambert:
http://thread.gmane.org/gmane.comp.lib.boost.devel/260307/focus=260345, or
Avi Kivity:
http://thread.gmane.org/gmane.comp.lib.boost.devel/260307/focus=260391). Go
figure.

You clearly should decide what you want besides trolling the Boost-devel
mailing list.

> You are reading only the parts of the thread
> you want to in order to believe what you already believe (apparently
> that I have some master plan of "taking over" Boost).

I have not said or implied anything like that in this thread. Also, you know
best what your 'master plan' is.

> You drew the
> link here between i/o and futures, none of us claimed it. The thread
> earlier was clearly about two entirely separate topics. You conflated
> them to make your own personal point.

I have not conflated anything.

> > In general, all I'm hearing on this thread is 'it could be helpful', 'it
> > should be faster', 'it can be important', or 'makes a big difference',
> etc.
> > I was hoping that we as a Boost community can do better!
>
> Constant cherry picking of thread topics just to nay say and put down
> any discussion of alternative idioms and designs isn't being positive
> nor helpful.

Constant? Really? Just because I believe that your ideas have flaws and your
solutions are wrong because they start from incorrect assumptions?

> > Nobody so far has shown the impact of this optimization technique on a
> real
> > world applications (measurements). Or at least, measurement results from
> > artificial benchmarks under heavy concurrency conditions (using decent
> > multi-threaded allocators like jemalloc or tcmalloc). I'd venture to say
> > that there will be no measurable speedup (unless proven otherwise).
>
> Again nobody claimed that. I was quite clear I primarily want single
> op code reduction as part of unit testing. That's its main purpose
> for me as a per-commit CI test that I am writing perfectly optimal
> code, not just mostly optimal code.

What is 'op code reduction' all about if not to achieve speedup? All I asked
is to give us numbers showing whether those 'op code reductions' have any
_significant_ impact on the overall performance of real world applications.
All you gave us so far are conjectures.

> A happy consequence is a
> potential runtime cost optimal monadic transport, and that's what I
> came here to bikeshed a name for, and see if there is interest in
> such a development. Feedback on both has been both positive, and
> useful, so I will proceed.

Sure. I'm off of this thread. Happy bikeshed-ing!

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu

> I have also been very clear that this new design solves my major
> problems, not *the* major problems with existing futures. That's what
> I designed it to do. I believe it also solves the same problems as
> face futures in ASIO. Once finished and deployed, if others find it
> solves their problems too then it has a great chance on becoming a
> next gen Boost future. If it doesn't, then it won't.
>
> I have been working on this replacement future design since October,
> with multiple presentations of my code experiments here to gain
> feedback. Others have presented their code experiments here too. We
> have all reviewed each other's design ideas and code, and evolved our
> own designs and code in response. If this exchange of code
> experiments between people all with similar problems with existing
> futures isn't what Boost is exactly all about, then I don't know what
> negative and cynical vision of Boost you have. Multiple people here
> have problems with futures, and multiple people are experimenting
> with improvements. This is something that should be welcomed, not
> constantly put down with negativity.
>
> I would expect you'll see me present benchmarks here in due course
> once the implementation is drop in replaceable. I am expecting about
> a 5% performance improvement in AFIO as a drop in, and a 20%
> improvement once I replace AFIO's continuations infrastructure with
> .then() and remove the central spinlocked unordered_map. This should
> help further close the gap between AFIO and ASIO which is currently
> between 15% and 32%. That gain is what I am developing these futures
> for after all - to solve my problems, and maybe as a happy
> consequence solve other people's problems too.
>
> Niall
>
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/
>



Niall Douglas
2015-05-29 21:14:04 UTC
Permalink
On 28 May 2015 at 17:11, Hartmut Kaiser wrote:

> > No one here is claiming better futures make any effect on i/o
> > performance except you.
>
> From the AFIO docs:
>
> AFIO is 'A C++ library which lets you schedule an ordered dependency graph
> of file input/output operations...'
> (http://rrsd.com/blincubator.com/bi_library/afio/?gform_post_id=938). And
> you said more than once that a) the new 'futures' are 'much' faster and b)
> that you want to use those for AFIO. Go figure.

Perhaps you don't understand the context I meant.

AFIO, being built for the mediocre level of C++ 11 in VS2010
originally, implemented its own future continuations infrastructure.
It keeps an internal unordered_map which associates additional
metadata with each future-promise created for every asynchronous
operation scheduled. This was done to keep the user facing API and
ABI clean of implementation details - you could work with
shared_future<shared_ptr<async_io_handle>> and behind the scenes the
engine looked that op up in its hash table, and fetched the
additional metadata which includes the continuations for that future
amongst other things. It is however not particularly efficient,
introducing a ~15% overhead for non-continued ops over ASIO and ~30%
overhead for continued ops over ASIO.

If instead of keeping the per-future metadata in a hash table I could
keep the metadata in the future itself, I can eliminate the hash
table and the spinlock which surrounds that hash table. That is a big
win for the design - much simpler, cleaner implementation, and no
central locking at all.

Relative to async OS operations, the overhead of the continuations
implementation is usually, but not always unimportant relative to the
OS operation. However it is rare you just read and write raw data -
usually you want to do something to that data as it enters and exits
storage e.g. things like SECDED ECC or Blake2b hash rounds. This is
where the overhead introduced by doing a locked hash table lookup
every time you append a continuation becomes inefficient especially
if you append many continuations to each op. This is what I am
fixing. This is what I mean by performance improvements.

There is also cleanliness of design problem - over a year ago as a
thought experiment I wrote up a SHA256 implementation which issued a
promise-future once per SHA round. Performance was dreadful obviously
enough because a SHA round isn't big compared to a promise-future
cycle, and indeed your colleague Thomas Heller showed exactly the
same thing for HPX at C++ Now - granularity of task slice has to be
proportionate to the promise-future cycle overhead.

But it got me thinking about exactly what is so flawed with the
existing promise-future design, because they *should* be lightweight
enough that using a promise-future per SHA round isn't a stupid
design decision. I don't think I am alone in thinking this about
existing promise-futures.

> You clearly should decide what you want besides trolling the Boost-devel
> mailing list.

I suspect by trolling you mean me campaigning to persuade the
community of my vision for Boost's future.

It may involve making a lot of noise, and it is definitely much less
efficient than simply persuading Dave something needs to happen as it
used to be. I am also hardly alone in campaigning since Dave left the
scene. However I am being successful in this persuasion - the recent
changes in policy announced at C++ Now are all movements in the
direction of what I have been advocating for years. Perhaps this is
the real reason why you are pissed.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/

Peter Dimov
2015-05-29 21:26:17 UTC
Permalink
Niall Douglas wrote:

> However I am being successful in this persuasion - the recent changes in
> policy announced at C++ Now are all movements in the direction of what I
> have been advocating for years.

What are those recent changes in policy?


Gottlob Frege
2015-05-29 21:38:34 UTC
Permalink
Start a new thread!!!!

On Fri, May 29, 2015 at 5:26 PM, Peter Dimov <***@pdimov.com> wrote:
> Niall Douglas wrote:
>
>> However I am being successful in this persuasion - the recent changes in
>> policy announced at C++ Now are all movements in the direction of what I
>> have been advocating for years.
>
>
> What are those recent changes in policy?
>

Niall Douglas
2015-05-30 01:57:59 UTC
Permalink
On 30 May 2015 at 0:26, Peter Dimov wrote:

> > However I am being successful in this persuasion - the recent changes in
> > policy announced at C++ Now are all movements in the direction of what I
> > have been advocating for years.
>
> What are those recent changes in policy?

Topic changed as per Tony's request.

https://groups.google.com/forum/#!forum/boost-steering

Read all posts from May 14th onwards. There are even more exciting,
possibly revolutionary, potential changes being pondered for the
future, also mentioned in passing on the above list.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-30 11:36:14 UTC
Permalink
Niall Douglas wrote:

> https://groups.google.com/forum/#!forum/boost-steering

Thanks, I didn't know about this list.

> Read all posts from May 14th onwards. There is even more exciting,
> possibly revolutionary, potential changes being pondered for the future,
> also mentioned in passing on the above list.

I read the posts, but was actually unable to deduce what these policy
changes will be. There must be something I'm missing. Perhaps you can
provide a summary.

On an unrelated note, it was very heart-warming to read this:

https://groups.google.com/forum/#!topic/boost-steering/WWM6nQ4szSY

and not see bpm (an existing and working implementation of what this
initiative is intended to produce) get even a mention.


Thomas Heller
2015-05-29 22:19:05 UTC
Permalink
On Fri, May 29, 2015 at 11:14 PM, Niall Douglas <***@nedprod.com>
wrote:

> There is also cleanliness of design problem - over a year ago as a
> thought experiment I wrote up a SHA256 implementation which issued a
> promise-future once per SHA round. Performance was dreadful obviously
> enough because a SHA round isn't big compared to a promise-future
> cycle, and indeed your colleague Thomas Heller showed exactly the
> same thing for HPX at C++ Now - granularity of task slice has to be
> proportionate to the promise-future cycle overhead.
>

Absolutely! However, what is proposed in this thread is hardly usable in
any context where concurrent operations may occur. What you missed is the
message that it is not memory allocation or the existence of exception
handling code that makes the futures "slow". In fact what makes futures
slow is the mechanism to start asynchronous tasks (If you have the finest
task granularity you can imagine). Having a single SHA round behind a
future makes little to no sense ...


>
> But it got me thinking about exactly what is so flawed with the
> existing promise-future design, because they *should* be lightweight
> enough that using a promise-future per SHA round isn't a stupid
> design decision. I don't think I am alone in thinking this about
> existing promise-futures.
>

It is of course not stupid to have as lightweight futures as possible.
What's stupid is the assumption that your task decomposition has to go down
to a single instruction on a single data element. Well, it is at least
stupid on today's architectures (SIMD, caches etc.). So yes, grain size is
important, not only to overcome any overheads that are associated with
spawning the asynchronous task but also executing its work (I hope AFIO
doesn't fetch data from disk one single byte at a time just because the
future might be able to handle it).


>
> Niall
>
> --
> ned Productions Limited Consulting
> http://www.nedproductions.biz/
> http://ie.linkedin.com/in/nialldouglas/
>

Niall Douglas
2015-05-26 00:14:38 UTC
Permalink
On 25 May 2015 at 18:33, Avi Kivity wrote:

> >>> ... turns into a "mov $5, %eax", so future<T> is now also a
> >>> lightweight monadic return transport capable of being directly
> >>> constructed.
> >> Can you post the code? I'd be very interested in comparing it with
> >> seastar's non-allocating futures.
> > I may do so once I've got the functioning constexpr reduction being
> > unit tested per commit.
>
> I'm looking forward to it! I've been bitten by the same compile time
> explosion problems and I'm curious to see how you solved them.

Now fully functional at:

https://github.com/ned14/boost.spinlock/tree/master/test/constexprs

I have the clang and GCC disassembler dumps there too so you can see
they all reduce to single opcodes on GCC only. clang isn't so good,
and I'm just about to send a large bug report to LLVM
(https://llvm.org/bugs/show_bug.cgi?id=23652) followed by email to
Chandler. The unit testing will later do a diff compare to ensure
that single opcode reduction always remains so per commit.

VS2015 RC can't cope with exception_ptr being inside an unrestricted
union, so it doesn't work there yet. I have a hacky macro workaround,
but I may just wait for VS2015 RTM.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Avi Kivity
2015-05-26 06:05:36 UTC
Permalink
On 05/26/2015 03:14 AM, Niall Douglas wrote:
> On 25 May 2015 at 18:33, Avi Kivity wrote:
>
>>>>> ... turns into a "mov $5, %eax", so future<T> is now also a
>>>>> lightweight monadic return transport capable of being directly
>>>>> constructed.
>>>> Can you post the code? I'd be very interested in comparing it with
>>>> seastar's non-allocating futures.
>>> I may do so once I've got the functioning constexpr reduction being
>>> unit tested per commit.
>> I'm looking forward to it! I've been bitten by the same compile time
>> explosion problems and I'm curious to see how you solved them.
> Now fully functional at:
>
> https://github.com/ned14/boost.spinlock/tree/master/test/constexprs
>
> I have the clang and GCC disassembler dumps there too so you can see
> they all reduce to single opcodes on GCC only. clang isn't so good,
> and I'm just about to send a large bug report to LLVM
> (https://llvm.org/bugs/show_bug.cgi?id=23652) followed by email to
> Chandler. The unit testing will later do a diff compare to ensure
> that single opcode reduction always remains so per commit.
>
> VS2015 RC can't cope with exception_ptr being inside an unrestricted
> union, so it doesn't work there yet. I have a hacky macro workaround,
> but I may just wait for VS2015 RTM.
>

Thanks! Our compile time explosions are related to chaining, which isn't
there yet, so I'll wait for those.

Some notes:
- get() waits by spinning, how do you plan to fix that? Seems like
you'd need a mutex/cond_var pair,
which can dwarf the future/promise pair in terms of size and costs.
- _lock_buffer isn't aligned, should use a union or aligned_storage
or something.
- _is_consuming causes value_storage::reset() to be called, but even
if !_is_consuming,
the value will be moved away, effectively consuming it.

Like Peter mentioned, the compiler optimizations aren't related to
constexpr, but to aggressive inlining. I expect you'll get the same
generated code even if you drop all constexpr annotations.

Niall Douglas
2015-05-26 10:08:38 UTC
Permalink
On 26 May 2015 at 9:05, Avi Kivity wrote:

> Thanks! Our compile time explosions are related to chaining, which isn't
> there yet, so I'll wait for those.

I haven't tested this yet, but I've noticed you need to avoid
non-trivial destructors doing non-trivial operations, so the
destructor of std::vector where it deallocates is good to avoid.

I was going to use an unrestricted union of a single continuation
stored in a single std::function or a std::vector<std::function> for
more than one continuation. The problem is
sizeof(std::function)>sizeof(std::vector), otherwise a static array
for small numbers of continuations might be a good idea.

I also have the problem that I want multiple continuations to hang
from a single future, which is an extension of the Concurrency TS.
Such continuations take a const future&, so you can either add a
single continuation taking a future, or many taking a const future &.

> Some notes:
> - get() waits by spinning, how do you plan to fix that? Seems like
> you'd need a mutex/cond_var pair,

That's just a stand-in. It'll be my C11 permit object. That makes
future-promise usable from C which solves getting AFIO usable from
COM and C.

> which can dwarf the future/promise pair in terms of size and costs.

I'll be keeping a process wide cache of C11 permit objects (they are
resettable). A permit object is only allocated lazily. They
themselves only allocate an internal condvar lazily too, also from a
process wide cache.

> - _lock_buffer isn't aligned, should use a union or aligned_storage
> or something.

I believe this doesn't matter on any major architecture as it will
always be four byte aligned thanks to compiler padding. From a cache
coherency traffic perspective, it also doesn't matter, this is not a
highly contended spinlock.

> - _is_consuming causes value_storage::reset() to be called, but even
> if !_is_consuming,

I don't see this. Can you show me?

> the value will be moved away, effectively consuming it.

Another stand-in. There will be some metaprogramming which selects a
T& if is_consuming is false, and makes sure the internal storage is
passed out by lvalue ref. That's all shared_future semantics though,
currently not a priority.

> Like Peter mentioned, the compiler optimizations aren't related to
> constexpr, but to aggressive inlining. I expect you'll get the same
> generated code even if you drop all constexpr annotations.

constexpr was only in there to get compiler errors telling me where I
was doing constexpr unsafe stuff to debug the lack of constexpr
folding.

Now that is debugged, I'll be removing most of the constexpr.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Glen Fernandes
2015-05-26 15:28:45 UTC
Permalink
On Tue, May 26, 2015 at 3:08 AM, Niall Douglas
<***@nedprod.com> wrote:
>
>> - _lock_buffer isn't aligned, should use a union or aligned_storage
>> or something.
>
> I believe this doesn't matter on any major architecture as it will
> always be four byte aligned thanks to compiler padding.

I'm surprised that this is the response. Why would you not write an
"alignas(alignof(spinlock<bool>))" for _lock_buffer (or use
aligned_storage, as Avi suggested) and have correct C++ code, instead
of "probably correct C++ code depending on implementation"?

Is this code not part of what you intend to submit for inclusion in Boost?

Glen

Peter Dimov
2015-05-25 12:51:59 UTC
Permalink
Niall Douglas wrote:

> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions

This section is called

"(Strongly) consider using constexpr semantic wrapper transport types to
return states from functions"

but I see nothing constexpr in it - the result of calling ::open can never
be constexpr.

Could you perhaps elaborate a bit on the "constexpr semantic" part?


Niall Douglas
2015-05-25 14:59:53 UTC
Permalink
On 25 May 2015 at 15:51, Peter Dimov wrote:

> Niall Douglas wrote:
>
> > https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions
>
> This section is called
>
> "(Strongly) consider using constexpr semantic wrapper transport types to
> return states from functions"
>
> but I see nothing constexpr in it - the result of calling ::open can never
> be constexpr.
>
> Could you perhaps elaborate a bit on the "constexpr semantic" part?

I was referring to the return type of:

std::expected<
std::expected<
std::shared_ptr<handle_type>,
std::error_code>,
std::exception_ptr>

This should constexpr reduce into no assembler output where the
compiler can see the implementation. In other words, returning an
error_code is as if returning a naked error_code from the point of
view of opcodes generated - std::expected "disappears".

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Giovanni Piero Deretta
2015-05-25 21:27:14 UTC
Permalink
On Mon, May 25, 2015 at 10:37 AM, Niall Douglas
<***@nedprod.com> wrote:
> Dear list,
>

Hi Niall

[snip]
> Essentially the win is that future-promise generates no code at all
> on recent C++ 11 compilers unless it has to [1], and when it does it
> generates an optimally minimal set with no memory allocation unless T
> does so.

So, the future/promise pair can be optimized out if the work can be
completed synchronously (i.e. immediately or at get time). But then,
why use a future at all? What is the use case you are trying to
optimize for? do you have an example?

I have a use case for very lightweight futures, as I have been
experimenting with cilk-style work stealing. In the fast (non stolen)
clone you want the futures to have zero overhead (other than the steal
check) as the computation is strictly synchronous. I do not think a
generic future would be appropriate.

Re allocation, you know my views :).

[...]
> Anyway, my earlier experiments were all very promising, but they all
> had one big problem: the effect on compile time. My final design is
> therefore ridiculously simple: a future<T> can return only these
> options:
>
> * A T.
> * An error_code (i.e. non-type erased error, optimally lightweight)
> * An exception_ptr (i.e. type erased exception type, allocates
> memory, you should avoid this if you want performance)
>

I agree that generally as a result holder you want either<T,
error_code, exception_ptr>.

[...]
>
> However, future<T> doesn't seem named very "monadic", so I am
> inclined to turn future<T> into a subclass of a type better named.
> Options are:
>
> * result<T>
> * maybe<T>
>
> Or anything else you guys can think of? future<T> is then a very
> simple subclass of the monadic implementation type, and is simply
> some type sugar for promise<T> to use to construct a future<T>.
>
> Let the bike shedding begin! And my thanks in advance.

I believe that trying to design a future that can fulfill everybody's
requirements is a lost cause. The c++ way is to define concepts and
algorithms that work on concepts. The types we want to generalize are
std::future, expected, possibly optional and all the other futures
that have been cropping up in the meantime. The algorithms are of
course those required for composition: then, when_all, when_any plus
probably get and wait.

We could just take a page from Haskell and call the concept Monad, but
maybe we want something more specific, like Result.

Then, in addition to the algorithms themselves, there is certainly
space in boost for a library for helping build custom futures, a-la
boost.iterator.

-- gpd

Vicente J. Botet Escriba
2015-05-25 21:35:52 UTC
Permalink
Le 25/05/15 11:37, Niall Douglas a écrit :
> Dear list,
>
> As AFIO looks likely to be finally getting a community review soon,
> I've made a start on a final non-allocating constexpr-collapsing next
> generation future-promise such that the AFIO you review is "API
> final". You may remember my experimentations on those from:
>
> http://boost.2283326.n4.nabble.com/Non-allocating-future-promise-td4668339.html.
>
> Essentially the win is that future-promise generates no code at all
> on recent C++ 11 compilers unless it has to [1], and when it does it
> generates an optimally minimal set with no memory allocation unless T
> does so. This should make these future-promises several orders of
> magnitude faster than the current ones in the C++ standard and solve
> their scalability problems for use with things like ASIO. They also
> have major wins for resumable functions which currently always
> construct a promise every resumable function entry - these next gen
> future-promises should completely vanish if the resumable function
> never suspends, saving a few hundred cycles each call.
>
> Anyway, my earlier experiments were all very promising, but they all
> had one big problem: the effect on compile time. My final design is
> therefore ridiculously simple: a future<T> can return only these
> options:
>
> * A T.
> * An error_code (i.e. non-type erased error, optimally lightweight)
> * An exception_ptr (i.e. type erased exception type, allocates
> memory, you should avoid this if you want performance)
>
> In other words, it's a fixed function monad where the expected return
> is T, and the unexpected return can be either exception_ptr or
> error_code. The next gen future provides Haskell type monadic
> operations similar to Boost.Thread + Boost.Expected, and thanks to
> the constexpr collapse this:
>
> future<int> test() {
> future<int> f(5);
> return f;
> }
> test().get();
>
> ... turns into a "mov $5, %eax", so future<T> is now also a
> lightweight monadic return transport capable of being directly
> constructed.
>
> In case you might want to know why a monadic return transport might
> be so useful as to be a whole new design idiom for C++ 11, try
> reading
> https://svn.boost.org/trac/boost/wiki/BestPracticeHandbook#a8.DESIGN:Stronglyconsiderusingconstexprsemanticwrappertransporttypestoreturnstatesfromfunctions.
>
> However, future<T> doesn't seem named very "monadic",
Why? Because we don't have mbind or the proposed next?
> so I am
> inclined to turn future<T> into a subclass of a type better named.
Sub-classing should be an implementation detail and I don't see how a
future could be a sub-class of a class that is not asynchronous itself.
> Options are:
>
> * result<T>
sync or async result?
> * maybe<T>
we already have optional, don't we?
>
> Or anything else you guys can think of? future<T> is then a very
> simple subclass of the monadic implementation type, and is simply
> some type sugar for promise<T> to use to construct a future<T>.
>
> Let the bike shedding begin! And my thanks in advance.

I'm not sure we are ready for bike shedding yet.

Some comments, not always directly related to your
future/promise/expected design, but about the interaction between future
and expected.

IMO, a future is not an expected (nor result or maybe). We can say that
a ready future behaves like an expected, but a future has an additional
state: ready or not. The standard proposal and the future in
Boost.Thread have yet another state: valid or not.
So a future has the following states: invalid, not ready, valued, or
exceptional.
We should be able to get an implementation that performs better if we
have fewer states. Would the future you want have all these states?

A future can itself store the shared state when the state is not shared.
I suppose this is your idea and I think it is a good one. Let me know if
I'm wrong. Clearly this future doesn't need allocators, nor memory
allocation.

We could have a conversion from an expected to a future. A future<T>
could be constructed from an expected<T>.

I believe that we could have an future operation that extracts an
expected from a ready future or that it blocks until the future is
ready. In the same way we have future<T>::shared() that returns a
shared_future<T>, we could have a future<T>::expected() function that
returns an expected<T> (waiting if needed).

If a continuation RetExpectedC returns an expected<C>, the decltype(f1)
could be future<C>

auto f1 = f.then(RetExpectedC);

We could also have a when_all/match that could be applied to any
probable valued type, including optional, expected, future, ...

optional<int> a;
auto f4 = when_all(a, f).match<expected<int>>(
[](int i, int j ) { return 1; },
[](...) { return make_unexpected(MyException); }
);

the type of f4 would be future<int>. The previous could be equivalent to

auto f4 = when_all(f).then([a](future<int> b) {
return inspect(a, b.expected()).match<expected<int>>(
[](int a, int b )
{ return a + b; },
[](nullopt_ i, auto const &j )
{
return ???;
}
);
});


auto f4 = when_all(a, f).next(
[](int i, int j ) { return 1; }
);

but the result of when_all will be a future.

The inspect(a, b, c) could be seen as a when_all applied to probably
valued instances that are all ready.

Best,
Vicente



Niall Douglas
2015-05-25 23:09:57 UTC
Permalink
On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:

> > However, future<T> doesn't seem named very "monadic",
> Why? Because we don't have mbind or the proposed next?

No, merely the name "future<T>"!

future<T> is fine for a thread safe monad. But for a faster, thread
unsafe one, I was asking here purely for names.

Names suggested so far are maybe, result, holder, value.

> > so I am
> > inclined to turn future<T> into a subclass of a type better named.
> Sub-classing should be an implementation detail and I don't see how a
> future could be a sub-class of a class that is not asynchronous itself.

It's not a problem. My future<T> subclasses an internal
implementation type monad<T, consuming> which does most of the work
of a monad already. monad<> has no knowledge of synchronisation.

I am simply proposing subclassing monad<T, consuming> with a thread
unsafe subclass called <insert name here>. It has almost the same API
as future as it shares a common code implementation.

> > Options are:
> >
> > * result<T>
> sync or async result?

A result<T> has most of the future<T> API with the set APIs from
promise<T>. So you might do:

result<int> v(5);
assert(v.get()==5);
v.set_value(6);
assert(v.get()==6);
v.set_exception(foo());
v.get(); // throws foo
v.set_error(error_code);
v.get(); // throws system_error(error_code)

> > * maybe<T>
> we have already optional, isn't it?

True. But I think a monadic transport supersets optional as it can
have no value. So:

result<int> v;
assert(!v.is_ready());
assert(!v.has_value());
v.get(); // throws no_state.

The compiler treats a default initialised monad identically to a void
return i.e. zero overhead.

> Some comments, not always directly related to your
> future/promise/expected design, but about the interaction between future
> and expected.
>
> IMO, a future is not an expected (nor result or maybe). We can say that
> a ready future behaves like an expected, but a future has an additional
> state. Ready or not. The standard proposal and the future in
> Boost.Thread has yet an additional state, valid or not.
> So future has the following states invalid, not ready, valued or
> exceptional.
> We should be able to get an implementation that performs better if we
> have less states. Would the future you want have all these states?

My aim is to track, as closely as possible, the Concurrency TS.
Including all its bad decisions which aren't too awful. So yes, I'd
keep the standard states. I agree absolutely that makes my monad not
expected, nor even a proper monad. I'd call it a "bastard C++ monad
type" of the kind purists dislike.

> A future can store itself the shared state when the state is not shared,
> I suppose this is your idea and I think it is a good one.Let me know if
> I'm wrong. Clearly this future doesn't need allocators, nor memory
> allocation.

Yes, either the promise or the future can keep the shared state. It
always prefers to use the future where possible though.

> We could have a conversion from an expected to a future. A future<T>
> could be constructed from an expected<T>.

Absolutely agreed.

> I believe that we could have an future operation that extracts an
> expected from a ready future or that it blocks until the future is
> ready. In the same way we have future<T>::shared() that returns a
> shared_future<T>, we could have a future<T>::expected() function that
> returns an expected<T> (waiting if needed).
>
> If a continuation RetExpectedC returns an expected<C>, the decltype(f1)
> could be future<C>
>
> auto f1 = f.then(RetExpectedC);
>
> We could also have a when_all/match that could be applied to any
> probable valued type, including optional, expected, future, ...
>
> optional<int> a;
> auto f4 = when_all(a, f).match<expected<int>>(
> [](int i, int j ) { return 1; },
> [](...) { return make_unexpected(MyException); }
> );
>
> the type of f4 would be future<int>. The previous could be equivalent to
>
> auto f4 = when_all(f).then([a](future<int> b) {
> return inspect(a, b.expected()).match<expected<int>>(
> [](int a, int b )
> { return a + b; },
> [](nullopt_ i, auto const &j )
> {
> return ???;
> }
> );
> });
>
>
> auto f4 = when_all(a, f).next(
> [](int i, int j ) { return 1; }
> );
>
> but the result of when_all will be a future.
>
> The inspect(a, b, c) could be seen as a when_all applied to probably
> valued instances that are all ready.

expected integration is very far away for me. I'm even a fair
distance from continuations, because getting a std::vector to
constexpr collapse is tricky, and you need a
std::vector<std::function> to hold the continuations. My main goal is
getting AFIO past peer review for now.

However, I have been speaking with Gor @ Microsoft, and if I
understand how his resumable functions implementation expands into
boilerplate then non-allocating future-promise means he can simplify
his implementation quite considerably, and improve its efficiency. No
magic tricks for future shared state allocation needed anymore.

I'll get my prototype working enough to submit to Microsoft first. If
Gor is interested, he'll need to shepherd getting the MSVC optimiser
to stop being so braindead when faced with this pattern. Gabi also
told me at C++ Now to send this problem to him too as I was gently
teasing him about how badly MSVC does here compared to everything
else, and he said he'd do what he could to make them fix the
optimiser.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Rob Stewart
2015-05-26 09:07:05 UTC
Permalink
On May 25, 2015 7:09:57 PM EDT, Niall Douglas <***@nedprod.com> wrote:
> On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:
>
> Names suggested so far are maybe, result, holder, value.

I'm still trying to understand use cases to help guide naming. However, among those choices, "result" seems best. That said, one often refers to the result of a function, so discussing a function that returns a "result" would get awkward. Perhaps "retval" would convey the idea well enough and be less awkward?

> I am simply proposing subclassing monad<T, consuming> with a thread
> unsafe subclass called <insert name here>. It has almost the same API
> as future as it shares a common code implementation.
>
> > > Options are:
> > >
> > > * result<T>
> > sync or async result?
>
> A result<T> has most of the future<T> API with the set APIs from
> promise<T>. So you might do:
>
> result<int> v(5);
> assert(v.get()==5);
> v.set_value(6);
> assert(v.get()==6);
> v.set_exception(foo());
> v.get(); // throws foo
> v.set_error(error_code);
> v.get(); // throws system_error(error_code)

Can one ask whether it contains an exception or an error_code? Can one retrieve the exception or error_code without calling get() with, say, get_exception() and get_error_code()?

> > > * maybe<T>
> > we have already optional, isn't it?
>
> True. But I think a monadic transport supersets optional as it can
> have no value. So:
>
> result<int> v;
> assert(!v.is_ready());
> assert(!v.has_value());
> v.get(); // throws no_state.

Here you show has_value(). Are there has_exception() and has_error_code()?

___
Rob

(Sent from my portable computation engine)

Niall Douglas
2015-05-26 10:23:13 UTC
Permalink
On 26 May 2015 at 5:07, Rob Stewart wrote:

> On May 25, 2015 7:09:57 PM EDT, Niall Douglas <***@nedprod.com> wrote:
> > On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:
> >
> > Names suggested so far are maybe, result, holder, value.
>
> I'm still trying to understand use cases to help guide naming.

Exactly why I asked for people to bikeshed here on the naming. I am
also unsure.

> However,
> among those choices, "result" seems best. That said, one often refers to
> the result of a function, so discussing a function that returns a
> "result" would get awkward. Perhaps "retval" would convey the idea well
> enough and be less awkward?

One vote for result<T>. One vote for retval<T>. Okay.

I think that's three votes for result<T> now, none for maybe<T>.

> > result<int> v(5);
> > assert(v.get()==5);
> > v.set_value(6);
> > assert(v.get()==6);
> > v.set_exception(foo());
> > v.get(); // throws foo
> > v.set_error(error_code);
> > v.get(); // throws system_error(error_code)
>
> Can one ask whether it contains an exception or an error_code? Can one
> retrieve the exception or error_code without calling get() with, say,
> get_exception() and get_error_code()?

Of course. As with Boost.Thread's future, there are has_value(),
has_error(), has_exception() plus get(), get_error(),
get_exception().

get() will throw any error or exception. get_exception() returns any
error or exception as an exception_ptr. get_error() returns an
error_code.

> > > > * maybe<T>
> > > we have already optional, isn't it?
> >
> > True. But I think a monadic transport supersets optional as it can
> > have no value. So:
> >
> > result<int> v;
> > assert(!v.is_ready());
> > assert(!v.has_value());
> > v.get(); // throws no_state.
>
> Here you show has_value(). Are there has_exception() and has_error_code()?

Yes. There is also is_ready(), also from Boost.Thread. This is
aliased by explicit operator bool. Finally on the monad but not
future, there is a reset().

On future<T>, but not result<T>/retval<T>, there is also a valid()
which tests if a future has an associated promise or is ready. You
also cannot set the state of a future except via its promise - those
member functions are inherited from monad into future by protected
inheritance, and are not exposed publicly.

I haven't implemented them yet, but I am also going to add some of
the essential monadic operators from Expected which enable if, and
only if, your callable is noexcept. The rationale for insisting on
noexcept is that a later clang AST analysis tool can convert monadic
logic into a formal mathematical proof if and only if noexcept is
enforced, and this is not a limitation because the monadic transport
can carry exceptions.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Paul A. Bristow
2015-05-26 11:19:09 UTC
Permalink
> -----Original Message-----
> From: Boost [mailto:boost-***@lists.boost.org] On Behalf Of Niall Douglas
> Sent: 26 May 2015 11:23
> To: ***@lists.boost.org
> Subject: Re: [boost] [next gen future-promise] What to call the monadic return type?
>
> On 26 May 2015 at 5:07, Rob Stewart wrote:
>
> > On May 25, 2015 7:09:57 PM EDT, Niall Douglas <***@nedprod.com> wrote:
> > > On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:
> > >
> > > Names suggested so far are maybe, result, holder, value.
> >
> > I'm still trying to understand use cases to help guide naming.
>
> Exactly why I asked for people to bikeshed here on the naming. I am also unsure.

<bikeshedding_mode = on>

outcome?

Paul

---
Paul A. Bristow
Prizet Farmhouse
Kendal UK LA8 8AB
+44 (0) 1539 561830






Niall Douglas
2015-05-26 14:41:39 UTC
Permalink
On 26 May 2015 at 12:19, Paul A. Bristow wrote:

> > > > Names suggested so far are maybe, result, holder, value.
> > >
> outcome?

Options so far in order: result, maybe, holder, value, retval,
outcome.

Great stuff. Keep the ideas coming!

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Gottlob Frege
2015-05-26 18:22:11 UTC
Permalink
On Tue, May 26, 2015 at 6:23 AM, Niall Douglas
<***@nedprod.com> wrote:
> On 26 May 2015 at 5:07, Rob Stewart wrote:
>
>> On May 25, 2015 7:09:57 PM EDT, Niall Douglas <***@nedprod.com> wrote:
>> > On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:
>> >
>> > Names suggested so far are maybe, result, holder, value.
>>
>> I'm still trying to understand use cases to help guide naming.
>
> Exactly why I asked for people to bikeshed here on the naming. I am
> also unsure.
>
>> However,
>> among those choices, "result" seems best. That said, one often refers to
>> the result of a function, so discussing a function that returns a
>> "result" would get awkward. Perhaps "retval" would convey the idea well
>> enough and be less awkward?
>
> One vote for result<T>. One vote for retval<T>. Okay.


I would suggest taking

>> That said, one often refers to
>> the result of a function, so discussing a function that returns a
>> "result" would get awkward.

as a vote *against* the name 'result'. Or take my vote as being
against 'result'. Or both.

Same thing came up in committee with "dumb_ptr" - any names like
"raw_ptr", etc, (and in your case 'result<>') are bad because they
cause confusion when spoken.

Rob Stewart
2015-05-27 09:11:50 UTC
Permalink
On May 26, 2015 2:22:11 PM EDT, Gottlob Frege <***@gmail.com> wrote:
> On Tue, May 26, 2015 at 6:23 AM, Niall Douglas
> <***@nedprod.com> wrote:
> > On 26 May 2015 at 5:07, Rob Stewart wrote:
> >
> >> On May 25, 2015 7:09:57 PM EDT, Niall Douglas
> <***@nedprod.com> wrote:
> >> >
> >> > Names suggested so far are maybe, result, holder, value.
> >>
> >> I'm still trying to understand use cases to help guide naming.
> >
> > Exactly why I asked for people to bikeshed here on the naming. I am
> > also unsure.
> >
> >> However,
> >> among those choices, "result" seems best. That said, one often
> refers to
> >> the result of a function, so discussing a function that returns a
> >> "result" would get awkward. Perhaps "retval" would convey the idea
> >> well enough and be less awkward?
> >
> > One vote for result<T>. One vote for retval<T>. Okay.
>
> I would suggest taking
>
> >> That said, one often refers to
> >> the result of a function, so discussing a function that returns a
> >> "result" would get awkward.
>
> as a vote *against* the name 'result'. Or take my vote as being
> against 'result'. Or both.

Exactly

> Same thing came up in committee with "dumb_ptr" - any names like
> "raw_ptr", etc, (and in your case 'result<>') are bad because they
> cause confusion when spoken,

Right

___
Rob

(Sent from my portable computation engine)

Vicente J. Botet Escriba
2015-05-26 17:56:26 UTC
Permalink
Le 26/05/15 01:09, Niall Douglas a écrit :
> On 25 May 2015 at 23:35, Vicente J. Botet Escriba wrote:
>
>>> However, future<T> doesn't seem named very "monadic",
>> Why? Because we don't have mbind or the proposed next?
> No, merely the name "future<T>"!
>
> future<T> is fine for a thread safe monad. But for a faster, thread
> unsafe one, I was asking here purely for names.
>
> Names suggested so far are maybe, result, holder, value.

Neither result, holder nor value conveys the fact that the value might not
be there.
maybe<T> already has the connotation of T-or-nothing, as optional does. There is
no place for error_code and exception_ptr.
If the type is not related to thread synchronization, here are some
alternative names: probably_value<T>, probable<T>?
>
>>> so I am
>>> inclined to turn future<T> into a subclass of a type better named.
>> Sub-classing should be an implementation detail and I don't see how a
>> future could be a sub-class of a class that is not asynchronous itself.
> It's not a problem. My future<T> subclasses an internal
> implementation type monad<T, consuming> which does most of the work
> of a monad already. monad<> has no knowledge of synchronisation.
>
> I am simply proposing subclassing monad<T, consuming> with a thread
> unsafe subclass called <insert name here>. It has almost the same API
> as future as it shares a common code implementation.
I had the impression that you wanted to make future a subclass of <insert
name here>. I don't share this design.
>
>>> Options are:
>>>
>>> * result<T>
>> sync or async result?
> A result<T> has most of the future<T> API with the set APIs from
> promise<T>. So you might do:
>
> result<int> v(5);
> assert(v.get()==5);
> v.set_value(6);
> assert(v.get()==6);
> v.set_exception(foo());
> v.get(); // throws foo
> v.set_error(error_code);
> v.get(); // throws system_error(error_code)
This is quite close to expected, isn't it?
>
>>> * maybe<T>
>> we have already optional, isn't it?
> True. But I think a monadic transport supersets optional as it can
> have no value. So:
>
> result<int> v;
> assert(!v.is_ready());
> assert(!v.has_value());
> v.get(); // throws no_state.
>
> The compiler treats a default initialised monad identically to a void
> return i.e. zero overhead.
Ah, then the to-be-named type has the additional ready state. This is
closer to a valid future with the addition of the promise (setting)
interface.
>
>> Some comments, not always directly related to your
>> future/promise/expected design, but about the interaction between future
>> and expected.
>>
>> IMO, a future is not an expected (nor result or maybe). We can say that
>> a ready future behaves like an expected, but a future has an additional
>> state. Ready or not. The standard proposal and the future in
>> Boost.Thread has yet an additional state, valid or not.
>> So future has the following states invalid, not ready, valued or
>> exceptional.
>> We should be able to get an implementation that performs better if we
>> have less states. Would the future you want have all these states?
> My aim is to track, as closely as possible, the Concurrency TS.
> Including all its bad decisions which aren't too awful. So yes, I'd
> keep the standard states. I agree absolutely that makes my monad not
> expected, nor even a proper monad. I'd call it a "bastard C++ monad
> type" of the kind purists dislike.

I have no problem. There are no proper and dirty monads. Purists will
give you the good name. I see several nested monads.
IIUC, the type doesn't have the invalid state, does it?
>
>> A future can store itself the shared state when the state is not shared,
>> I suppose this is your idea and I think it is a good one.Let me know if
>> I'm wrong. Clearly this future doesn't need allocators, nor memory
>> allocation.
> Yes, either the promise or the future can keep the shared state. It
> always prefers to use the future where possible though.
>
>> We could have a conversion from an expected to a future. A future<T>
>> could be constructed from an expected<T>.
> Absolutely agreed.
>
>> I believe that we could have an future operation that extracts an
>> expected from a ready future or that it blocks until the future is
>> ready. In the same way we have future<T>::shared() that returns a
>> shared_future<T>, we could have a future<T>::expected() function that
>> returns an expected<T> (waiting if needed).
>>
>> If a continuation RetExpectedC returns an expected<C>, the decltype(f1)
>> could be future<C>
>>
>> auto f1 = f.then(RetExpectedC);
>>
>> We could also have a when_all/match that could be applied to any
>> probable valued type, including optional, expected, future, ...
>>
>> optional<int> a;
>> auto f4 = when_all(a, f).match<expected<int>>(
>> [](int i, int j ) { return 1; },
>> [](...) { return make_unexpected(MyException); }
>> );
>>
>> the type of f4 would be future<int>. The previous could be equivalent to
>>
>> auto f4 = when_all(f).then([a](future<int> b) {
>> return inspect(a, b.expected()).match<expected<int>>(
>> [](int a, int b )
>> { return a + b; },
>> [](nullopt_ i, auto const &j )
>> {
>> return ???;
>> }
>> );
>> });
>>
>>
>> auto f4 = when_all(a, f).next(
>> [](int i, int j ) { return 1; }
>> );
>>
>> but the result of when_all will be a future.
>>
>> The inspect(a, b, c) could be seen as a when_all applied to probably
>> valued instances that are all ready.
> expected integration is very far away for me. I'm even a fair
> distance from continuations, because getting a std::vector to
> constexpr collapse is tricky, and you need a
> std::vector<std::function> to hold the continuations.
Why do you need to store a vector of continuations? You have just
one. What am I missing?

> My main goal is
> getting AFIO past peer review for now.
>
> However, I have been speaking with Gor @ Microsoft, and if I
> understand how his resumable functions implementation expands into
> boilerplate then non-allocating future-promise means he can simplify
> his implementation quite considerably, and improve its efficiency. No
> magic tricks for future shared state allocation needed anymore.
Like others, I'm waiting for a written proposal. But I think this is not in
your plans.

Vicente


Niall Douglas
2015-05-26 23:36:26 UTC
Permalink
On 26 May 2015 at 19:56, Vicente J. Botet Escriba wrote:

> > Names suggested so far are maybe, result, holder, value.
>
> Neither result, holder nor value conveys the fact that the value can not
> be there. maybe<T> has already the connotation T or nothing as optional.
> There is no place for error_code and exception_ptr. If the type is not
> related to thread synchronization, here they are some alternative names:
> probably_value<T>, probable<T>?

probable<T> vs result<T> vs maybe<T>. Hmm, I'll have to think about
it.

> > result<int> v(5);
> > assert(v.get()==5);
> > v.set_value(6);
> > assert(v.get()==6);
> > v.set_exception(foo());
> > v.get(); // throws foo
> > v.set_error(error_code);
> > v.get(); // throws system_error(error_code)
> This is quite close to expected, isn't it?

I lean on expected's design yes. I like the expected design. No point
inventing a new API after all, we know yours works.

> > The compiler treats a default initialised monad identically to a void
> > return i.e. zero overhead.
> Ah, then the to be named type has the additional ready state. This is
> closer to a valid future with the addition of the promise (setting)
> interface.

Exactly right. No surprises.

> I have no problem. There are not proper and dirty monads. Purist will
> give you the good name. I see several nested monads.
> IIUC, the type doesn't has the invalid state, isn't it?

It has valid() on the future<T>. But not on the monad. So I suppose
not.

> > expected integration is very far away for me. I'm even a fair
> > distance from continuations, because getting a std::vector to
> > constexpr collapse is tricky, and you need a
> > std::vector<std::function> to hold the continuations.
> Why do you need to store a vector of continuations? You have just
> one. What am I missing?

It makes my implementation a lot easier if I can add an unknown
number of continuations (non-consuming, therefore they are really C
type callbacks) to some future. Remember I am optionally supporting
Boost.Fiber later on, and I need an extensible and generic way of
firing off coroutines on future signal with minimal #ifdef code
branches all over the place.

Under the Concurrency TS you'd implement this by creating a callable
struct which contains the vector of more continuations, so as an
example:

template<class T>
struct many_continuations
{
  std::vector<std::function<void(const std::future<T> &)>> nonconsuming;
  std::function<void(std::future<T>)> consuming;
  void operator()(std::future<T> v)
  {
    for(auto &c : nonconsuming)
      c(v);                   // non-consuming callbacks observe by const ref
    consuming(std::move(v));  // finally hand the future to the consumer
  }
};

... and now do future.then(many_continuations<T>());

I'm simply building that same functionality straight into my future
instead of having external code bolt it on.

> > My main goal is
> > getting AFIO past peer review for now.
> >
> As others I'm waiting for a written proposal. But I think this is not on
> your plans.

Nope. My priority is AFIO. I have about two months to deliver it.
We'll go from there after.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Vicente J. Botet Escriba
2015-05-27 05:46:28 UTC
Permalink
Le 27/05/15 01:36, Niall Douglas a écrit :
> On 26 May 2015 at 19:56, Vicente J. Botet Escriba wrote:
>
>>> Names suggested so far are maybe, result, holder, value.
>> Neither result, holder nor value conveys the fact that the value can not
>> be there. maybe<T> has already the connotation T or nothing as optional.
>> There is no place for error_code and exception_ptr. If the type is not
>> related to thread synchronization, here they are some alternative names:
>> probably_value<T>, probable<T>?
> probable<T> vs result<T> vs maybe<T>. Hmm, I'll have to think about
> it.
>
>>> result<int> v(5);
>>> assert(v.get()==5);
>>> v.set_value(6);
>>> assert(v.get()==6);
>>> v.set_exception(foo());
>>> v.get(); // throws foo
>>> v.set_error(error_code);
>>> v.get(); // throws system_error(error_code)
>> This is quite close to expected, isn't it?
> I lean on expected's design, yes. I like the expected design. No point
> inventing a new API after all, we know yours works.
>
>>> The compiler treats a default initialised monad identically to a void
>>> return i.e. zero overhead.
>> Ah, then the to be named type has the additional ready state. This is
>> closer to a valid future with the addition of the promise (setting)
>> interface.
> Exactly right. No surprises.
>
>> I have no problem. There are no proper and dirty monads. Purists will
>> give you the good name. I see several nested monads.
>> IIUC, the type doesn't have the invalid state, does it?
> It has valid() on the future<T>. But not on the monad. So I suppose
> not.
Then your "to be named" type is between expected and future.
If it can be used asynchronously, it could be seen as a valid_future<T>.
Then a future could be seen as a specialization of
optional<valid_future<T>>.
Otherwise, if it is synchronous, it could be seen as a specialization
of optional<expected<T>>.

Does your "to be named" type provide wait(), then(), ...?
>
>>> expected integration is very far away for me. I'm even a fair
>>> distance from continuations, because getting a std::vector to
>>> constexpr collapse is tricky, and you need a
>>> std::vector<std::function> to hold the continuations.
>> Why do you need to store a vector of continuations? You have just
>> one. What am I missing?
> It makes my implementation a lot easier if I can add an unknown
> number of continuations (non-consuming, therefore they are really C
> type callbacks) to some future. Remember I am optionally supporting
> Boost.Fiber later on, and I need an extensible and generic way of
> firing off coroutines on future signal with minimal #ifdef code
> branches all over the place.
>
> Under the Concurrency TS you'd implement this by creating a callable
> struct which contains the vector of more continuations, so as an
> example:
>
> template<class T>
> struct many_continuations
> {
>     std::vector<std::function<void(const std::future<T> &)>> nonconsuming;
>     std::function<void(std::future<T>)> consuming;
>     void operator()(std::future<T> v)
>     {
>         for(auto &c : nonconsuming)
>             c(v);
>         consuming(std::move(v));
>     }
> };
>
> ... and now do future.then(many_continuations());
>
> I'm simply building that same functionality straight into my future
> instead of having external code bolt it on.
This is functionality that shared_future<T> accepts already. Note that
shared here is related to the underlying T value.
As you said, don't pay for what you don't use. The many_continuations
functor is only one way to achieve that. If you (the user) are looking
for performance, you can have a more intrusive solution.
Does your "to be named" type provide this kind of
consuming/non-consuming continuation?
It is difficult to name something when we don't know which interface
and semantics it has.
>
>>> My main goal is
>>> getting AFIO past peer review for now.
>>>
>> As others I'm waiting for a written proposal. But I think this is not on
>> your plans.
> Nope. My priority is AFIO. I have about two months to deliver it.
> We'll go from there after.
>
If we are discussing internal implementation details of AFIO here, I
would suggest you spend this time improving the interface and the
documentation, and documenting the performance. You will have all the
time to improve the implementation and its performance before going to
a release.

If we are discussing here other possible Boost classes (even if
only part of AFIO), then it is different.

In other words, would the "to be named" type appear on the AFIO
interface? Is it for this reason that you need to name it, or is it
really an implementation detail? This merits clarification.

Best,
Vicente

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
Peter Dimov
2015-05-26 11:07:16 UTC
Permalink
Niall Douglas wrote:

> My final design is therefore ridiculously simple: a future<T> can return
> only these options:
>
> * A T.
> * An error_code (i.e. non-type erased error, optimally lightweight)
> * An exception_ptr (i.e. type erased exception type, allocates memory, you
> should avoid this if you want performance)

I don't think that inlining expected<T, error_code> into future<T> is the
correct decision. If people want to abide by a programming style based on
expected<T, error_code>, let them just use future<expected<T, error_code>>.
No need to couple independent components.

I also disagree with the implicit expectation that a programming style based
on expected<T, error_code> will take the world by storm. It won't.
Exceptions are much more convenient and make for clearer code. C++
programmers are not Haskell programmers and don't want to be; they don't use
monads and do-statements. There is no need.


Niall Douglas
2015-05-26 14:40:26 UTC
Permalink
On 26 May 2015 at 14:07, Peter Dimov wrote:

> I also disagree with the implicit expectation that a programming style based
> on expected<T, error_code> will take the world by storm. It won't.
> Exceptions are much more convenient and make for clearer code. C++
> programmers are not Haskell programmers and don't want to be; they don't use
> monads and do-statements. There is no need.

Get some experience programming in Rust and come back to me.

I think you'll realise that monadic programming is going to become
huge in C++ 11/14 in the near future in those use case scenarios
where it is far better than all other idioms. We just need a decent
not-Haskell monad implementation that doesn't get in the way of C++
programming. But for sure, this is a personal opinion, and yours is
just as valid as mine right now until we see where the code trends
to.

BTW for reference the Rust Result<> monad *does* get in the way. I
think we can do a lot better in C++, something more natural to
program with and use.

Niall

--
ned Productions Limited Consulting
http://www.nedproductions.biz/
http://ie.linkedin.com/in/nialldouglas/
Peter Dimov
2015-05-26 15:04:39 UTC
Permalink
Niall Douglas wrote:

> Get some experience programming in Rust and come back to me.

This is a very generous offer, which I'm afraid I have to decline at
present.

> I think you'll realise that monadic programming is going to become huge in
> C++ 11/14 in the near future in those use case scenarios where it is far
> better than all other idioms.

With that qualification this statement is trivial, isn't it? My point was
that there do exist other scenarios, where monadic programming of this
particular expected<> variety is not at all far better than other idioms.

Either way, consider my message a naming suggestion for the bikeshedding
session. My naming suggestion is that your type is properly called

future<expected<T, error_code>>.

This name in no way prevents you from going all monadic on our posteriors.
:-)

