Discussion:
Planning for Juju 2.2 (16.10 timeframe)
Mark Shuttleworth
2016-03-08 23:51:51 UTC
Hi folks

We're starting to think about the next development cycle, and gathering
priorities and requests from users of Juju. I'm writing to outline some
current topics and also to invite requests or thoughts on relative
priorities - feel free to reply on-list or to me privately.

An early cut of topics of interest is below.

*Operational concerns*

* LDAP integration for Juju controllers, now that we have multi-user controllers
* Support for read-only config
* Support for things like passwords being disclosed only to a subset of
users/operators
* LXD container migration
* Shared uncommitted state - enable people to collaborate around changes
they want to make in a model

There has also been quite a lot of interest in log control - debug
settings for logging, verbosity control, and log redirection as a
systemic property. This might be a good area for someone new to the
project to lead design and implementation. Another similar area is the
idea of modelling machine properties - things like apt / yum
repositories, cache settings, etc. - and having the machine agent set up
the machine / VM / container according to those properties.
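To make the machine-properties idea concrete, here is a purely
hypothetical sketch of how such properties might be declared on a model.
The key names and structure are invented for illustration; this is not
an existing Juju schema:

```yaml
# Hypothetical machine properties on a model - illustrative only.
machine-properties:
  apt:
    mirror: http://archive.example.internal/ubuntu
    proxy: http://squid.example.internal:3128
  logging:
    level: DEBUG                                  # verbosity control
    redirect: syslog://logs.example.internal:514  # systemic log redirection
```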

*Core Model*

* modelling individual services (i.e. each database exported by the db
application)
* rich status (properties of those services and the application itself)
* config schemas and validation
* relation config

There is also interest in being able to invoke actions across a relation
when the relation interface declares them. This would allow, for
example, a benchmark operator charm to trigger benchmarks through a
relation rather than having the operator do it manually.
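For illustration only, a relation interface declaring an invokable action
might look something like the sketch below in a charm's metadata. The
`actions` block nested under the interface is invented syntax, not
something charms support today:

```yaml
# Hypothetical metadata.yaml fragment - the nested "actions" key is
# speculative syntax for actions invokable across the relation.
provides:
  benchmark:
    interface: benchmark
    actions:
      run-benchmark:
        params:
          duration:
            type: integer
            description: benchmark duration in seconds
```

A benchmark operator charm related over this interface could then trigger
run-benchmark through the relation rather than the operator running it by
hand.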

*Storage*

* shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
* object storage abstraction (probably just mapping to S3-compatible APIs)

I'm interested in feedback on the operations aspects of storage. For
example, whether it would be helpful to provide lifecycle management for
storage being re-assigned (e.g. launch a new database application but
reuse block devices previously bound to an old database instance).
Also, I think the intersection of storage modelling and MAAS hasn't
really been explored, and since we see a lot of interest in the use of
charms to deploy software-defined storage solutions, this will probably
need further thought and work.


*Clouds and providers*
* System Z and LinuxONE
* Oracle Cloud

There is also a general desire to revisit and refactor the provider
interface. Now that many cloud providers have been implemented, we are in
a better position to design a good provider interface. This would be a
welcome area of contribution for someone new to the project who wants to
make it easier for folks creating new cloud providers. We also see
constant requests for a Linode provider, which would be a good target for
a refactored interface.


*Usability*

* expanding the set of known clouds and regions
* improving the handling of credentials across clouds

Mark
Tom Barber
2016-03-08 23:59:43 UTC
Hi Mark

From my perspective, relationships that can span models would be great.
I know I brought it up before, but being able to create, for example, a
central monitoring model or a central GitLab model that charms in my
various other models could tap into - without merging them into a "super
model", and perhaps even across different regions or controllers - would
be great.
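As a pseudocode sketch of how this might feel in practice (the commands
are invented; no such syntax exists at the time of writing):

```
# Hypothetical CLI - illustrative only.
# In the central monitoring model, offer an endpoint:
juju offer monitoring-model.nagios:monitors

# From any other model, possibly on a different controller or region:
juju add-relation mysql admin/monitoring-model.nagios
```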

Cheers

Tom

--------------

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter
<http://kickstarter.com/projects/2117053714/saiku-reporting-interactive-report-designer/>
goal, but you can always help by sponsoring the project
<http://www.meteorite.bi/products/saiku/sponsorship>)
Tom Barber
2016-03-09 00:04:24 UTC
Also, for stuff like monitoring, being able to position a charm service
on a different cloud provider to bolster resiliency would be useful.

Tom
Mark Shuttleworth
2016-03-09 01:59:54 UTC
Post by Tom Barber
Also, for stuff like monitoring, being able to position a charm service on
a different service provider to bolster resiliency.
That comes implicitly with cross-model relations, since the different
models can be on different clouds.

This enables pretty amazing hybrid operations in general - place what
you want where you want it, then just connect it all up.

Mark
--
Juju mailing list
***@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Merlijn Sebrechts
2016-03-09 16:01:41 UTC
+1 for invoking actions from relations. I'd also really like the ability
to change config values from relations. An example of this is my DHCP
server: the network it should broadcast on started out as a config value.
Now, other charms in my setup are becoming smarter, so they can pass the
network over a relation. However, I'd also like to keep the config value
for special cases...
Mark Shuttleworth
2016-03-09 01:58:58 UTC
Post by Tom Barber
From my perspective relationship joins that can span models would be great.
I know I brought it up before, but being able to create, for example a
central monitoring model, or central Gitlab model that charms in my various
other models could tap into without them being merged into a "super model"
or maybe be in different regions on different controllers, would be great.
Yes, this is a great request, and the good news is it is on the list for
Juju 2.1 :)

Mark
James Beedy
2016-03-09 21:30:04 UTC
Mark,

It is very exciting to see this list of future feature topics! They all
seem very legitimate, and of the topics you have listed, the operational
concerns will definitely help to continue solidifying an enterprise-grade
feature set within the Juju ecosystem, which I feel is highly needed and
also highly anticipated :-)

Concerning storage, the lifecycle management model would be a tremendous
improvement over what currently exists. I think a lot of users would be
more inclined to use the storage feature if there was some level of
persistence/lifecycle management associated with it.

On that note, can we get a few cycles on the Cinder/OpenStack storage
provider? The provider currently exists, but with minimal usability, as
there is no concept of `availability zones` or `type`. The inability to
configure AZ and type makes the OpenStack provider's storage feature
largely unusable outside the single use case of one AZ with no type spec.
I feel like most of the functionality is there; it just needs to be
solidified/completed/documented to some extent.

Cross Model Relationships - I second Tom on this one, I think this would
open up a whole new world of capabilities.

Features to make Juju more usable/integrable:

1. Ability to configure machine hostname/fqdn.
   - This could be done by allowing a user to specify some subset of
     cloud-init user-data - hostname/fqdn would be a huge +1, but I'm sure
     there are a lot of other things a little user-data could help with.
   - I have been trying to integrate Juju with my existing Puppet
     infrastructure -- not being able to specify a machine hostname/fqdn
     on deploy is a huge roadblock for this task, and has kept me from
     bringing Juju fully on board across my company's infrastructure.

2. Juju Scale
   - The ability to set resource quotas/policies to enable auto-scaling
     of services based on load would be a HUGE win for Juju.
   - No more reaching out to Heat for autoscaling groups!
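For point 1, a hypothetical sketch of passing a cloud-init fragment
through model config; the `cloudinit-userdata` key is an assumption made
for illustration, not a shipped feature:

```yaml
# Hypothetical model config carrying a cloud-init user-data fragment.
cloudinit-userdata: |
  fqdn: db01.prod.example.com
  manage_etc_hosts: true
  runcmd:
    - puppet agent --onetime --no-daemonize
```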


Hope this helps!


~James
Kapil Thangavelu
2016-03-16 12:31:45 UTC
Post by Mark Shuttleworth
*Operational concerns* [...]
LDAP++. As brought up on the users list: better support for AWS
best-practice credential management, i.e. bootstrapping with transient
credentials (STS role assumption, which needs AWS_SECURITY_TOKEN
support), and instance roles for state servers.
Post by Mark Shuttleworth
*Core Model* [...]
In priority order: relation config, config schemas/validation, rich
status. Relation config is a huge boon to services that are multi-tenant
with respect to other services, as the current workaround is to create
either copies per tenant or intermediaries.
Post by Mark Shuttleworth
*Storage* [...]
It may be out of band, but with storage comes backups/snapshots. Also of
interest is encryption on block and object storage, using cloud-native
mechanisms where available.
Post by Mark Shuttleworth
*Clouds and providers* / *Usability* [...]
Autoscaling, either via tighter integration with cloud-native features or
a Juju-provided abstraction.
roger peppe
2016-03-16 13:17:14 UTC
Post by Kapil Thangavelu
in priority order, relation config
What do you understand by the term "relation config"?
Kapil Thangavelu
2016-03-16 15:04:35 UTC
Relations would have associated config schemas that can be set by the
user creating the relation. E.g. I could run one autoscaling service and,
via relation config, attach autoscale options to the relation with a
given consumer service.
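A pseudocode sketch of the idea (the `--config` flag on add-relation is
invented syntax, shown only to make the concept concrete):

```
# Hypothetical: relation-scoped config set when the relation is created.
juju add-relation autoscaler mysql \
    --config min-units=2 --config max-units=10 --config scale-on=cpu
```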
roger peppe
2016-03-16 16:03:40 UTC
Post by Kapil Thangavelu
Relations have associated config schemas that can be set by the user
creating the relation. I.e. I could run one autoscaling service and
associate with relation config for autoscale options to the relation with a
given consumer service.
Great, I hoped that's what you meant.
I'm also +1 on this feature - it would enable all kinds of useful flexibility.

One recent example I've come across that could use this feature: we've
got a service that can hand out credentials to the services related to
it. At the moment, the only way to state that certain services should be
handed certain classes of credential is a config value that holds a map
of service name to credential info. That doesn't seem great - it's
awkward, easy to get wrong, and when a service goes away, its associated
info hangs around.

Having the credential info associated with the relation itself would be perfect.
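To show the shape of the current workaround (all names are invented for
the example), the map ends up as a single application-level config value
that must be edited by hand as related services come and go:

```yaml
# Invented example of the awkward workaround: one config value holding
# a map of service name -> credential class.
credential-map: |
  wordpress: read-only
  reporting: read-only
  billing: read-write
```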
Tom Barber
2016-03-18 14:57:08 UTC
A couple of new things cropped up today that would be very useful.

a) Actions within the GUI: currently it's a bit weird to drag stuff
around in the GUI and then drop to a shell to run actions. That doesn't
make much sense to a user.
b) Actions within bundles: for example, I'd like a few "standard"
bundles, but also a demo bundle seeded with sample data. To do this I'd
need to run some actions behind the scenes to get the data in place,
which I currently can't do.
c) Uploading files with actions: currently, for some things, I need to
pass in some files and then trigger an action on the unit against those
files. It would be good to say path=/tmp/myfile.xyz and have the action
upload that file to a place you define.
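For point (b), a hypothetical sketch of actions declared in a bundle; the
top-level `actions` section is invented syntax that bundles do not
support today, and the charm and action names are made up:

```yaml
# Hypothetical bundle with post-deploy actions - illustrative only.
services:
  saiku:
    charm: saiku
    num_units: 1
actions:
  - service: saiku
    action: load-sample-data
    params:
      dataset: demo
```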

Tom

Merlijn Sebrechts
2016-03-18 16:17:06 UTC
+1 for all of those.

I'd also like a way to get files from charms other than using juju scp.
An example of this is getting VPN config from the openvpn charm...
Post by Tom Barber
Couple of new things cropped up today that would be very useful.
a) actions within the gui: currently it's a bit weird to drag stuff around
in the gui then drop to a shell to run actions. Doesn't make much sense to
a user.
b) actions within bundles: for example, I'd like a few "standard" bundles,
but also a demo bundle seeded with sample data. To do this I'd need to run
some actions behind the scenes to get the stuff in place, which I can't do.
c) upload files with actions: currently for some things I need to pass in
some files then trigger an action on the unit upon that file. It would be
good to say path=/tmp/myfile.xyz and have the action upload that to a place
you define.
Post by Kapil Thangavelu
Relations have associated config schemas that can be set by the user
creating the relation. I.e. I could run one autoscaling service and
associate relation config for autoscale options with the relation to a
given consumer service.
Post by roger peppe
Great, I hoped that's what you meant.
I'm also +1 on this feature - it would enable all kinds of useful
flexibility.
One recent example I've come across that could use this feature
is that we've got a service that can hand out credentials to services
that are related to it. At the moment the only way to state that
certain services should be handed certain classes of credential
is to have a config value that holds a map of service name to
credential info, which doesn't seem great - it's awkward, easy
to get wrong, and when a service goes away, its associated info
hangs around.
Having the credential info associated with the relation itself would be
perfect.
Tom Barber
2016-03-18 16:31:18 UTC
Permalink
Another one....

bundle inheritance. For example, I want to create a saiku bundle that uses
a sharded mongodb setup: I import the mongodb bundle, attach saiku to it
and deploy that. Why not let me reference it like layers and import it, so
that if changes are made to the original bundle I get their goodness
without tracking them for changes?
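A hypothetical sketch of what that might look like in a bundle file - to be
clear, the "includes" key and all the charm/endpoint names below are made
up to illustrate the request, not a real Juju feature:

```shell
# Hypothetical bundle-inheritance sketch. The "includes" key does not
# exist in Juju today; charm URLs and relation endpoints are illustrative.
cat > bundle.yaml <<'EOF'
includes:
  - cs:bundle/mongodb-sharded   # inherit the upstream sharded setup as-is
services:
  saiku:
    charm: cs:trusty/saiku      # illustrative charm URL
    num_units: 1
relations:
  - ["saiku", "mongodb-sharded:router"]
EOF

grep -q 'includes:' bundle.yaml && echo "bundle sketched"
```

The idea being that changes to the referenced bundle flow through on the
next deploy, the same way charm layers pick up upstream fixes.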



--------------

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter
<http://kickstarter.com/projects/2117053714/saiku-reporting-interactive-report-designer/>
goal, but you can always help by sponsoring the project
<http://www.meteorite.bi/products/saiku/sponsorship>)
Eric Snow
2016-03-18 16:44:54 UTC
Permalink
Post by Tom Barber
c) upload files with actions. Currently for some things I need to pass in
some files then trigger an action on the unit upon that file. It would be
good to say path=/tmp/myfile.xyz and have the action upload that to a place
you define.
Have you taken a look at resources in the upcoming 2.0? You define
resources in your charm metadata and use "juju attach" to upload them
to the controller (e.g. "juju attach my-service/0
my-resource=/tmp/myfile.xyz"). * Then charms can use the
"resource-get" hook command to download the resource file from the
controller. "resource-get" returns the path where the downloaded file
was saved.
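For concreteness, a rough sketch of that workflow - the charm, resource
and file names here are illustrative, not from a real charm:

```shell
# Declare a file resource in the charm's metadata.yaml (2.0 resources
# schema; "my-resource" and the filename are illustrative).
cat > metadata.yaml <<'EOF'
resources:
  my-resource:
    type: file
    filename: myfile.xyz
    description: Sample data uploaded by the operator.
EOF

# Operator side (not run here):
#   juju attach my-service my-resource=/tmp/myfile.xyz
# Charm side, inside a hook (not run here):
#   path="$(resource-get my-resource)"

grep -q 'type: file' metadata.yaml && echo "stanza written"
```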

-eric


* You will also upload the resources to the charm store for charm store charms.
Tom Barber
2016-03-18 16:49:43 UTC
Permalink
Yeah, I guess that would be a good solution for sample data and stuff.
It doesn't work for user-defined bits and pieces though. For actions we
currently "cat" the content into a parameter, but of course that doesn't
work for everything, and it really sucks when you try to cat unescaped
JSON into it. But for users who want to deploy their own content to
services, personally I think it would just be cleaner to allow a file type
in an action for people to pass in.
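In the meantime, one workaround sketch (all file, action and service names
here are illustrative) is to base64-encode the file into the action
parameter, which sidesteps the unescaped-JSON problem:

```shell
# Workaround sketch: base64-encode a file so it survives being passed
# as an action parameter (names are made up for illustration).
printf '{"msg": "unescaped \\"json\\" here"}' > /tmp/myfile.json
payload="$(base64 < /tmp/myfile.json | tr -d '\n')"

# Operator side (not run here):
#   juju run-action myservice/0 import-data content="$payload"
# Inside the action implementation, decode it back to a file:
echo "$payload" | base64 -d > /tmp/decoded.json

cmp -s /tmp/myfile.json /tmp/decoded.json && echo "round-trip ok"
```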

--------------

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter
<http://kickstarter.com/projects/2117053714/saiku-reporting-interactive-report-designer/>
goal, but you can always help by sponsoring the project
<http://www.meteorite.bi/products/saiku/sponsorship>)
Jacek Nykis
2016-03-18 16:52:56 UTC
Permalink
Post by Mark Shuttleworth
*Storage*
* shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
* object storage abstraction (probably just mapping to S3-compatible APIs)
I'm interested in feedback on the operations aspects of storage. For
example, whether it would be helpful to provide lifecycle management for
storage being re-assigned (e.g. launch a new database application but
reuse block devices previously bound to an old database instance).
Also, I think the intersection of storage modelling and MAAS hasn't
really been explored, and since we see a lot of interest in the use of
charms to deploy software-defined storage solutions, this probably will
need thinking and work.
Hi Mark,

I took juju storage for a spin a few weeks ago. It is a great idea and
I'm sure it will simplify our models (no more need for
block-storage-broker and storage charms). It will also improve security,
because block-storage-broker needs nova credentials to work.

I only played with storage briefly, but I hope my feedback and ideas will
be useful.

* IMO it would be incredibly useful to have storage lifecycle
management. Deploying a new database using a pre-existing block device, as
you mentioned, would certainly be nice. Another scenario could be users
who deploy to local disk and decide to migrate to block storage later,
without redeploying and manual data migration.

One day we may even be able to connect storage with actions. I'm
thinking a "storage snapshot" action followed by juju deploy to create an
up-to-date database clone for testing/staging/dev.

* I found the documentation confusing. It's difficult for me to say
exactly what is wrong, but I had to read it a few times before things
became clear. I raised some specific points on github:
https://github.com/juju/docs/issues/889

* The cli for storage is not as nice as other juju commands. For example
we have this in the docs:

juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd

I suspect most charms will use a single storage device, so it may be
possible to optimize for that use case. For example we could have:

juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G

If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional.

* It would be good to have the ability to use a single storage stanza in
metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look for the storage labels that
are supported.

[0] http://paste.ubuntu.com/15414289/
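To make the point concrete, here is a minimal sketch of the per-type
stanzas a charm has to declare today (store names are illustrative):

```shell
# A filesystem store: Juju mounts a filesystem at the given location.
# ("data" and the path are illustrative names.)
cat > metadata.yaml <<'EOF'
storage:
  data:
    type: filesystem
    location: /srv/data
EOF

# A block-device store needs its own stanza with "type: block", e.g.:
#   storage:
#     disks:
#       type: block
#       multiple:
#         range: 0-10
# which is what makes block-storage hooks hard to exercise in a local
# dev environment.

grep -q 'type: filesystem' metadata.yaml && echo "stanza written"
```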

* The way things are now, hooks are responsible for creating the
filesystem on block devices. I feel that as a charmer I shouldn't need to
know that much about storage internals. I would like to ask juju and get
a preconfigured path back. Whether it's a formatted and mounted block
device, GlusterFS or a local filesystem should not matter.

* Finally, I hit two small bugs:

https://bugs.launchpad.net/juju-core/+bug/1539684
https://bugs.launchpad.net/juju-core/+bug/1546492


If anybody is interested in more details just ask; I'm happy to discuss
or try things out. Just note that I will be off next week, so I will most
likely reply on the 29th.


Regards,
Jacek
Andrew Wilkins
2016-03-19 03:20:24 UTC
Permalink
It seems like the issues you've noted below are all documentation issues,
rather than limitations in the implementation. Please correct me if I'm
wrong.
Post by Jacek Nykis
If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional
Storage type is already optional. If you omit it, you'll get the provider
default; e.g. for AWS, that's EBS magnetic disks.
Post by Jacek Nykis
* it would be good to have ability to use single storage stanza in
metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look for storage labels that
are supported
[0] http://paste.ubuntu.com/15414289/
Not quite sure what you mean here. If you have a "filesystem" type, you can
use any storage provider that supports natively creating filesystems (e.g.
"tmpfs") or block devices (e.g. "ebs"). If you specify the latter, Juju
will manage the filesystem on the block device.

Post by Jacek Nykis
* the way things are now, hooks are responsible for creating the
filesystem on block devices. I feel that as a charmer I shouldn't need to
know that much about storage internals. I would like to ask juju and get
a preconfigured path back. Whether it's a formatted and mounted block
device, GlusterFS or a local filesystem should not matter.
That is exactly what it does, so again, I think this is an issue of
documentation clarity. If you're using the "filesystem" type, Juju will
create the filesystem; if you use "block", it won't.
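For instance, a hook for a "filesystem" store only has to ask Juju for the
path - a rough sketch, with the store name illustrative and the value
stubbed so the snippet runs outside a hook context:

```shell
# In a real data-storage-attached hook you would ask Juju for the path:
#   mountpoint="$(storage-get location)"
# Stub the value here so the sketch is runnable outside a hook context:
mountpoint="/srv/data"

# The hook then just uses the path; whether it is a formatted block
# device or a natively-created filesystem is Juju's concern, not the
# charm's.
echo "would populate $mountpoint"
```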

If you could provide more details on what you're doing (off list, I think
would be best), I can try and help. We can then feed back into the docs to
make it clearer.

Cheers,
Andrew
Tom Barber
2016-03-19 23:35:20 UTC
Permalink
Here's another one, which I can't find in the docs, but apologies if it
exists.

It would be good to be able to specify allowed origin IPs for juju expose
for cloud types that support it.

For example, in EC2, instead of allowing 0.0.0.0/0, allow a specific
address or range. But also expand that further, so each service could be
exposed to different addresses - say, different services in the same model
for different clients, or similar.

Tom

--------------

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstarter
<http://kickstarter.com/projects/2117053714/saiku-reporting-interactive-report-designer/>
goal, but you can always help by sponsoring the project
<http://www.meteorite.bi/products/saiku/sponsorship>)
Jacek Nykis
2016-03-29 10:27:49 UTC
Permalink
Post by Andrew Wilkins
Storage type is already optional. If you omit it, you'll get the provider
default. e.g. for AWS, that's EBS magnetic disks.
Good to hear, it's a simple documentation fix then.
Post by Andrew Wilkins
Not quite sure what you mean here. If you have a "filesystem" type, you can
use any storage provider that supports natively creating filesystems (e.g.
"tmpfs") or block devices (e.g. "ebs"). If you specify the latter, Juju
will manage the filesystem on the block device.
OK, this makes sense. The documentation is really confusing on this
subject. I assumed that "location" was a pre-existing local path for juju
to use.

If juju will manage the filesystem, what's the point in having the
"location" option? Paths can be easily autogenerated, and that would
remove the need to hardcode paths in metadata.yaml.
Post by Andrew Wilkins
That is exactly what it does, so again, I think this is an issue of
documentation clarity. If you're using the "filesystem" type, Juju will
create the filesystem; if you use "block", it won't.
I am glad to hear it's just docs. I'll be happy to review them when
they're fixed; just let me know when it's done.
Andrew Wilkins
2016-03-29 23:02:18 UTC
Permalink
Just changed subject so we don't derail the 2.2 discussion.
Post by Jacek Nykis
OK this makes sense. Documentation is really confusing on this subject.
I assumed that "location" was a pre-existing local path for juju to use.
Assuming you're looking at the stable docs, take a look at devel and see if
it helps at all. They were restructured and reworded because they were
found to be a bit confusing (the first cut never really got translated from
developerese into English). You'll find that here:
https://jujucharms.com/docs/devel/charms-storage
and here:
https://jujucharms.com/docs/devel/developer-storage

Post by Jacek Nykis
If juju will manage filesystem what's the point in having "location"
option? Paths can be easily autogenerated and that would remove need to
hardcode paths in metadata.yaml
It's only there in case your charmed application expects to find things in
a specific location. Having a predefined location makes charming storage a
bit easier in those cases. In general, though, you shouldn't need to
specify a location. In fact, it's harmful if not done correctly, because
then you're prone to collisions with other charms.
Post by Jacek Nykis
I am glad to hear it's just docs. I'll be happy to review them when
fixed just let me know when it's done
Great, I'll try to keep that in mind. Take a look at the devel docs some
time and see if you still find them confusing.

Cheers,
Andrew
Mark Shuttleworth
2016-04-01 13:34:45 UTC
Permalink
Post by Jacek Nykis
* cli for storage is not as nice as other juju commands. For example we
have this in the docs:
juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
I suspect most charms will use single storage device so it may be
possible to optimize for that use case.
That, however, means you still have to know IF there's only one store.
Or you have to know what the default store is. Better to just be explicit.
Post by Jacek Nykis
juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional
What would sensible defaults look like for storage? The default we have
is quite sensible: you get the root filesystem :)
Post by Jacek Nykis
* it would be good to have ability to use single storage stanza in
metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look for storage labels that
are supported
[0] http://paste.ubuntu.com/15414289/
I'm not sure what the issue is with this one.

If we have filesystem storage it's always at the same place.

If we have a single mounted block store, it's always at the same place.

If we can attach multiple block devices, THEN you need to handle them as
they are attached.

Can you explain the problem more clearly? We do have an issue with the
LXD provider and block devices, which we think will be resolved thanks
to some good kernel work on a range of fronts, but that can't surely be
what's driving your concerns.
Post by Jacek Nykis
* the way things are now hooks are responsible for creating filesystem
on block devices. I feel that as a charmer I shouldn't need to know that
much about storage internals. I would like to ask juju and get
preconfigured path back. Whether it's formatted and mounted block
device, GlusterFS or local filesystem it should not matter
Well, yes, that's the idea, but these things are quite subtle.

In some cases you very explicitly want the raw block. So we have to
allow that. In other cases you just want a filesystem there, and IIRC
that's the default behaviour in the common case. Finally, we have to
deal with actual network filesystems (as opposed to block devices) and I
don't think we have implemented that yet.

Mark
Jacek Nykis
2016-04-01 15:13:10 UTC
Permalink
Post by Mark Shuttleworth
Post by Jacek Nykis
* cli for storage is not as nice as other juju commands. For example we
juju deploy cs:~axwalk/postgresql --storage data=ebs-ssd,10G pg-ssd
I suspect most charms will use single storage device so it may be
possible to optimize for that use case.
That, however, means you still have to know IF there's only one store.
Or you have to know what the default store is. Better to just be explicit.
I think it's possible to handle all scenarios nicely.

For charms with just one store, only require "--storage-size" and DTRT.

For charms with multiple stores, require a "--store" parameter on top of
that. If it's not given, error with "This charm supports more than one
store, please specify one."

For charms without storage support, if users provide one of the storage
options, error with "Storage not supported".

And for charms that do support storage but users don't ask for it, print
something like "This charm supports storage, you can try it with
--storage-size 10G".
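The rules above are small enough to sketch directly. This is a hypothetical helper, not real Juju code; the function name and the exact error strings are mine:

```python
def resolve_storage(declared_stores, store=None, size=None):
    """Apply the proposed CLI defaulting rules.

    declared_stores: store names the charm declares in metadata.yaml.
    store / size: the hypothetical --store / --storage-size CLI options.
    Returns (store, size) to deploy with, a hint string, or None.
    """
    if not declared_stores:
        if store or size:
            raise ValueError("Storage not supported")
        return None  # charm has no storage, user asked for none
    if size is None:
        # Charm supports storage but the user didn't ask for any: hint only.
        return "This charm supports storage, you can try it with --storage-size 10G"
    if store is None:
        if len(declared_stores) == 1:
            store = declared_stores[0]  # single store: DTRT
        else:
            raise ValueError(
                "This charm supports more than one store, please specify one")
    return (store, size)
```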
Post by Mark Shuttleworth
Post by Jacek Nykis
juju deploy cs:~axwalk/postgresql --storage-type=ebs-ssd --storage-size=10G
If we come up with sensible defaults for different providers we could
make end users' experience even better by making --storage-type optional
What would sensible defaults look like for storage? The default we have
is quite sensible, you get the root filesystem :)
I was thinking about defaults for block device backed storage. We could
allow users to skip "ebs-ssd" and pick the most sensible store type for
every supported cloud. And for clouds which support just one block
storage type use that automatically without need to specify anything.
Post by Mark Shuttleworth
Post by Jacek Nykis
* it would be good to have ability to use single storage stanza in
metadata.yaml that supports all types of storage. The way it is done
now [0] means I can't test block storage hooks in my local dev
environment. It also forces end users to look for storage labels that
are supported
[0] http://paste.ubuntu.com/15414289/
I'm not sure what the issue is with this one.
If we have filesystem storage it's always at the same place.
If we have a single mounted block store, it's always at the same place.
If we can attach multiple block devices, THEN you need to handle them as
they are attached.
Can you explain the problem more clearly? We do have an issue with the
LXD provider and block devices, which we think will be resolved thanks
to some good kernel work on a range of fronts, but that can't surely be
what's driving your concerns.
It's my bad, I misunderstood how things worked, you can ignore this
point. Andrew Wilkins helpfully explained things to me earlier in this
thread (thanks Andrew)
Post by Mark Shuttleworth
Post by Jacek Nykis
* the way things are now hooks are responsible for creating filesystem
on block devices. I feel that as a charmer I shouldn't need to know that
much about storage internals. I would like to ask juju and get
preconfigured path back. Whether it's formatted and mounted block
device, GlusterFS or local filesystem it should not matter
Well, yes, that's the idea, but these things are quite subtle.
In some cases you very explicitly want the raw block. So we have to
allow that. In other cases you just want a filesystem there, and IIRC
that's the default behaviour in the common case. Finally, we have to
deal with actual network filesystems (as opposed to block devices) and I
don't think we have implemented that yet.
Sorry this was also me misunderstanding things, Andrew already clarified
them for me (thanks again)

Jacek
--
Juju mailing list
***@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
Stuart Bishop
2016-03-19 01:02:19 UTC
Permalink
Post by Mark Shuttleworth
Operational concerns
I still want 'juju-wait' as a supported, builtin command rather than
as a fragile plugin I maintain and as code embedded in Amulet that the
ecosystem team maintain. A thoughtless change to Juju's status
reporting would break all our CI systems.
Post by Mark Shuttleworth
Core Model
At the moment logging, monitoring (alerts) and metrics involve
customizing your charm to work with a specific subordinate. And at
deploy time, you of course need to deploy and configure the
subordinate, relate it etc. and things can get quite cluttered.

Could logging, monitoring and metrics be brought into the core model somehow?

eg. I attach a monitoring service such as nagios to the model, and all
services implicitly join the monitoring relation. Rather than talk
bespoke protocols, units use the 'monitoring-alert' tool to send a JSON
dict to the monitoring service (for push alerts). There is some
mechanism for the monitoring service to trigger checks remotely.
Requests and alerts go via a separate SSL channel rather than the
relation, as relations are too heavyweight to trigger several times a
second and may end up blocked by eg. other hooks running on the unit
or jujud having been killed by OOM.

Similarly, we currently handle logging by installing a subordinate
that knows how to push rotated logs to Swift. It would be much nicer
to set this at the model level, and have tools available for the charm
to push rotated logs or stream live logs to the desired logging
service. syslog would be a common approach, as would streaming stdout
or stderr.

And metrics, where a charm installs a cronjob or daemon to spit out
performance metrics as JSON dicts to a charm tool which sends them to
the desired data store and graphing systems, maybe once a day or maybe
several times a second. Rather than the current approach of assuming
statsd as the protocol and spitting out packets to an IP address
pulled from the service configuration.
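On the charm side, that might look something like the sketch below. Everything here is hypothetical: the tool name "charm-metrics-send" and the payload shape are mine, purely to illustrate the idea of handing a JSON dict to a model-level tool rather than speaking statsd directly:

```python
import json
import subprocess
import time

def emit_metrics(metrics, tool="charm-metrics-send"):
    """Serialize performance metrics as a JSON dict and pipe them to a
    (hypothetical) charm tool on stdin; the tool would forward them to
    whatever data store / graphing system the model is configured with."""
    payload = json.dumps({"timestamp": time.time(), "metrics": metrics})
    subprocess.run([tool], input=payload.encode(), check=True)
    return payload
```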
Post by Mark Shuttleworth
* modelling individual services (i.e. each database exported by the db
application)
* rich status (properties of those services and the application itself)
* config schemas and validation
* relation config
There is also interest in being able to invoke actions across a relation
when the relation interface declares them. This would allow, for example, a
benchmark operator charm to trigger benchmarks through a relation rather
than having the operator do it manually.
This is interesting. You can sort of do this already if you setup ssh
so units can run commands on each other, but network partitions are an
issue. Triggering an action and waiting on the result works around
this problem.

For failover in the PostgreSQL charm, I currently need to leave
requests in the leader settings and wait for units to perform the
requested tasks and report their results using the peer relation. It
might be easier to coordinate if the leader was able to trigger these
tasks directly on the other units.

Similarly, most use cases for charmhelpers.coordinator or the
coordinator layer would become easier. Rather than using several
rounds of leadership and peer relation hooks to perform a rolling
restart or rolling upgrade, the leader could trigger the operations
remotely one at a time via a peer relation.
Post by Mark Shuttleworth
Storage
* shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
* object storage abstraction (probably just mapping to S3-compatible APIs)
I'm interested in feedback on the operations aspects of storage. For
example, whether it would be helpful to provide lifecycle management for
storage being re-assigned (e.g. launch a new database application but reuse
block devices previously bound to an old database instance). Also, I think
the intersection of storage modelling and MAAS hasn't really been explored,
and since we see a lot of interest in the use of charms to deploy
software-defined storage solutions, this probably will need thinking and
work.
Reusing an old mount on a new unit is a common use case. Single unit
PostgreSQL is simplest here - it detects an existing database is on
the mount, and rather than recreate it fixes permissions (uids and
gids will often not match), mounts it and recreates any resources the
charm needs (such as the 'nagios' user so the monitoring checks work).
But if you deploy multiple PostgreSQL units reusing old mounts, what
do you do? At the moment, the one lucky enough to be elected master
gets used and the others destroyed.

Cassandra is problematic, as the newly provisioned units will have
different positions and ranges in the replication ring and the
existing data will usually actually belong to other units in the
service. It would be simpler to create a new cluster, then attach the
old data as an 'import' mount and have the storage hook load it into
the cluster. Which requires twice the disk space, but means you could
migrate a 10 unit Cassandra cluster to a new 5 unit Cassandra cluster.
(the charm doesn't actually do this yet, this is just speculation on
how it could be done). I imagine other services such as OpenStack
Swift would be in the same boat.
--
Stuart Bishop <***@canonical.com>
Mark Shuttleworth
2016-04-01 13:50:08 UTC
Permalink
Post by Stuart Bishop
Post by Mark Shuttleworth
Operational concerns
I still want 'juju-wait' as a supported, builtin command rather than
as a fragile plugin I maintain and as code embedded in Amulet that the
ecosystem team maintain. A thoughtless change to Juju's status
reporting would break all our CI systems.
Hmm.. I would have thought that would be a lot more reasonable now we
have status well in hand. However, the charms need to support status for
it to be meaningful to the average operator, and we haven't yet made
good status support a requirement for charm promulgation in the store.

I'll put this on the list to discuss.
Post by Stuart Bishop
Post by Mark Shuttleworth
Core Model
At the moment logging, monitoring (alerts) and metrics involve
customizing your charm to work with a specific subordinate. And at
deploy time, you of course need to deploy and configure the
subordinate, relate it etc. and things can get quite cluttered.
Could logging, monitoring and metrics be brought into the core model somehow?
eg. I attach a monitoring service such as nagios to the model, and all
services implicitly join the monitoring relation. Rather than talk
bespoke protocols, units use the 'monitoring-alert' tool to send a JSON
dict to the monitoring service (for push alerts). There is some
mechanism for the monitoring service to trigger checks remotely.
Requests and alerts go via a separate SSL channel rather than the
relation, as relations are too heavyweight to trigger several times a
second and may end up blocked by eg. other hooks running on the unit
or jujud having been killed by OOM.
Similarly, we currently handle logging by installing a subordinate
that knows how to push rotated logs to Swift. It would be much nicer
to set this at the model level, and have tools available for the charm
to push rotated logs or stream live logs to the desired logging
service. syslog would be a common approach, as would streaming stdout
or stderr.
And metrics, where a charm installs a cronjob or daemon to spit out
performance metrics as JSON dicts to a charm tool which sends them to
the desired data store and graphing systems, maybe once a day or maybe
several times a second. Rather than the current approach of assuming
statsd as the protocol and spitting out packets to an IP address
pulled from the service configuration
I'm pretty comfortable with logging, of the items in this list. The
others feel like they'd require modification of the monitoring stuff
anyhow, beyond the vanilla tools people have today. Logging is AFAICT
relatively standardised, so I can see us setting logging policy per model
or per application, and having the agents do the right thing.
Post by Stuart Bishop
Post by Mark Shuttleworth
There is also interest in being able to invoke actions across a relation
when the relation interface declares them. This would allow, for example, a
benchmark operator charm to trigger benchmarks through a relation rather
than having the operator do it manually.
This is interesting. You can sort of do this already if you setup ssh
so units can run commands on each other, but network partitions are an
issue. Triggering an action and waiting on the result works around
this problem.
For failover in the PostgreSQL charm, I currently need to leave
requests in the leader settings and wait for units to perform the
requested tasks and report their results using the peer relation. It
might be easier to coordinate if the leader was able to trigger these
tasks directly on the other units.
Yes. On peers it should be completely uncontroversial since these are
the same charm and, well, it should always work if the charm developer
tested it :)

The slightly controversial piece comes on invocation of actions across a
relation, because it starts to imply that a different charm can't be
substituted in on the other side of the relation unless it ALSO
implements the actions that this charm expects.
Post by Stuart Bishop
Similarly, most use cases for charmhelpers.coordinator or the
coordinator layer would become easier. Rather than using several
rounds of leadership and peer relation hooks to perform a rolling
restart or rolling upgrade, the leader could trigger the operations
remotely one at a time via a peer relation.
Right.

I'll take that as a +1 from you then :)
Post by Stuart Bishop
Post by Mark Shuttleworth
Storage
* shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
* object storage abstraction (probably just mapping to S3-compatible APIs)
I'm interested in feedback on the operations aspects of storage. For
example, whether it would be helpful to provide lifecycle management for
storage being re-assigned (e.g. launch a new database application but reuse
block devices previously bound to an old database instance). Also, I think
the intersection of storage modelling and MAAS hasn't really been explored,
and since we see a lot of interest in the use of charms to deploy
software-defined storage solutions, this probably will need thinking and
work.
Reusing an old mount on a new unit is a common use case. Single unit
PostgreSQL is simplest here - it detects an existing database is on
the mount, and rather than recreate it fixes permissions (uids and
gids will often not match), mounts it and recreates any resources the
charm needs (such as the 'nagios' user so the monitoring checks work).
But if you deploy multiple PostgreSQL units reusing old mounts, what
do you do? At the moment, the one lucky enough to be elected master
gets used and the others destroyed.
Cassandra is problematic, as the newly provisioned units will have
different positions and ranges in the replication ring and the
existing data will usually actually belong to other units in the
service. It would be simpler to create a new cluster, then attach the
old data as an 'import' mount and have the storage hook load it into
the cluster. Which requires twice the disk space, but means you could
migrate a 10 unit Cassandra cluster to a new 5 unit Cassandra cluster.
(the charm doesn't actually do this yet, this is just speculation on
how it could be done). I imagine other services such as OpenStack
Swift would be in the same boat.
Yes, broadly speaking it seems the semantics of the old and the new
service with the old mounts are very app specific. I don't have any
brilliant ideas for clean syntax on this front yet :)

Mark
Stuart Bishop
2016-04-02 06:21:02 UTC
Permalink
Post by Mark Shuttleworth
Post by Stuart Bishop
Post by Mark Shuttleworth
Operational concerns
I still want 'juju-wait' as a supported, builtin command rather than
as a fragile plugin I maintain and as code embedded in Amulet that the
ecosystem team maintain. A thoughtless change to Juju's status
reporting would break all our CI systems.
Hmm.. I would have thought that would be a lot more reasonable now we
have status well in hand. However, the charms need to support status for
it to be meaningful to the average operator, and we haven't yet made
good status support a requirement for charm promulgation in the store.
I'll put this on the list to discuss.
It is easier with Juju 1.24+. You check the status. If all units are
idle, you wait about 15 seconds and check again. If all units are
still idle and the timestamps haven't changed, the environment is
probably idle. And for some (all?) versions of Juju, you also need to
ssh into the units and ensure that one of the units in each service
thinks it is the leader as it can take some time for a new leader to
be elected.

Which means 'juju wait' as a plugin takes quite a while to run and
only gives a probable result, whereas if this information about the
environment was exposed it could be instantaneous and correct.
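The heuristic described above can be written as a pure check over two status samples taken ~15 seconds apart. This is a simplified sketch with a made-up data shape (unit name mapped to agent state and its "since" timestamp); real juju-wait also does the ssh leadership check:

```python
def probably_idle(sample_a, sample_b):
    """Return True if the environment is probably idle.

    Each sample maps unit-name -> (agent_state, since_timestamp), scraped
    from 'juju status'. The caller takes the two samples at least ~15s
    apart; if every unit is idle in both samples and no timestamp moved,
    the environment is probably idle.
    """
    def all_idle(sample):
        return all(state == "idle" for state, _ in sample.values())

    if not (all_idle(sample_a) and all_idle(sample_b)):
        return False
    # Idle twice in a row AND no unit's timestamp changed in between.
    return all(sample_a[u][1] == sample_b[u][1] for u in sample_a)
```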
--
Stuart Bishop <***@canonical.com>
Simon Davy
2016-04-05 10:31:12 UTC
Permalink
Lots of interesting ideas here.

Some other ideas (apologies if these have already been proposed, but I
don't *think* they have)

a) A small one - please can we have 'juju get <svc> <config>'? See
https://bugs.launchpad.net/juju-core/+bug/1423548

Maybe this is already in the config schema work, but it would *really*
help in a lot
of situations, and it seems simple?

e.g.

$ juju get my-service foo
bar

This would make me very happy :)


b) A bigger ask: DNS for units.

Provider level dns (when present) only gives machine name dns, which
is not useful when working at the model level. As an operator, I've
generally no idea which machine unit X is on, and have to go hunting
in juju status. It'd be great to be able to address individual units, both
manually when troubleshooting, and in scripts.

One way to do this might be if juju could provide a local dns resolver
as part of the controller.

e.g. if you have a model called 'bar', with service called
'foo', with 2 units, the following domains[1] could be resolved by the
controller dns resolver:

foo-0.bar
foo-1.bar

and/or

unit-foo-0.bar
unit-foo-1.bar

or even

0.foo.bar
1.foo.bar


Then tools can be configured to use this dns resolver. For example, we
have deployment servers where we manage our models from. We could add
the controller's dns here, making it easy for deployment/maintenance
scripts to target units easily.

Right now, we have to parse json output in bash from juju status to
scrape ip addresses, which is horrible[2]
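The scraping in question looks roughly like this in Python rather than bash. The key names here ('services', 'units', 'public-address') follow the Juju 1.x status layout as I remember it, and may differ by version:

```python
import json

def unit_addresses(status_json, application):
    """Extract unit-name -> public address from the output of
    'juju status --format json' for one application."""
    status = json.loads(status_json)
    units = status["services"][application]["units"]
    return {name: info.get("public-address") for name, info in units.items()}
```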

Other possibilities (warning: not necessarily a good idea)

* add this resolver into the provisioned machine configuration, so config
on the units could use these domain names.

* the controller dns resolver can forward to a specified upstream
resolver (falling back to host's resolv.conf info)
- single point of control for dns for all models in that controller
- repeatability/reliability - if upstream dns is down, controller
dns still provides local resolution, and also could cache upstream,
perhaps.

* if you ask for a service level address, rather than unit, it could
maybe return a dns round robin record. This would be useful for
internal haproxy services, for example, and could give some default
load-balancing OOTB

* would provide dns on providers that don't have native support
(like, erm, ps4.5 :)

Now, there are caveats aplenty here. We'd need an HA dns cluster, and
there's a whole bunch of security issues that would need addressing,
to be sure. And I would personally opt for an implementation that uses
proven dns technology rather than implementing a new dns
resolver/forwarder in go with a mongodb backend. But maybe that's just
me ;P


Thanks.


[1] in hindsight, I do think having a / in the unit name was not the
best option, due to its path/url issues. AIUI, internally juju uses
unit-<svc>-N as identifiers? Could this be exposed as alternate unit
names? i.e. cli/api commands could accept either?
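The mapping between the two forms is mechanical, something like (illustrative only, mirroring how the internal identifiers are described above):

```python
def unit_to_id(unit):
    """'foo/0' -> 'unit-foo-0' (the internal identifier form)."""
    app, num = unit.rsplit("/", 1)
    return "unit-%s-%s" % (app, num)

def id_to_unit(unit_id):
    """'unit-foo-0' -> 'foo/0'. Handles hyphenated application names by
    splitting the unit number off the right-hand end."""
    prefix, rest = unit_id.split("-", 1)
    assert prefix == "unit", "not a unit identifier: %s" % unit_id
    app, num = rest.rsplit("-", 1)
    return "%s/%s" % (app, num)
```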

[2] At the very least, it would be great to have a cli to get the
ip(s) of a unit, would simplify a lot of scripts. e.g.

$ juju get-ip foo/0 --private
10.0.3.24
$ juju get-ip foo/0 --public
1.2.3.4
$ juju get-ip foo --private
10.0.3.24
10.0.3.134
--
Simon
Merlijn Sebrechts
2016-04-05 19:50:11 UTC
Permalink
+1 for the DNS; We could really use that!
Narinder Gupta
2016-04-05 20:04:29 UTC
Permalink
+1 for DNS as this was requested in other community projects as well
which use Juju.

Thanks and Regards,
Narinder Gupta (PMP) ***@canonical.com
Canonical, Ltd. narindergupta [irc.freenode.net]
+1.281.736.5150 narindergupta2007[skype]

Ubuntu- Linux for human beings | www.ubuntu.com | www.canonical.com


On Tue, Apr 5, 2016 at 2:50 PM, Merlijn Sebrechts <
Post by Merlijn Sebrechts
+1 for the DNS; We could really use that!
William (Will) Forsyth
2016-03-28 23:46:52 UTC
Permalink
A feature that I think would clean up the deployment of multi-charm bundles
would be the ability to deploy directly to lxc containers without
specifying or pre-adding a machine.

For example, in a juju on maas deployment, upon charm deploy, juju would
query maas and request a new machine, but let maas choose the series. Once
provisioned, juju would then create a lxc container with the required
series for the charm being deployed. If maas reports that there are no
machines available for allocation, then it will pick a current machine
based on utilization and spawn the container there.
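The placement rule described above amounts to a small decision function. This is a hypothetical sketch of the proposal, not how Juju actually places containers, and "utilization" is left as an abstract 0..1 score:

```python
def place_container(provider_has_free_machine, machines):
    """Pick where to spawn a new container.

    If the provider (e.g. MAAS) can allocate a fresh machine, prefer it;
    otherwise fall back to the least-utilized existing machine.
    machines: dict of machine-id -> utilization score in [0, 1].
    """
    if provider_has_free_machine:
        return "new-machine"
    return min(machines, key=machines.get)
```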

This would allow for greater and more seamless homogeneity of the deployed
machines, and would help with the push for container-per-charm deployments
that will be critical in enabling live migration of lxd containers.


William Forsyth

Infrastructure Administrator
Liferay, Inc.
Enterprise. Open Source. For life.
Date: Tue, Mar 8, 2016 at 6:52 PM
Subject: Planning for Juju 2.2 (16.10 timeframe)
Hi folks
We're starting to think about the next development cycle, and gathering
priorities and requests from users of Juju. I'm writing to outline some
current topics and also to invite requests or thoughts on relative
priorities - feel free to reply on-list or to me privately.
An early cut of topics of interest is below.
*Operational concerns*
* LDAP integration for Juju controllers now we have multi-user controllers
* Support for read-only config
* Support for things like passwords being disclosed to a subset of
user/operators
* LXD container migration
* Shared uncommitted state - enable people to collaborate around changes
they want to make in a model
There has also been quite a lot of interest in log control - debug
settings for logging, verbosity control, and log redirection as a systemic
property. This might be a good area for someone new to the project to lead
design and implementation. Another similar area is the idea of modelling
machine properties - things like apt / yum repositories, cache settings
etc, and having the machine agent setup the machine / vm / container
according to those properties.
*Core Model*
* modelling individual services (i.e. each database exported by the db
application)
* rich status (properties of those services and the application itself)
* config schemas and validation
* relation config
There is also interest in being able to invoke actions across a relation
when the relation interface declares them. This would allow, for example, a
benchmark operator charm to trigger benchmarks through a relation rather
than having the operator do it manually.
*Storage*
* shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
* object storage abstraction (probably just mapping to S3-compatible APIs)
I'm interested in feedback on the operations aspects of storage. For
example, whether it would be helpful to provide lifecycle management for
storage being re-assigned (e.g. launch a new database application but reuse
block devices previously bound to an old database instance). Also, I think
the intersection of storage modelling and MAAS hasn't really been explored,
and since we see a lot of interest in the use of charms to deploy
software-defined storage solutions, this probably will need thinking and
work.
*Clouds and providers *
* System Z and LinuxONE
* Oracle Cloud
There is also a general desire to revisit and refactor the provider
interface. Now we have seen many cloud providers get done, we are in a
better position to design the best provider interface. This would be a
welcome area of contribution for someone new to the project who wants to
make it easier for folks creating new cloud providers. We also see constant
requests for a Linode provider that would be a good target for a refactored
interface.
*Usability*
* expanding the set of known clouds and regions
* improving the handling of credentials across clouds
Mark
--
Juju mailing list
https://lists.ubuntu.com/mailman/listinfo/juju
John Meinel
2016-03-31 05:50:06 UTC
Permalink
On Mar 29, 2016 3:47 AM, "William (Will) Forsyth" <
Post by William (Will) Forsyth
A feature that I think would clean up the deployment of multi-charm
bundles would be the ability to deploy directly to lxc containers without
specifying or pre-adding a machine.
I'm not sure of the details of bundles, but I believe today you can do
"juju deploy --to lxc:" (and soon it will be changing to --to lxd:).
Leaving off the machine number has Juju allocate a new machine.
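For reference, the placement syntax John describes looks roughly like this (a sketch only; as Will notes in his follow-up, the exact behaviour of the bare `lxc:` form varies between Juju versions, and the bundle snippet is illustrative):

```shell
# Place a new unit in a container on an existing machine (machine 0):
juju deploy mysql --to lxc:0

# In Juju 2.x the container type becomes "lxd":
juju deploy mysql --to lxd:0

# Bundles accept the same placement directives, e.g. in bundle.yaml:
#   mysql:
#     num_units: 1
#     to: ["lxd:0"]
```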
Post by William (Will) Forsyth
For example, in a juju on maas deployment, upon charm deploy, juju would
query maas and request a new machine, but let maas choose the series. Once
provisioned, juju would then create a lxc container with the required
series for the charm being deployed. If maas reports that there are no
machines available for allocation, then it will pick a current machine
based on utilization and spawn the container there.
This would allow for greater and more seamless homogeneity of the deployed
machines, and would help with the push for container-per-charm deployments
that will be critical in enabling live migration of lxd containers.
Unfortunately, capacity planning gets us much more into AI /
application-specific territory. What is currently idle may be so only
because production hasn't been exposed to the world yet. So all your Nova
Compute nodes look completely idle, but they'll be heavily loaded with user
VMs. Or your production database machine doesn't have load yet.
This is where Juju is *intentionally* workload agnostic and wants to make
it easy for 3rd party applications to bring their own intelligence into the
system. Things like the OpenStack Autopilot, which understands what the
actual charms and workloads are, can leverage Juju to orchestrate the
deployment.
We have had some discussions about "stacks" or charm brains, or something
along those lines to allow the charms themselves to provide understanding
of the workload back into the system. This might be something to focus on
in the 2.2 timeline. At the very least, a first step would be that when
Juju is trying to make a decision (such as placement), it could consult
registered 3rd parties to see if they have an answer for us.

John
=:->
William (Will) Forsyth
2016-03-31 17:04:24 UTC
Permalink
Post by John Meinel
I'm not sure of the details of bundles, but I believe today you can do
"juju deploy --to lxc:" (and soon it will be changing to --to lxd:) Leaving
off the machine number has Juju allocate a new machine.
At least as of beta 3 there is no --to lxc: parameter. It errors with

*error: invalid --to parameter "lxc:"*

You can deploy --to lxc (without the colon), but it just hangs on
allocating and doesn't ever put the charm anywhere, and doesn't allocate a
new machine.


As far as the capacity planning, that makes total sense.


William

William Forsyth
Infrastructure Administrator
Liferay, Inc.
Enterprise. Open Source. For life.
Samuel Cozannet
2016-04-01 16:24:25 UTC
Permalink
* Resource Management :
** GPU:
From a very operational perspective, as GPUs become more widely available
across clouds, having a way to constrain/schedule them would be
interesting.

** Mapping to constraints
When colocating services, one may want to drive Juju to make clever
decisions to make sure resources are available to them. That is to say,
expand the --to option to accept specific machine constraints (such as --to
"cpu-cores>4"), similar to cgroup constraints for containers.
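A hypothetical sketch of what such constraint-aware placement might look like (the `--to "cpu-cores>4"` spelling is Samuel's proposal, not an existing flag):

```shell
# Today: constraints apply when provisioning a *new* machine
juju deploy mysql --constraints "cpu-cores=4 mem=8G"

# Proposed (hypothetical syntax): let --to act as a filter over
# existing machines when colocating, picking one that satisfies it
juju deploy mysql --to "cpu-cores>4"
```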

* Monitoring / Logging / Support Tools:
Stuart made a good point on the relation for "support services" like
logging / monitoring. I think the monitoring tools should be clever enough
to recognise where they run and adapt, and users should be free to use
whichever they prefer. However, the subordinate relation is really
cumbersome.

As a quick win, a flag such as "juju deploy --to all <foo>" would make sense.

Alternatively, these IT tools really are attributes / meta-services of a
user's models. When one selects to run Logstash, one makes that decision
globally, for all existing and future nodes and services. Therefore:
* juju enable logstash <--model foo> <--cloud bar> --all : this deploys
logstash agents to all nodes from one or more models / cloud instances.
Furthermore, as the model expands, new units would then automagically get
the support service enabled (deploy + relate).
* juju add-relation logstash <other service or json config> : this other
command, inherited from the classic model, relates logstash either to
another charm (elasticsearch) or provides "fake relation data" to connect
to an external service that is not Juju-driven (SaaS; a proxy charm is also
possible).
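Spelled out as a hypothetical session (neither `juju enable` nor a JSON-config form of `add-relation` exists today; the command names and file are illustrative only):

```shell
# Enable logstash as a model-wide support service (proposed command)
juju enable logstash --model foo --all

# Relate it to an in-model elasticsearch as usual
juju add-relation logstash elasticsearch

# ...or feed it "fake relation data" describing an external,
# non-Juju-managed endpoint (proposed form)
juju add-relation logstash ./external-elasticsearch.json
```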

* UX
** Tags
One feature I really love on Google Cloud Platform is the ability to tag
pretty much anything and everything with my own vocabulary. I would love
the ability to tag charms with functional layers of my choice (middleware,
front end, back end...), to then be able to filter them efficiently.
** Filters
If I run a vast model with many units of many types, I would love the
ability to filter the status by my tags, and not only by name.

Best,
Sam


--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu <http://ubuntu.com> / Canonical UK LTD <http://canonical.com> / Juju
<https://jujucharms.com>
***@canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23
Evgeny Zobnitsev
2016-04-14 19:13:44 UTC
Permalink
Hi Mark,

As I promised during the MWC show to write up the ideas here, let me describe when and how YANG (RFC 6020) could be useful in the Juju world.

The Juju bundle is a nice way to model application topologies and relations - it gives the ability to generate the relational part of the configuration for the applications (e.g. what mysql table structure will be needed to make a workspace work, etc.). Juju itself gives us the ability to install/change/uninstall applications via charm hooks, running a deterministic finite automaton algorithm for it, and it all works nicely.

But if we try to look further:


What if we had a data model for the services being configured? It would allow us to abstract the configuration and, based on that model, generate the needed mapping to the actual config for the different services. Using that, we could make/model not only bundles or sets of applications, but also the topology of the set of running units (instances), which would make more complex bundles possible.


Just an idea - maybe you have some thoughts in this direction?
_________________________
Regards,
Evgeny Zobnitsev
Factor group
Moscow, Russia
phone: +74952803380 ext. 2209
mobile: +79161417041
Stuart Bishop
2016-04-28 08:35:13 UTC
Permalink
Another item I'd like to see is distribution upgrades. We now have a
lot of systems deployed with Trusty that will need to be upgraded to
Xenial in the not too distant future. For many services you would just
bring up a new service with a new name and cut over, but this is
impractical for other services, such as database shards deployed on
MAAS-provisioned hardware. Handling upgrades may be as simple as
allowing operators (or a charm action) to perform the necessary
dist-upgrade one unit at a time and having the controller notice and
cope when the unit's jujud is bounced. Not all units would be running
the same distribution release at the same time, and I'm assuming the
service is running a multi-series charm here that supports both
releases (so we don't need to worry about how to handle upgrade-charm
hooks, at least for now).
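A rough sketch of the manual, one-unit-at-a-time procedure Stuart describes, assuming a multi-series charm and a controller that copes with the agent restart (the unit names and loop are illustrative only):

```shell
# Dist-upgrade each unit of a service in turn, trusty -> xenial
for unit in mysql/0 mysql/1 mysql/2; do
    # run the release upgrade non-interactively on the unit's machine
    juju ssh "$unit" 'sudo do-release-upgrade -f DistUpgradeViewNonInteractive'
    juju ssh "$unit" 'sudo reboot' || true
    # check the unit's jujud has come back before touching the next one
    juju status "$unit"
done
```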
--
Stuart Bishop <***@canonical.com>