Discussion:
Passing a limited number of disk devices to jails
Willem Jan Withagen
2017-06-09 08:45:32 UTC
Hi,

I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that....

Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.

AND I would need gpart to be able to work on them!

Would this be possible to do with the current jail implementation on
12-CURRENT?

Thanx,
--WjW
Miroslav Lachman
2017-06-09 09:07:31 UTC
Post by Willem Jan Withagen
Hi,
I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that....
Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.
AND I would need gpart to be able to work on them!
Would this be possible to do with the current jail implementation on
12-CURRENT?
I don't think a jail will ever have access to raw / block devices. It is
disallowed by the security design.
Wouldn't it be better to use bhyve guests for this environment?
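Handing raw disks to a bhyve guest could look roughly like this (an untested
sketch; the VM name, the boot zvol, the tap interface and the UEFI firmware
path are placeholders):

# pass ada1 and ada2 straight through to the guest as virtio-blk disks
bhyve -c 2 -m 2G -A -H -P \
    -s 0,hostbridge -s 31,lpc -l com1,stdio \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    -s 2,virtio-blk,/dev/zvol/tank/ceph-vm-1-os \
    -s 3,virtio-blk,/dev/ada1 \
    -s 4,virtio-blk,/dev/ada2 \
    -s 5,virtio-net,tap0 \
    ceph-vm-1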


Miroslav Lachman
Steven Hartland
2017-06-09 09:23:08 UTC
You could do effectively this by using dedicated zfs filesystems per jail
Post by Willem Jan Withagen
Hi,
I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that....
Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.
AND I would need gpart to be able to work on them!
Would this be possible to do with the current jail implementation on
12-CURRENT?
Thanx,
--WjW
Willem Jan Withagen
2017-06-09 13:48:44 UTC
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,

That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...

I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.

--WjW
Post by Steven Hartland
Post by Willem Jan Withagen
Hi,
I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that....
Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.
AND I would need gpart to be able to work on them!
Would this be possible to do with the current jail implementation on
12-CURRENT?
Miroslav Lachman
2017-06-09 14:20:44 UTC
Post by Willem Jan Withagen
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,
That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...
I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.
I totally overlooked devd.conf in my previous reply. So maybe you
can really use devd.conf to allow access to the /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to test
with.
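Something like this, for example (the pool and volume names are made up):

# a 20 GB volume; it shows up on the host as /dev/zvol/tank/osd1
# and can be partitioned and written to like a real disk
zfs create -V 20G tank/osd1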

Miroslav Lachman
Willem Jan Withagen
2017-06-11 00:13:37 UTC
Post by Miroslav Lachman
Post by Willem Jan Withagen
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,
That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...
I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.
I totally overlooked devd.conf in my previous reply. So maybe you
can really use devd.conf to allow access to the /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to test
with.
I want the jails to look as much like a normal system as possible, and then run
the ceph-tools on them. And those tools would like to see /dev/{disk}....

Now I have found /sbin/devfs, which allows hiding/unhiding device nodes in an
already existing devfs mount.

So I can 'rule add type disk unhide' and see the disks.
Gpart can then list partitions.
But the other commands are met with an unwilling system:

***@ceph-1:/ # gpart delete -i 1 ada0
gpart: No such file or directory

So there is still some protection in place in the jail....
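For reference, the host-side commands were roughly these (the jail's devfs
path is of course specific to my setup):

# unhide all disk-type nodes in the jail's devfs instance
devfs -m /jails/ceph-1/dev rule add type disk unhide
# re-apply the mount's ruleset so already existing nodes become visible too
devfs -m /jails/ceph-1/dev rule applyset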

However, dd-ing to the device does overwrite some stuff:
after 'dd if=/dev/zero of=/dev/ada0', gpart reports a corrupt
partition table.

But I don't see any sysctl options to toggle that on or off

--WjW
Allan Jude
2017-06-11 00:41:54 UTC
Post by Willem Jan Withagen
Post by Miroslav Lachman
Post by Willem Jan Withagen
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,
That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...
I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.
I totally overlooked devd.conf in my previous reply. So maybe you
can really use devd.conf to allow access to the /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to test
with.
I want the jails to look as much like a normal system as possible, and then run
the ceph-tools on them. And those tools would like to see /dev/{disk}....
Now I have found /sbin/devfs, which allows hiding/unhiding device nodes in an
already existing devfs mount.
So I can 'rule add type disk unhide' and see the disks.
Gpart can then list partitions.
gpart: No such file or directory
So there is still some protection in place in the jail....
However, dd-ing to the device does overwrite some stuff:
after 'dd if=/dev/zero of=/dev/ada0', gpart reports a corrupt
partition table.
But I don't see any sysctl options to toggle that on or off
--WjW
To use GEOM tools like gpart, I think you'll need to unhide
/dev/geom.ctl in the jail
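Something like this from the host should do it (untested; adjust the path to
the jail's devfs mount):

devfs -m /jails/ceph-1/dev rule add path geom.ctl unhide
devfs -m /jails/ceph-1/dev rule applyset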
--
Allan Jude
Willem Jan Withagen
2017-06-12 09:48:46 UTC
Post by Allan Jude
Post by Willem Jan Withagen
Post by Miroslav Lachman
Post by Willem Jan Withagen
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,
That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...
I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.
I totally overlooked devd.conf in my previous reply. So maybe you
can really use devd.conf to allow access to the /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to test
with.
I want the jails to look as much like a normal system as possible, and then run
the ceph-tools on them. And those tools would like to see /dev/{disk}....
Now I have found /sbin/devfs, which allows hiding/unhiding device nodes in an
already existing devfs mount.
So I can 'rule add type disk unhide' and see the disks.
Gpart can then list partitions.
gpart: No such file or directory
So there is still some protection in place in the jail....
However, dd-ing to the device does overwrite some stuff:
after 'dd if=/dev/zero of=/dev/ada0', gpart reports a corrupt
partition table.
But I don't see any sysctl options to toggle that on or off
To use GEOM tools like gpart, I think you'll need to unhide
/dev/geom.ctl in the jail
Right, thanx, could very well be the case.
I'll try and post back here.

But I'll take a different approach and just enable all devices in /dev,
since I'm not really after security; I only need separate compute
spaces. And jails have the advantage over bhyve that it is easy to
modify files in the subdomains.
Restricting things afterwards might be an easier job.
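Something like this in /etc/devfs.rules is what I have in mind (untested, and
the ruleset name and number are arbitrary):

[devfsrules_jail_unhide_all=110]
add path '*' unhide

and then point the jail at it with devfs_ruleset = 110; in jail.conf. This of
course exposes every host device node to the jail, which is fine for this
test setup but not something for a production jail.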

I'm also having trouble expanding /etc/{,defaults/}devfs.rules and having
'mount -t devfs -oruleset'
pick up the changes.
Even adding an extra ruleset to /etc/defaults/devfs.rules does not
get picked up, hence my toying with /sbin/devfs.
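As far as I can tell, mount(8) never reads those rules files itself; the
ruleset first has to be loaded into the kernel, which is normally done by the
rc scripts at boot. Loading and applying one by hand with devfs(8) would look
something like this (ruleset number 100 and the jail path are arbitrary):

# load a rule into kernel ruleset 100
devfs rule -s 100 add type disk unhide
# make it the current ruleset of the jail's devfs mount and apply it
devfs -m /jails/ceph-1/dev ruleset 100
devfs -m /jails/ceph-1/dev rule applyset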

--WjW
Willem Jan Withagen
2017-06-12 23:04:46 UTC
Post by Willem Jan Withagen
Post by Allan Jude
Post by Willem Jan Withagen
Post by Miroslav Lachman
Post by Willem Jan Withagen
Post by Steven Hartland
You could do effectively this by using dedicated zfs filesystems per jail
Hi Steven,
That is how I'm going to do it, when nothing else works.
But then I don't get to test the part of building the ceph-cluster from
raw disk...
I was more thinking along the lines of tinkering with the devd.conf or
something. And would appreciate opinions on how to (not) do it.
I totally overlooked devd.conf in my previous reply. So maybe you
can really use devd.conf to allow access to the /dev/adaX devices, or you can
use a ZFS zvol if you have a big pool and need some smaller devices to test
with.
I want the jails to look as much like a normal system as possible, and then run
the ceph-tools on them. And those tools would like to see /dev/{disk}....
Now I have found /sbin/devfs, which allows hiding/unhiding device nodes in an
already existing devfs mount.
So I can 'rule add type disk unhide' and see the disks.
Gpart can then list partitions.
gpart: No such file or directory
So there is still some protection in place in the jail....
However, dd-ing to the device does overwrite some stuff:
after 'dd if=/dev/zero of=/dev/ada0', gpart reports a corrupt
partition table.
But I don't see any sysctl options to toggle that on or off
To use GEOM tools like gpart, I think you'll need to unhide
/dev/geom.ctl in the jail
Right, thanx, could very well be the case.
I'll try and post back here.
But I'll take a different approach and just enable all devices in /dev,
since I'm not really after security; I only need separate compute
spaces. And jails have the advantage over bhyve that it is easy to
modify files in the subdomains.
Restricting things afterwards might be an easier job.
I'm also having trouble expanding /etc/{,defaults/}devfs.rules and having
'mount -t devfs -oruleset'
pick up the changes.
Even adding an extra ruleset to /etc/defaults/devfs.rules does not
get picked up, hence my toying with /sbin/devfs.
Right,
That will help.

The next challenge is to allow zfs to create a filesystem on a partition.

***@ceph-1:/ # gpart destroy -F ada8
ada8 destroyed
***@ceph-1:/ # gpart create -s GPT ada8
ada8 created
***@ceph-1:/ # gpart add -t freebsd-zfs -a 1M -l osd-disk-1 /dev/ada8
ada8p1 added
***@ceph-1:/ # zpool create -f osd.1 /dev/ada8p1
cannot create 'osd.1': permission denied
***@ceph-1:/ #
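If creating the pool from inside the jail stays blocked, a possible fallback
(untested here, and it sidesteps the in-jail 'zpool create' rather than
fixing it) is to create the pool on the host and delegate it to the jail via
the jailed property and 'zfs jail':

# on the host: create the pool, mark it jailable, hand it to the running jail
zpool create osd.1 /dev/ada8p1
zfs set jailed=on osd.1
zfs jail ceph-1 osd.1

The jail would also need allow.mount, allow.mount.zfs and enforce_statfs = 1
in jail.conf before it can mount and manage the dataset.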

--WjW
cstanley
2018-02-27 04:11:17 UTC
Sorry for the extremely late reply!

I am interested in any progress you have made on this front.

I have been playing around with bhyve - I am able to get guests up and
running, but I am having trouble mapping the raw block devices (/dev/ada5
etc.) to the VM.

This prompted me to mess around with jails as an alternative, and I came
across this thread :)



--
Sent from: http://freebsd.1045724.x6.nabble.com/freebsd-jail-f5721530.html
Willem Jan Withagen
2018-03-07 14:58:05 UTC
Post by cstanley
Sorry for the extremely late reply!
I am interested in any progress you have made on this front.
I have been playing around with bhyve - I am able to get guests up and
running, but I am having trouble mapping the raw block devices (/dev/ada5
etc.) to the VM.
This prompted me to mess around with jails as an alternative, and I came
across this thread :)
I got side-tracked by different problems...
So nothing really came out of this.

Other than that: configuring this is not easy. I tried several things and
got nowhere, beyond just disabling it all and getting everything into a jail.

--WjW

Konstantin Belousov
2017-06-09 10:04:26 UTC
Post by Willem Jan Withagen
Hi,
I'm writing/building a test environment for my ceph cluster, and I'm
using jails for that....
Now one of the things I'd be interested in, is to pass a few raw disks
to each of the jails.
So jail ceph-1 gets /dev/ada1 and /dev/ada2 (and partitions), ceph-2
gets /dev/ada2 and /dev/ada3.
AND I would need gpart to be able to work on them!
Would this be possible to do with the current jail implementation on
12-CURRENT?
Read about devfs(8) and devfs.conf(5), and follow the further references from there.
In short, devfs allows specifying rules for node visibility, and the
rules are applied per mount. Since jails use a per-jail devfs mount, you
get a dedicated namespace for the devfs nodes.
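For example, a minimal ruleset in /etc/devfs.rules could look like this (the
ruleset number, the jail name and the disk names are only an illustration):

[devfsrules_jail_ceph1=100]
add include $devfsrules_jail
add path 'ada1*' unhide
add path 'ada2*' unhide
add path geom.ctl unhide

and the jail's entry in jail.conf would reference it:

ceph-1 {
        path = "/jails/ceph-1";
        mount.devfs;
        devfs_ruleset = 100;
        # plus the usual hostname, networking and exec.* settings
}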