Discussion: Args for lxd init via script
Mark Constable
2017-05-19 06:39:12 UTC
I'm trying to automate a simple setup of LXD via a bash script and I'm
not sure of the best way to provide some preset arguments to "lxd init",
if at all possible. Specifically...

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? yes
Do you want to configure the LXD bridge (yes/no)? no

I'm hoping someone here has already been down this path, any suggestions?
Fajar A. Nugraha
2017-05-19 07:06:53 UTC
Post by Mark Constable
I'm trying to automate a simple setup of LXD via a bash script and I'm
not sure of the best way to provide some preset arguments to "lxd init",
if at all possible. Specifically...
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? yes
Do you want to configure the LXD bridge (yes/no)? no
I'm hoping someone here has already been down this path, any suggestions?
Did you try "lxd --help"?

...
Init options:
--auto
Automatic (non-interactive) mode

Init options for non-interactive mode (--auto):
--network-address ADDRESS
Address to bind LXD to (default: none)
--network-port PORT
Port to bind LXD to (default: 8443)
--storage-backend NAME
Storage backend to use (btrfs, dir, lvm or zfs, default: dir)
--storage-create-device DEVICE
Setup device based storage using DEVICE
--storage-create-loop SIZE
Setup loop based storage with SIZE in GB
--storage-pool NAME
Storage pool to use or create
--trust-password PASSWORD
Password required to add new clients
...
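For example, the prompts you listed should map onto those flags roughly
like this (untested sketch with placeholder values; the loop size is in GB):

    lxd init --auto \
        --storage-backend zfs \
        --storage-pool mypool \
        --storage-create-loop 5 \
        --network-address 0.0.0.0 \
        --network-port 8443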
--
Fajar
Mark Constable
2017-05-21 05:26:24 UTC
Post by Fajar A. Nugraha
Post by Mark Constable
I'm trying to automate a simple setup of LXD via a bash script and
I'm not sure of the best way to provide some preset arguments to "lxd
init", if at all possible. Specifically...
Did you try "lxd --help"?
Sigh, not for the last year or two, thanks Fajar.

Now for an almost related question. I've got a mostly working setup script
for my needs but I am wondering if there is a better way to achieve the
same results...

https://github.com/netserva/sh/blob/master/bin/setup-lxd

I want to provide a fixed amount of disk space per container, visible
from within the container. The only way I can find to do that is to use a
zfs pool per container, and the best way to define that pool is to provide
a container-specific profile per container.

My question, is it reasonable to provide a separate profile and zfs pool
per container and is there a better or more efficient way to get the same
end result?
Jeff Kowalczyk
2017-05-21 06:02:34 UTC
Will disk limits work for you?

https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
https://github.com/lxc/lxd/blob/master/doc/containers.md#type-disk
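If I remember the commands from those pages correctly, it goes something
like this (needs a zfs or btrfs backend; "mycontainer" and the 5GB figure
are just placeholders):

    lxc config device set mycontainer root size 5GB    # per container
    lxc profile device set default root size 5GB       # or on a profile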
Post by Mark Constable
I want to provide a definitive amount of disk space per container, visible
within the container, and the only way I can find to do that is to use a
zfs pool per container and the best way to define that pool is to provide
a container specific profile per container.
My question, is it reasonable to provide a separate profile and zfs pool
per container and is there a better or more efficient way to get the same
end result?
Mark Constable
2017-05-21 07:51:05 UTC
Post by Jeff Kowalczyk
Post by Mark Constable
My question, is it reasonable to provide a separate profile and
zfs pool per container and is there a better or more efficient way
to get the same end result?
Will disk limits work for you?
https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
https://github.com/lxc/lxd/blob/master/doc/containers.md#type-disk
Thanks for the suggestion and links. I'll have to test whether the
"size" limit is actually reflected within the container so that df
and other tools can be used to monitor disk usage.

For example, PHP's disk_total_space() and disk_free_space() functions
do work accurately with a zfs pool, and since I am working towards
an LXD plugin for my hosting control panel I really need disk limits to
work similarly to a VPS or Xen VM.
Fajar A. Nugraha
2017-05-21 09:39:26 UTC
Post by Mark Constable
Post by Jeff Kowalczyk
Post by Mark Constable
My question, is it reasonable to provide a separate profile and
zfs pool per container and is there a better or more efficient way
to get the same end result?
Will disk limits work for you?
https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
https://github.com/lxc/lxd/blob/master/doc/containers.md#type-disk
Thanks for the suggestion and links. I'll have to test whether the
"size" limit is actually reflected within the container so that df
and other tools can be used to monitor disk usage.
It works for zfs. LXD would set the zfs quota property, which would be
displayed correctly in tools like df and such.

btrfs has a similar limit setup, but it does NOT show in df.
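A quick way to check, assuming the usual POOL/containers/NAME dataset
layout (the pool and container names below are placeholders):

    # on the host: the quota LXD set on the container's dataset
    zfs get quota lxd-pool/containers/mycontainer
    # inside the container: df should report that quota as the fs size
    lxc exec mycontainer -- df -h /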
--
Fajar
Post by Mark Constable
For example, PHP's disk_total_space() and disk_free_space() functions
do work accurately with a zfs pool and seeing that I am working towards
a LXD plugin for my hosting control panel I really need disk limits to
work similar to a VPS or Xen VM.
gunnar.wagner
2017-05-21 13:16:45 UTC
just for my understanding ... you want to monitor disk usage on the LXD
host, right?
Post by Fajar A. Nugraha
Post by Mark Constable
Post by Jeff Kowalczyk
Post by Mark Constable
My question, is it reasonable to provide a separate profile and
zfs pool per container and is there a better or more efficient way
to get the same end result?
Will disk limits work for you?
https://stgraber.org/2016/03/26/lxd-2-0-resource-control-412/
https://github.com/lxc/lxd/blob/master/doc/containers.md#type-disk
Thanks for the suggestion and links. I'll have to test whether the
"size" limit is actually reflected within the container so that df
and other tools can be used to monitor disk usage.
it works for zfs. lxd would set zfs quota property, which would be
displayed correctly in tools like df and such.
btrfs have similar limit setup, but it does NOT show in df.
Post by Mark Constable
For example, PHP's disk_total_space() and disk_free_space() functions
do work accurately with a zfs pool and seeing that I am working towards
a LXD plugin for my hosting control panel I really need disk limits to
work similar to a VPS or Xen VM.
--
Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang
District, 201112 Shanghai, P.R. CHINA
mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
Mark Constable
2017-05-21 15:05:37 UTC
Post by gunnar.wagner
just for my understanding ... you want to monitor disk usage on the
LXD host, right?
Yes but I also want the current disk usage to be available inside the
container so that, for instance, df returns realistic results.

Using a zfs pool per container works just fine for this purpose but I
am concerned that having potentially many 100s of zfs pools per server
may not be very efficient. This sums up what I am after...
Post by gunnar.wagner
Post by Mark Constable
For example, PHP's disk_total_space() and disk_free_space()
functions do work accurately with a zfs pool and seeing that I am
working towards a LXD plugin for my hosting control panel I really
need disk limits to work similar to a VPS or Xen VM.
IOW if I supply 5GB of space to a paying client I need to have a way
for both them and myself to easily monitor that disk space. It's the
one thing that has stopped me from using LXD for real. Well that and
not having an open source PHP control panel that runs on Ubuntu servers.
Fajar A. Nugraha
2017-05-22 02:28:28 UTC
Post by Mark Constable
Post by gunnar.wagner
just for my understanding ... you want to monitor disk usage on the
LXD host, right?
Yes but I also want the current disk usage to be available inside the
container so that, for instance, df returns realistic results.
Have you tried lxd with zfs?
Post by Mark Constable
Using a zfs pool per container works just fine for this purpose but I
am concerned that having potentially many 100s of zfs pools per server
may not be very efficient. This sums up what I am after...
Did you mean zfs dataset?

Using separate POOL per container should be possible in newer lxd (e.g. in
xenial-backports). However that would also negate some benefits of using
zfs, as you'd need to have separate block device/loopback files for each
pool.

Using a default zfs pool, and having separate DATASET (or to be more
accurate, filesystem) per container, is the default setup. Which would
provide correct disk usage statistic (e.g. for "df" and such). And it's
perfectly normal to have several hundred or thousand dataset (which would
include snapshots as well) on a single pool.
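As a rough illustration of what that looks like on the host (the pool
name here is just an example):

    # one dataset per container under POOL/containers
    zfs list -r -d 2 lxd-pool
    # LXD sets a per-dataset quota when you give the root disk a "size"
    zfs get -r -t filesystem quota lxd-pool/containers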
Post by Mark Constable
Post by gunnar.wagner
Post by Mark Constable
For example, PHP's disk_total_space() and disk_free_space()
functions do work accurately with a zfs pool and seeing that I am
working towards a LXD plugin for my hosting control panel I really
need disk limits to work similar to a VPS or Xen VM.
IOW if I supply 5GB of space to a paying client I need to have a way
for both them and myself to easily monitor that disk space. It's the
one thing that has stopped me from using LXD for real. Well that and
not having an open source PHP control panel that runs on Ubuntu servers.
IIRC ubuntu's roadmap is to integrate lxd (and zfs) into openstack (which
should have lots of control panels already).
In the meantime, your best bet is probably to create your own (possibly based
on lxd-webui).
--
Fajar
Mark Constable
2017-05-22 03:25:09 UTC
Post by Fajar A. Nugraha
Post by Mark Constable
Yes but I also want the current disk usage to be available inside
the container so that, for instance, df returns realistic results.
Have you tried lxd with zfs?
Yes, zfs (pool per container) is what I am currently using here...

https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd
Post by Fajar A. Nugraha
Did you mean zfs dataset?
Is that the same thing as a "volume" (as seen by lxd) ?

I would normally use btrfs but it can't provide usage statistics.
Post by Fajar A. Nugraha
Using separate POOL per container should be possible in newer lxd
(e.g. in xenial-backports). However that would also negate some
benefits of using zfs, as you'd need to have separate block
device/loopback files for each pool.
That's what I suspect, not the best approach.

Besides, I am pretty sure that the performance penalty of using
loopback files instead of block devices would get worse in large numbers.
Post by Fajar A. Nugraha
Using a default zfs pool, and having separate DATASET (or to be more
accurate, filesystem) per container, is the default setup. Which
would provide correct disk usage statistic (e.g. for "df" and such).
I did try something with "volumes" but I got the impression it did not
seem to provide correct "df" results but I was floundering around just
trying to get anything to work (with df) at the time.
Post by Fajar A. Nugraha
And it's perfectly normal to have several hundred or thousand dataset
(which would include snapshots as well) on a single pool.
And I'd imagine copying and moving containers would make more sense.
Post by Fajar A. Nugraha
IIRC ubuntu's roadmap is to integrate lxd (and zfs) into openstack
(which should have lots of control panels already).
But unfortunately not in PHP so I can't deploy and extend whatever
control panels openstack offers. As far as I can see, one needs a
minimum of 5 servers just to get started with openstack so that is
180 degrees away from where I am going (super lightweight and simple).
Post by Fajar A. Nugraha
In the mean time, your best bet is probably create your own (possibly
based on lxd-webui).
Thanks for the hint. I'm not interested in developing in Angular but
I should be able to get some frontend ideas. Once I have a basic bash
script setup working then I'll create a PHP plugin for my framework.
Fajar A. Nugraha
2017-05-22 03:55:47 UTC
Post by Mark Constable
Post by Fajar A. Nugraha
Post by Mark Constable
Yes but I also want the current disk usage to be available inside
the container so that, for instance, df returns realistic results.
Have you tried lxd with zfs?
Yes, zfs (pool per container) is what I am currently using here...
https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd
Ah, so you create a new storage profile, with its own pool, for each plan?

from your script:
lxc storage create "zfs$_HOST$VSIZE" zfs size=$VSIZE

That would create a zfs pool backed by a loopback file under
/var/lib/lxd/disks/. NOT a good setup, performance-wise.
Post by Mark Constable
Post by Fajar A. Nugraha
Did you mean zfs dataset?
Is that the same thing as a "volume" (as seen by lxd) ?
A zfs dataset is the equivalent of a btrfs subvolume.

Post by Mark Constable
Post by Fajar A. Nugraha
Using a default zfs pool, and having separate DATASET (or to be more
accurate, filesystem) per container, is the default setup. Which
would provide correct disk usage statistic (e.g. for "df" and such).
I did try something with "volumes" but I got the impression it did not
seem to provide correct "df"
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
Post by Mark Constable
results but I was floundering around just
trying to get anything to work (with df) at the time.
Post by Fajar A. Nugraha
And it's perfectly normal to have several hundred or thousand dataset
(which would include snapshots as well) on a single pool.
Post by Mark Constable
And I'd imagine copying and moving containers would make more sense.
Correct.
Simply create ONE storage pool, the default (part of lxd init). If you want
to create multiple profiles, attach that same pool to each profile. Then for
each container, lxd will automatically create a dataset for that container.
And when there's only one pool, container creation (from image), snapshot,
and copy would be instantaneous.

If you want better performance, create the pool manually using a block device
(preferably a disk/partition, but an LV should also work), and pass the name
of the pool to lxd init.
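Something like this, for example (untested; the device, pool and profile
names are placeholders, and IIRC lxd init registers the pool as "default"):

    # one zpool on a real block device, created once
    zpool create lxd-pool /dev/sdb
    lxd init --auto --storage-backend zfs --storage-pool lxd-pool

    # extra profiles just reference that same pool; each container
    # still gets its own dataset, with "size" becoming its zfs quota
    lxc profile create plan-5gb
    lxc profile device add plan-5gb root disk path=/ pool=default size=5GB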
--
Fajar
gunnar.wagner
2017-05-22 06:09:04 UTC
Post by Fajar A. Nugraha
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
isn't btrfs subvolume usage /path/to/subvolume the tool to get the
correct usage data?
gunnar.wagner
2017-05-22 06:29:32 UTC
sorry I meant btrfs fi usage [/path/to/filesystem]
Post by gunnar.wagner
Post by Fajar A. Nugraha
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
isn't btrfs subvolume usage /path/to/subvolume the tool to get
the correct usage data?
--
Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang
District, 201112 Shanghai, P.R. CHINA
mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
Fajar A. Nugraha
2017-05-22 06:35:19 UTC
Post by gunnar.wagner
Post by Fajar A. Nugraha
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
isn't btrfs subvolume usage /path/to/subvolume the tool to get the
correct usage data?
The requirement was that "df on the container must return the correct
output" w.r.t. size and usage.
--
Fajar
gunnar.wagner
2017-05-22 06:41:03 UTC
Post by Fajar A. Nugraha
Post by gunnar.wagner
Post by Fajar A. Nugraha
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
isn't btrfs subvolume usage /path/to/subvolume the tool to
get the correct usage data?
The requirement was that "df on the container must return the correct
output" w.r.t. size and usage.
yes, I saw that. Just thought as long as you can get reliable usage data
one wouldn't mind the particular tool all too much.
Fajar A. Nugraha
2017-05-22 07:16:08 UTC
Post by gunnar.wagner
Post by Fajar A. Nugraha
subvolume with btrfs does not provide correct df.
zfs dataset provide correct df.
isn't btrfs subvolume usage /path/to/subvolume the tool to get the
correct usage data?
The requirement was that "df on the container must return the correct
output" w.r.t. size and usage.
yes, I saw that. Just thought as long as you can get reliable usage data
one wouldn't mind the particular tool all too much.
It's kind of similar to when people read CPU and memory usage. With
cgroups (which is how lxc limits and accounts for resource usage), the
correct place to look is /sys/fs/cgroup. Yet people (and the tools they use,
like htop) continue to look at "traditional" places (e.g. /proc/cpuinfo,
/proc/meminfo, etc.). And people complain because "top" gives "incorrect"
results inside a container. Thus lxcfs was created to satisfy that
requirement.
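For example, inside a container (paths assume cgroup v1):

    cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # the real cgroup limit
    grep MemTotal /proc/meminfo   # only matches that limit when lxcfs is mounted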
--
Fajar
Mark Constable
2017-05-27 05:04:17 UTC
Just to complete this thread and kind of mark it [SOLVED]: I got
back to getting this script to 99% working after losing my entire
primary BTRFS drive because a typo set my boot partition to
"zfs_volume" (yikes!)

https://raw.githubusercontent.com/netserva/sh/master/bin/setup-lxd

I know to the experts this is probably brainlessly simple but if
I had a working example like this months ago it would have saved
me a man-week of rtfm and testing... and a complete OS reinstall.

Thread summary and solution: lxd init itself accepts most of the
useful arguments, so there is no need to set them separately in a script...

lxd init --auto \
--network-address 12.34.56.78 \
--network-port 8443 \
--storage-backend zfs \
--storage-create-loop 50 \
--storage-pool lxd-pool \
--trust-password changeme
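
A couple of quick sanity checks afterwards (assuming a recent LXD with
the storage API):

    lxc storage list    # the new pool should show up with the zfs driver
    zpool list          # and the loop-backed zpool should exist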
