Discussion: access to snapshots from within the containers
Michel Jansens
2017-06-13 12:37:59 UTC
Hi all,

I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
I’m trying to access the (ZFS) snapshots data from within containers.

I’ve shared the “.zfs/snapshot” directory with the associated container like this:
lxc config device add obliging-panda snapshot disk path=/snapshot source=/var/lib/lxd/containers/obliging-panda.zfs/.zfs/snapshot

From inside the container, I see the list of snapshots:

root@obliging-panda:/snapshot# ls -l /snapshot/
total 2
drwxr-xr-x 4 root root 5 May 23 09:12 snapshot-2017_06_12_08h56
drwxr-xr-x 4 root root 5 May 23 09:12 snapshot-2017_06_13_09h06
drwxr-xr-x 4 root root 5 May 23 09:12 snapshot-abcd
dr-xr-xr-x 1 nobody nogroup 0 Jun 13 07:11 snapshot-newsnap
drwxr-xr-x 4 root root 5 May 23 09:12 snapshot-test

But they all look inaccessible:
root@obliging-panda:/snapshot# ls -l /snapshot/snapshot-newsnap/rootfs
ls: cannot access '/snapshot/snapshot-newsnap/rootfs': Object is remote



They stay like that until you list them on the host:

ls /var/lib/lxd/containers/obliging-panda.zfs/.zfs/snapshot/snapshot-newsnap/rootfs/

Then they appear in the container:

root@obliging-panda:/snapshot# ls -l snapshot-newsnap/rootfs
total 99
drwxr-xr-x 2 root root 173 Jun 12 12:01 bin
drwxr-xr-x 3 root root 3 May 16 14:19 boot
drwxr-xr-x 5 root root 91 May 16 14:18 dev
...

The funny thing is that this same weird behaviour happened a long time ago in Solaris zones, so I imagine it has to do with ZFS.




Is there another, more “standard” way to access snapshots?
I saw there is a /snap (empty) directory in the containers. Is it meant for accessing snapshots? If so, how do you get them mounted?

Sorry if there is something obvious I’m missing. I’m new to Ubuntu/LXD (coming from Solaris & SmartOS zones).

Thanks.

Michel
gunnar.wagner
2017-06-14 07:41:27 UTC
Not directly related to your snapshot issue, but still maybe a good-to-know fact.
Post by Michel Jansens
I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
If you want the most recent (yet regarded as stable for production) version of LXD on an Ubuntu 16.04 host, you'd install it from the xenial-backports sources:

sudo apt install -t xenial-backports lxd lxd-client

This gives you 2.13 at this point in time. I am not really sure what the lxd-client package does exactly (or which features you are missing if you don't have it), but it was recommended somewhere to get it as well.



-

Gunnar Wagner | Yongfeng Village Group 12 #5, Pujiang Town, Minhang
District, 201112 Shanghai, P.R. CHINA
mob +86.159.0094.1702 | skype: professorgunrad | wechat: 15900941702
Michel Jansens
2017-06-14 08:21:46 UTC
Hi Gunnar,

Thanks for your comment; it brings up some issues that are not clear to me:
I'm looking to build a production environment based on Ubuntu servers with ZFS storage and LXD (a similar architecture to what I have now on SmartOS).
I intend to buy Ubuntu server licences with support.
I understand that version 2.0.9 is not the latest version available upstream, but what I don't get is whether I will get support from Canonical if I use a more recent version.
If Canonical offers LXD 2.0.x in 16.04 LTS, maybe it is for stability reasons?

Thank you for any information on this.

Cheers,

Michel
Fajar A. Nugraha
2017-06-14 10:21:22 UTC
Post by Michel Jansens
I understand that version 2.0.9 is not the latest version available upstream, but what I don't get is whether I will get support from Canonical if I use a more recent version.
If Canonical offers LXD 2.0.x in 16.04 LTS, maybe it is for stability reasons?
https://help.ubuntu.com/community/UbuntuBackports

Personally, I use LXD from xenial-backports to get the multiple storage pool feature.
--
Fajar
Stéphane Graber
2017-06-14 17:10:05 UTC
Post by gunnar.wagner
Not directly related to your snapshot issue, but still maybe a good-to-know fact.
Post by Michel Jansens
I’m busy discovering LXD v2.0.9 on Ubuntu 16.04
If you want the most recent (yet regarded as stable for production) version of LXD on an Ubuntu 16.04 host, you'd install it from the xenial-backports sources:
sudo apt install -t xenial-backports lxd lxd-client
This gives you 2.13 at this point in time. I am not really sure what the lxd-client package does exactly (or which features you are missing if you don't have it), but it was recommended somewhere to get it as well.
Please don't tell people to do that unless they understand the implications!

Doing the above will move your system from the LXD LTS branch (2.0.x) to
the LXD feature branch. Downgrading isn't possible, so once someone does
that, there's no going back.

The LXD LTS branch (2.0.x) is supported for 5 years and only gets
bugfixes and security updates. This is typically recommended for
production environments where new features are considered a risk rather
than benefit.

The LXD feature branch (currently at 2.14) is updated monthly, is only supported until the next release is out, and will receive new features which may require user intervention to set up after upgrade.
--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
mjansens
2017-06-15 05:58:33 UTC
Hi,

Thank you Stéphane for this clarification.
I'll indeed try to stick with the LTS version if I can. The snapshot glitch has an easy workaround: I just need to do an 'ls' of the new snapshot contents on the host (it can even happen in a cron job). And anyway, nobody said this issue was fixed in later updates...
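Something like this on the host would probably do it (an untested sketch; the path is the one from my first mail):

# host root crontab entry: list every snapshot's rootfs hourly so ZFS automounts them
0 * * * * ls /var/lib/lxd/containers/obliging-panda.zfs/.zfs/snapshot/*/rootfs > /dev/null 2>&1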

Where I might get stuck is in the network part: I will need at some point to lock some containers in specific VLANs. I more or less have gathered from various info on the web that LXD 2.0.x networking is limited to a simple bridge (my current config) or the standard NAT.


Thanks,

Michel
Fajar A. Nugraha
2017-06-15 06:06:30 UTC
Post by mjansens
Where I might get stuck is in the network part: I will need at some point to lock some containers in specific VLANs. I more or less have gathered from various info on the web that LXD 2.0.x networking is limited to a simple bridge (my current config)
A simple bridge, yes, but not just ONE bridge. You can always create multiple bridges (e.g. a bridge on top of a VLAN on top of a trunk) and assign the appropriate bridge to each container. It's quite a common setup for other virtualization tools as well (e.g. Xen, KVM).
--
Fajar
Stéphane Graber
2017-06-15 17:13:03 UTC
Post by mjansens
Hi,
Thank you Stéphane for this clarification.
I'll indeed try to stick with the LTS version if I can. The snapshot glitch has an easy workaround: I just need to do an 'ls' of the new snapshot contents on the host (it can even happen in a cron job). And anyway, nobody said this issue was fixed in later updates...
Yeah, I don't expect this to be any different on the LXD feature branch.
This is internal ZFS behavior, and short of having LXD clone every snapshot and mount the resulting clone, I can't think of another way to easily expose that data.
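Roughly, and only as a sketch (dataset names depend on your storage setup; here I'm assuming a ZFS pool called "lxd" with the usual containers/<name> layout), the manual equivalent would be:

# clone the snapshot into a temporary dataset and mount it somewhere readable
zfs clone lxd/containers/obliging-panda@snapshot-newsnap lxd/newsnap-clone
zfs set mountpoint=/mnt/newsnap-clone lxd/newsnap-clone
# ...share /mnt/newsnap-clone with the container as a disk device...
# when done, unmount and destroy the clone
zfs destroy lxd/newsnap-clone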
Post by mjansens
Where I might get stuck is in the network part: I will need at some point to lock some containers in specific VLANs. I more or less have gathered from various info on the web that LXD 2.0.x networking is limited to a simple bridge (my current config) or the standard NAT.
LXD 2.0.x doesn't have an API to let you define additional bridges.

There's however nothing preventing you from defining additional bridges
at the system level and then telling LXD to use them.
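As a rough sketch (assuming a bridge named br-vlan100 already exists at the system level, and that your default profile carries the usual eth0 nic), a dedicated profile is one way to wire that up:

# copy the default profile and point its nic at the system-level bridge
lxc profile copy default vlan100
lxc profile device remove vlan100 eth0
lxc profile device add vlan100 eth0 nic nictype=bridged parent=br-vlan100
# containers launched with -p vlan100 then get a veth on br-vlan100
lxc launch ubuntu:16.04 c1 -p vlan100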
--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
Michel Jansens
2017-06-16 10:01:56 UTC
Thanks a lot Stéphane for this information,

I succeeded in attaching a bridge device for a specific VLAN following your advice from https://github.com/lxc/lxd/issues/2551
The command I used is: lxc config device add welcome-lemur eth1 nic nictype=macvlan parent=brvlan3904 name=eth1

In /etc/network/interfaces I added:

#vlan 3904 interface on enp1s0f0
auto vlan3904
iface vlan3904 inet manual
vlan_raw_device enp1s0f0
#add a bridge for vlan3904
auto brvlan3904
iface brvlan3904 inet manual
bridge_ports vlan3904
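To activate these without rebooting, something like the following should work (assuming the vlan and bridge-utils packages are installed):

sudo ifup vlan3904
sudo ifup brvlan3904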


I managed to add the brvlan3904 bridge to multiple containers, but this doesn't create an interface for each container in the brvlan3904 bridge, and I don't know what the security consequences are.

Is it OK like this?


Alternatively, to mimic how the lxc br0 bridge looks (one interface for each container with vethXXXXXX-like names), I tried to add more ports to the bridge, with dummy interfaces:

ip link add welcomelemur type dummy
brctl addif brvlan3904 welcomelemur
ifconfig welcomelemur up
lxc config device add welcome-lemur eth1 nic nictype=macvlan parent=brvlan3904 name=eth1

But this gave me: error: Failed to create the new macvlan interface: exit status 2
I tried using nictype=veth instead of macvlan but got 'error: Bad nic type: veth'

How should I do this properly?



I must say, what I'd really like is a way to do networking like I used to in Solaris 10 with "shared IP interfaces":
- the network interface is created on the host (one for each container), like eth0:1, eth0:2, ...
- the container sees the interface in ifconfig but cannot change the IP address, netmask or anything else.
Some apps don't work (e.g. tcpdump needs promiscuous mode), but nobody can simply change the IP from within the container (maybe this can be prevented in LXC, but I'm not experienced enough yet to know how).




Thanks for any additional information


—

Michel
Fajar A. Nugraha
2017-06-16 11:14:19 UTC
Post by Michel Jansens
Thanks a lot Stéphane for this information,
I succeeded in attaching a bridge device for a specific VLAN following your advice from https://github.com/lxc/lxd/issues/2551
The command I used is: lxc config device add welcome-lemur eth1 nic nictype=macvlan parent=brvlan3904 name=eth1
#vlan 3904 interface on enp1s0f0
auto vlan3904
iface vlan3904 inet manual
vlan_raw_device enp1s0f0
#add a bridge for vlan3904
auto brvlan3904
iface brvlan3904 inet manual
bridge_ports vlan3904
I managed to add the brvlan3904 bridge to multiple containers, but this doesn't create an interface for each container in the brvlan3904 bridge,
That's what macvlan does. It works for some use cases (and can be easier, since you DON'T need to create a bridge), but it can cause some problems (e.g. the host can't connect to a container's macvlan interface).
Post by Michel Jansens
and I don’t know what the security consequences are

Is This OK like this?
Alternatively, to mimic how the lxc br0 bridge looks (one interface for each container with vethXXXXXX-like names), I tried to add more ports to the bridge, with dummy interfaces:
ip link add welcomelemur type dummy
brctl addif brvlan3904 welcomelemur
ifconfig welcomelemur up
lxc config device add welcome-lemur eth1 nic nictype=macvlan parent=brvlan3904 name=eth1
But this gave me: error: Failed to create the new macvlan interface: exit status 2
I tried using nictype=veth instead of macvlan but got 'error: Bad nic type: veth'
How should I do this properly?
Did you want "nictype=bridged"?

https://github.com/lxc/lxd/blob/master/doc/containers.md#type-nic
--
Fajar
Michel Jansens
2017-06-16 13:45:08 UTC
Thanks a lot Fajar,

I did :
lxc config device add welcome-lemur eth1 nic nictype=bridged parent=brvlan3904 name=eth1
And ‘brctl show' shows the interface ‘veth41aa07e1’ was added to the brvlan3904 bridge.

What I don't get is where to find the documentation for this. I thought I had to look in "man lxc.container.conf" but I don't find any reference to the network type 'bridged' (I found macvlan, veth, …).


But it works.

Thanks

Michel
Fajar A. Nugraha
2017-06-16 15:23:30 UTC
Post by Michel Jansens
Thanks a lot Fajar,
lxc config device add welcome-lemur eth1 nic nictype=bridged parent=brvlan3904 name=eth1
And 'brctl show' shows the interface 'veth41aa07e1' was added to the brvlan3904 bridge.
What I don't get is where to find the documentation for this. I thought I had to look in "man lxc.container.conf"
lxc.container.conf is for the old LXC 1.x configuration.
Post by Michel Jansens
but I don't find any reference to the network type 'bridged' (I found macvlan, veth, …)
The best docs I found are on GitHub:
https://github.com/lxc/lxd/blob/master/doc/configuration.md -> newer LXD versions (e.g. the one from xenial-backports)
https://github.com/lxc/lxd/blob/stable-2.0/doc/configuration.md -> 2.0.x (e.g. the default version in Ubuntu xenial)
--
Fajar
mjansens
2017-06-16 15:58:48 UTC
Thanks Fajar,

I stumbled on the GitHub doc, but didn’t think to go looking in the branches.

Great!

Michel
gunnar.wagner
2017-06-15 00:13:11 UTC
Thanks for clarifying.