Discussion:
[lxc-users] Converting from libvirt lxc
Peter Steele
2015-11-30 22:43:39 UTC
This message is a bit long and I apologize for that, although the bulk
is cut-and-paste output. I'm migrating our container project from
libvirt-lxc under CentOS 7.1 to LXC and I'm seeing some errors in
/var/log/messages that I don't see in libvirt-lxc. The LXC containers I
am creating are based on the same custom CentOS image that I've been
using with libvirt-lxc. My assumption is that this image can be used
without any significant changes, as long as I define an appropriate
config file for it when an LXC container is installed.

The lxc-create command I'm using looks generally like this:

# lxc-create -f /hf/cs/vm-03/config -t /bin/true -n vm-03
--dir=/hf/cs/vm-03/rootfs

where the config file has the following options defined:

lxc.tty = 4
lxc.pts = 1024
lxc.kmsg = 0
lxc.utsname = vm-03
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.veth.pair = vm-03
lxc.network.hwaddr = fe:d6:e8:f2:aa:e6
lxc.rootfs = /hf/cs/vm-03/rootfs

When a container boots, I'm seeing the set of errors below:

Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdb, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdb1, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sda, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdb2, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdb4, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdb3, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sda4, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sda3, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sda2, 10) failed: No such file or directory
Nov 30 09:28:48 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sda1, 10) failed: No such file or directory
Nov 30 09:28:49 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdc, 10) failed: No such file or directory
Nov 30 09:28:49 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdc2, 10) failed: No such file or directory
Nov 30 09:28:49 vm-03 systemd-udevd: inotify_add_watch(7, /dev/sdc1, 10) failed: No such file or directory
...
Nov 30 09:28:56 vm-03 systemd-udevd: Failed to apply ACL on /dev/snd/hwC0D0: No such file or directory
Nov 30 09:28:56 vm-03 systemd-udevd: Failed to apply ACL on /dev/snd/controlC0: No such file or directory
Nov 30 09:28:56 vm-03 systemd-udevd: Failed to apply ACL on /dev/snd/pcmC0D0c: No such file or directory
Nov 30 09:28:56 vm-03 systemd-udevd: Failed to apply ACL on /dev/snd/pcmC0D0p: No such file or directory
Nov 30 09:28:56 vm-03 systemd-udevd: Failed to apply ACL on /dev/dri/card0: No such file or directory

The host's drives have intentionally not been made available in the
containers. These errors all come from the udev service, which is the
root of the problem. When I create a container under libvirt-lxc,
the udev service is not enabled, so I don't see these errors.
Containers created with LXC from the same CentOS image have the udev
suite of services enabled, and even if I explicitly disable them with

# systemctl disable systemd-udevd-kernel.socket
# systemctl disable systemd-udevd-control.socket
# systemctl disable systemd-udevd.service
# systemctl disable systemd-udev-trigger.service

the services come back when I restart the container and I still see
these errors. My guess is that I'm missing something in the config file
for my LXC containers, but I'm not sure what's needed. This appears to
be borne out by the set of sys- units running in my
libvirt-lxc containers:

# systemctl|grep sys-
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-config.mount loaded active mounted Configuration File System
sys-kernel-debug.mount loaded active mounted Debug File System

compared to what I see in my equivalent LXC container:

# systemctl|grep sys-
sys-devices-pci0000:00-0000:00:04.0-sound-card0.device loaded active
plugged QEMU Virtual Machine
sys-devices-pci0000:00-0000:00:05.7-usb1-1\x2d1-1\x2d1:1.0-host8-target8:0:0-8:0:0:0-block-sdc-sdc1.device
loaded active plugged QEMU_HARDDISK
sys-devices-pci0000:00-0000:00:05.7-usb1-1\x2d1-1\x2d1:1.0-host8-target8:0:0-8:0:0:0-block-sdc-sdc2.device
loaded active plugged QEMU_HARDDISK
sys-devices-pci0000:00-0000:00:05.7-usb1-1\x2d1-1\x2d1:1.0-host8-target8:0:0-8:0:0:0-block-sdc.device
loaded active plugged QEMU_HARDDISK
sys-devices-pci0000:00-0000:00:06.0-ata4-host3-target3:0:0-3:0:0:0-block-sda-sda1.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata4/host3/target3:0:0/3:0:0:0/block/sda/sda1
sys-devices-pci0000:00-0000:00:06.0-ata4-host3-target3:0:0-3:0:0:0-block-sda-sda2.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata4/host3/target3:0:0/3:0:0:0/block/sda/sda2
sys-devices-pci0000:00-0000:00:06.0-ata4-host3-target3:0:0-3:0:0:0-block-sda-sda3.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata4/host3/target3:0:0/3:0:0:0/block/sda/sda3
sys-devices-pci0000:00-0000:00:06.0-ata4-host3-target3:0:0-3:0:0:0-block-sda-sda4.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata4/host3/target3:0:0/3:0:0:0/block/sda/sda4
sys-devices-pci0000:00-0000:00:06.0-ata4-host3-target3:0:0-3:0:0:0-block-sda.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata4/host3/target3:0:0/3:0:0:0/block/sda
sys-devices-pci0000:00-0000:00:06.0-ata5-host4-target4:0:0-4:0:0:0-block-sdb-sdb1.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata5/host4/target4:0:0/4:0:0:0/block/sdb/sdb1
sys-devices-pci0000:00-0000:00:06.0-ata5-host4-target4:0:0-4:0:0:0-block-sdb-sdb2.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata5/host4/target4:0:0/4:0:0:0/block/sdb/sdb2
sys-devices-pci0000:00-0000:00:06.0-ata5-host4-target4:0:0-4:0:0:0-block-sdb-sdb3.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata5/host4/target4:0:0/4:0:0:0/block/sdb/sdb3
sys-devices-pci0000:00-0000:00:06.0-ata5-host4-target4:0:0-4:0:0:0-block-sdb-sdb4.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata5/host4/target4:0:0/4:0:0:0/block/sdb/sdb4
sys-devices-pci0000:00-0000:00:06.0-ata5-host4-target4:0:0-4:0:0:0-block-sdb.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:06.0/ata5/host4/target4:0:0/4:0:0:0/block/sdb
sys-devices-pci0000:00-0000:00:07.0-virtio0-virtio\x2dports-vport0p1.device
loaded active plugged
/sys/devices/pci0000:00/0000:00:07.0/virtio0/virtio-ports/vport0p1
sys-devices-platform-serial8250-tty-ttyS1.device loaded active plugged
/sys/devices/platform/serial8250/tty/ttyS1
sys-devices-platform-serial8250-tty-ttyS2.device loaded active plugged
/sys/devices/platform/serial8250/tty/ttyS2
sys-devices-platform-serial8250-tty-ttyS3.device loaded active plugged
/sys/devices/platform/serial8250/tty/ttyS3
sys-devices-pnp0-00:04-tty-ttyS0.device loaded active plugged
/sys/devices/pnp0/00:04/tty/ttyS0
sys-devices-virtual-block-md0.device loaded active plugged
/sys/devices/virtual/block/md0
sys-devices-virtual-block-md1.device loaded active plugged
/sys/devices/virtual/block/md1
sys-devices-virtual-net-eth0.device loaded active plugged
/sys/devices/virtual/net/eth0
sys-module-configfs.device loaded active plugged /sys/module/configfs
sys-module-fuse.device loaded active plugged /sys/module/fuse
sys-subsystem-net-devices-eth0.device loaded active plugged
/sys/subsystem/net/devices/eth0
proc-sys-fs-binfmt_misc.mount loaded active mounted Arbitrary
Executable File Formats File System
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File
System
sys-kernel-config.mount loaded active mounted Configuration File System
sys-kernel-debug.mount loaded active mounted Debug File System

Is the udev service needed in LXC and if so, how do I keep it from
complaining?
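
As an aside, "systemctl disable" alone may not be enough here:
systemd-udevd.service is a static, socket-activated unit, so there is
nothing for disable to unhook. Masking the units instead points them at
/dev/null so they cannot be started at all. A sketch, run inside the
container (or a chroot of the image):

# systemctl mask systemd-udevd-kernel.socket systemd-udevd-control.socket
# systemctl mask systemd-udevd.service systemd-udev-trigger.service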
Serge Hallyn
2015-12-01 02:38:18 UTC
Post by Peter Steele
This message is a bit long and I apologize for that, although the
bulk is cut-and-paste output. I'm migrating our container project
from libvirt-lxc under CentOS 7.1 to LXC and I'm seeing some errors
in /var/log/messages that I don't see in libvirt-lxc. The LXC
containers I am creating are based on the same custom CentOS image
that I've been using with libvirt-lxc. My assumption is that this
image should be able to be used without any significant changes as
long as I have the appropriate config file defined for this image
when an LXC container is installed.
# lxc-create -f /hf/cs/vm-03/config -t /bin/true -n vm-03
--dir=/hf/cs/vm-03/rootfs
lxc.tty = 4
lxc.pts = 1024
lxc.kmsg = 0
lxc.utsname = vm-03
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.veth.pair = vm-03
lxc.network.hwaddr = fe:d6:e8:f2:aa:e6
lxc.rootfs = /hf/cs/vm-03/rootfs
Hi Peter,

my guess is that udev is starting because the container has
the capabilities it needs to do so. If you look at stock containers
created using the lxc templates, they tend to include files
like /usr/share/lxc/config/common.conf, which has

lxc.cap.drop = mac_admin mac_override sys_time sys_module

Likewise, libvirt-lxc by default drops several capabilities,
but your config file isn't doing that. (You also should probably
configure the devices cgroup.)
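
For illustration, the kind of lines meant here, as they appear in the
stock template configs (the full device whitelist is in the
lxc-templates config files):

lxc.cap.drop = mac_admin mac_override sys_time sys_module
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 5:2 rwm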

-serge
Peter Steele
2015-12-01 21:32:39 UTC
Post by Serge Hallyn
Hi Peter,
my guess is that udev is starting because the container has
the capabilities it needs to do so. If you look at stock containers
created using the lxc templates, they tend to include files
like /usr/share/lxc/config/common.conf, which has
lxc.cap.drop = mac_admin mac_override sys_time sys_module
Likewise, libvirt-lxc by default drops several capabilities,
but your config file isn't doing that. (You also should probably
configure the devices cgroup.)
-serge
Thanks Serge. I installed lxc-templates and got a copy of
centos.common.conf. I incorporated the definitions there into my own
scripts and an installed container's config file now looks something
like this:

lxc.mount.auto = proc:rw sys:rw
lxc.tty = 4
lxc.pts = 1024
lxc.devttydir = lxc
lxc.kmsg = 0
lxc.autodev = 1
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 1:7 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.utsname = vm-03
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.veth.pair = vm-03
lxc.network.hwaddr = fe:d6:e8:b7:ba:2e
lxc.rootfs = /hf/cs/vm-03/rootfs
lxc.cgroup.memory.limit_in_bytes = 1073741824
lxc.cgroup.memory.memsw.limit_in_bytes = 2147483648
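
As a quick sanity check that the limits above actually take effect, they
can be read back from the host with lxc-cgroup once the container is
running, e.g.:

# lxc-cgroup -n vm-03 memory.limit_in_bytes
1073741824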

My containers are coming up but things are running really slowly,
although CPU usage is low. I'm not entirely sure what's going on and
need to do some more digging.

The centos.common.conf file listed several cap.drop entries, but none
seemed particularly relevant to our needs:

lxc.cap.drop = mac_admin mac_override setfcap setpcap
lxc.cap.drop = sys_module sys_nice sys_pacct
lxc.cap.drop = sys_rawio sys_time

Our containers are privileged and our software is written with that in
mind, although we certainly don't need a full environment in our
containers. You suggested udev is starting due to capabilities that are
enabled, but I'm not sure which ones I need to explicitly drop. I don't
drop any capabilities in my libvirt containers, although I think the
default for libvirt is to automatically drop a large predefined set.
It's clear libvirt has more trimmed from the base configuration than
LXC. My libvirt /dev directory has the following entries:

lrwxrwxrwx 1 root root 10 Nov 30 08:21 console -> /dev/pts/0
lrwxrwxrwx 1 root root 11 Nov 30 08:21 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 Nov 30 08:21 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Nov 30 08:21 full
crwx------ 1 root root 10, 229 Nov 30 08:21 fuse
drwxr-xr-x 2 root root 0 Nov 30 08:21 hugepages
prw------- 1 root root 0 Nov 30 08:21 initctl
srw-rw-rw- 1 root root 0 Nov 30 08:21 log
drwxrwxrwt 2 root root 40 Nov 30 08:21 mqueue
crw-rw-rw- 1 root root 1, 3 Nov 30 08:21 null
-rw-r--r-- 1 root root 0 Nov 30 08:21 nulld
crw-rw-rw- 1 root root 5, 2 Dec 1 13:15 ptmx
drwxr-xr-x 2 root root 0 Nov 30 08:21 pts
crw-rw-rw- 1 root root 1, 8 Nov 30 08:21 random
drwxrwxrwt 2 root root 40 Nov 30 08:21 shm
lrwxrwxrwx 1 root root 15 Nov 30 08:21 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Nov 30 08:21 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Nov 30 08:21 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Nov 30 08:21 tty
lrwxrwxrwx 1 root root 10 Nov 30 08:21 tty1 -> /dev/pts/0
crw-rw-rw- 1 root root 1, 9 Nov 30 08:21 urandom
crw-rw-rw- 1 root root 1, 5 Nov 30 08:21 zero

whereas under LXC /dev has the following:

crw------- 1 root root 10, 234 Dec 1 11:07 btrfs-control
drwxr-xr-x 2 root root 220 Dec 1 11:07 char
lrwxrwxrwx 1 root root 11 Dec 1 11:07 console -> lxc/console
lrwxrwxrwx 1 root root 11 Dec 1 11:07 core -> /proc/kcore
crw------- 1 root root 10, 203 Dec 1 11:07 cuse
lrwxrwxrwx 1 root root 13 Dec 1 11:07 fd -> /proc/self/fd
crw-rw-rw- 1 root root 1, 7 Dec 1 11:48 full
crw-rw-rw- 1 root root 10, 229 Dec 1 11:48 fuse
drwxr-xr-x 2 root root 0 Dec 1 11:07 hugepages
prw------- 1 root root 0 Dec 1 11:07 initctl
srw-rw-rw- 1 root root 0 Dec 1 11:07 log
crw------- 1 root root 10, 237 Dec 1 11:07 loop-control
drwxr-xr-x 2 root root 140 Dec 1 11:07 lxc
drwxr-xr-x 2 root root 60 Dec 1 11:07 mapper
drwxrwxrwt 2 root root 40 Dec 1 11:07 mqueue
drwxr-xr-x 2 root root 60 Dec 1 11:07 net
crw-rw-rw- 1 root root 1, 3 Dec 1 11:48 null
-rw-r--r-- 1 root root 0 Dec 1 11:08 nulld
crw------- 1 root root 108, 0 Dec 1 11:07 ppp
lrwxrwxrwx 1 root root 13 Dec 1 11:07 ptmx -> /dev/pts/ptmx
drwxr-xr-x 2 root root 0 Dec 1 11:07 pts
crw-rw-rw- 1 root root 1, 8 Dec 1 11:48 random
drwxrwxrwt 2 root root 40 Dec 1 11:07 shm
drwxr-xr-x 2 root root 80 Dec 1 11:07 snd
lrwxrwxrwx 1 root root 15 Dec 1 11:07 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Dec 1 11:07 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Dec 1 11:07 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root tty 5, 0 Dec 1 11:48 tty
lrwxrwxrwx 1 root root 8 Dec 1 11:07 tty1 -> lxc/tty1
lrwxrwxrwx 1 root root 8 Dec 1 11:07 tty2 -> lxc/tty2
lrwxrwxrwx 1 root root 8 Dec 1 11:07 tty3 -> lxc/tty3
lrwxrwxrwx 1 root root 8 Dec 1 11:07 tty4 -> lxc/tty4
crw------- 1 root root 10, 239 Dec 1 11:07 uhid
crw------- 1 root root 10, 223 Dec 1 11:07 uinput
crw-rw-rw- 1 root root 1, 9 Dec 1 11:48 urandom
drwxr-xr-x 2 root root 60 Dec 1 11:07 vfio
crw------- 1 root root 10, 137 Dec 1 11:07 vhci
crw------- 1 root root 10, 238 Dec 1 11:07 vhost-net
crw-rw-rw- 1 root root 1, 5 Dec 1 11:48 zero

I know how to trim the /dev/ttyN entries to match libvirt, but I'm not
sure what's needed for the others. For example, how do I get rid of
/dev/snd?
Fajar A. Nugraha
2015-12-02 04:25:09 UTC
Post by Peter Steele
Post by Serge Hallyn
Hi Peter,
my guess is that udev is starting because the container has
the capabilities it needs to do so. If you look at stock containers
created using the lxc templates, they tend to include files
like /usr/share/lxc/config/common.conf, which has
Thanks Serge. I installed lxc-templates and got a copy of
centos.common.conf. I incorporated the definitions there into my own
scripts and an installed container's config file now looks something like
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the things you
asked wouldn't even be an issue.
Post by Peter Steele
My containers are coming up but things are running really slowly, although
CPU usage is low. I'm not entirely sure what's going on and need to do some
more digging.
Works for me. However I seem to recall an issue with centos' version of
systemd sometime ago.
Post by Peter Steele
I know how to trim the /dev/ttyN entries to match libvirt, but I'm not
sure what's needed for the others. For example, how do I get rid of
/dev/snd?
Here's mine. Centos 7 container, ubuntu 14.04 host, lxc-1.1.4 and
lxcfs-0.10 from ubuntu ppa:

c7 / # ls /dev
console core fd full hugepages initctl log lxc mqueue null ptmx
pts random shm stderr stdin stdout tty tty1 tty2 tty3 tty4
urandom zero
--
Fajar
Peter Steele
2015-12-02 14:49:06 UTC
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the
things you asked wouldn't even be an issue.
Well, there's a bit of history involved. Originally we were building
systems using KVM based virtual machines. As part of this, we developed
a custom installation process where the user burns our boot image onto a
USB stick and boots a server with this image. There is a corresponding
UI a user installs on his workstation that lets the user communicate
with the server being installed and customize the installation. After
the user hits "start install" in the UI the process is mostly hands-free
where the server gets installed with our CentOS based hypervisor and
then VMs being automatically created on top of that. The OS image that's
used for the VMs gets created on the fly as part of this process.
Everything is self-contained and we cannot make the assumption that the
server has access to the external internet. Everything that's needed is
on the boot stick.

This past year we moved to libvirt based containers instead of VMs, and
we were able to use the same image we build for the VMs for the libvirt
containers, with only a few minor changes, and our CentOS hypervisor now
manages containers instead of VMs. The installation process is largely
unchanged from the user's perspective and the fact that containers are
being used is completely hidden.

Unfortunately, libvirt-lxc is being deprecated by Redhat, and that means
it will eventually disappear in CentOS. That's why we're moving to LXC,
but it is a transitional process. With the work I'm doing, when I create
a boot image it can be flagged as either using libvirt containers or LXC
containers. I need to keep the overall process as similar as possible
and eventually, when the port to LXC is complete, we can make the
official switch to LXC. We will need to handle upgrades in the field as
well for our existing customers, converting their libvirt based systems
to LXC based systems in-place. That means we'll need to create LXC
flavored config files that match the XML definitions of the libvirt
containers. We ultimately had to do the same thing when we transitioned
from VMs to libvirt containers, so we know the process well. We just
have to learn the particulars of LXC containers.

So, that long winded answer is why we can't just use the LXC template
for CentOS directly. I was assuming (hoping) that the libvirt container
image we build would be largely LXC friendly. Apparently it's not going
to be quite as straightforward as I'd hoped. I'm going to have to
dissect the steps used for creating a CentOS LXC template and make sure
our container image provides what is needed/expected by LXC.
Post by Fajar A. Nugraha
My containers are coming up but things are running really slowly,
although CPU usage is low. I'm not entirely sure what's going on
and need to do some more digging.
Works for me. However I seem to recall an issue with centos' version
of systemd sometime ago.
Yes, I hit that systemd issue early on and found the fix for it. The
slowness I'm seeing now is something else.
Post by Fajar A. Nugraha
I know how to trim the /dev/ttyN entries to match libvirt, but I'm
not sure what's needed for the others. For example, how do I get
rid of /dev/snd?
Here's mine. Centos 7 container, ubuntu 14.04 host, lxc-1.1.4 and
c7 / # ls /dev
console core fd full hugepages initctl log lxc mqueue null
ptmx pts random shm stderr stdin stdout tty tty1 tty2 tty3
tty4 urandom zero
That ultimately is very similar to my libvirt dev list. I clearly need
to dig into the CentOS template to see what's being done differently
compared to my libvirt image.

Peter
Fajar A. Nugraha
2015-12-02 15:23:42 UTC
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the things you
asked wouldn't even be an issue.
So, that long winded answer is why we can't just use the LXC template for
CentOS directly. I was assuming (hoping) that the libvirt container image
we build would be largely LXC friendly. Apparently it's not going to be
quite as straightforward as I'd hoped. I'm going to have to dissect the
steps used for creating a CentOS LXC template and make sure our container
image provides what is needed/expected by LXC.
Actually my point was about the config file :)

The rootfs should be OK as is, as any systemd-related problem inside the
container should've also been fixed if you've managed to run it under
libvirt. I was suggesting to create a centos7 container from the download
template (which would reference the common configs, and use lxcfs), then
copy its config file.
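
Something along these lines, assuming the default lxcpath of /var/lib/lxc:

# lxc-create -t download -n c7 -- -d centos -r 7 -a amd64
# cat /var/lib/lxc/c7/config

and then adapt the name, rootfs and network entries for the
former-libvirt container.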
Post by Fajar A. Nugraha
Post by Peter Steele
My containers are coming up but things are running really slowly,
although CPU usage is low. I'm not entirely sure what's going on and need
to do some more digging.
Works for me. However I seem to recall an issue with centos' version of
systemd sometime ago.
Yes, I hit that systemd issue early on and found the fix for it. The
slowness I'm seeing now is something else.
Post by Peter Steele
I know how to trim the /dev/ttyN entries to match libvirt, but I'm not
sure what's needed for the others. For example, how do I get rid of
/dev/snd?
Here's mine. Centos 7 container, ubuntu 14.04 host, lxc-1.1.4 and
c7 / # ls /dev
console core fd full hugepages initctl log lxc mqueue null ptmx
pts random shm stderr stdin stdout tty tty1 tty2 tty3 tty4
urandom zero
That ultimately is very similar to my libvirt dev list. I clearly need to
dig into the CentOS template to see what's being done differently compared
to my libvirt image.
It occurs to me that the difference might be related to lxcfs. It provides
a private, customized copy of parts of /sys and /proc to the container, so
the container doesn't need to see what the host has. And IIRC libvirt has
something that functions similarly to lxcfs.

Do you also have lxcfs installed? What version of lxc are you using?
Try installing lxcfs and use lxc-1.1.x. Then try to install a new container
using the download template to see if it's similar to what you want. If it is,
copy its config file (and modify things like name and paths, obviously)
for your former-libvirt container.
--
Fajar
Peter Steele
2015-12-02 18:14:58 UTC
Post by Peter Steele
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container using
the download template? It would've been MUCH easier, and some of
the things you asked wouldn't even be an issue.
So, that long winded answer is why we can't just use the LXC
template for CentOS directly. I was assuming (hoping) that the
libvirt container image we build would be largely LXC friendly.
Apparently it's not going to be quite as straightforward as I'd
hoped. I'm going to have to dissect the steps used for creating a
CentOS LXC template and make sure our container image provides
what is needed/expected by LXC.
Actually my point was about the config file :)
D'oh! My mistake; sorry for the history lesson then, I hope it was
interesting reading... :-)

As for the config file, I believe what I am now using is the same config
file, more or less, that's used by LXC containers created with the
CentOS template. I just incorporated the centos.common.conf settings
into my own config file directly, although I did tweak some things a bit
and eliminated things that weren't needed (like lxc.seccomp). I did a
quick test and ran the command

lxc-create -t centos -n test1

to create a container using the centos default settings. The resulting
config file doesn't look a whole lot different than my manually crafted
version. Something doesn't seem quite right though; when I run lxc-start
-n test1 the container takes forever to boot. I could log in eventually
but it's not working too well:

[***@test1 ~]# systemctl
Starting Trigger Flushing of Journal to Persistent Storage...
[FAILED] Failed to start LSB: Bring up/down networking.
See 'systemctl status network.service' for details.
<28>systemd-sysctl[261]: Failed to write '1' to
'/proc/sys/kernel/core_uses_pid': Read-only file system
Failed to get D-Bus connection: Failed to authenticate in time.

Shouldn't a container built with the stock config work "out of the box"?
Post by Peter Steele
The rootfs should be OK as is, as any systemd-related problem inside
the container should've also been fixed if you've managed to run it
under libvirt. I was suggesting to create a centos7 container from the
download template (which would reference the common configs, and use
lxcfs), then copy its config file.
There was no explicit reference to lxcfs in the centos.common.conf file,
nor in any of the config files for the other templates. My impression is
that this is not part of the LXC version that I am using.
Post by Peter Steele
It occurs to me that the difference might be related to lxcfs. It
provides a private, customized copy of parts of /sys and /proc to the
container, so the container doesn't need to see what the host has. And
IIRC libvirt has something that functions similarly to lxcfs.
Containers in libvirt have private versions of /sys and /proc, although
there is nothing to configure to provide this functionality; it is the
default behavior. There is nothing really quite like lxcfs.
Post by Peter Steele
Do you also have lxcfs installed? What version of lxc are you using?
Try installing lxcfs and use lxc-1.1.x. Then try to install a new
container using the download template to see if it's similar to what you
want. If it is, copy its config file (and modify things like name and
paths, obviously) for your former-libvirt container.
I am using the version 1.0.7 RPMs that are available on EPEL. I assume
there are no RPMs available for 1.1? We tend to use binary versions of
the third party packages we've included in our system but I will check
out 1.1 and investigate lxcfs. The set of LXC RPMs I installed from EPEL
are:

lua-lxc.1.0.7-4.el7.x86_64
lxc.1.0.7-4.el7.x86_64
lxc-libs.1.0.7-4.el7.x86_64
lxc-templates.1.0.7-4.el7.x86_64

Peter
Thomas Moschny
2015-12-02 18:29:02 UTC
I am using the version 1.0.7 RPMs that are available on EPEL. I assume there
are no RPMs available for 1.1? We tend to use binary versions of the third
party packages we've included in our system but I will check out 1.1 and
lua-lxc.1.0.7-4.el7.x86_64
lxc.1.0.7-4.el7.x86_64
lxc-libs.1.0.7-4.el7.x86_64
lxc-templates.1.0.7-4.el7.x86_64
LXC 1.0.8 RPMs are currently in testing. I also maintain a copr
repository for LXC 1.1.x here:
http://copr.fedoraproject.org/coprs/thm/lxc1.1/

- Thomas
Saint Michael
2015-12-02 18:38:26 UTC
In my unauthorized opinion, Ubuntu has a much more solid LXC than the Red Hat
derivatives. That is why I run my apps in Fedora containers and my LXC
servers on Ubuntu. The Fedora management does not quite understand that LXC
is the only possible game, not Docker.
Post by Thomas Moschny
Post by Peter Steele
I am using the version 1.0.7 RPMs that are available on EPEL. I assume there
are no RPMs available for 1.1? We tend to use binary versions of the third
party packages we've included in our system but I will check out 1.1 and
lua-lxc.1.0.7-4.el7.x86_64
lxc.1.0.7-4.el7.x86_64
lxc-libs.1.0.7-4.el7.x86_64
lxc-templates.1.0.7-4.el7.x86_64
LXC 1.0.8 RPMs are currently in testing. I also maintain a copr
http://copr.fedoraproject.org/coprs/thm/lxc1.1/
- Thomas
Peter Steele
2015-12-02 18:49:46 UTC
Post by Saint Michael
In my unauthorized opinion, Ubuntu has a much more solid LXC than the Red
Hat derivatives. That is why I run my apps in Fedora containers and my
LXC servers on Ubuntu. The Fedora management does not quite understand
that LXC is the only possible game, not Docker.
Our product is based around a CentOS environment and switching to
Ubuntu/Fedora would unfortunately not be a trivial process. Even moving
from CentOS 6.5 to CentOS 7.1 was a big project for us. Once you have
customers, you sort of get locked in...
Saint Michael
2015-12-02 19:39:53 UTC
I didn't explain myself well.
You need an Ubuntu 14.04 server with nothing else running but LXC. 100% of
the real work gets done via CentOS containers. It works perfectly and it is
rock solid.
The only thing on top is the latest available kernel, 3.19.0-33-generic. You
never have to log in or otherwise touch Ubuntu; it becomes a simple
container host. I have literally hundreds of containers with this
architecture. For some reason, the fact that Ubuntu does not use systemd
makes it stable and almost perfect. I cannot explain it, but it becomes
like the engine of a Mercedes: you know it is there, but you don't need to
see it; it becomes invisible. I could never use Fedora as a good container
host, for you end up having to compile your own RPMs and it fails often.
They just don't take LXC seriously, or they would be at the same level as
Ubuntu.
Post by Saint Michael
In my unauthorized opinion, Ubuntu has a much more solid LXC than the Red Hat
derivatives. That is why I run my apps in Fedora containers and my LXC
servers on Ubuntu. The Fedora management does not quite understand that LXC
is the only possible game, not Docker.
Our product is based around a CentOS environment and switching to
Ubuntu/Fedora would unfortunately not be a trivial process. Even moving
from CentOS 6.5 to CentOS 7.1 was a big project for us. Once you have
customers, you sort of get locked in...
Peter Steele
2015-12-02 21:16:19 UTC
Post by Saint Michael
I didn't explain myself well.
You need an Ubuntu 14.04 server with nothing else running but LXC.
100% of the real work gets done via CentOS containers. It works
perfectly and it is rock solid.
The only thing on top is the latest available kernel,
3.19.0-33-generic. You never have to log in or otherwise touch Ubuntu;
it becomes a simple container host. I have literally hundreds of
containers with this architecture. For some reason, the fact that
Ubuntu does not use systemd makes it stable and almost perfect. I
cannot explain it, but it becomes like the engine of a Mercedes: you
know it is there, but you don't need to see it; it becomes invisible.
I could never use Fedora as a good container host, for you end up
having to compile your own RPMs and it fails often. They just don't
take LXC seriously, or they would be at the same level as Ubuntu.
Our software runs in CentOS containers which in turn run under CentOS
based hypervisors, working together in a cluster. Even switching out our
servers to run Ubuntu instead of CentOS would be a non-trivial process.
We'd need to support an upgrade path, for example, where we would upgrade
our customers' servers, swapping out CentOS in-place in favor of Ubuntu.
Doable but not something we really have the bandwidth to take on and
keep with our release schedule.

Peter
Peter Steele
2015-12-02 18:42:51 UTC
Post by Thomas Moschny
I am using the version 1.0.7 RPMs that are available on EPEL. I assume there
are no RPMs available for 1.1? We tend to use binary versions of the third
party packages we've included in our system but I will check out 1.1 and
lua-lxc.1.0.7-4.el7.x86_64
lxc.1.0.7-4.el7.x86_64
lxc-libs.1.0.7-4.el7.x86_64
lxc-templates.1.0.7-4.el7.x86_64
LXC 1.0.8 RPMs are currently in testing. I also maintain a copr
http://copr.fedoraproject.org/coprs/thm/lxc1.1/
Perfect! Thanks.

Peter
Peter Steele
2015-12-02 22:39:39 UTC
Post by Peter Steele
Post by Thomas Moschny
I am using the version 1.0.7 RPMs that are available on EPEL. I assume there
are no RPMs available for 1.1? We tend to use binary versions of the third
party packages we've included in our system but I will check out 1.1 and
lua-lxc.1.0.7-4.el7.x86_64
lxc.1.0.7-4.el7.x86_64
lxc-libs.1.0.7-4.el7.x86_64
lxc-templates.1.0.7-4.el7.x86_64
LXC 1.0.8 RPMs are currently in testing. I also maintain a copr
http://copr.fedoraproject.org/coprs/thm/lxc1.1/
Perfect! Thanks.
I've downloaded 1.1.5-1 rpms for the set of LXC packages that I'm
using--thanks very much for pointing me to this site. It's been
suggested that I should check out lxcfs but I don't see an rpm for this
in your copr repository. Is there an rpm available for this somewhere or
do I need to build it from source?
Fajar A. Nugraha
2015-12-03 04:47:50 UTC
Post by Peter Steele
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the things you
asked wouldn't even be an issue.
lxc-create -t centos -n test1
to create a container using the centos default settings. The resulting
config file doesn't look a whole lot different than my manually crafted
version.
You DID notice that I repeatedly said "DOWNLOAD template"? As in something like

# lxc-create -t download -n c7 -- -d centos -r 7 -a amd64
Post by Peter Steele
Something doesn't seem quite right though; when I run lxc-start -n test1
the container takes forever to boot. I could log in eventually but it's
Starting Trigger Flushing of Journal to Persistent Storage...
[FAILED] Failed to start LSB: Bring up/down networking.
See 'systemctl status network.service' for details.
<28>systemd-sysctl[261]: Failed to write '1' to
'/proc/sys/kernel/core_uses_pid': Read-only file system
Failed to get D-Bus connection: Failed to authenticate in time.
Shouldn't a container built with the stock config work "out of the box"?
Short version: if you use http://copr.fedoraproject.org/coprs/thm/lxc1.1/ ,
you need to do some things first:
- edit /etc/sysconfig/lxc, USE_LXC_BRIDGE="true"
- systemctl enable lxc-net
- systemctl enable lxc
- systemctl start lxc-net
- brctl show
- ip ad li lxcbr0

If you HAVE lxcbr0 with the default ip 10.0.3.1 (you can change this
later), you're all set. If not, doublecheck your setup.
If you're asking "where are the docs that mention this", ask the package
manager :)

The alternative is to configure your own bridge and configure your
containers to use that. After you get the bridge working, you can start and
monitor its boot progress with something like this:

# lxc-start -n c7;lxc-console -n c7 -t 0

The benefit of using this approach instead of "lxc-start -F" is that you
can detach the console session later using "ctrl-a q". Note that you can
NOT login on this console yet, as by default the root password is not set.
From another shell session, you need to do

# lxc-attach -n c7 -- passwd

Then you can login from the console session. You'll then see on the
container (I tested this just now on up-to-date centos7)

[***@c7 ~]# ls /dev
console core fd full hugepages initctl log lxc mqueue null ptmx
pts random shm stderr stdin stdout tty tty1 tty2 tty3 tty4
urandom zero

Apparently this works even without lxcfs.


If you DO manage to get lxcfs installed and working later (disclaimer: I've
only used it on ubuntu and debian), you'll be able to get some additional
benefits like the container only seeing its allocated resources (set using
"lxc.cgroup" settings on lxc config file). For example, if
"lxc.cgroup.cpuset.cpus = 0", then the container will only use cpu0, and
"htop" or "cat /proc/cpuinfo" will only show 1 cpu even when your host has
multiple cpus.
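
For example, with

lxc.cgroup.cpuset.cpus = 0

in the container config and lxcfs active, a quick check from inside the
container would be

# grep -c ^processor /proc/cpuinfo
1

whereas without lxcfs /proc/cpuinfo would still show all of the host's cpus.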
--
Fajar
Peter Steele
2015-12-03 14:27:35 UTC
Post by Peter Steele
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container
using the download template? It would've been MUCH easier,
and some of the things you asked wouldn't even be an issue.
lxc-create -t centos -n test1
to create a container using the centos default settings. The
resulting config file doesn't look a whole lot different than my
manually crafted version.
You DID notice that I repeatedly said "DOWNLOAD template"? As in something like
# lxc-create -t download -n c7 -- -d centos -r 7 -a amd64
The template was downloaded automatically when I ran the lxc-create
command the first time. Is there a difference in how the download is
done using the command you've listed above?
Post by Peter Steele
Short version: if you use
http://copr.fedoraproject.org/coprs/thm/lxc1.1/ , you need to do some
- edit /etc/sysconfig/lxc, USE_LXC_BRIDGE="true"
- systemctl enable lxc-net
- systemctl enable lxc
- systemctl start lxc-net
- brctl show
- ip ad li lxcbr0
If you HAVE lxcbr0 with the default ip 10.0.3.1 (you can change this
later), you're all set. If not, doublecheck your setup.
If you're asking "where are the docs that mention this", ask the package
manager :)
The alternative is to configure your own bridge and configure your
containers to use that. After you get the bridge working, you can
That's exactly what I did. I realized later that the default centos
container assumes you have an lxcbr0 bridge defined (I had hit this issue
before). My servers use br0 so I just changed my test container's config
and it came up fine. Most importantly, the udev service was not running.
So I tweaked the lxc config I had in my custom install process to more
closely match what was used in my standalone test and my containers are
now coming up fine, or at least udev is no longer running. The /dev
directory still has more entries than my libvirt containers (for
example, /dev/snd is still present), but at least there are no udev
errors in /var/log/messages.

There *are* other issues (our software isn't running properly), but I
think the major container issues have been resolved. I changed a few
things, including the version of LXC that I'm using, so it's hard to say
what the culprit was with regards to this udev issue.
Post by Peter Steele
# lxc-start -n c7;lxc-console -n c7 -t 0
The benefit of using this approach instead of "lxc-start -F" is that
you can detach the console session later using "ctrl-a q". Note that
you can NOT login on this console yet, as by default the root password
is not set. From another shell session, you need to do
# lxc-attach -n c7 -- passwd
Then you can login from the console session. You'll then see on the
container (I tested this just now on up-to-date centos7)
console core fd full hugepages initctl log lxc mqueue null
ptmx pts random shm stderr stdin stdout tty tty1 tty2 tty3
tty4 urandom zero
Apparently this works even without lxcfs.
I've only used it on ubuntu and debian), you'll be able to get some
additional benefits like the container only seeing its allocated
resources (set using "lxc.cgroup" settings on lxc config file). For
example, if "lxc.cgroup.cpuset.cpus = 0", then the container will only
use cpu0, and "htop" or "cat /proc/cpuinfo" will only show 1 cpu even
when your host has multiple cpus.
That would definitely be nice. Libvirt does a reasonably good job in
this area but it is far from complete, with /proc/cpuinfo being one of
the weak points. I'll definitely have to check out lxcfs.
Fajar A. Nugraha
2015-12-03 15:25:48 UTC
Post by Peter Steele
Post by Peter Steele
Post by Fajar A. Nugraha
Is there a reason why you can't install a centos7 container using the
download template? It would've been MUCH easier, and some of the things you
asked wouldn't even be an issue.
lxc-create -t centos -n test1
to create a container using the centos default settings. The resulting
config file doesn't look a whole lot different than my manually crafted
version.
You DID notice that I repeatedly said "DOWNLOAD template"? As in something like
# lxc-create -t download -n c7 -- -d centos -r 7 -a amd64
The template was downloaded automatically when I ran the lxc-create
command the first time. Is there a difference in how the download is done
using the command you've listed above?
centos template -> downloads lots of packages (i.e. RPMs) one by one using
yum, and then installs them

download template -> downloads one big tar.xz file (plus several small
config files), and then extracts it. MUCH faster, and works for unpriv
containers as well (not sure what the current state of unpriv containers on
centos is though)

However I was actually more concerned about the fact that the templates are
maintained separately, so there could be some difference in the resulting
container/config. The download template works (I've tested it), while
(based on your previous output) the centos template doesn't provide the
desired /dev entries.
Post by Peter Steele
The alternative is to configure your own bridge and configure your
containers to use that. After you get the bridge working, you can start and
That's exactly what I did. I realized this later that the default centos
container assumes you have a lxcbr0 defined (I had hit this issue before).
My servers use br0 so I just changed my test container's config and it came
up fine. Most importantly, the udev service was not running. So I tweaked
the lxc config I had in my custom install process to more closely match
what was used in my standalone test and my containers are now coming up
fine, or at least udev is no longer running. The /dev directory still has
more entries than my libvirt containers (for example, /dev/snd is still
present), but at least there are no udev errors in /var/log/messages.
Which is why I suggested the download template.

I also tested using the resulting config with rootfs replaced by a "native"
centos7 install (to be exact, a disk clone of a minimal centos7 install on
virtualbox); it still results in the desired /dev entries (i.e. minimal /dev
entries, no /dev/snd).
Post by Peter Steele
There *are* other issues (our software isn't running properly), but I
think the major container issues have been resolved.
Which is?
Post by Peter Steele
I changed a few things, including the version of LXC that I'm using, so
it's hard to say what the culprit was with regards to this udev issue.
IIRC systemd containers are only supported on lxc-1.1.x, so upgrading lxc
probably has a big part in that.
Post by Peter Steele
I've only used it on ubuntu and debian), you'll be able to get some
additional benefits like the container only seeing its allocated resources
(set using "lxc.cgroup" settings on lxc config file). For example, if
"lxc.cgroup.cpuset.cpus = 0", then the container will only use cpu0, and
"htop" or "cat /proc/cpuinfo" will only show 1 cpu even when your host has
multiple cpus.
That would definitely be nice. Libvirt does a reasonably good job in this
area but it is far from complete, with /proc/cpuinfo being one of the weak
points. I'll definitely have to check out lxcfs.
It should be easier now since latest lxcfs shouldn't need cgmanager anymore.

And the usual generic suggestion, if you encounter kernel or
namespace-related problems, testing latest kernel from kernel-ml (
http://elrepo.org/tiki/kernel-ml) might help.
--
Fajar
Peter Steele
2015-12-03 17:10:29 UTC
Post by Fajar A. Nugraha
centos template -> downloads lots of packages (i.e. RPMs) one by one
using yum, and then installs them
download template -> download one big tar.xz file (plus several small
config files), and then extract it. MUCH faster, and works for unpriv
containers as well (not sure what the current state of unpriv
containers on centos though)
However I was actually more concerned about the fact that the
templates are maintained separately, so there could be some difference
in the resulting container/config. The download template works (I've
tested it), while (based on your previous output) the centos template
doesn't provide the desired /dev entries.
I just did a test using the download approach and it worked nicely,
obviously much cleaner than downloading the rpms individually. The
containers created from the two approaches seem to be identical, as far
as a cursory glance is concerned, with identical config files.
Post by Fajar A. Nugraha
Which is why I suggested the download template.
I also tested using the resulting config with rootfs replaced by a
"native" centos7 install (to be exact, a disk clone of minimal centos7
install on virtualbox), still result in the desired /dev entries (i.e.
minimal /dev entries, no /dev/snd).
I can't really use the downloaded template for our rootfs, as I
explained earlier. We already have a process that generates a custom
centos tar ball with the specific set of packages that we need in our
containers. Our tarball includes other third party packages as well,
such as supervisord and ctdb. I've used the downloaded template's config
file to create a custom config for our containers. The container
specific portion of the config looks something like this:

lxc.utsname = pws-vm-03
lxc.rootfs = /hf/cs/vm-03/rootfs
lxc.network.veth.pair = vm-03
lxc.network.hwaddr = fe:d6:e8:dc:c8:db
lxc.rootfs = /hf/cs/vm-03/rootfs
lxc.cgroup.memory.limit_in_bytes = 1073741824
lxc.cgroup.memory.memsw.limit_in_bytes = 2147483648
lxc.include = /var/lib/hf/lxc.conf

and the settings that are common to all containers (lxc.conf) include
the following:

lxc.autodev = 1
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
lxc.kmsg = 0
lxc.arch = x86_64

# Networking defaults
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0

# Remove capabilities we don't want in containers
lxc.cap.drop = mac_admin mac_override sys_time sys_module

# Set the pivot directory
lxc.pivotdir = lxc_putold

# Control Group devices: all denied except those white-listed
lxc.cgroup.devices.deny = a
## Allow any mknod (but not reading/writing the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
## /dev/null
lxc.cgroup.devices.allow = c 1:3 rwm
## /dev/zero
lxc.cgroup.devices.allow = c 1:5 rwm
## /dev/full
lxc.cgroup.devices.allow = c 1:7 rwm
## /dev/tty
lxc.cgroup.devices.allow = c 5:0 rwm
## /dev/random
lxc.cgroup.devices.allow = c 1:8 rwm
## /dev/urandom
lxc.cgroup.devices.allow = c 1:9 rwm
## /dev/tty[1-4] ptys and lxc console
lxc.cgroup.devices.allow = c 136:* rwm
## /dev/ptmx pty master
lxc.cgroup.devices.allow = c 5:2 rwm

# Setup the default mounts
lxc.mount.auto = cgroup:mixed proc:mixed sys:mixed
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none bind,optional 0 0

As you can see this was largely pulled from centos.common.conf and
common.conf. I assume something isn't quite right since I see more
entries under /dev than I do when I'm running under libvirt, using the
same custom tarball. I'll be satisfied with this for now though as long
as the extra entries aren't causing issues.
Post by Fajar A. Nugraha
There *are* other issues (our software isn't running properly),
but I think the major container issues have been resolved.
Which is?
Well, mainly the udev issue, plus the fact that the containers booted
*really* slowly.
Post by Fajar A. Nugraha
I changed a few things, including the version of LXC that I'm
using, so it's hard to say what the culprit was with regards to
this udev issue.
IIRC systemd containers are only supported on lxc-1.1.x, so upgrading
lxc probably has a big part in that.
Yeah, things definitely started working better after I upgraded.
Neil Greenwood
2015-12-03 19:27:21 UTC
Post by Peter Steele
I can't really use the downloaded template for our rootfs, as I
explained earlier. We already have a process that generates a custom
centos tar ball with the specific set of packages that we need in our
containers. Our tarball includes other third party packages as well,
such as supervisord and ctdb. I've used the downloaded template's config
I am not an expert with LXC, but I think you can get your tarball working
using '-t download' a.k.a. the download template. You would use petercentos
rather than centos as the template to download, and provide a petercentos
configuration that points to your tarball. Obviously you will use your
company name in the production version :-)

Neil
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Peter Steele
2015-12-03 21:40:25 UTC
Post by Neil Greenwood
Post by Peter Steele
I can't really use the downloaded template for our rootfs, as I
explained earlier. We already have a process that generates a custom
centos tar ball with the specific set of packages that we need in our
containers. Our tarball includes other third party packages as well,
such as supervisord and ctdb. I've used the downloaded template's config
I am not an expert with LXC, but I think you can get your tarball working
using '-t download' a.k.a. the download template. You would use petercentos
rather than centos as the template to download, and provide a petercentos
configuration that points to your tarball. Obviously you will use your
company name in the production version :-)
Neil
That's where I'd like to get to eventually. That said, since this is all
part of an automated process and no one actually runs lxc-create
interactively (it's all done in Python scripts), it doesn't really
matter if our custom tar ball gets installed as an official formal
template. The lxc-create command works quite well using "none" for the
template, as long as the rootfs is put in place through some other
means. Since we're transitioning from libvirt to LXC and need to keep
both frameworks in play for a while, it's easier in our code to install
rootfs explicitly rather than having lxc-create do it.
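
Roughly, the scripted flow looks like this (the tarball path is
illustrative):

# mkdir -p /hf/cs/vm-03/rootfs
# tar xpf /path/to/custom-centos-rootfs.tar.gz -C /hf/cs/vm-03/rootfs
# lxc-create -f /hf/cs/vm-03/config -t none -n vm-03 --dir=/hf/cs/vm-03/rootfs

(-t /bin/true, as in the first message, works the same way: no template
script touches the pre-populated rootfs.)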

Peter
Fajar A. Nugraha
2015-12-04 04:28:34 UTC
Post by Peter Steele
There *are* other issues (our software isn't running properly), but I
think the major container issues have been resolved.
Which is?
Well, mainly the udev issue, plus the fact that the containers booted
*really* slowly.
Are you STILL experiencing slow booting? If yes, can you please test using
a clean setup (e.g. a fresh install under virtualbox)?

My test with the download template results in a fast-booting container. I
wonder if you can run some tests on a clean setup:

- host clean centos 7 install on virtualbox, lxc-1.1.5, containers created
fully from the download template -> you should get the same result as mine. If
NOT, then we can start working on that, as that would indicate a bug in lxc

- host clean centos 7 install on virtualbox, same lxc config as earlier,
but with rootfs replaced with your custom rootfs. I'm GUESSing this is
where things could be different. If this is SLOW for you while the earlier
one is fast, then the problem lies somewhere in your rootfs.

- If it IS still slow, and the boot process shows systemd is somehow
involved in the slowness, you can try my systemd-224 RPMs, and see if it
makes it better:
https://goo.gl/XpKFxS
https://www.mail-archive.com/lxc-***@lists.linuxcontainers.org/msg03829.html
(you should only need step "5" and "7")

Of course, the above is assuming you can work with a COPY of your rootfs
(since upgrading systemd would be a slightly complicated process to undo)
--
Fajar
Fajar A. Nugraha
2015-12-04 04:42:33 UTC
I've used the downloaded template's config file to create a custom config
for our containers.
Also, are you SURE this is based on the download template's config?
lxc.autodev = 1
That is not common.conf (though I'm not sure whether it matters)
lxc.kmsg = 0
Neither is that. Though it should be the default value
# Remove capabilities we don't want in containers
lxc.cap.drop = mac_admin mac_override sys_time sys_module
centos.common.conf also has lxc.cap.drop = sys_nice sys_pacct sys_rawio.
You don't have that.

lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## /dev/ptmx pty master
lxc.cgroup.devices.allow = c 5:2 rwm
you're missing 5:1 (console), 10:229 (fuse). Both are in common.conf.
# Setup the default mounts
lxc.mount.auto = cgroup:mixed proc:mixed sys:mixed
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none
bind,optional 0 0
As you can see this was largely pulled from centos.common.conf and
common.conf. I assume something isn't quite right since I see more
entries under /dev than I do when I'm running under libvirt, using the same
custom tarball. I'll be satisfied with this for now though as long as the
extra entries aren't causing issues.
Is there a reason why you didn't test simply using the same config, which
also does the "includes" instead of copying SOME of them? Is there a reason
wht you don't copy ALL of them? It should be easier to start with a known
good setup, then do incremental changes.
--
Fajar
Peter Steele
2015-12-04 17:23:19 UTC
Post by Peter Steele
lxc.autodev = 1
That is not common.conf (though I'm not sure whether it matters)
I included this early on when I was encountering the funky udev issue.
It didn't help, but I kept it in place, admittedly for no good reason.
Post by Peter Steele
lxc.kmsg = 0
Neither is that. Though it should be the default value
In my original tests with LXC 1.0.7 I hit an issue where systemd on my
containers was running at 100%. I did some research and found the
problem described, with the suggested fix being to add this lxc.kmsg
line. This did in fact solve the problem. I just did a test without this
though and the CPU issue did not occur, so presumably LXC 1.1.5 has
fixed this problem.
Post by Peter Steele
# Remove capabilities we don't want in containers
lxc.cap.drop = mac_admin mac_override sys_time sys_module
centos.common.conf also has lxc.cap.drop = sys_nice sys_pacct
sys_rawio. You don't have that.
I excluded this line because we need sys_nice enabled in our containers.
I wasn't sure about sys_pacct and sys_rawio and was going to do more
investigation on these later.
Post by Peter Steele
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## /dev/ptmx pty master
lxc.cgroup.devices.allow = c 5:2 rwm
you're missing 5:1 (console), 10:229 (fuse). Both are in common.conf.
There was in fact no common.conf in the 1.0.7 release I originally was
using, and the centos.common.conf did not have the console and fuse
entries. When I switched to 1.1.5 common.conf was introduced and these
device definitions were moved there. I took a quick look at these
definitions and added the fuse entry but didn't notice console had been
added as well. Thanks for noticing this.
Post by Peter Steele
Is there a reason why you didn't test simply using the same config,
which also does the "includes" instead of copying SOME of them? Is
there a reason why you don't copy ALL of them? It should be easier to
start with a known good setup, then do incremental changes.
Well, as I said we need sys_nice and so that was one reason why I didn't
want to use the config files directly. I also noticed that proc was
mounted in mixed mode and we need at least some rw access to a portion
of /proc/sys, and I thought I'd probably need to change this mixed
entry. Since all of our work is based on centos, I also didn't see the
need to include the lxc-templates rpm in my package set. Our server is
based on a minimal centos config and I try to avoid adding additional
rpms if I can avoid it.

That said, I did change my install framework this morning to include
lxc-templates and to use centos.common.conf and common.conf directly
rather than rely on my manually crafted version. This causes sys_nice to
be dropped, as I just mentioned above, and I need to solve that problem.
So, if I have this:

lxc.include = /usr/share/lxc/config/centos.common.conf

can I then add the entry

lxc.cap.keep = sys_nice

after this? Based on the description in the man page I assume this will
not just add this one capability but will instead remove everything
except this. So, what's the correct way to use common.conf and to re-add
dropped capabilities?
Serge Hallyn
2015-12-04 21:43:49 UTC
Post by Peter Steele
So, if I have this:
lxc.include = /usr/share/lxc/config/centos.common.conf
can I then add the entry
lxc.cap.keep = sys_nice
after this? Based on the description in the man page I assume this
will not just add this one capability but will instead remove
everything except this. So, what's the correct way to use
common.conf and to re-add dropped capabilities?
Sadly there's no good way to do that purely through config. You can
do it through the API by querying the current lxc.cap.drop value,
pulling sys_nice out of it, then clearing lxc.cap.drop
(set_config_item(lxc.cap.drop, "")) and re-setting it to the new
full value. But not purely through config files.
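For anyone who wants to see what that looks like in practice, here is a
minimal, untested sketch using the python3-lxc bindings (the container
name and the exact return type of get_config_item are assumptions;
adjust for your setup):

import lxc

c = lxc.Container("vm-03")  # assumed container name

# Query the accumulated lxc.cap.drop value, including entries pulled in
# via the lxc.include of common.conf / centos.common.conf.
drops = c.get_config_item("lxc.cap.drop")
if isinstance(drops, str):  # the bindings may return a string or a list
    drops = drops.split()

# Pull sys_nice out of the list of capabilities to drop.
drops = [cap for cap in drops if cap != "sys_nice"]

# Clear lxc.cap.drop, then re-set it to the new full value.
c.clear_config_item("lxc.cap.drop")
c.set_config_item("lxc.cap.drop", " ".join(drops))

# Either start the container from this object, or persist the change;
# note that save_config() writes out the expanded (merged) configuration.
c.save_config()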

Daniel P. Berrange
2015-12-02 15:30:54 UTC
Permalink
The host's drives have not been made available in the containers, and that's
intentional. These errors are all being created by the udev service of
course, and that's the ultimate cause. When I create a container under
libvirt-lxc though, the udev service is not enabled and I therefore do not
see these errors. Containers created with LXC using the same CentOS image
have the udev suite of services enabled, and even if I explicitly disable
them using
# systemctl disable systemd-udevd-kernel.socket
# systemctl disable systemd-udevd-control.socket
# systemctl disable systemd-udevd.service
# systemctl disable systemd-udev-trigger.service
when I restart the container the services are enabled and I still see these
errors. My guess is I'm missing something in the config file for my LXC
containers but I'm not sure what's needed. This appears to be further
indicated by the set of sys services that are running in my libvirt-lxc
The systemd-udevd.service file has

ConditionPathIsReadWrite=/sys

And libvirt LXC sets /sys as read-only, so if you have /sys as writable
that could explain the difference in behaviour.

The other notable thing libvirt does is drop CAP_SYS_MKNOD. Previously
systemd would look at that capability when starting some things like udev,
but it looks like these days it triggers off the /sys read-only status.
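In LXC config terms, a minimal sketch of reproducing that part of the
libvirt-lxc behaviour might be to mount /sys read-only so that the
ConditionPathIsReadWrite=/sys check fails, for example:

lxc.mount.auto = sys:ro

(common.conf may already provide an lxc.mount.auto line for proc/sys, in
which case you'd adjust that entry rather than add a duplicate).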
Is the udev service needed in LXC and if so, how do I keep it from
complaining?
No, you really don't want udev enabled or running inside containers at all.
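One way to make that stick, given that plain systemctl disable did not
survive a container restart earlier in this thread, might be to mask the
units inside the container image instead; masking points the units at
/dev/null, so socket activation cannot pull them back in:

# systemctl mask systemd-udevd.service systemd-udevd-control.socket
# systemctl mask systemd-udevd-kernel.socket systemd-udev-trigger.service

This is just a sketch; masking or removing udev in the base image is one
common approach when many containers share the same CentOS template.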

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Saint Michael
2015-12-02 16:09:56 UTC
Permalink
I could not find on Google any mention of Red Hat killing LXC on Libvirt.
Care to elaborate?
Daniel P. Berrange
2015-12-02 16:23:11 UTC
Permalink
Post by Saint Michael
I could not find on Google any mention of Red Hat killing LXC on Libvirt.
Care to elaborate?
What I presume is being referred to is that Docker is the promoted container
technology on RHEL-7, and libvirt LXC is marked tech-preview (as it was in
RHEL-6 too). This does not mean that Red Hat intend to kill LXC on libvirt;
it is solely an indication about support of the feature in the context of
the RHEL distribution. It has no bearing on Fedora or any other distro.
CentOS may choose to follow RHEL support policy or not, as they see fit,
i.e. there's nothing preventing CentOS from supporting LXC even if RHEL
does not.

While Red Hat is heavily involved in libvirt development, it does not
get to decide to kill features in libvirt. The libvirt community has
no intention to kill LXC support whatsoever. It is a supported driver
in libvirt and won't be going anywhere. What libvirt features downstream
distros choose to support is up to the respective distro maintainers.

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Peter Steele
2015-12-02 16:54:25 UTC
Permalink
Post by Saint Michael
I could not find on Google any mention of Red Hat killing LXC on
Libvirt. Care to elaborate?
Here's the first reference I came across a few months ago:
https://access.redhat.com/articles/1365153. There's no date indicated
here so I really don't know what this means, but I just did another
search to see if I could find some more information. I came across this
thread:

https://www.redhat.com/archives/libvirt-users/2015-August/msg00026.html

This was a fairly recent thread that I'd not found before. If you
read through the follow-ups, apparently libvirt-lxc is *not* being
deprecated:

https://www.redhat.com/archives/libvirt-users/2015-August/msg00030.html

So, it appears I was mistaken. I'm not sure where that leaves me though.
One issue we've had with libvirt-lxc is that although it's a great
product there seems to be very little activity with the project. My own
posts to the libvirt-lxc mailing list often go unanswered, whereas this
mailing list (by comparison) is great. Searches of more generic forums
more often than not turn up information related to LXC rather than
libvirt-lxc. The community for libvirt-lxc just doesn't seem that large.

Looks like I have some thinking to do...

Peter
Peter Steele
2015-12-02 17:04:55 UTC
Permalink
Post by Peter Steele
Post by Saint Michael
I could not find on Google any mention of Red Hat killing LXC on
Libvirt. Care to elaborate?
https://access.redhat.com/articles/1365153. There's no date indicated
here so I really don't know what this means, but I just did another
search to see if I could find some more information. I came across
https://www.redhat.com/archives/libvirt-users/2015-August/msg00026.html
This was a fairly recent thread and I'd not found this before. If you
read through the follow-ups apparently libvirt-lxc is *not* being
https://www.redhat.com/archives/libvirt-users/2015-August/msg00030.html
So, it appears I was mistaken. I'm not sure where that leaves me
though. One issue we've had with libvirt-lxc is that although it's a
great product there seems to be very little activity with the project.
My own posts to the libvirt-lxc mailing list often go unanswered,
whereas this mailing list (by comparison) is great. Posts to more
generic forums are more often than not to find information related to
LXC and not libvirt-lxc. The community for libvirt-lxc just doesn't
seem that large.
Looks like I have some thinking to do...
Actually, I guess what this means is just that Red Hat is deprecating it
in RHEL, but the libvirt-lxc project as a whole is still moving forward
for whatever distros want to include it, so we probably could still use
it as a non-Red Hat-supported package. But I think we'll continue with
our move to LXC since it does seem to have a lot of momentum going for
it. I think it's the right direction to move for the long term.
Saint Michael
2015-12-02 17:22:40 UTC
Permalink
The ideal architecture right now is Ubuntu 14.04 on the server and CentOS
7 or Fedora 20 LXC containers. I am even getting rid of VMware vSphere
altogether, since everything feels twice as fast.