Discussion:
bionic image not getting IPv4 address
Tomasz Chmielewski
2018-05-03 02:42:02 UTC
Today or yesterday, bionic image launched in LXD is not getting an IPv4
address. It is getting an IPv6 address.


I'm launching the container like this:

lxc launch images:ubuntu/bionic/amd64 bionictest



Inside the container:

44: ***@if45: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:4b:61:41 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fd42:b5d:f6dd:6b21:216:3eff:fe4b:6141/64 scope global dynamic mngtmpaddr
       valid_lft 3553sec preferred_lft 3553sec
    inet6 fe80::216:3eff:fe4b:6141/64 scope link
       valid_lft forever preferred_lft forever


I was able to reproduce it on two different LXD servers.

This used to work a few days ago.


Did anything change in bionic images recently?


Tomasz Chmielewski
https://lxadm.com
Tomasz Chmielewski
2018-05-03 02:56:34 UTC
Post by Tomasz Chmielewski
I was able to reproduce it on two different LXD servers.
This used to work a few days ago.
Also, xenial containers are getting an IPv4 address just fine.

Here is the output of "systemctl status systemd-networkd" on a bionic
container launched yesterday, with working DHCP (it's also getting IPv4
after restart etc.):

# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
   Loaded: loaded (/lib/systemd/system/systemd-networkd.service; enabled-runtime; vendor preset: enabled)
   Active: active (running) since Thu 2018-05-03 02:46:14 UTC; 6min ago
     Docs: man:systemd-networkd.service(8)
 Main PID: 45 (systemd-network)
   Status: "Processing requests..."
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/systemd-networkd.service
           └─45 /lib/systemd/systemd-networkd

May 03 02:46:14 a19ea62218-2018-05-02-11-12-12 systemd-networkd[45]: Enumeration completed
May 03 02:46:14 a19ea62218-2018-05-02-11-12-12 systemd[1]: Started Network Service.
May 03 02:46:14 a19ea62218-2018-05-02-11-12-12 systemd-networkd[45]: eth0: DHCPv4 address 10.190.0.95/24 via 10.190.0.1
May 03 02:46:14 a19ea62218-2018-05-02-11-12-12 systemd-networkd[45]: Not connected to system bus, not setting hostname.
May 03 02:46:16 a19ea62218-2018-05-02-11-12-12 systemd-networkd[45]: eth0: Gained IPv6LL
May 03 02:46:16 a19ea62218-2018-05-02-11-12-12 systemd-networkd[45]: eth0: Configured



Here is the output of "systemctl status systemd-networkd" on a bionic
container launched today - DHCPv4 is not working (I can get IPv4 there
by running "dhclient eth0" manually, but that's not how it should work):

# systemctl status systemd-networkd
● systemd-networkd.service - Network Service
   Loaded: loaded (/lib/systemd/system/systemd-networkd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-05-03 02:49:10 UTC; 3min 36s ago
     Docs: man:systemd-networkd.service(8)
 Main PID: 54 (systemd-network)
   Status: "Processing requests..."
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/systemd-networkd.service
           └─54 /lib/systemd/systemd-networkd

May 03 02:49:10 tomasztest systemd[1]: Starting Network Service...
May 03 02:49:10 tomasztest systemd-networkd[54]: Enumeration completed
May 03 02:49:10 tomasztest systemd[1]: Started Network Service.
May 03 02:49:11 tomasztest systemd-networkd[54]: eth0: Gained IPv6LL



Tomasz Chmielewski
https://lxadm.com
Mark Constable
2018-05-03 02:58:32 UTC
Post by Tomasz Chmielewski
Today or yesterday, bionic image launched in LXD is not getting an IPv4
address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step by step howto example of how to setup a network for v3.0 but in my
case I use a bridge on my host and create a new profile that includes...

lxc network attach-profile lxdbr0 [profile name] eth0

then when I manually launch a container I use something like...

lxc launch images:ubuntu-core/16 uc1 -p [profile name]
Tomasz Chmielewski
2018-05-03 03:09:40 UTC
Post by Mark Constable
Post by Tomasz Chmielewski
Today or yesterday, bionic image launched in LXD is not getting an IPv4
address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step by step howto example of how to setup a network for v3.0 but in my
case I use a bridge on my host and create a new profile that
includes...
lxc network attach-profile lxdbr0 [profile name] eth0
then when I manually launch a container I use something like...
lxc launch images:ubuntu-core/16 uc1 -p [profile name]
The bionic container is attached to a bridge with IPv4 networking.

Besides, xenial container is getting IPv4 address just fine, while
bionic is not.


The issue is not LXD 3.0 specific - I'm able to reproduce this on
servers with LXD 2.21 and LXD 3.0.



Tomasz Chmielewski
https://lxadm.com
Tomasz Chmielewski
2018-05-03 03:58:44 UTC
Post by Tomasz Chmielewski
Post by Mark Constable
Post by Tomasz Chmielewski
Today or yesterday, bionic image launched in LXD is not getting an IPv4
address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step by step howto example of how to setup a network for v3.0 but in my
case I use a bridge on my host and create a new profile that
includes...
lxc network attach-profile lxdbr0 [profile name] eth0
then when I manually launch a container I use something like...
lxc launch images:ubuntu-core/16 uc1 -p [profile name]
The bionic container is attached to a bridge with IPv4 networking.
Besides, xenial container is getting IPv4 address just fine, while
bionic is not.
The issue is not LXD 3.0 specific - I'm able to reproduce this on
servers with LXD 2.21 and LXD 3.0.
I'm able to reproduce this issue with these LXD servers:

- Ubuntu 16.04 with LXD 2.21 from deb
- Ubuntu 18.04 with LXD 3.0.0 from deb
- Ubuntu 16.04 with LXD 3.0.0 from snap



Reproducing is easy:

# lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp


Then wait a few secs until it starts - "lxc list" will show it has an IPv6
address (if your bridge was configured to provide IPv6), but not an IPv4
one (you can confirm by doing "lxc shell", too):

# lxc list


On the other hand, this works fine with xenial, and "lxc list" will show
this container is getting an IPv4 address:

# lxc launch images:ubuntu/xenial/amd64 xenial-working-dhcp


Tomasz Chmielewski
https://lxadm.com
Kees Bos
2018-05-03 06:09:16 UTC
Post by Tomasz Chmielewski
# lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
Then wait a few secs until it starts - "lxc list" will show it has IPv6
address (if your bridge was configured to provide IPv6), but not IPv4
# lxc list
I can confirm this. Seeing the same issue.
Mark Constable
2018-05-03 06:17:44 UTC
Post by Kees Bos
Post by Tomasz Chmielewski
# lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
Then wait a few secs until it starts - "lxc list" will show it has
IPv6 address (if your bridge was configured to provide IPv6), but
I can confirm this. Seeing the same issue.
Works as I would expect for me because I am using a profile that has a
network attached... i.e. it's not a problem with the bionic image.

mbox ~ lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp -p medium
Creating bionic-broken-dhcp
Starting bionic-broken-dhcp
mbox ~ lx
+--------------------+---------+----------------------+
| NAME | STATE | IPV4 |
+--------------------+---------+----------------------+
| bionic-broken-dhcp | RUNNING | 192.168.0.129 (eth0) |
+--------------------+---------+----------------------+


mbox ~ lxc profile show medium
config:
  limits.cpu: "2"
  limits.memory: 500MB
description: ""
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd-pool
    size: 5000MB
    type: disk
name: medium
used_by:
- /1.0/containers/bionic-broken-dhcp
- /1.0/containers/c2
- /1.0/containers/uc1
Kees Bos
2018-05-03 06:28:47 UTC
Post by Kees Bos
Post by Tomasz Chmielewski
# lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
Then wait a few secs until it starts - "lxc list" will show it has IPv6
address (if your bridge was configured to provide IPv6), but not IPv4
# lxc list
I can confirm this. Seeing the same issue.
BTW. It's the /etc/netplan/10-lxc.yaml

Not working (current) version:
network:
  ethernets:
    eth0: {dhcp4: true}
version: 2


Working version (for me):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
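A container already created from the broken image can be repaired in place by re-indenting the stray key. A sketch on a local copy (the /tmp path is illustrative; inside a container you would edit /etc/netplan/10-lxc.yaml and then run "netplan apply" or restart the container):

```shell
# Local copy of the broken file (path illustrative).
cat > /tmp/10-lxc.yaml <<'EOF'
network:
  ethernets:
    eth0: {dhcp4: true}
version: 2
EOF

# Re-indent the top-level 'version:' line so it becomes a child of 'network:'.
sed -i 's/^version:/  version:/' /tmp/10-lxc.yaml

cat /tmp/10-lxc.yaml
```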
Fajar A. Nugraha
2018-05-03 12:57:13 UTC
Post by Kees Bos
Post by Kees Bos
Post by Tomasz Chmielewski
# lxc launch images:ubuntu/bionic/amd64 bionic-broken-dhcp
Then wait a few secs until it starts - "lxc list" will show it has IPv6
address (if your bridge was configured to provide IPv6), but not IPv4
# lxc list
I can confirm this. Seeing the same issue.
BTW. It's the /etc/netplan/10-lxc.yaml
eth0: {dhcp4: true}
version: 2
version: 2
dhcp4: true
Works for me. Both with images:ubuntu/bionic (which has
/etc/netplan/10-lxc.yaml, identical to your 'not working' one) and
ubuntu:bionic (which has /etc/netplan/50-cloud-init.yaml).

Then again the images:ubuntu/bionic one has '20180503_11:06' in its
description, so it's possible that the bug was fixed recently.
--
Fajar
Tomasz Chmielewski
2018-05-03 13:03:13 UTC
Post by Fajar A. Nugraha
Post by Kees Bos
Post by Kees Bos
I can confirm this. Seeing the same issue.
BTW. It's the /etc/netplan/10-lxc.yaml
eth0: {dhcp4: true}
version: 2
version: 2
dhcp4: true
Works for me. Both with images:ubuntu/bionic (which has
/etc/netplan/10-lxc.yaml, identical to your 'not working' one) and
ubuntu:bionic (which has /etc/netplan/50-cloud-init.yaml).
Then again the images:ubuntu/bionic one has '20180503_11:06' in its
description, so it's possible that the bug was fixed recently.
Indeed, the bug seems now fixed in the bionic image and new containers
are getting IPv4 via DHCP again:

| | 88a22ac497ad | no | Ubuntu bionic amd64 (20180503_03:49) | x86_64 | 104.71MB | May 3, 2018 at 8:51am (UTC) |



This one was producing broken /etc/netplan/10-lxc.yaml:

| | 87b5c0fec8ff | no | Ubuntu bionic amd64 (20180502_09:49) | x86_64 | 118.15MB | May 3, 2018 at 2:39am (UTC) |


Tomasz Chmielewski
https://lxadm.com
Tomasz Chmielewski
2018-05-03 12:57:12 UTC
Post by Kees Bos
Post by Kees Bos
I can confirm this. Seeing the same issue.
BTW. It's the /etc/netplan/10-lxc.yaml
eth0: {dhcp4: true}
version: 2
version: 2
dhcp4: true
Indeed, I can confirm it's some netplan-related issue with
/etc/netplan/10-lxc.yaml.

Working version for bionic containers set up before 2018-May-02:

network:
  ethernets:
    eth0: {dhcp4: true}
  version: 2



Broken version for bionic containers set up after 2018-May-02:

network:
  ethernets:
    eth0: {dhcp4: true}
version: 2


Please note that the broken one has no indentation (two spaces) before
"version: 2"; this is the only difference, and it is what breaks DHCPv4.
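Since the two files differ only in whether "version: 2" is indented under "network:", the breakage can be shown mechanically without a YAML parser. A minimal sketch (the /tmp file names are illustrative, not the real container paths):

```shell
# Recreate the two variants described above.
cat > /tmp/10-lxc-working.yaml <<'EOF'
network:
  ethernets:
    eth0: {dhcp4: true}
  version: 2
EOF

cat > /tmp/10-lxc-broken.yaml <<'EOF'
network:
  ethernets:
    eth0: {dhcp4: true}
version: 2
EOF

# A 'version:' line starting in column 0 is a top-level YAML key, i.e. a
# sibling of 'network:' rather than a child of it -- netplan then sees no
# 'version' inside the 'network' mapping.
check() {
  if grep -qE '^version:' "$1"; then
    echo "broken: version is a sibling of network"
  else
    echo "ok: version is nested under network"
  fi
}

check /tmp/10-lxc-working.yaml   # ok: version is nested under network
check /tmp/10-lxc-broken.yaml    # broken: version is a sibling of network
```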


What's responsible for this?


Tomasz Chmielewski
https://lxadm.com
Fajar A. Nugraha
2018-05-03 13:01:17 UTC
Post by Tomasz Chmielewski
Indeed, I can confirm it's some netplan-related issue with
/etc/netplan/10-lxc.yaml.
eth0: {dhcp4: true}
version: 2
eth0: {dhcp4: true}
version: 2
Please note that the broken one has no indentation (two spaces) before
"version: 2", this is the only thing that differs and which breaks DHCPv4.
Ah, sorry, I was not thorough enough when comparing my resulting
/etc/netplan/10-lxc.yaml. It looks like this now:

# cat /etc/netplan/10-lxc.yaml
network:
  version: 2
  ethernets:
    eth0: {dhcp4: true}

So the new image update apparently fixed the bug.
--
Fajar
David Favor
2018-05-03 13:13:45 UTC
Post by Fajar A. Nugraha
Post by Tomasz Chmielewski
Indeed, I can confirm it's some netplan-related issue with
/etc/netplan/10-lxc.yaml.
eth0: {dhcp4: true}
version: 2
eth0: {dhcp4: true}
version: 2
Please note that the broken one has no indentation (two spaces) before
"version: 2", this is the only thing that differs and which breaks DHCPv4.
Ah, sorry, I was not thorough enough when comparing my resulting
# cat /etc/netplan/10-lxc.yaml
version: 2
eth0: {dhcp4: true}
So the new image update apparently fixed the bug.
This must be some custom Netplan setup.

This file is best generated via cloud-init or subtle trouble will likely ensue.

Default cloud-init generated file is...

lxd: net10-template-ubuntu-bionic-lamp # cat 50-cloud-init.yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance reboot.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

Which is identical to your file; it's just that using a custom
/etc/netplan/10-lxc.yaml may conflict with future cloud-init updates.

If you do use a custom file, be sure to follow the instructions in
50-cloud-init.yaml to stop cloud-init from generating its own file, which is
where conflicts may arise in the future.
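Following the instructions quoted in the file header above, disabling cloud-init's network configuration comes down to writing a one-line marker file. A sketch (written under /tmp here only so it is runnable outside a container; the real destination is /etc/cloud/cloud.cfg.d/):

```shell
# Create the marker file that tells cloud-init not to generate network config.
mkdir -p /tmp/cloud.cfg.d
cat > /tmp/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF

cat /tmp/cloud.cfg.d/99-disable-network-config.cfg   # network: {config: disabled}
```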
David Favor
2018-05-03 03:14:21 UTC
Post by Mark Constable
Post by Tomasz Chmielewski
Today or yesterday, bionic image launched in LXD is not getting an IPv4
address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step by step howto example of how to setup a network for v3.0 but in my
case I use a bridge on my host and create a new profile that includes...
lxc network attach-profile lxdbr0 [profile name] eth0
then when I manually launch a container I use something like...
lxc launch images:ubuntu-core/16 uc1 -p [profile name]
Be aware there is a bug in Bionic packaging, so if you upgrade
machine level OS from any previous OS version to Bionic, LXD
networking becomes broken... so badly... no Ubuntu or LXD developer
has figured out a fix.

To avoid this, move all containers off the machine... via...

lxc stop
lxc copy local:cname offsite:cname

Then do a fresh Bionic install at machine level. Then install
LXD via SNAP (which is the only LXD install option on Bionic).

Once done, you're good to go... Just ensure...

1) You've setup routes for all your IP ranges to lxcbr0.

2) You've added your IPV4 address to one of...

/etc/netplan/*
/etc/network/interfaces

Very simple.
Tomasz Chmielewski
2018-05-03 03:53:07 UTC
Post by David Favor
Post by Mark Constable
Post by Tomasz Chmielewski
Today or yesterday, bionic image launched in LXD is not getting an
IPv4 address. It is getting an IPv6 address.
If you do a "lxc profile show default" you will probably find it doesn't
have an IPv4 network attached by default. I haven't yet found a simple
step by step howto example of how to setup a network for v3.0 but in my
case I use a bridge on my host and create a new profile that
includes...
lxc network attach-profile lxdbr0 [profile name] eth0
then when I manually launch a container I use something like...
lxc launch images:ubuntu-core/16 uc1 -p [profile name]
Be aware there is a bug in Bionic packaging, so if you upgrade
machine level OS from any previous OS version to Bionic, LXD
networking becomes broken... so badly... no Ubuntu or LXD developer
has figured out a fix.
To avoid this, move all containers off the machine... via...
lxc stop
lxc copy local:cname offsite:cname
Then do a fresh Bionic install at machine level. Then install
LXD via SNAP (which is only LXD install option on Bionic).
Once done, you're good to go... Just ensure...
I'm having an issue with *new* (2018-May-02 onwards) bionic containers
not getting IPv4 addresses.
Bionic containers created before 2018-May-02 are getting IPv4 just fine.

Here is how I launch *new* bionic containers:

lxc launch images:ubuntu/bionic/amd64 bionictest

Just try it yourself and see if this container is getting an IPv4
address.
Post by David Favor
1) You've setup routes for all your IP ranges to lxcbr0.
All routes are fine.
Post by David Favor
2) You've added your IPV4 address to one of...
/etc/netplan/*
/etc/network/interfaces
I'm talking about DHCP, not a static IP address.


Tomasz Chmielewski
https://lxadm.com
Fajar A. Nugraha
2018-05-03 12:49:22 UTC
Post by David Favor
Be aware there is a bug in Bionic packaging, so if you upgrade
machine level OS from any previous OS version to Bionic, LXD
networking becomes broken... so badly... no Ubuntu or LXD developer
has figured out a fix.
Wait, what?

I've upgraded three physical machines (and a custom AWS AMI) from
16.04 (somewhat minimal install, with lxd) to 18.04. All have lxdbr0
working fine. Of course that also means I don't have netplan installed
(since 16.04 doesn't have it, and the upgrade process doesn't install
it), which is perfect for me. I like old fashioned
/etc/network/interfaces.d/*.cfg.


Not sure about 17.04/17.10 to 18.04 though.
Post by David Favor
LXD via SNAP (which is only LXD install option on Bionic).
Not true. It's not the ONLY option.

# apt policy lxd
lxd:
Installed: 3.0.0-0ubuntu4
Candidate: 3.0.0-0ubuntu4
Version table:
*** 3.0.0-0ubuntu4 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
--
Fajar
David Favor
2018-05-03 13:09:44 UTC
Post by Fajar A. Nugraha
Post by David Favor
Be aware there is a bug in Bionic packaging, so if you upgrade
machine level OS from any previous OS version to Bionic, LXD
networking becomes broken... so badly... no Ubuntu or LXD developer
has figured out a fix.
Wait, what?
I've upgraded three physical machines (and a custom AWS AMI) from
16.04 (somewhat minimal install, with lxd) to 18.04. All have lxdbr0
working fine. Of course that also means I don't have netplan installed
(since 16.04 doesn't have it, and the upgrade process doesn't install
it), which is perfect for me. I like old fashioned
/etc/network/interfaces.d/*.cfg.
This is tricky... Netplan's forced adoption is similar to systemd's... No one
likes systemd + it works abysmally + it was crammed down everyone's
throat.

It appears Netplan will be the same.

Eventually some update will likely wipe out old networking + force upgrade
to Netplan.

To avoid the side effects, likely best to just stop/move all your containers
to a new machine. Then do a fresh Bionic install. Since Bionic is LTS, you
can run Bionic for 5 years.

After you have your fresh Bionic install working, then just move all
your containers back.

Note: Be sure you read text of this bug before starting this process:

https://github.com/lxc/lxd/issues/4522

Which includes a fix for maintaining correct uid/gid mapping
when moving containers between machines.

In short, you must actually start/stop containers on all machines
where containers are moved, else the uid/gid mappings get lost.

This might not apply if you've done a complete remove of APT LXD + then
done a fresh install of LXD via SNAP...

On both machines.
Post by Fajar A. Nugraha
Not sure about 17.04/17.10 to 18.04 though.
Post by David Favor
LXD via SNAP (which is only LXD install option on Bionic).
Not true. It's not the ONLY option.
# apt policy lxd
Installed: 3.0.0-0ubuntu4
Candidate: 3.0.0-0ubuntu4
*** 3.0.0-0ubuntu4 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
Currently APT packages are being maintained for backwards compatibility.

And be aware: the APT packages no longer receive updates, so for example
the patches produced this week fixing many LXD bugs will only be available
to you if you switch to SNAP.

LXD 3.0 initial (no patches) is the last APT supported LXD release.

This is covered somewhere on the LXD site.

I host 100s of high traffic, high speed, WordPress sites, so having all
LXD updates (bug fixes) is essential.

If updates aren't essential for you, running LXD which will never update
might be acceptable.
Fajar A. Nugraha
2018-05-03 13:40:00 UTC
Post by David Favor
This is tricky... Netplan forced abuse is similar to systemd... No one
likes systemd + it works abysmally + it was crammed down everyone's
throat.
It appears Netplan will be the same.
Eventually some update will likely wipe out old networking + force upgrade
to Netplan.
From what I can tell so far, netplan is similar to network-manager, in
the sense that both can manage network, and both can be uninstalled
just fine (obviously with some functionality loss, but perfectly fine
for minimal server install running zfs + lxd). It was that way in
16.04 (the network-manager part, that is), and it's that way currently
in 18.04.

I find it hard to see ubuntu breaking that functionality on LTS
release. On the next releases, perhaps.

Of course, if you have a reference that says otherwise, do share the link.
Post by David Favor
Post by Fajar A. Nugraha
Post by David Favor
LXD via SNAP (which is only LXD install option on Bionic).
Not true. It's not the ONLY option.
# apt policy lxd
Installed: 3.0.0-0ubuntu4
Candidate: 3.0.0-0ubuntu4
*** 3.0.0-0ubuntu4 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
Currently APT packages are being maintained for backwards compatibility.
And be aware. The APT packages no longer receive updates, so for example
the patches produced this week fixing many LXD bugs will only be available
to you, if you switch to SNAP.
LXD 3.0 initial (no patches) is the last APT supported LXD release.
This is covered somewhere on the LXD site.
Is there a link?

I know of the PPA deprecation (not ubuntu official repository, but the
ppa), i.e. https://www.mail-archive.com/lxc-***@lists.linuxcontainers.org/msg07938.html

https://linuxcontainers.org/lxd/getting-started-cli/ says apt with official repo
https://help.ubuntu.com/lts/serverguide/lxd.html also says apt
(although to be fair, the page hierarchy starts with 'ubuntu 18.04',
but the page content still has 16.04)
--
Fajar