Discussion:
[lxc-users] LXD cluster is unresponsive: all lxc-related commands hang
Andriy Tovstik
2018-08-17 10:20:42 UTC
Hi, all!

Some time ago I installed a two-node LXD cluster. Today I logged in to a
node and tried to execute

lxc exec container -- bash

but the command hung.
All lxc commands are unresponsive: I'm not able to interact with my
cluster or my containers.
I tried restarting snap.lxd.daemon, but it didn't help. The journalctl -u
snap.lxd.daemon output is attached.

Any suggestion?
--
WBR, Andriy Tovstik
Stéphane Graber
2018-08-17 14:46:35 UTC
Are both nodes running the same snap revision according to `snap list`?

LXD cluster nodes must all run the exact same version, otherwise they
effectively wait until this becomes the case before they start replying
to API queries.
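A quick way to compare revisions mechanically is to parse the Rev column
out of `snap list lxd` on each node. A minimal sketch, using a hard-coded
sample in the format `snap list` prints (on real nodes you would pipe the
live command, e.g. over ssh, instead):

```shell
# Parse the Rev column from `snap list lxd` output.
# The sample is hard-coded here so the parsing is easy to see.
sample='Name  Version  Rev   Tracking  Publisher  Notes
lxd   3.4      8297  stable    canonical  -'

# Rev is the third whitespace-separated field on the data line.
rev=$(printf '%s\n' "$sample" | awk 'NR==2 {print $3}')
echo "LXD snap revision: $rev"

# If the revision printed on each node differs, the cluster blocks
# API queries until all nodes converge on the same revision.
```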
--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
Andriy Tovstik
2018-08-17 14:56:24 UTC
Hi!
snap output:

$ snap list lxd
Name  Version  Rev   Tracking  Publisher  Notes
lxd   3.4      8297  stable    canonical  -

The same on both nodes.
_______________________________________________
lxc-users mailing list
http://lists.linuxcontainers.org/listinfo/lxc-users
Stéphane Graber
2018-08-17 15:02:59 UTC
Thanks. Can you provide the following from both nodes:
- ps fauxww
- cat /var/snap/lxd/common/lxd/logs/lxd.log

And can you try running "lxc cluster list" to see if that gets stuck too?
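For reference, that information can be gathered into one directory before
restarting anything, so nothing is lost if the daemon is bounced. A sketch
(the log path is the snap default mentioned above):

```shell
# Collect diagnostics into a temporary directory.
outdir=$(mktemp -d)

# Full process tree, wide output.
ps fauxww > "$outdir/ps.txt"

# The lxd.log location under the snap; tolerate it being absent or empty.
log=/var/snap/lxd/common/lxd/logs/lxd.log
if [ -f "$log" ]; then
    cp "$log" "$outdir/lxd.log"
else
    echo "no lxd.log found at $log" > "$outdir/lxd.log"
fi

echo "diagnostics saved in $outdir"
```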
Andriy Tovstik
2018-08-17 19:46:29 UTC
Hi!

Both lxd.log files are empty :(

ps output is attached.
Stéphane Graber
2018-08-18 14:50:24 UTC
Hi,

Your logs show multiple LXD processes running at the same time.
The latest revision of the stable snap (8393) includes a fix which
detects that situation and cleans things up.

Stéphane
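That symptom can be checked for directly. A small sketch that counts
daemon processes named exactly "lxd" (pgrep -x matches the exact process
name, -c prints a count):

```shell
# Count processes whose name is exactly "lxd". More than one daemon
# usually means a stale process survived a snap refresh.
count=$(pgrep -cx lxd || true)
count=${count:-0}
echo "lxd processes: $count"

if [ "$count" -gt 1 ]; then
    echo "multiple lxd daemons detected; a stale daemon may be holding the cluster"
fi
```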
Andriy Tovstik
2018-08-20 09:30:39 UTC
Hi!

I stopped lxd via systemctl, then ran 'killall lxd' and started lxd again.
That fixed the problem.
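For anyone hitting the same hang, the sequence that worked here can be
sketched as a small function. The DRY_RUN guard is an addition of mine so
the commands can be previewed before running them for real:

```shell
# Recovery sequence from this thread: stop the unit, kill any stale
# lxd processes, then start the unit again. Set DRY_RUN=1 to print
# the commands instead of executing them.
recover_lxd() {
    run() {
        if [ -n "$DRY_RUN" ]; then
            echo "+ $*"
        else
            "$@"
        fi
    }
    run systemctl stop snap.lxd.daemon
    run killall lxd
    run systemctl start snap.lxd.daemon
}

# Preview the commands without touching the system.
DRY_RUN=1 recover_lxd
```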
ronkaluta
2018-08-17 15:26:08 UTC
I just had roughly the same problem.

The way I cured it was:

snap refresh
sudo apt update
sudo apt upgrade

The current lxd snap is 3.4 and the current kernel is
linux-image-4.15.0-32-generic.

I then rebooted.

(same procedure on all machines)
Stéphane Graber
2018-08-17 15:56:25 UTC
Apt upgrades shouldn't really be needed, though it's certainly good to
make sure the rest of the system stays up to date :)

The most important part is ensuring that all cluster nodes run the same
version of LXD (3.4 in this case); once they all do, the cluster should
allow queries again.

This upgrade procedure isn't so great and we're well aware of it.
I'll open a GitHub issue to track some improvements we should be making
to make such upgrades much more seamless.
ronkaluta
2018-08-17 15:58:35 UTC
I tried snap refresh alone, but that did not fix the problem.

Could it be something involving the kernel update
(linux-image-4.15.0-32)?
Stéphane Graber
2018-08-17 16:07:21 UTC
That's somewhat unlikely, though it's hard to tell post-reboot.

It could be that something upset the kernel (zfs has been known to do
that sometimes) and LXD couldn't be killed anymore because it was stuck
in the kernel.

If this happens to you again, try recording the following before rebooting:
- journalctl -u snap.lxd.daemon -n300
- dmesg
- ps fauxww

This will usually be enough to determine what was at fault.
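The three items above can be captured in one go. A sketch that tolerates
individual commands failing (dmesg, for instance, may need root), so a
partial capture still succeeds:

```shell
# Save the requested outputs before rebooting. Each command writes to
# its own file; failures (e.g. permission errors) are recorded in the
# file rather than aborting the script.
outdir="lxd-debug-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$outdir"

journalctl -u snap.lxd.daemon -n300 > "$outdir/journal.txt" 2>&1 || true
dmesg > "$outdir/dmesg.txt" 2>&1 || true
ps fauxww > "$outdir/ps.txt" 2>&1 || true

echo "saved to $outdir"
```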

Stéphane
Free Ekanayaka
2018-08-17 14:51:40 UTC
Hello,

Does "ps aux | grep lxd" show more than one lxd process running (on
either of the two nodes)?
-- Logs begin at Fri 2018-07-20 11:50:12 CEST, end at Fri 2018-08-17 12:18:41 CEST. --
Jul 20 13:52:52 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-20T11:52:52+0000
Jul 23 11:53:53 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event: exiting." t=2018-07-23T09:53:53+0000
Jul 23 11:53:53 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-23T09:53:53+0000
Jul 23 21:27:02 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-23T19:27:02+0000
Jul 27 22:16:35 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-27T20:16:35+0000
Jul 28 16:27:20 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-28T14:27:20+0000
Jul 28 16:27:40 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-28T14:27:40+0000
Jul 28 16:55:53 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Heartbeat timeout from \"10.0.0.20:8443\" reached, starting election" t=2018-07-28T14:55:53+0000
Jul 28 16:55:59 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:55:59+0000
Jul 28 16:56:00 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:56:00+0000
Jul 28 16:56:04 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:04+0000
Jul 28 16:56:08 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:08+0000
Jul 28 16:56:12 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:12+0000
Jul 28 16:56:14 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:56:14+0000
Jul 28 16:56:15 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:15+0000
Jul 28 16:56:21 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:21+0000
Jul 28 16:56:25 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:25+0000
Jul 28 16:56:27 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:56:27+0000
Jul 28 16:56:28 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:28+0000
Jul 28 16:56:34 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:34+0000
Jul 28 16:56:38 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:38+0000
Jul 28 16:56:39 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:56:39+0000
Jul 28 16:56:42 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:42+0000
Jul 28 16:56:45 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:45+0000
Jul 28 16:56:50 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:50+0000
Jul 28 16:56:51 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:56:51+0000
Jul 28 16:56:53 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:53+0000
Jul 28 16:56:57 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:56:57+0000
Jul 28 16:57:03 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:03+0000
Jul 28 16:57:03 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:57:03+0000
Jul 28 16:57:08 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:08+0000
Jul 28 16:57:12 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:12+0000
Jul 28 16:57:15 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:57:15+0000
Jul 28 16:57:17 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:17+0000
Jul 28 16:57:22 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:22+0000
Jul 28 16:57:25 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:25+0000
Jul 28 16:57:27 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Failed to get current cluster nodes: failed to begin transaction: gRPC grpcConnection failed: context deadline exceeded" t=2018-07-28T14:57:27+0000
Jul 28 16:57:31 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:31+0000
Jul 28 16:57:35 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-28T14:57:35+0000
Jul 29 11:10:52 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-29T09:10:52+0000
Jul 29 23:32:05 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-29T21:32:05+0000
Jul 30 17:00:45 jobpulz1 systemd[1]: Stopping Service for snap application lxd.daemon...
Jul 30 17:00:45 jobpulz1 lxd.daemon[33557]: => Stop reason is: snap refresh
Jul 30 17:00:45 jobpulz1 lxd.daemon[33557]: => Stopping LXD
Jul 30 17:00:46 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Unable to get address for server id 1, using fallback address 0: failed to begin transaction: sql: database is closed" t=2018-07-30T15:00:46+0000
Jul 30 17:00:46 jobpulz1 lxd.daemon[2074]: lvl=warn msg="Raft: Unable to get address for server id 2, using fallback address 10.0.0.20:8443: failed to begin transaction: sql: database is closed" t=2018-07-30T15:00:46+0000
Jul 30 17:00:46 jobpulz1 lxd.daemon[2074]: => LXD exited cleanly
Jul 30 17:00:46 jobpulz1 systemd[1]: Stopped Service for snap application lxd.daemon.
Jul 30 17:00:48 jobpulz1 systemd[1]: Started Service for snap application lxd.daemon.
Jul 30 17:00:48 jobpulz1 lxd.daemon[33779]: => Preparing the system
Jul 30 17:00:48 jobpulz1 lxd.daemon[33779]: ==> Loading snap configuration
Jul 30 17:00:48 jobpulz1 lxd.daemon[33779]: ==> Setting up mntns symlink (mnt:[4026532566])
Jul 30 17:00:48 jobpulz1 lxd.daemon[33779]: ==> Setting up kmod wrapper
Jul 30 17:00:48 jobpulz1 lxd.daemon[33779]: ==> Preparing /boot
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Preparing a clean copy of /run
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Preparing a clean copy of /etc
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Creating "lxd" user
Jul 30 17:00:49 jobpulz1 useradd[33836]: new user: name=lxd, UID=999, GID=100, home=/var/snap/lxd/common/lxd, shell=/bin/false
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Setting up ceph configuration
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Setting up LVM configuration
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Rotating logs
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Setting up ZFS (0.6)
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Escaping the systemd cgroups
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: ==> Escaping the systemd process resource limits
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: => Re-using existing LXCFS
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: => Starting LXD
Jul 30 17:00:49 jobpulz1 lxd.daemon[33779]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-07-30T15:00:49+0000
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: mount namespace: 7
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 0: fd: 8: pids
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 1: fd: 9: hugetlb
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 2: fd: 10: blkio
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 3: fd: 11: freezer
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 4: fd: 12: memory
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 5: fd: 13: devices
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 6: fd: 14: perf_event
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 7: fd: 15: cpu,cpuacct
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 8: fd: 16: cpuset
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 9: fd: 17: net_cls,net_prio
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: 10: fd: 18: name=systemd
Jul 30 17:00:50 jobpulz1 lxd.daemon[2074]: lxcfs.c: 105: do_reload: lxcfs: reloaded
Jul 30 17:00:53 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Raft: Heartbeat timeout from \"\" reached, starting election" t=2018-07-30T15:00:53+0000
Jul 30 17:00:54 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Raft: Skipping application of old log: 854705" t=2018-07-30T15:00:54+0000
Jul 30 22:52:39 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Raft: Heartbeat timeout from \"10.0.0.20:8443\" reached, starting election" t=2018-07-30T20:52:39+0000
Jul 30 22:52:42 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Raft: Election timeout reached, restarting election" t=2018-07-30T20:52:42+0000
Jul 30 22:52:44 jobpulz1 lxd.daemon[33779]: => LXD is ready
Jul 31 17:26:47 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-31T15:26:47+0000
Jul 31 17:26:51 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-31T15:26:51+0000
Jul 31 17:35:52 jobpulz1 lxd.daemon[33779]: lvl=warn msg="Detected poll(POLLNVAL) event." t=2018-07-31T15:35:52+0000
Aug 17 00:28:54 jobpulz1 systemd[1]: Stopping Service for snap application lxd.daemon...
Aug 17 00:28:55 jobpulz1 lxd.daemon[11079]: => Stop reason is: snap refresh
Aug 17 00:28:55 jobpulz1 lxd.daemon[11079]: => Stopping LXD
Aug 17 00:28:55 jobpulz1 lxd.daemon[33779]: => LXD exited cleanly
Aug 17 00:28:56 jobpulz1 systemd[1]: Stopped Service for snap application lxd.daemon.
Aug 17 00:28:58 jobpulz1 systemd[1]: Started Service for snap application lxd.daemon.
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: => Preparing the system
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Loading snap configuration
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Setting up mntns symlink (mnt:[4026533163])
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Setting up persistent shmounts path
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ====> Making LXD shmounts use the persistent path
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ====> Making LXCFS use the persistent path
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Setting up kmod wrapper
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Preparing /boot
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Preparing a clean copy of /run
Aug 17 00:28:58 jobpulz1 lxd.daemon[11330]: ==> Preparing a clean copy of /etc
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Setting up ceph configuration
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Setting up LVM configuration
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Rotating logs
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Setting up ZFS (0.6)
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Escaping the systemd cgroups
Aug 17 00:28:59 jobpulz1 lxd.daemon[11330]: ==> Escaping the systemd process resource limits
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: => Starting LXCFS
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: => Starting LXD
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: mount namespace: 5
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 0: fd: 6: pids
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 1: fd: 7: hugetlb
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 2: fd: 8: blkio
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 3: fd: 9: freezer
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 4: fd: 10: memory
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 5: fd: 11: devices
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 6: fd: 12: perf_event
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 7: fd: 13: cpu,cpuacct
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 8: fd: 14: cpuset
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 9: fd: 15: net_cls,net_prio
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: 10: fd: 16: name=systemd
Aug 17 00:29:00 jobpulz1 lxd.daemon[11330]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-17T00:29:00+0200
Aug 17 00:29:01 jobpulz1 lxd.daemon[11330]: lvl=warn msg="Raft: Skipping application of old log: 1224295" t=2018-08-17T00:29:01+0200
Aug 17 11:48:10 jobpulz1 systemd[1]: Reloading Service for snap application lxd.daemon.
Aug 17 11:48:10 jobpulz1 systemd[1]: Reloaded Service for snap application lxd.daemon.
Aug 17 11:48:11 jobpulz1 lxd.daemon[11330]: => LXD failed to start
Aug 17 11:48:11 jobpulz1 systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=137/n/a
Aug 17 11:48:11 jobpulz1 lxd.daemon[121885]: => Stop reason is: reload
Aug 17 11:48:11 jobpulz1 systemd[1]: snap.lxd.daemon.service: Unit entered failed state.
Aug 17 11:48:11 jobpulz1 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Aug 17 11:48:11 jobpulz1 systemd[1]: snap.lxd.daemon.service: Service hold-off time over, scheduling restart.
Aug 17 11:48:11 jobpulz1 systemd[1]: Stopped Service for snap application lxd.daemon.
Aug 17 11:48:11 jobpulz1 systemd[1]: Started Service for snap application lxd.daemon.
Aug 17 11:48:11 jobpulz1 lxd.daemon[121919]: => Preparing the system
Aug 17 11:48:11 jobpulz1 lxd.daemon[121919]: ==> Loading snap configuration
Aug 17 11:48:11 jobpulz1 lxd.daemon[121919]: ==> Setting up mntns symlink (mnt:[4026533163])
Aug 17 11:48:11 jobpulz1 lxd.daemon[121919]: ==> Setting up kmod wrapper
Aug 17 11:48:11 jobpulz1 lxd.daemon[121919]: ==> Preparing /boot
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Preparing a clean copy of /run
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Preparing a clean copy of /etc
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Setting up ceph configuration
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Setting up LVM configuration
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Rotating logs
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Setting up ZFS (0.6)
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Escaping the systemd cgroups
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: ==> Escaping the systemd process resource limits
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: => Re-using existing LXCFS
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: => Starting LXD
Aug 17 11:48:12 jobpulz1 lxd.daemon[121919]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-08-17T11:48:12+0200
Aug 17 11:48:55 jobpulz1 systemd[1]: Stopping Service for snap application lxd.daemon...
Aug 17 11:58:55 jobpulz1 systemd[1]: snap.lxd.daemon.service: Stopping timed out. Terminating.
Aug 17 11:58:55 jobpulz1 systemd[1]: snap.lxd.daemon.service: Unit entered failed state.
Aug 17 11:58:55 jobpulz1 systemd[1]: snap.lxd.daemon.service: Failed with result 'timeout'.
Aug 17 11:58:55 jobpulz1 systemd[1]: Started Service for snap application lxd.daemon.
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: => Preparing the system
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Loading snap configuration
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Setting up mntns symlink (mnt:[4026533163])
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Setting up kmod wrapper
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Preparing /boot
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Preparing a clean copy of /run
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Preparing a clean copy of /etc
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Setting up ceph configuration
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Setting up LVM configuration
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Rotating logs
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Setting up ZFS (0.6)
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Escaping the systemd cgroups
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: ==> Escaping the systemd process resource limits
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: mount namespace: 7
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 0: fd: 8: pids
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 1: fd: 9: hugetlb
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 2: fd: 10: blkio
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 3: fd: 11: freezer
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 4: fd: 12: memory
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 5: fd: 13: devices
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 6: fd: 14: perf_event
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 7: fd: 15: cpu,cpuacct
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 8: fd: 16: cpuset
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 9: fd: 17: net_cls,net_prio
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: 10: fd: 18: name=systemd
Aug 17 11:58:55 jobpulz1 lxd.daemon[11330]: lxcfs.c: 105: do_reload: lxcfs: reloaded
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: => Re-using existing LXCFS
Aug 17 11:58:55 jobpulz1 lxd.daemon[124466]: => Starting LXD