Discussion:
Containers and checkpoint/restart micro-conference at LPC2018
Stéphane Graber
2018-08-13 16:10:15 UTC
Hello,

This year's edition of the Linux Plumbers Conference will once again
have a containers micro-conference but this time around we'll have twice
the usual amount of time and will include the content that would
traditionally go into the checkpoint/restore micro-conference.

LPC2018 will be held in Vancouver, Canada from the 13th to the 15th of
November, co-located with the Linux Kernel Summit.


We're looking for discussion topics around kernel work related to
containers and namespacing, resource control, access control,
checkpoint/restore of kernel structures, filesystem/mount handling for
containers and any related userspace work.


The format of the event will mostly be discussions where someone
introduces a given topic/problem and it then gets discussed for 20-30min
before moving on to something else. There will also be limited room for
short demos of recent work with shorter 15min slots.


Details can be found here:

https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417


Looking forward to seeing you in Vancouver!
--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
Stéphane Graber
2018-09-08 04:59:35 UTC
Post by Stéphane Graber
Hello,
This year's edition of the Linux Plumbers Conference will once again
have a containers micro-conference but this time around we'll have twice
the usual amount of time and will include the content that would
traditionally go into the checkpoint/restore micro-conference.
LPC2018 will be held in Vancouver, Canada from the 13th to the 15th of
November, co-located with the Linux Kernel Summit.
We're looking for discussion topics around kernel work related to
containers and namespacing, resource control, access control,
checkpoint/restore of kernel structures, filesystem/mount handling for
containers and any related userspace work.
The format of the event will mostly be discussions where someone
introduces a given topic/problem and it then gets discussed for 20-30min
before moving on to something else. There will also be limited room for
short demos of recent work with shorter 15min slots.
https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417
Looking forward to seeing you in Vancouver!
Hello,

We've added an extra week to the CFP; the new deadline is Friday the 14th of September.

If you were thinking about sending something but then forgot or just
missed the deadline, now is your chance to send it!

Stéphane
Christian Brauner
2018-09-09 01:31:08 UTC
Post by Stéphane Graber
Post by Stéphane Graber
Hello,
This year's edition of the Linux Plumbers Conference will once again
have a containers micro-conference but this time around we'll have twice
the usual amount of time and will include the content that would
traditionally go into the checkpoint/restore micro-conference.
LPC2018 will be held in Vancouver, Canada from the 13th to the 15th of
November, co-located with the Linux Kernel Summit.
We're looking for discussion topics around kernel work related to
containers and namespacing, resource control, access control,
checkpoint/restore of kernel structures, filesystem/mount handling for
containers and any related userspace work.
The format of the event will mostly be discussions where someone
introduces a given topic/problem and it then gets discussed for 20-30min
before moving on to something else. There will also be limited room for
short demos of recent work with shorter 15min slots.
https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417
Looking forward to seeing you in Vancouver!
Hello,
We've added an extra week to the CFP; the new deadline is Friday the 14th of September.
If you were thinking about sending something but then forgot or just
missed the deadline, now is your chance to send it!
[cc: overlayfs developers]
Hi Stéphane!
Hey Amir,

I'm one of the co-organizers of the microconf.
I am not planning to travel to LPC this year, so this is more of an FYI
than a CFP, but maybe another overlayfs developer can pick up this glove?
Sure, that would be great.
For the past two years I have participated in the effort to fix
overlayfs non-standard behavior:
https://github.com/amir73il/overlayfs/wiki/Overlayfs-non-standard-behavior
Yes, this is an issue that we have been aware of for a long time, and
it is something that has made overlayfs somewhat more difficult to use
than it should be.
Allegedly, this effort got underway to improve the experience of
overlayfs users, who are mostly applications running inside containers.
For backward compatibility reasons, container runtimes will need to opt
in to fixing some of the legacy behavior.
In reality, I have seen very little cross-list interaction between the
linux-unionfs and containers mailing lists. The only interaction I
recall in the past two years ended up in a fix in overlayfs to require
opt-in for fixing yet another backward-compatible bad behavior,
although docker did follow up shortly after with a fix:
https://github.com/moby/moby/issues/34672
So the questions I would like to relay to the micro-conf participants
w.r.t. this effort are:
1. Did you know?
I personally did not know about the new opt-in behavior. More reason to
give a talk! :)
2. Do you care?
Yes, we do care. However - speaking as LXC upstream now - we have
recently focused on getting shiftfs to work rather than overlayfs.

We are more than happy to have an overlayfs talk at the microconf. It
would be great if someone were to talk about:
- What non-standard behavior has already been fixed?
- How has it been fixed?
- What non-standard behavior still needs to be fixed?
- Outstanding problems that either still need a solution or are solved
but one would like feedback on the implementation.
This way we can have a good discussion.

Thanks!
Christian
Amir Goldstein
2018-09-09 06:31:02 UTC
On Sun, Sep 9, 2018 at 4:31 AM Christian Brauner <***@brauner.io> wrote:
...
Post by Christian Brauner
[cc: overlayfs developers]
Hi Stéphane!
Hey Amir,
I'm one of the co-organizers of the microconf.
I am not planning to travel to LPC this year, so this is more of an FYI
than a CFP, but maybe another overlayfs developer can pick up this glove?
Sure, that would be great.
For the past two years I have participated in the effort to fix
overlayfs non-standard behavior:
https://github.com/amir73il/overlayfs/wiki/Overlayfs-non-standard-behavior
Yes, this is an issue that we have been aware of for a long time, and
it is something that has made overlayfs somewhat more difficult to use
than it should be.
Allegedly, this effort got underway to improve the experience of
overlayfs users, who are mostly applications running inside containers.
For backward compatibility reasons, container runtimes will need to opt
in to fixing some of the legacy behavior.
In reality, I have seen very little cross-list interaction between the
linux-unionfs and containers mailing lists. The only interaction I
recall in the past two years ended up in a fix in overlayfs to require
opt-in for fixing yet another backward-compatible bad behavior,
although docker did follow up shortly after with a fix:
https://github.com/moby/moby/issues/34672
So the questions I would like to relay to the micro-conf participants
w.r.t. this effort are:
1. Did you know?
I personally did not know about the new opt-in behavior. More reason to
give a talk! :)
2. Do you care?
Yes, we do care. However - speaking as LXC upstream now - we have
recently focused on getting shiftfs to work rather than overlayfs.
IMO, as I have expressed in the past, it is a pity that shiftfs
development is not coordinated with overlayfs developers.
Yes, shiftfs has a different purpose than overlayfs, but they have
common use cases and common problems as well.
Post by Christian Brauner
We are more than happy to have an overlayfs talk at the microconf. It
would be great if someone were to talk about:
- What non-standard behavior has already been fixed?
- How has it been fixed?
IMO, those questions are covered quite well by the wiki and the
overlayfs.txt documentation in the kernel tree.
Post by Christian Brauner
- What non-standard behavior still needs to be fixed?
There's the mmap MAP_SHARED case covered in the wiki, and there may be
other small stuff, but I am not sure anyone cares about those, so the
question should really be directed back to the audience...
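
To make that case concrete, a minimal sketch (assuming /merged is an
overlayfs mount and f exists only on the lower layer; both names are
made up, and this is my reading of the wiki entry, not a definitive
test case):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int ro = open("/merged/f", O_RDONLY);
    char *map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, ro, 0);
    if (ro < 0 || map == MAP_FAILED)
        return 1;

    /* Opening the same path for write triggers a copy-up of f to
     * the upper layer. */
    int rw = open("/merged/f", O_WRONLY);
    if (rw < 0 || write(rw, "new", 3) < 0)
        return 1;
    fsync(rw);

    /* The pre-existing mapping still refers to the lower inode's
     * pages, so on an unfixed overlayfs this may print the old
     * contents instead of "new". */
    printf("%.3s\n", map);
    return 0;
}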
Post by Christian Brauner
- Outstanding problems that either still need a solution or are solved
but one would like feedback on the implementation.
This way we can have a good discussion.
I think one of the challenges that distros and container runtimes will
need to deal with is managing format versions of overlay "images".
The reason the new features require the user or distro to opt in is
that they create overlayfs images that are not fully compatible with
old kernels and existing container image tools (e.g. tools that
export/migrate images).

The new overlayfs-progs project by Zhangyi is going to help in that respect:
https://github.com/hisilicon/overlayfs-progs
As well as Zhangyi's work on overlayfs feature set support:
https://marc.info/?l=linux-unionfs&m=153302911328159&w=2
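
To make the format problem concrete, here is a minimal sketch of
creating an overlay with two of the opt-in features (the paths are
hypothetical; index= and metacopy= are the documented overlayfs mount
options). A kernel that predates these features will refuse the
unknown options, and an overlay written with them carries on-disk
state (an index dir, metacopy xattrs) that older kernels and image
tools do not understand:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* index=on and metacopy=on are the opt-in features; the rest is
     * the usual overlayfs boilerplate. Needs CAP_SYS_ADMIN. */
    const char *opts = "lowerdir=/lower,upperdir=/upper,"
                       "workdir=/work,index=on,metacopy=on";

    if (mount("overlay", "/merged", "overlay", 0, opts) < 0) {
        perror("mount");
        return 1;
    }
    return 0;
}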

Thanks,
Amir.
Christian Brauner
2018-09-09 09:18:54 UTC
Post by Amir Goldstein
...
Post by Christian Brauner
[cc: overlayfs developers]
Hi Stéphane!
Hey Amir,
I'm one of the co-organizers of the microconf.
I am not planning to travel to LPC this year, so this is more of an FYI
than a CFP, but maybe another overlayfs developer can pick up this glove?
Sure, that would be great.
For the past two years I have participated in the effort to fix
overlayfs non-standard behavior:
https://github.com/amir73il/overlayfs/wiki/Overlayfs-non-standard-behavior
Yes, this is an issue that we have been aware of for a long time, and
it is something that has made overlayfs somewhat more difficult to use
than it should be.
Allegedly, this effort got underway to improve the experience of
overlayfs users, who are mostly applications running inside containers.
For backward compatibility reasons, container runtimes will need to opt
in to fixing some of the legacy behavior.
In reality, I have seen very little cross-list interaction between the
linux-unionfs and containers mailing lists. The only interaction I
recall in the past two years ended up in a fix in overlayfs to require
opt-in for fixing yet another backward-compatible bad behavior,
although docker did follow up shortly after with a fix:
https://github.com/moby/moby/issues/34672
So the questions I would like to relay to the micro-conf participants
w.r.t. this effort are:
1. Did you know?
I personally did not know about the new opt-in behavior. More reason to
give a talk! :)
2. Do you care?
Yes, we do care. However - speaking as LXC upstream now - we have
recently focused on getting shiftfs to work rather than overlayfs.
IMO, as I have expressed in the past, it is a pity that shiftfs
development is not coordinated with overlayfs developers.
Yes, shiftfs has a different purpose than overlayfs, but they have
common use cases and common problems as well.
My team started to get more involved with shiftfs development a few
months back. Overlayfs is definitely an inspiration and we even once
thought about making shiftfs an extension of overlayfs.
Seth Forshee on my team is currently actively working on shiftfs and
getting a POC ready.
When he has a POC based on James' patchset there will be an RFC that
will go to fsdevel and all parties of interest.
There will also be an update on shiftfs development during the
microconf. So even more reason for overlayfs developers to stop by.
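
For those who have not followed shiftfs: the usage model in James'
out-of-tree patchset is a two-step mount, roughly as sketched below.
The option names and paths here are assumptions based on that patchset,
not a merged interface:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Step 1, in the init user namespace: mark the subtree as safe
     * to shift. */
    if (mount("/containers/rootfs", "/containers/rootfs",
              "shiftfs", 0, "mark") < 0) {
        perror("mark");
        return 1;
    }

    /* Step 2, from inside the container's user namespace: mount the
     * marked subtree; uids/gids are shifted through the namespace's
     * mapping instead of being chowned on disk. */
    if (mount("/containers/rootfs", "/mnt", "shiftfs", 0, NULL) < 0) {
        perror("shift");
        return 1;
    }
    return 0;
}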
Post by Amir Goldstein
Post by Christian Brauner
We are more than happy to have an overlayfs talk at the microconf. It
would be great if someone were to talk about:
- What non-standard behavior has already been fixed?
- How has it been fixed?
IMO, those questions are covered quite well by the wiki and the
overlayfs.txt documentation in the kernel tree.
It's still worth bringing this in front of other developers in the form
of a talk.
Post by Amir Goldstein
Post by Christian Brauner
- What non-standard behavior still needs to be fixed?
There's the mmap MAP_SHARED case covered in the wiki, and there may be
other small stuff, but I am not sure anyone cares about those, so the
question should really be directed back to the audience...
The audience that cares enough to send patches for it will likely be at
the microconf so it's a good place to discuss it.
Post by Amir Goldstein
Post by Christian Brauner
- Outstanding problems that either still need a solution or are solved
but one would like feedback on the implementation.
This way we can have a good discussion.
I think one of the challenges that distros and container runtimes will
need to deal with is managing format versions of overlay "images".
The reason the new features require the user or distro to opt in is
that they create overlayfs images that are not fully compatible with
old kernels and existing container image tools (e.g. tools that
export/migrate images).
As I said, we would be very thankful for any talk about such problems.

Thanks!
Christian
Post by Amir Goldstein
https://github.com/hisilicon/overlayfs-progs
https://marc.info/?l=linux-unionfs&m=153302911328159&w=2
Thanks,
Amir.
Vivek Goyal
2018-09-11 13:52:59 UTC
On Sun, Sep 09, 2018 at 11:18:54AM +0200, Christian Brauner wrote:
[..]
Post by Christian Brauner
My team started to get more involved with shiftfs development a few
months back. Overlayfs is definitely an inspiration and we even once
thought about making shiftfs an extension of overlayfs.
Seth Forshee on my team is currently actively working on shiftfs and
getting a POC ready.
When he has a POC based on James' patchset there will be an RFC that
will go to fsdevel and all parties of interest.
There will also be an update on shiftfs development during the
microconf. So even more reason for overlayfs developers to stop by.
So we need both shiftfs and overlayfs in container deployments, right?
Shiftfs to make sure each container can run in its own user namespace
and uid/gid mappings can be set up on the fly, and overlayfs to provide
a union of multiple layers and a copy-on-write filesystem. I am
assuming that shiftfs works on top of overlayfs here?

Doing shifting at the VFS level using the new mount API was another
idea discussed at last year's Plumbers. I saw that David Howells was
pushing all the new mount API patches. Not sure if he ever got time to
pursue shifting at the VFS level.

BTW, we now have metadata-only copy-up patches in overlayfs as well
(4.19-rc). That speeds up the chown operation with overlayfs, needed
for changing the ownership of files in images to make sure they work
fine with user namespaces. In my simple testing in a VM, a fedora
image was taking around 30 seconds to chown. With metadata-only
copy-up that time drops to around 2-3 seconds. So until shiftfs or
shifting at the VFS level gets merged, it can be used as a stopgap
solution.
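
For reference, the operation being timed is essentially a recursive
chown of the image by the container's id offset, something like the
sketch below (the offset and image path are made up). With
metadata-only copy-up, each lchown() copies up just the inode metadata
instead of the whole file:

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define OFFSET 100000  /* first uid/gid of the container's mapping */

static int shift_owner(const char *path, const struct stat *st,
                       int type, struct FTW *ftw)
{
    (void)type; (void)ftw;
    if (lchown(path, st->st_uid + OFFSET, st->st_gid + OFFSET) < 0)
        perror(path);
    return 0; /* keep walking even if one entry fails */
}

int main(void)
{
    /* FTW_PHYS: do not follow symlinks inside the image. */
    return nftw("/var/lib/images/fedora", shift_owner, 64, FTW_PHYS);
}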

Thanks
Vivek
James Bottomley
2018-09-11 15:13:40 UTC
Post by Vivek Goyal
[..]
Post by Christian Brauner
My team started to get more involved with shiftfs development a few
months back. Overlayfs is definitely an inspiration and we even once
thought about making shiftfs an extension of overlayfs.
Seth Forshee on my team is currently actively working on shiftfs and
getting a POC ready.
When he has a POC based on James' patchset there will be an RFC that
will go to fsdevel and all parties of interest.
There will also be an update on shiftfs development during the
microconf. So even more reason for overlayfs developers to stop by.
So we need both shiftfs and overlayfs in container deployments, right?
Well, no; only docker-style containers need some form of overlay graph
driver, but even there it doesn't have to be the overlayfs one. When I
build unprivileged containers, I never use overlays, so for me having
to use it would be problematic, as it would be even in docker for the
non-overlayfs graph drivers.

Perhaps we should consider this when we look at the use cases.
Post by Vivek Goyal
Shiftfs to make sure each container can run in its own user namespace
and uid/gid mappings can be set up on the fly, and overlayfs to provide
a union of multiple layers and a copy-on-write filesystem. I am
assuming that shiftfs works on top of overlayfs here?
Doing shifting at the VFS level using the new mount API was another
idea discussed at last year's Plumbers. I saw that David Howells was
pushing all the new mount API patches. Not sure if he ever got time to
pursue shifting at the VFS level.
I wasn't party to the conversation, but when I discussed it with Ted
(who wants something similar for a feature-changing bind mount), the
conclusion was that we need the entire VFS API to be struct path based
instead of dentry/inode based. That's the way it's going, but we'd need
to get to the end point so we have a struct vfsmount available for
every VFS call.
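
To spell out what struct path buys us: a bare dentry/inode identifies
the filesystem object but not the mount it was reached through, so
per-mount state (like an id shift) has nowhere to live. struct path,
as defined in include/linux/path.h, carries both:

struct vfsmount;
struct dentry;

/* include/linux/path.h (simplified) */
struct path {
    struct vfsmount *mnt;    /* the mount the object was reached via */
    struct dentry *dentry;   /* the object within that mount */
};

Once every VFS call takes one of these, a feature-changing bind mount
could hang its uid/gid mapping off the vfsmount and apply it at each
call site; hooks that only receive a dentry/inode have no way to see it.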
Post by Vivek Goyal
BTW, we now have metadata-only copy-up patches in overlayfs as well
(4.19-rc). That speeds up the chown operation with overlayfs, needed
for changing the ownership of files in images to make sure they work
fine with user namespaces. In my simple testing in a VM, a fedora
image was taking around 30 seconds to chown. With metadata-only
copy-up that time drops to around 2-3 seconds. So until shiftfs or
shifting at the VFS level gets merged, it can be used as a stopgap
solution.
Most of the snapshot-based filesystems (btrfs, xfs) do this without any
need for overlayfs.

James
Vivek Goyal
2018-09-11 15:36:30 UTC
Post by James Bottomley
Post by Vivek Goyal
[..]
Post by Christian Brauner
My team started to get more involved with shiftfs development a few
months back. Overlayfs is definitely an inspiration and we even once
thought about making shiftfs an extension of overlayfs.
Seth Forshee on my team is currently actively working on shiftfs and
getting a POC ready.
When he has a POC based on James' patchset there will be an RFC that
will go to fsdevel and all parties of interest.
There will also be an update on shiftfs development during the
microconf. So even more reason for overlayfs developers to stop by.
So we need both shiftfs and overlayfs in container deployments, right?
Well, no; only docker-style containers need some form of overlay graph
driver, but even there it doesn't have to be the overlayfs one. When I
build unprivileged containers, I never use overlays, so for me having
to use it would be problematic, as it would be even in docker for the
non-overlayfs graph drivers.
Hi James,

Ok. For us, overlayfs is now the default for docker containers, as it
was much faster compared to devicemapper and vfs (due to page cache
sharing). So please keep the overlayfs graph driver use case in mind
as well while designing a solution.

For non-docker containers, I am assuming the whole image is in one
directory, so no union is required. These are probably also read-only
containers, or the image directory is not shared with other
containers, for this to work.
Post by James Bottomley
Perhaps we should consider this when we look at the use cases.
Post by Vivek Goyal
Shiftfs to make sure each container can run in its own user namespace
and uid/gid mappings can be set up on the fly, and overlayfs to provide
a union of multiple layers and a copy-on-write filesystem. I am
assuming that shiftfs works on top of overlayfs here?
Doing shifting at the VFS level using the new mount API was another
idea discussed at last year's Plumbers. I saw that David Howells was
pushing all the new mount API patches. Not sure if he ever got time to
pursue shifting at the VFS level.
I wasn't party to the conversation, but when I discussed it with Ted
(who wants something similar for a feature-changing bind mount), the
conclusion was that we need the entire VFS API to be struct path based
instead of dentry/inode based. That's the way it's going, but we'd need
to get to the end point so we have a struct vfsmount available for
every VFS call.
Ok, thanks. So mappings will be per-mount and available in the
vfsmount, and hence we pass around struct path so that one can get to
the vfsmount (instead of dentry/inode). Makes sense.
Post by James Bottomley
Post by Vivek Goyal
BTW, we now have metadata-only copy-up patches in overlayfs as well
(4.19-rc). That speeds up the chown operation with overlayfs, needed
for changing the ownership of files in images to make sure they work
fine with user namespaces. In my simple testing in a VM, a fedora
image was taking around 30 seconds to chown. With metadata-only
copy-up that time drops to around 2-3 seconds. So until shiftfs or
shifting at the VFS level gets merged, it can be used as a stopgap
solution.
Most of the snapshot-based filesystems (btrfs, xfs) do this without any
need for overlayfs.
Right. But they don't share page cache yet (same with devicemapper).
So until we get page cache sharing in these filesystems, overlayfs
still has the advantage of being able to launch many more containers
from the same image with smaller memory requirements (and it's faster
too, as the image does not have to be read from disk).

Thanks
Vivek

James Bottomley
2018-09-09 15:30:39 UTC
https://discuss.linuxcontainers.org/t/containers-micro-conference-at-linux-plumbers-2018/2417
This website was giving a 503 error when I looked.

However, if you want a discussion on the requirements for shiftfs (and
whether we still need it), I'm up for that.

James