Discussion:
LXC container isolation with iptables?
bkw - lxc-user
2018-02-27 17:21:19 UTC
I have an LXC host. On that host, there are several unprivileged
containers. All containers and the host are on the same subnet, shared
via bridge interface br0.

If container A (IP address 192.168.1.4) is listening on port 80, can I
put an iptables rule in place on the LXC host that would prevent
container B (IP address 192.168.1.5) from accessing container A on
port 80?

I've tried this set of rules on the LXC host, but they don't work:

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A FORWARD -j DROP -s 192.168.1.5 -d 192.168.1.4

Container B still has access to container A's port 80.
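One thing worth checking here: on Linux, traffic switched across a bridge
normally bypasses iptables entirely, so the FORWARD rule above never sees
it. A sketch, reusing the addresses from the post (root required):

```shell
# Load br_netfilter and enable the bridge-nf sysctl so that packets
# crossing br0 are also passed through the iptables FORWARD chain.
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1

# With that in place, a rule like the one above can actually match:
iptables -A FORWARD -s 192.168.1.5 -d 192.168.1.4 -p tcp --dport 80 -j DROP
```

Note this applies iptables to all bridged traffic on the host, which can
have performance and NAT side effects, so it is a trade-off rather than a
drop-in fix.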

Thanks, in advance, for any assistance you can provide.
Fajar A. Nugraha
2018-02-28 04:04:26 UTC
On Wed, Feb 28, 2018 at 12:21 AM, bkw - lxc-user wrote:
Post by bkw - lxc-user
I have an LXC host. On that host, there are several unprivileged
containers. All containers and the host are on the same subnet, shared via
bridge interface br0.
If container A (IP address 192.168.1.4) is listening on port 80, can I put
an iptables rule in place on the LXC host machine, that would prevent
container B (IP address 192.168.1.5) from having access to container A on
port 80?
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A FORWARD -j DROP -s 192.168.1.5 -d 192.168.1.4
Container B still has access to container A's port 80.
That's how generic bridges work.

Some possible ways to achieve what you want:
- don't use a bridge; use the routed method. IIRC this is possible in lxc,
but not easy in lxd.
- create separate bridges for each container, e.g. with a /30 subnet
- use an 'external' bridge managed by openvswitch, with additional
configuration (on the openvswitch side) to enforce the rule. IIRC there
were examples of doing that on this list (try searching the archives)
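The separate-bridges option could look roughly like this (bridge names and
addresses are illustrative, not from the thread). With one /30 per
container, the host routes between bridges, so every inter-container
packet crosses the FORWARD chain and iptables applies:

```shell
# One /30 bridge per container; the host owns the first usable address.
ip link add br-a type bridge
ip addr add 192.168.10.1/30 dev br-a    # container A gets 192.168.10.2
ip link set br-a up

ip link add br-b type bridge
ip addr add 192.168.10.5/30 dev br-b    # container B gets 192.168.10.6
ip link set br-b up

echo 1 > /proc/sys/net/ipv4/ip_forward

# Traffic between A and B is now routed, not bridged, so this matches:
iptables -A FORWARD -s 192.168.10.6 -d 192.168.10.2 -p tcp --dport 80 -j DROP
```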
--
Fajar
bkw - lxc-user
2018-02-28 20:27:35 UTC
Post by Fajar A. Nugraha
That's how generic bridges work.
Thanks for the reply! Looking into these alternatives.

-Bryan
Jan Kowalsky
2018-03-01 20:18:35 UTC
Post by Fajar A. Nugraha
That's how generic bridges work.
- don't use bridge. Use routed method. IIRC this is possible in lxc,
but not easy in lxd.
- create separate bridges for each container, e.g with /30 subnet
- use 'external' bridge managed by openvswitch, with additional
configuration (on openvswitch side) to enforce the rule. IIRC there
were examples on this list to do that (try searching the archives)
you could also use the --physdev-in / --physdev-out extensions of
iptables to address the containers' network devices directly. Of course,
you then have to pin the veth device names with lxc.network.veth.pair.
One problem: according to the lxc.container.conf manpage, this seems not
to be possible for unprivileged containers. For the same reason, the
routed method could probably have its difficulties too.

Regards
Jan
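Jan's --physdev suggestion might look roughly like this (the veth names
are hypothetical, and per the caveat above this only works for privileged
containers, since the names must be pinned in each container's config):

```shell
# In each container's config, pin the host-side veth name:
#   lxc.network.veth.pair = veth-a    (container A)
#   lxc.network.veth.pair = veth-b    (container B)

# Then match on the bridge ports themselves. The physdev match applies to
# bridged traffic (it needs br_netfilter loaded), so no routed setup is
# required:
iptables -A FORWARD -m physdev --physdev-in veth-b --physdev-out veth-a \
    -p tcp --dport 80 -j DROP
```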
Steven Spencer
2018-03-03 23:26:53 UTC
Honestly, unless I'm spinning up a container on my local desktop, I always
use the routed method. Because our organization always thinks of a
container as a separate machine, it makes the build pretty similar whether
the machine is on the LAN or WAN side of the network. It does, of course,
require that each container run its own firewall, but that's what we would
do with any machine on our network.
Post by Jan Kowalsky
you could also use the --physdev-in / --physdev-out extension of
iptables to address the devices of the containers directly.
_______________________________________________
lxc-users mailing list
http://lists.linuxcontainers.org/listinfo/lxc-users
Marat Khalili
2018-03-04 10:27:14 UTC
Post by Steven Spencer
Honestly, unless I'm spinning up a container on my local desktop, I
always use the routed method. Because our organization always thinks
of a container as a separate machine, it makes the build pretty
similar whether the machine is on the LAN or WAN side of the network.
It does, of course, require that each container run its own firewall,
but that's what we would do with any machine on our network.
Can you please elaborate on your setup? It always seemed like
administrative hassle to me. Outside routers need to know how to find
your container. I can see three ways, each has its drawbacks:

1. Broadcast container MACs outside, but L3-route packets inside the
server instead of L2-bridging. Seems clean but I don't know how to do it
in [bare] Linux.

2. Create completely virtual LAN (not in 802.1q sense) with separate IP
address space inside the server and teach outside routers to route
corresponding addresses via your server. OKish as long as you have
access to the outside router configuration, but some things like
broadcasts won't work. Also, I'm not sure it solves the OP's
inter-container isolation problem.

3. Create separate routing table rule for each container/group of them.
Hard to administer and dangerous IMO.
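Option 2 above, sketched (addresses are illustrative; this assumes access
to the upstream router's configuration):

```shell
# On the LXC host: a private bridge with its own subnet.
ip link add lxcbr1 type bridge
ip addr add 10.99.0.1/24 dev lxcbr1
ip link set lxcbr1 up
echo 1 > /proc/sys/net/ipv4/ip_forward
# containers attach to lxcbr1 with 10.99.0.x addresses

# On the upstream router: route the container subnet via the host
# (here the host's LAN address is assumed to be 192.168.1.2).
ip route add 10.99.0.0/24 via 192.168.1.2
```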

--

With Best Regards,
Marat Khalili
Fajar A. Nugraha
2018-03-04 12:02:52 UTC
Post by Marat Khalili
Can you please elaborate on your setup? It always seemed like
administrative hassle to me. Outside routers need to know how to find
your container. I can see three ways, each has its drawbacks:
1. Broadcast container MACs outside, but L3-route packets inside the server
instead of L2-bridging. Seems clean but I don't know how to do it in [bare]
Linux.
Here's one way to do it, with a manual networking setup in lxd (making
this automated, and converting it to lxc, is left as an exercise for the
reader; I don't use lxc anymore).


Environment:
- host eth0 is 10.0.3.117/24 with router on 10.0.3.1 (this is actually
an lxd container with nesting enabled, which should behave like a
baremetal lxd host for this purpose)
- guest container name is 'c1' (which is a nested container in this case)
- host will use proxyarp to broadcast c1's MAC
- c1 will use routed setup using veth and p2p ip
- c1 will see a network interface called 'c-c1' instead of 'eth0'
- c1 will use 10.0.3.201
- host side of veth pair will be called 'h-c1', and use 10.0.0.1 (can
be any unused IP in your network, can be used multiple times on
different veths)


Setup in host:
### start with "c1" stopped
### enable proxyarp and ip forwarding
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/ip_forward

### create veth pair
ip link add dev h-c1 type veth peer name c-c1

### setup veth pair on host side
ip ad add 10.0.0.1 dev h-c1 peer 10.0.3.201 scope link
ip link set dev h-c1 up

### configure c1 to use the created veth pair: run "lxc config edit c1",
### then add these lines in the "devices" section.
### use "eth0" as the section name so that it replaces the "eth0"
### inherited from the profile
devices:
  eth0:
    name: c-c1
    nictype: physical
    parent: c-c1
    type: nic

### start the container
lxc start c1



Setup in c1:
### setup veth pair
ip ad add 10.0.3.201 peer 10.0.0.1 dev c-c1
ip link set dev c-c1 up
ip r add default via 10.0.0.1

### test connectivity with router
ping -n -c 1 10.0.3.1
--
Fajar
Marat Khalili
2018-03-05 14:08:55 UTC
Thank you for the explanation, I'll give it a try. proxyarp seems to be
the magic ingredient needed.

--

With Best Regards,
Marat Khalili

Andrey Repin
2018-03-04 18:36:40 UTC
Greetings, Steven Spencer!
Post by Steven Spencer
Honestly, unless I'm spinning up a container on my local desktop, I always
use the routed method.
This contradicts…
Post by Steven Spencer
Because our organization always thinks of a container as a separate machine,
…this.
Post by Steven Spencer
it makes the build pretty similar whether the machine is on the LAN or WAN
side of the network. It does, of course, require that each container run its
own firewall, but that's what we would do with any machine on our network.
To me, macvlan bridging is more natural: all network devices are
immediately aware of the container, you can move containers across your
network at will, and you don't have to burden your mind with routing
information.
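A macvlan setup in lxc could look like this (legacy lxc.network.* key
names; newer lxc releases use lxc.net.0.* instead). One point relevant to
the original question: in "bridge" mode, containers on the same parent
interface can still reach each other directly, while "private" mode blocks
exactly that, which is closer to what the OP asked for:

```
# in the container config
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge   # or "private" to isolate containers
lxc.network.link = eth0
lxc.network.flags = up
```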


--
With best regards,
Andrey Repin
Sunday, March 4, 2018 21:34:40

Sorry for my terrible english...