Discussion: [lxc-users] Connecting container to tagged VLAN
Joshua Schaeffer
2016-01-27 18:43:57 UTC
I'm trying to set up a container on a new VLAN that only allows tagged
traffic and I'm having mixed success. Maybe somebody can point me in the
right direction. I can ping the gateway from the host but not from the
container, and I can't see what I'm missing. I'm using LXC 1.1.5 on Debian
Jessie. The container is unprivileged. The host itself is a VM running off
of VMware. The VM has 3 NICs. eth0 is for my management network and the
other two NICs (eth1 and eth2) are set up to connect to this VLAN (vlan id
500).

/etc/network/interfaces
# The second network interface
auto eth1
iface eth1 inet manual

# The third network interface
auto eth2
iface eth2 inet static
address 10.240.78.4/24
gateway 10.240.78.1

iface eth1.500 inet manual
vlan-raw-device eth1

auto br0-500
iface br0-500 inet manual
bridge_ports eth1.500
bridge_stp off
bridge_fd 0
bridge_maxwait 0

I've setup br0-500 to use with my container:

# Network configuration
lxc.network.type = veth
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:3d:51:af

When I start the container everything seems to be in order:

eth0 Link encap:Ethernet HWaddr 00:16:3e:3d:51:af
inet addr:10.240.78.3 Bcast:10.240.78.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe3d:51af/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:648 (648.0 B) TX bytes:774 (774.0 B)

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.240.78.1     0.0.0.0         UG    0      0        0 eth0
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

But when I try to ping the gateway I get no response:

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
From 10.240.78.3 icmp_seq=1 Destination Host Unreachable
From 10.240.78.3 icmp_seq=2 Destination Host Unreachable
From 10.240.78.3 icmp_seq=3 Destination Host Unreachable
From 10.240.78.3 icmp_seq=4 Destination Host Unreachable
From 10.240.78.3 icmp_seq=5 Destination Host Unreachable
From 10.240.78.3 icmp_seq=6 Destination Host Unreachable
^C
--- 10.240.78.1 ping statistics ---
7 packets transmitted, 0 received, +6 errors, 100% packet loss, time 6030ms

Address                  HWtype  HWaddress           Flags Mask            Iface
10.240.78.1                      (incomplete)                              eth0

Running tcpdump on eth1 on the host, I can see the ARP requests coming
through the host, but there is no reply from the gateway.

***@prvlxc01:~$ su root -c "tcpdump -i eth1 -Uw - | tcpdump -en -r -
vlan 500"
Password:
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size
262144 bytes
reading from file -, link-type EN10MB (Ethernet)
11:35:34.589795 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:35.587647 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:36.587413 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:37.604816 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:38.603408 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:39.603387 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:40.620677 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
11:35:41.619399 00:16:3e:3d:51:af > ff:ff:ff:ff:ff:ff, ethertype 802.1Q
(0x8100), length 46: vlan 500, p 0, ethertype ARP, Request who-has
10.240.78.1 tell 10.240.78.3, length 28
^C
Session terminated, terminating shell...tcpdump: pcap_loop: error reading
dump file: Interrupted system call
16 packets captured
17 packets received by filter
0 packets dropped by kernel

I feel that this is a setup problem with the router, but I'm not getting
much help from my networking team, so I'm kind of asking all around to see
if anybody has any good ideas. The only other possible source of the problem
I can think of is VMware. Maybe somebody more familiar with the hypervisor
has seen this issue before? I have every port group on the VM host in
promiscuous mode.

Thanks,
Joshua
Fajar A. Nugraha
2016-01-27 21:39:20 UTC
Post by Joshua Schaeffer
I'm trying to set up a container on a new VLAN that only allows tagged
traffic and I'm having mixed success.
The other two NICs (eth1 and eth2) are set up to connect to this VLAN (vlan
id 500).
# The third network interface
auto eth2
iface eth2 inet static
address 10.240.78.4/24
gateway 10.240.78.1
iface eth1.500 inet manual
vlan-raw-device eth1
Is eth1 connected to your switch as a trunk? If not (e.g. you have the same
settings for eth1 and eth2 on the switch side), then you can't tag it
inside your host.

To put it another way:
- start with a known-good configuration, THEN make incremental changes
- in your case, start by testing whether it works on the HOST side when you
assign an IP address to eth1.500, WITHOUT the br0-500 bridge, and WITHOUT any
IP address assigned to eth2.
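
Something along these lines on the host should be enough for that test (just
a sketch; adjust the address if 10.240.78.3 is already in use):

auto eth1.500
iface eth1.500 inet static
vlan-raw-device eth1
address 10.240.78.3/24

If pinging 10.240.78.1 works from the host with only that up, the trunk and
the tagging are fine, and you can add the bridge and the container back one
step at a time.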
--
Fajar
Joshua Schaeffer
2016-01-27 22:19:37 UTC
Post by Fajar A. Nugraha
Is eth1 connected to your switch as a trunk? If not (e.g. you have the same
settings for eth1 and eth2 on the switch side),
Both ports are connected as trunks. As far as the switch side goes, each
port is configured the same: trunked for VLANs 10, 500, and 501. The native
VLAN is 10.

eth2 already works. I set it up for testing outside of all containers (i.e.
on the host only). From the host:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.1    0.0.0.0         UG    0      0        0 eth0
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2
192.168.54.0    0.0.0.0         255.255.255.128 U     0      0        0 eth0

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=1.76 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=2.22 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=1.90 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.768/1.966/2.229/0.196 ms
Post by Fajar A. Nugraha
then you can't tag it inside your host.
I did have that idea and tried it without success:

# The second network interface
auto eth1
iface eth1 inet manual

#commenting out dot1q
#iface eth1.500 inet manual
# vlan-raw-device eth1

[...]

auto br0-500
iface br0-500 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
Post by Fajar A. Nugraha
- start with a known-good configuration, THEN make incremental changes
- in your case, start by testing whether it works on the HOST side when
you assign an IP address to eth1.500, WITHOUT br0-500 bridge
Okay thanks, I will try different configurations out.
Post by Fajar A. Nugraha
, and WITHOUT any ip address assigned to eth2.
I'm not sure what you mean by not assigning an IP address to eth2. Eth2 is
already working from the host, and I don't plan on using it inside any
container (I may have failed to mention that before). Also how would the
NIC work without an IP address? I feel I'm missing something obvious here.

Thanks,
Joshua
Guido Jäkel
2016-01-27 23:38:56 UTC
Dear Joshua,

You wrote that there's a trunk on eth1 and eth2, but for eth2 I can't see any VLAN detrunking (501?) as there is with eth1 & eth1.500. On the other hand, you wrote that eth2 is working. Are you sure that you really receive this trunk of 3 VLANs on both of your eths?

I'm using a (working) comparable setup: on the host, eth0 is used for host management on a detrunked port. On eth1, there's a trunk with the VLANs needed for the different networks of a staged environment. On eth1, there is a VLAN decoder for each of the needed VLANs, and each one is attached to a separate software bridge for that VLAN. A container's outside veth is attached to the appropriate bridge; this is done in a start script by a calculated configuration statement based on the container's name. But the LXC host runs on plain hardware, not in a VM.
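
In /etc/network/interfaces terms it amounts to something like this per VLAN (only a sketch; the names and the VLAN number are examples):

auto eth1.500
iface eth1.500 inet manual
vlan-raw-device eth1

auto br500
iface br500 inet manual
bridge_ports eth1.500
bridge_stp off
bridge_fd 0

The start script then only has to point the container's lxc.network.link at the bridge that belongs to its VLAN.
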
Post by Joshua Schaeffer
Post by Fajar A. Nugraha
Is eth1 connected to your switch as a trunk? If not (e.g. you have the same
settings for eth1 and eth2 on the switch side),
Both ports are connected as trunks. As far as the switch side goes, each
port is configured the same: trunked for VLANs 10, 500, and 501. The native
VLAN is 10.
eth2 already works. I set it up for testing outside of all containers (i.e.
Joshua Schaeffer
2016-01-28 00:46:25 UTC
Post by Guido Jäkel
Dear Joshua,
You wrote that there's a trunk on eth1 and eth2, but for eth2 I can't
see any VLAN detrunking (501?) as there is with eth1 & eth1.500. On the
other hand, you wrote that eth2 is working. Are you sure that you really
receive this trunk of 3 VLANs on both of your eths?
I started to think about this as well and I've found the reason. VMware
allows you to tag NICs from the hypervisor level. Eth1 and eth2 were both
set up under VLAN 500, so that is why no tagging on the LXC host was
required, and hence why eth2 worked. So the lesson there is: don't mix dot1q;
either set it on the hypervisor and leave it completely out of the LXC host
and container, or vice versa.
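
To spell that out, either of these works on its own, but not both at once
(sketches only, using the same names as above):

# dot1q handled by the hypervisor: the port group carries VLAN 500,
# so the guest just bridges plain eth1
auto br0-500
iface br0-500 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0

# dot1q handled in the guest: the port group passes the trunk through,
# and the guest detrunks with eth1.500
auto eth1.500
iface eth1.500 inet manual
vlan-raw-device eth1

auto br0-500
iface br0-500 inet manual
bridge_ports eth1.500
bridge_stp off
bridge_fd 0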

I've completely removed VLAN tagging from my LXC host and am making progress,
but I'm still running into odd situations:

***@prvlxc01:~$ sudo ip -d link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:13:94 brd ff:ff:ff:ff:ff:ff promiscuity 0
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master
br0-500 state UP mode DEFAULT group default qlen 1000
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 1
bridge_slave
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:26:4f brd ff:ff:ff:ff:ff:ff promiscuity 0
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
group default qlen 1000
link/ether 00:50:56:be:01:d8 brd ff:ff:ff:ff:ff:ff promiscuity 0
6: br0-500: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP mode DEFAULT group default
link/ether 00:50:56:be:46:c5 brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
7: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN mode DEFAULT group default
link/ether de:ef:8c:53:01:0b brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge
9: vethKAG02C: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br0-500 state UP mode DEFAULT group default qlen 1000
link/ether fe:bf:b5:cf:f0:83 brd ff:ff:ff:ff:ff:ff promiscuity 1
veth
bridge_slave

*Scenario 1*: When assigning an IP directly to eth1 on the host, no
bridging involved, no containers involved (Success):

/etc/network/interfaces
auto eth1
iface eth1 inet static
address 10.240.78.3/24

route -n
10.240.78.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=8.25 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=2.59 ms
^C
--- 10.240.78.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 2.597/5.425/8.254/2.829 ms

*Scenario 2*: When assigning an IP to a bridge and making eth1 a slave to
the bridge, no containers involved (Success):

/etc/network/interfaces
auto eth1
iface eth1 inet manual

auto br0-500
iface br0-500 inet static
address 10.240.78.3/24
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0


route -n
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 br0-500

PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=3.26 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.51 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=2.30 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.514/2.360/3.262/0.715 ms

*Scenario 3*: Same scenario as above, except the bridge is not assigned an
IP, and a container is created and connected to the same bridge (Failure):

/etc/network/interfaces
auto eth1
iface eth1 inet manual

auto br0-500
iface br0-500 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0

~/.local/share/lxc/c4/config
# Network configuration
lxc.network.type = veth
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:f7:0a:83

route -n (on host)
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 br0-500

route -n (inside container)
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

ping (on host)
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=1.12 ms
64 bytes from 10.240.78.1: icmp_seq=2 ttl=255 time=1.17 ms
64 bytes from 10.240.78.1: icmp_seq=3 ttl=255 time=6.54 ms
^C
--- 10.240.78.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 1.125/2.950/6.548/2.544 ms

ping (inside container)
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
From 10.240.78.3 icmp_seq=1 Destination Host Unreachable
From 10.240.78.3 icmp_seq=2 Destination Host Unreachable
From 10.240.78.3 icmp_seq=3 Destination Host Unreachable
^C
--- 10.240.78.1 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3013ms

Here is the odd part: if I sniff the traffic on the bridge from the host, I
can see the container ARPing for the gateway and getting a response.
However, nothing is being added to the ARP table in the container.


***@prvlxc01:~$ su root -c "tcpdump -i br0-500 -Uw - | tcpdump -en -r -
arp" #this is the host
Password:
reading from file -, link-type EN10MB (Ethernet)
tcpdump: listening on br0-500, link-type EN10MB (Ethernet), capture size
262144 bytes
17:32:10.223168 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 42: Request who-has 10.240.78.1 tell 10.240.78.3, length 28
17:32:10.223337 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 60: Request who-has 10.240.78.1 tell 10.240.78.3, length 46
17:32:10.225821 00:13:c4:f2:64:4d > 00:16:3e:f7:0a:83, ethertype ARP
(0x0806), length 60: Reply 10.240.78.1 is-at 00:13:c4:f2:64:4d, length 46
17:32:11.220216 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 42: Request who-has 10.240.78.1 tell 10.240.78.3, length 28
17:32:11.220418 00:16:3e:f7:0a:83 > ff:ff:ff:ff:ff:ff, ethertype ARP
(0x0806), length 60: Request who-has 10.240.78.1 tell 10.240.78.3, length 46
17:32:11.230455 00:13:c4:f2:64:4d > 00:16:3e:f7:0a:83, ethertype ARP
(0x0806), length 60: Reply 10.240.78.1 is-at 00:13:c4:f2:64:4d, length 46

arp -n (from container)
Address                  HWtype  HWaddress           Flags Mask            Iface
10.240.78.1                      (incomplete)                              eth0

If I manually add the gateway's MAC address to the ARP table, I can ping it
and have full internet access! Does anybody know why the container isn't
adding the MAC address when a reply is coming back?
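
(For the record, the manual entry is just something like this inside the
container:

ip neigh replace 10.240.78.1 lladdr 00:13:c4:f2:64:4d dev eth0

Running "tcpdump -eni eth0 arp" inside the container should show whether the
replies actually make it across the veth or only as far as the bridge.)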
Post by Guido Jäkel
I'm using a (working) comparable setup: on the host, eth0 is used for host
management on a detrunked port. On eth1, there's a trunk with the VLANs
needed for the different networks of a staged environment. On eth1, there is
a VLAN decoder for each of the needed VLANs, and each one is attached to a
separate software bridge for that VLAN. A container's outside veth is
attached to the appropriate bridge; this is done in a start script by a
calculated configuration statement based on the container's name.
But the LXC host runs on plain hardware, not in a VM.
That's basically what I have in my home lab (and everything is working
successfully there) and what I'm trying to reproduce here. Unfortunately, I
don't have the pull here to get a physical LXC host, so I have to work with
what I've got.

Thanks,
Joshua
Fajar A. Nugraha
2016-01-28 01:09:51 UTC
Post by Joshua Schaeffer
Post by Fajar A. Nugraha
Is eth1 connected to your switch as a trunk? If not (e.g. you have the same
settings for eth1 and eth2 on the switch side),
Both ports are connected as trunks. As far as the switch side goes, each
port is configured the same: trunked for VLANs 10, 500, and 501. The native
VLAN is 10.
eth2 already works. I set it up for testing outside of all containers
That doesn't match what you said earlier.

"two NIC's (eth1 and eth2) are setup to connect to this VLAN (vlan id 500)"

"
Native VLAN is 10.
"

"
iface eth2 inet static
address 10.240.78.4/24
gateway 10.240.78.1
"

So 10.240.78.0/24 with gateway 10.240.78.1 is VLAN 10? It must be, since
you use eth2 directly.

Yet on lxc config file, you use
"
lxc.network.link = br0-500
lxc.network.ipv4 = 10.240.78.3/24
lxc.network.ipv4.gateway = 10.240.78.1
"

So VLAN 10 and VLAN 500 are using the same network?
Post by Joshua Schaeffer
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.54.1    0.0.0.0         UG    0      0        0 eth0
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
10.240.78.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2
192.168.54.0    0.0.0.0         255.255.255.128 U     0      0        0 eth0
PING 10.240.78.1 (10.240.78.1) 56(84) bytes of data.
64 bytes from 10.240.78.1: icmp_seq=1 ttl=255 time=1.76 ms
That should be VLAN 10 (the native VLAN for eth2).

You haven't tested it on VLAN 500.

then you can't tag it inside your host.
Post by Joshua Schaeffer
# The second network interface
auto eth1
iface eth1 inet manual
#commenting out dot1q
#iface eth1.500 inet manual
# vlan-raw-device eth1
[...]
auto br0-500
iface br0-500 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
If the settings are the same, then br0-500 in this configuration SHOULD be
able to access VLAN 10, its native VLAN. If it DOESN'T work, check your
switch.
Post by Joshua Schaeffer
Post by Fajar A. Nugraha
- start with a known-good configuration, THEN make incremental changes
- in your case, start by testing whether it works on the HOST side when
you assign an IP address to eth1.500, WITHOUT br0-500 bridge
Okay thanks, I will try different configurations out.
Post by Fajar A. Nugraha
, and WITHOUT any ip address assigned to eth2.
I'm not sure what you mean by not assigning an IP address to eth2. Eth2 is
already working from the host, and I don't plan on using it inside any
container (I may have failed to mention that before). Also how would the
NIC work without an IP address? I feel I'm missing something obvious here.
What I meant: check that ETH1 works on the host. If eth2 is on the same
network, it might interfere with the settings. So disable eth2 first, then
test eth1 on the host, without bridging.
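
For example (just a sketch, run on the host):

ip addr flush dev eth2
ip link set eth2 down
ip addr add 10.240.78.3/24 dev eth1
ip link set eth1 up
ping 10.240.78.1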
--
Fajar
Joshua Schaeffer
2016-01-28 17:04:36 UTC
Post by Fajar A. Nugraha
Post by Joshua Schaeffer
eth2 already works. I set it up for testing outside of all containers
That doesn't match what you said earlier.
It actually does. Remember that this LXC host is a virtual machine running
off of VMware, which makes this whole situation more complex. I'll try to
clarify.

VLAN 10, the native VLAN, is 192.168.54.0/25; it's my management VLAN.
VLAN 500 is 10.240.78.0/24.

eth1 and eth2 are set up to connect to VLAN 500 because they were set up that
way through VMware. Normally you would be correct: on a physical server,
eth2 would only be able to contact the native VLAN, because no tagging
information is provided. However, VMware allows you to tag a NIC (it's
actually called a port group, but it is essentially VMware's way of saying
a NIC) from outside the VM guest. If you do this (as I have), then you don't
(and shouldn't) need to tag anything on the VM guest itself. So just looking
at the guest, it can appear incorrect/confusing.

My original problem was that I was tagging the port group (a.k.a. VMware's
NIC) and I was tagging eth1 inside the VM guest (a.k.a. the LXC host).
Clearly this causes problems. Because I was tagging eth1 but not eth2, that
is where the problem resided. I was trying to mimic a setup I have in my
home lab where I tag an Ethernet device, add it to a bridge, then use that
bridge in a container, but my home lab uses a physical LXC host. Hopefully
I've explained it in a way that clears this up.

Either way, I have that problem resolved. Now I'm just wondering why the
container is not adding the gateway's MAC address when it ARPs for it (as
I explained in my last email).
Post by Fajar A. Nugraha
What I meant, check that ETH1 works on the host. If eth2 is on the same
network, it might interfere with settings. So disable eth2 first, then test
eth1 on the host. Without bridging.
Okay that makes sense.

Thanks,
Joshua
