In a recent project we ran into an issue where the metadata service running on the compute node was not reachable from the VMs hosted on it.
In this project we use Ubuntu 14.04.3 and vanilla OpenStack (Kilo). The implementation uses nova-network (with VLANs) in multi-host mode with an external gateway for each VLAN. The latter can be achieved by:
- configuring nova in /etc/nova/nova.conf with the following option:
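A minimal sketch, assuming the dnsmasq configuration is placed in /etc/dnsmasq-nova.conf as in the next bullet:

dnsmasq_config_file=/etc/dnsmasq-nova.conf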
- and by supplying a file /etc/dnsmasq-nova.conf where you give the external gateway configuration for the different networks (demo-net and demo2-net in the following example):
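A sketch of such a file; the gateway addresses 10.1.0.1 and 10.2.0.1 are made-up examples, and we rely on dnsmasq tagging the DHCP requests with the network label so that option:router can be set per network:

dhcp-option=tag:demo-net,option:router,10.1.0.1
dhcp-option=tag:demo2-net,option:router,10.2.0.1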
In our configuration the VMs are not able to reach the metadata service while being provisioned; in other words, 169.254.169.254:80 is not properly DNATed to hypervisor_IP:8775 for the VM.
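For reference, the DNAT rule that nova-network sets up on the compute node looks roughly like the following (the hypervisor IP 10.1.0.11 is a made-up example):

iptables -t nat -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.1.0.11:8775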
The underlying reason is that the VM uses a different gateway (the external gateway) than the hypervisor's VLAN IP, so its packets are only bridged at L2 and never pass through L3 IP routing on the host; as a consequence they do not traverse the PREROUTING chain of the nat table.
To solve this we force these packets through IP routing with an ebtables rule like:
ebtables -t nat -I PREROUTING -p ipv4 --ip-dst 169.254.169.254 --ip-protocol 6 --ip-dport 80 -j redirect --redirect-target ACCEPT
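You can verify that the rule is in place with:

$ sudo ebtables -t nat -L PREROUTING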
Enjoy ebtables! You can find more examples and documentation here.
For a recent project we (Wuming and I) had to provide an OpenStack cloud in which the raw InfiniBand protocol would be available to the VMs running on the cloud.
The installation was done on Ubuntu 14.04.3 with vanilla OpenStack. Exposing InfiniBand requires quite a few steps and a fair amount of reading and googling, so I will document it here in case somebody needs to do the same.
To expose the native hardware interfaces to the VM:
- The BIOS of the server has to support it: you may need to activate Intel VT-d (or AMD I/O Virtualization Technology) since, as with the virtualization extensions, it may be off by default. Explore the BIOS of your servers and activate it if necessary;
- The InfiniBand cards themselves have to support it. Look for SR-IOV in an “lspci -v” output as below:
$ sudo lspci -v |grep -A40 Mellanox
04:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
Subsystem: Hewlett-Packard Company Device 22f5
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at 96000000 (64-bit, non-prefetchable) [size=1M]
Memory at 94000000 (64-bit, prefetchable) [size=32M]
Capabilities:  Power Management version 3
Capabilities:  Vital Product Data
Capabilities: [9c] MSI-X: Enable+ Count=128 Masked-
Capabilities:  Express Endpoint, MSI 00
Capabilities:  Alternative Routing-ID Interpretation (ARI)
Capabilities:  Device Serial Number 24-be-05-ff-ff-b6-e3-40
Capabilities:  Single Root I/O Virtualization (SR-IOV)
Capabilities:  Advanced Error Reporting
Capabilities: [18c] #19
Kernel driver in use: mlx4_core
- The Linux kernel needs to be booted with the option intel_iommu=on. You can do this by editing /etc/default/grub so that it contains GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on", running update-grub and rebooting, as shown below;
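Something along these lines; the dmesg check after the reboot verifies that the IOMMU is actually on:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

$ sudo update-grub
$ sudo reboot
$ dmesg | grep -e DMAR -e IOMMU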
- The InfiniBand cards need to be configured to expose the VFs (virtual functions). Edit the file /etc/modprobe.d/mlx4_core.conf to contain a line like options mlx4_core num_vfs=16 (or as many as your card supports; one VM takes one VF, so this can be the limiting factor for how many VMs can be deployed per compute node), as shown below. You can find more documentation for the Mellanox cards in the Mellanox Linux User Manual Mellanox_OFED_Linux_User_Manual_v3.10 or here, since you may need to enable SR-IOV on the card itself (in our case it was enabled already);
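For example; if mlx4_core is loaded from the initramfs you also need to regenerate it before rebooting, and the final lspci should then list the virtual functions:

# /etc/modprobe.d/mlx4_core.conf
options mlx4_core num_vfs=16

$ sudo update-initramfs -u
$ sudo reboot
$ lspci | grep "Virtual Function"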
- Nova has to be configured to allow PCI passthrough. Follow the documentation here on “How to prepare the environment”;
- Configure nova (find the vendor_id and product_id of the virtual functions for your case by running lspci -vn), for example:
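A sketch of the relevant nova.conf lines; the values 15b3:1004 (the Mellanox ConnectX-3 virtual functions) and the alias name ib are examples, so replace them with what lspci -vn reports on your nodes. The whitelist goes on the compute nodes, while the alias and the PciPassthroughFilter belong on the controller/scheduler:

[DEFAULT]
pci_passthrough_whitelist = {"vendor_id":"15b3","product_id":"1004"}
pci_alias = {"vendor_id":"15b3","product_id":"1004","name":"ib"}
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,PciPassthroughFilter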
- Create a flavor that automatically adds the interface to the VM, for example:
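For instance, assuming the alias ib defined above (the flavor name and sizing are arbitrary; "ib:1" requests one VF per VM):

$ nova flavor-create m1.ib auto 4096 20 2
$ nova flavor-key m1.ib set "pci_passthrough:alias"="ib:1"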
And now… just launch a VM and log in! You should see the InfiniBand card!
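A quick check from inside the VM:

$ lspci | grep Mellanox

Once the Mellanox drivers are installed in the guest, the usual InfiniBand tools (e.g. ibstat) should see the port as well.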