# Using Containerlab with netlab
Containerlab is a Linux-based container orchestration system that creates virtual network topologies using containers as network devices. To use it:

1. Install containerlab with `netlab install containerlab` on Ubuntu, or follow the containerlab installation guide on other Linux distributions.
2. Install the network device container images.
3. Create a lab topology file and use `provider: clab` in the lab topology to select the containerlab virtualization provider.
4. Start the lab with `netlab up`.
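For example, a minimal lab topology using the containerlab provider could look like this sketch (it assumes two FRR containers; adjust the device and node names to your environment):

```
provider: clab
defaults.device: frr

nodes: [ r1, r2 ]
links: [ r1-r2 ]
```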
## Supported Versions
Recent netlab releases were tested with containerlab version 0.44.3; that's also the version the `netlab install containerlab` command installs.

The minimum supported containerlab version is 0.37.1 (released 2023-02-27) – that version introduced changes to the location of generated certificate files.

If needed, upgrade to the latest containerlab version with:

```
sudo containerlab version upgrade
```
## Container Images
The lab topology file created by the `netlab up` or `netlab create` command uses the following container images (use `netlab show images` to display the actual system settings):
| Virtual network device | Container image |
|---|---|
| Arista cEOS | `ceos:4.31.2F` |
| BIRD | `netlab/bird:latest` |
| Cisco IOS XRd | `ios-xr/xrd-control-plane:7.11.1` |
| Cumulus VX | `networkop/cx:4.4.0` |
| Cumulus VX with NVUE | `networkop/cx:5.0.1` |
| Dell OS10 | `vrnetlab/vr-ftosv` |
| dnsmasq | `netlab/dnsmasq:latest` |
| FRR | `frrouting/frr:v8.4.0` |
| Juniper vMX | `vrnetlab/vr-vmx:18.2R1.9` |
| Juniper vSRX | `vrnetlab/vr-vsrx:23.1R1.8` |
| Linux❗ | `python:3.9-alpine` |
| Mikrotik RouterOS 7 | `vrnetlab/vr-routeros:7.6` |
| Nokia SR Linux | `ghcr.io/nokia/srlinux:latest` |
| Nokia SR OS | `vrnetlab/vr-sros:latest` |
| VyOS | `ghcr.io/sysoleg/vyos-container` |
* Cumulus VX, FRR, Linux, and Nokia SR Linux images are automatically downloaded from their public container registries.
* You must build the BIRD and dnsmasq images with the `netlab clab build` command.
* The Arista cEOS image has to be downloaded and installed manually.
* The Nokia SR OS container image requires a license; see also the vrnetlab instructions.
* Follow Cisco's documentation to install the IOS XRd container, making sure the container image name matches the one netlab uses (alternatively, change the default image name for the IOS XRd container, as shown below).
* You can also use vrnetlab to build VM-in-container images for Cisco CSR 1000v, Nexus 9300v, and IOS XR, OpenWRT, Mikrotik RouterOS, Arista vEOS, Juniper vMX and vQFX, and a few other devices.
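To change the default container image a device uses, set the image name in the lab topology or user defaults. A sketch (it assumes the IOS XRd device name is `iosxr` and the image tag is illustrative; use `netlab show images` to verify the device names on your system):

```
defaults.devices.iosxr.clab.image: ios-xr/xrd-control-plane:24.1.1
```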
Warning
You might have to change the default loopback address pool when using vrnetlab images. See Using vrnetlab Containers for details.
## Containerlab Networking

### LAN Bridges
The `netlab up` command automatically creates additional standard Linux bridges for multi-access network topologies.

You might want to use Open vSwitch bridges instead of standard Linux bridges (OVS interferes less with layer-2 protocols). After installing OVS, set `defaults.providers.clab.bridge_type` to `ovs-bridge`, for example:
```
defaults.device: cumulus
provider: clab
defaults.providers.clab.bridge_type: ovs-bridge
module: [ ospf ]
nodes: [ s1, s2, s3 ]
links: [ s1-s2, s2-s3 ]
```
### Connecting to the Outside World
Lab links are modeled as point-to-point veth links or as links to internal Linux bridges. If you want a lab link connected to the outside world, set `clab.uplink` to the name of the Ethernet interface on your server[1]. This feature requires containerlab release 0.43.0 or later.

Example: use the following topology to connect your lab to the outside world through `r1` on a Linux server that uses `enp86s0` as the name of its Ethernet interface:
```
defaults.device: cumulus
provider: clab
nodes: [ r1, r2 ]
links:
- r1-r2
- r1:
  clab:
    uplink: enp86s0
```
Note
In multi-provider topologies, set the `uplink` parameter only for the primary provider (specified in the topology-level `provider` attribute); netlab copies the `uplink` parameter to all secondary providers during the lab topology transformation process.
## Containerlab Management Network
containerlab creates a dedicated Docker network to connect the container management interfaces to the host TCP/IP stack. You can change the parameters of the management network in the `addressing.mgmt` pool:

* `ipv4`: The IPv4 prefix of the management network (default: `192.168.121.0/24`)
* `ipv6`: Optional IPv6 management network prefix (not set by default)
* `start`: The offset of the first management IP address within the management network (default: `100`). For example, with `start` set to 50, the device with `node.id` set to 1 gets the 51st IP address in the management IP prefix.
* `_network`: The Docker network name (default: `netlab_mgmt`)
* `_bridge`: The name of the underlying Linux bridge (default: unspecified, created by Docker)
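For example, this sketch (the prefix and network name are illustrative) moves the management network to a different IPv4 prefix and renames the Docker network:

```
addressing:
  mgmt:
    ipv4: 192.168.200.0/24
    start: 50
    _network: clab_mgmt
```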
### Container Management IP Addresses
netlab assigns an IPv4 (and optionally IPv6) address to the management interface of each container regardless of whether the container supports SSH access. That IPv4/IPv6 address is used by containerlab to configure the first container interface.
You can change a device management interface's IPv4/IPv6 address with the `mgmt.ipv4`/`mgmt.ipv6` node parameter, but be aware that netlab does not check whether your changes result in overlapping IP addresses.
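A sketch of such an override (the address is illustrative and should belong to the management network prefix):

```
nodes:
  r1:
    mgmt.ipv4: 192.168.121.201
```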
It's much better to use the `addressing.mgmt` pool `ipv4`/`ipv6`/`start` parameters to adjust the address range used for management IP addresses and rely on netlab to assign management IP addresses to containers based on the device node ID.
### Port Forwarding
netlab supports container port forwarding – mapping of TCP ports on the container management IP address to ports on the host. You can use port forwarding to access the lab devices via the host’s external IP address without exposing the management network to the outside world.
Warning
Some containers do not run an SSH server and cannot be accessed via SSH, even if you set up port forwarding for the SSH port.
Port forwarding is turned off by default and can be enabled by configuring the `defaults.providers.clab.forwarded` dictionary. Dictionary keys are TCP port names (`ssh`, `http`, `https`, `netconf`), and dictionary values are the starting values of host ports. netlab assigns a unique host port to every forwarded container port based on the start value and the container node ID.
For example, when given the following topology…
```
defaults.providers.clab.forwarded:
  ssh: 2000

defaults.device: eos
nodes:
  r1:
  r2:
    id: 42
```
… netlab maps:

* the SSH port on the management interface of R1 to host port 2001 (R1 gets the default node ID 1)
* the SSH port on the management interface of R2 to host port 2042 (R2 has the static ID 42)
## Using vrnetlab Containers
vrnetlab is an open-source project that packages network device virtual machines into containers. The packaged container's architecture requires an internal network, and it seems that vrnetlab (or the fork used by containerlab) uses the IPv4 prefix 10.0.0.0/24 on that network, which clashes with the default netlab loopback address pool.
If you’re experiencing connectivity problems or initial configuration failures with vrnetlab-based containers, add the following parameters to the lab configuration file to change the netlab loopback addressing pool:
```
addressing:
  loopback:
    ipv4: 10.255.0.0/24
  router_id:
    ipv4: 10.255.0.0/24
```
## Advanced Topics

### Container Runtime Support
Containerlab supports multiple container runtimes besides the default docker. The runtime to use can be configured globally or per node, for example:
```
provider: clab
defaults.providers.clab.runtime: podman
nodes:
  s1:
    clab.runtime: ignite
```
### Using File Binds
You can use `clab.binds` to map container paths to host file system paths, for example:
```
nodes:
- name: gnmic
  device: linux
  image: ghcr.io/openconfig/gnmic:latest
  clab:
    binds:
      gnmic.yaml: '/app/gnmic.yaml:ro'
      '/var/run/docker.sock': '/var/run/docker.sock'
```
Tip
You don’t have to worry about dots in filenames: netlab knows that the keys of the `clab.binds` and `clab.config_templates` dictionaries are filenames and does not expand them into hierarchical dictionaries.
### Generating and Binding Custom Configuration Files
In addition to binding pre-existing files, netlab can generate custom config files on the fly from Jinja2 templates. For example, this mechanism is used internally to create the list of daemons for the `frr` container image:
```
frr:
  clab:
    image: frrouting/frr:v8.3.1
    mtu: 1500
    node:
      kind: linux
    config_templates:
      daemons: /etc/frr/daemons
```
netlab tries to locate the templates in the current directory, in a subdirectory with the name of the device, and within the system directory `templates/provider/clab/<device>`. The `.j2` suffix is always appended to the template name.

For example, the `daemons` template used in the above example could be `./daemons.j2`, `./frr/daemons.j2`, or `<netsim_moddir>/templates/provider/clab/frr/daemons.j2`; the result gets mapped to `/etc/frr/daemons` within the container file system.
You can use the `clab.config_templates` node attribute to add your own container configuration files[2], for example:
```
provider: clab
nodes:
  t1:
    device: linux
    clab:
      config_templates:
        some_daemon: /etc/some_daemon.cf
```
Faced with the above lab topology, netlab creates `clab_files/t1/some_daemon` from `some_daemon.j2` (the template could be either in the current directory or in the `linux` subdirectory) and maps it to `/etc/some_daemon.cf` within the container file system.
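A minimal `some_daemon.j2` sketch; the availability of node attributes such as `name` in the template context is an assumption, so inspect the generated `clab_files/t1/some_daemon` file to verify the result:

```
{# some_daemon.j2 -- hypothetical daemon configuration template #}
{# 'name' is assumed to be the node name exposed to the template #}
daemon-name {{ name }}
log-file /var/log/some_daemon.log
```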
#### Jinja2 Filters Available in Custom Configuration Files
The custom configuration files are generated within netlab and can, therefore, use standard Jinja2 filters. If you have Ansible installed as a Python package[3], netlab tries to import the `ipaddr` family of filters, making filters like `ipv4`, `ipv6`, or `ipaddr` available in custom configuration file templates.
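For example, a template line like this sketch could extract the address part of a CIDR prefix (it assumes the node's `loopback.ipv4` attribute is available in the template context):

```
router-id {{ loopback.ipv4 | ipaddr('address') }}
```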
Warning
Ansible developers love to restructure stuff and move it into different directories. This functionality works with two implementations of the `ipaddr` filters (tested on Ansible 2.10 and Ansible 7.4/Ansible Core 2.14) but might break in the future – we’re effectively playing whack-a-mole with Ansible developers.
### Using Other Containerlab Node Parameters
You can also change these containerlab parameters:

* `clab.kind` – the containerlab device kind. It is set in the system defaults for all supported devices; use it only to specify the device type of unknown devices.
* `clab.type` – the node type (used by Nokia SR OS and Nokia SR Linux)
* `clab.env` – container environment variables (used, for example, to set interface names for Arista cEOS)
* `clab.ports` – container-to-host port mappings
* `clab.cmd` – a command to execute in the container
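For example, a hypothetical Linux node could publish a container port and start a simple service (the port mapping and the command are illustrative):

```
provider: clab
nodes:
  web:
    device: linux
    clab:
      ports: [ '8080:80' ]
      cmd: python3 -m http.server 80
```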
String values (for example, the command to execute specified in `clab.cmd`) are put into single quotes when written into the `clab.yml` containerlab configuration file. Ensure you're not using single quotes in your command line.

The complete list of supported containerlab attributes is in the system defaults and can be printed with the `netlab show defaults providers.clab.attributes` command.
To add other containerlab attributes to the `clab.yml` configuration file, modify the `defaults.providers.clab.node_config_attributes` settings, for example:
```
provider: clab
defaults.providers.clab.node_config_attributes: [ ports, env, user ]
```