After reading this article you will know why you might want to use LXC containers on Turris 1.x and Turris Omnia routers, how to install them, and how to use them.
LXC is a light-weight virtualization technology where each virtual machine (VM) shares the kernel of the host operating system.
LXC takes care of isolation and, depending on your needs, also limits system resources for each VM. You can, for example, choose the file system, limit how much RAM each container can use, cap what percentage of the CPU each container can take, and so on.
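For illustration, this is how such limits can look in a container's configuration file (a sketch assuming LXC 1.x/2.x cgroup syntax; the path and values are examples only):

# Example: /srv/lxc/name_of_lxc_container/config
# Cap the container at 256 MB of RAM
lxc.cgroup.memory.limit_in_bytes = 256M
# Give the container half the CPU weight of the default (1024)
lxc.cgroup.cpu.shares = 512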
Under no circumstance should you use the internal storage of the router (eMMC flash) for your LXC containers.
Common GNU/Linux distributions are not designed with devices like routers in mind and write to the disk frequently.
Excessive writing wears out the internal flash storage (eMMC) and can result in irreparable damage to your device, which may not be covered by the warranty.
For instructions on how to connect and mount an external storage, for example, a USB flash or hard drive, follow the instructions in the article for home NAS.
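As a rough sketch, mounting a USB drive and preparing the container directory might look like this (assuming the drive shows up as /dev/sda1 and that containers live under /srv/lxc, which is the Turris OS default):

# The device name /dev/sda1 is an example; check yours with block info or dmesg
mkdir -p /srv
mount /dev/sda1 /srv
# Turris OS stores LXC containers under /srv/lxc by default
mkdir -p /srv/lxc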
The easiest and fastest way to install containers is to use the download template. With this approach, you download an archive of the chosen distribution, and it is extracted to the appropriate place without having to install any special tools on the host system.
The LXC utilities shipped with Turris OS are configured to download distributions from our server, where we have prepared a few images. For Debian and Ubuntu we use images from linuxcontainers.org.
Next, you can manage your LXC containers with one of the two following methods:
Log in to LuCI, which you will find at your router's address by default. Then go to Services → LXC Containers, where you can manage the containers.
As you can see in the screenshot below, when creating a new container you choose a name and a distribution for it. When you click the Create button, the creation process will start; this can take a while. In LuCI you cannot see the progress, and if the creation fails, you might not find out why.
All new containers are by default configured with one virtual network card, which is connected to the LAN bridge of your router. Network-wise, the container behaves like any other computer on the local network. You can assign it a static IP address via DHCP, set up port forwarding, or even create firewall rules for the container.
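For example, a static DHCP lease and a port forward for the container could look like this in /etc/config/dhcp and /etc/config/firewall (the container name, MAC address, IP address, and ports below are placeholders):

# /etc/config/dhcp — give the container a fixed address
config host
	option name 'my_first_container'
	option mac '00:11:22:33:44:55'
	option ip '192.168.1.50'

# /etc/config/firewall — forward WAN port 2222 to the container's SSH
config redirect
	option name 'ssh-to-container'
	option src 'wan'
	option src_dport '2222'
	option dest 'lan'
	option dest_ip '192.168.1.50'
	option dest_port '22'
	option proto 'tcp'
	option target 'DNAT'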
If you decide to use the CLI (command-line interface), you need to log in via SSH and then enter this command:
lxc-create -t download -n name_of_lxc_container
Then you will be asked a couple of questions about the distribution and release of the container, which you would like to create.
A common mistake is a typo in the architecture: it is armv7l (arm seven el), not armv71 (arm seventy-one).
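If you already know the answers, you can pass them to the download template up front after the -- separator and skip the questions (the distribution and release here are only examples):

lxc-create -t download -n name_of_lxc_container -- -d debian -r stretch -a armv7l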
The first thing you should do in a new container is to set a strong password. This has to be done through the CLI (command-line interface). If you run the following command, you will get a root shell inside the container:
lxc-attach -n name_of_lxc_container
Now you can set your password using the passwd command. It is also a good idea to make sure that the network is set up correctly; for example, you can enable SSH so that next time you can SSH directly into your container. You can find out how to set up the network or enable SSH in the documentation for your distribution of choice.
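A minimal session might look like this (the SSH setup line assumes a Debian-based container and is only an example; other distributions differ):

lxc-attach -n name_of_lxc_container
# Now you are root inside the container
passwd
# On a Debian-based container, SSH could be enabled like this:
apt update && apt install -y openssh-server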
You can start and stop the container from both LuCI and the CLI (command-line interface). In LuCI you will find buttons for this and you will see the current status of the container. If you prefer the CLI, these commands might be useful to you:
lxc-ls -f – lists information about all configured containers
lxc-info -n name_of_lxc_container – displays information about a specific container
lxc-start -n name_of_lxc_container – starts the container
lxc-stop -n name_of_lxc_container – stops the container
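A typical session could look like this (the container name and the lxc-ls output are illustrative):

lxc-start -n my_first_container
lxc-ls -f
# NAME                STATE    AUTOSTART GROUPS IPV4          IPV6
# my_first_container  RUNNING  0         -      192.168.1.50  -
lxc-stop -n my_first_container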
To enable automatic startup of your container at boot, you need to edit the configuration file /etc/config/lxc-auto.
Here is an example configuration file:
config container
	option name my_first_container
	option timeout 60

config container
	option name my_second_container
	option timeout 120
As you can see, you can configure multiple container sections. Every container listed here will start at boot, and each of them will be correctly halted during shutdown. Set the timeout option to specify how much time, in seconds, a container has to shut down gracefully before being killed. The default value is 300.
Alpine Linux used to come with a preconfigured network and worked immediately after installation. Why isn't it working now?
At the beginning of February 2018, the people at LinuxContainers.org decided to drop the armhf architecture for Alpine. The community asked us if we could bring it back.
Right now we use an image from the official website AlpineLinux.org, but it is a mini root file system, primarily made for Docker or chroots, which is why you need to set up the network and a few other things yourself. To find out how to set it up, have a look at our community documentation.
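As a sketch, a basic DHCP network setup inside the Alpine container could look like this (run inside the container; see the community documentation for the full procedure):

# Configure eth0 for DHCP
cat > /etc/network/interfaces <<EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
EOF
# Start networking now and at every boot
rc-update add networking boot
rc-service networking start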
In the meantime, we created an issue on LinuxContainers.org's GitHub, and they should bring Alpine back, but only for LXC 2.0. This version will be included in Turris OS 4.0.
Is it possible to have Docker on Turris Omnia?
Docker isn't officially supported. However, if you know what you are doing and you really need to have Docker on the Turris Omnia, you can follow the instructions on our forum.