Securely Isolating Development Environments for Agentic Workflows
There are many reasons you may want to set up an isolated environment. Perhaps you have many projects you're working on and want each one isolated for convenience. Perhaps you are experimenting with new tooling and want a test environment you can easily destroy when you're done.
For me, and the reason for writing this article, it is to create an isolated environment where I can let my AI agents run freely without fear of them having access to my personal files and environments.
Prior to the world of AI agents, I used containers primarily to isolate work environments. When I switched projects, I could simply destroy the container, freeing up all the space the project had taken and removing any dependencies I did not actually want in my personal environment.
Lastly, containers make it super easy to work with multiple database versions. While there are other configurations that allow you to run multiple database versions side by side, containers keep it simple.
Linux Containers
The tools we will be using are Linux Containers (LXC) and Linux Container Daemon (LXD).
Linux Containers (LXC) is a low-level containment tool for creating isolated environments.
LXD is a daemon built on top of Linux Containers that you generally interface with to manage and enter the containers. LXD used to be part of the Linux Containers project, but the team behind it has chosen to go its own way.
As a result, the Linux Containers team has moved forward with directly supporting Incus, a fork of LXD. There may come a time when it is better suited to use Incus over LXD, assuming you are using LXC.
For the purposes of this article, I will use LXC and LXD. You can learn more about LXC and Incus at linuxcontainers.org and you can go to canonical.com/lxd for details about LXD.
Installation
Prior to installing, it is worth pointing out that the examples in this installation process are written with Arch Linux in mind. Nothing here is overly specific to Arch Linux, and you should be able to identify where to substitute your own distro's commands.
First, install Linux Containers (lxc).
yay -Sy lxc
Second, install the Linux Container Daemon (lxd).
yay -Sy lxd
Once installed, you will need to enable the Linux Container Daemon service.
sudo systemctl enable --now lxd.service
Add your user to the LXD group.
sudo usermod -aG lxd $USER
Restart your computer or logout and log back in.
Once you are back in, you are ready to initialize. Run the initialization and you will be prompted for configuration details. Generally speaking, you will want to stick with the defaults.
lxd init
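If you already know you want the defaults, LXD can also initialize non-interactively and skip the prompts entirely:

```shell
# Initialize LXD with its default configuration, without interactive prompts.
lxd init --auto
```

This is handy for scripted setups; for anything custom (storage backend, network bridge), stick with the interactive `lxd init`.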
Once completed, you are ready to move on to configuring Linux Containers.
Creating a Container
There are a lot of options for creating a container. You can review the documentation for the full details and the various configuration options. For simplicity's sake, you will create a standard Arch Linux container. To do this, run:
lxc launch images:archlinux <your_unique_container_name>
There are a lot of cool options for the container you create, such as mimicking what might be available in AWS by limiting the CPU cores and RAM size.
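For example, LXD exposes `limits.cpu` and `limits.memory` config keys. A sketch of constraining a container to roughly a small cloud instance (the 2-core / 4 GiB values here are just illustrative):

```shell
# Limit the container to 2 CPU cores and 4 GiB of RAM,
# roughly mimicking a small cloud instance.
lxc config set <your_unique_container_name> limits.cpu 2
lxc config set <your_unique_container_name> limits.memory 4GiB
```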
Configure Container Network
Ideally, you will want to be able to connect to the services you're building in the isolated environment. To facilitate this, you will configure an entry point into the environment.
Override the eth0 device for the container so you can statically set the IP address, set the eth0 IP address, and then restart the container.
lxc config device override <your_unique_container_name> eth0
lxc config device set <your_unique_container_name> eth0 ipv4.address <your_ipv4_address>
lxc restart <your_unique_container_name>
Add the entry to /etc/hosts with sudo nvim /etc/hosts on the host machine.
<your_ipv4_address> <your_unique_container_name>
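With the hosts entry in place, you can verify from the host that the name resolves to the static address and the container answers:

```shell
# Confirm the hostname resolves and the container responds.
ping -c 1 <your_unique_container_name>

# You can also confirm the assigned address from LXD's side.
lxc list <your_unique_container_name>
```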
Running the Containerized Environment
Prior to being able to run the containerized environment, you are going to need to update the Subordinate User ID (subuid) and the Subordinate Group ID (subgid) files.
The purpose of the subuid and subgid files is to allow a regular, unprivileged user to act as if they are a privileged user within the containerized environment. This secures the host environment by preventing any potential attackers from accessing privileged content on the host machine should they break out of the containerized environment.
You will want to update the /etc/subuid and the /etc/subgid files with
something like root:1000000:1000000000.
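Assuming that mapping, both files can be updated in one step from the host:

```shell
# Append the same subordinate ID mapping to both files (requires root).
echo 'root:1000000:1000000000' | sudo tee -a /etc/subuid /etc/subgid
```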
It is important to understand what this actually does. Adding this snippet says, "Give the root user permission to use a block of 1,000,000,000 subordinate user/group IDs starting at 1,000,000."
The coordination of the user IDs and group IDs given to each container is managed by the Linux Container Daemon, and the daemon handles the privilege mapping. The reason we give such a large block is to allow the Linux Container Daemon to coordinate IDs across multiple containers. If you have two containers, the daemon needs to track which IDs belong to which container so that it can properly isolate them from each other.
Once that is added to the appropriate files, you are ready to run the container. You do this by running:
lxc exec <your_unique_container_name> -- /bin/bash
Note: You will be logged in as root within the container. This does NOT have root permission to your host machine.
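You can see the mapping at work from the host. A process started as root inside the container shows up on the host owned by a subordinate UID, not real root. A quick sketch, assuming the mapping above:

```shell
# Start a long-running process as root inside the container.
lxc exec <your_unique_container_name> -- sleep 300 &

# On the host, that process is owned by a subordinate UID
# (e.g. 1000000), not the host's root (UID 0).
ps -eo uid,comm | grep sleep
```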
It is worth pointing out that while the command is named lxc, it is actually owned by the Linux Container Daemon. This is a poor naming decision by the Linux Container Daemon team, in my opinion, but it is important to recognize in case you need to look something up: you will want to search the Linux Container Daemon (LXD) documentation, not the Linux Containers (lxc) library.
Setting Up a Container User
Now that you are running your isolated container, the fun work of setting up a new system begins. First, you will want to add the user and groups you will generally work as within the container.
While it may be obvious, it is worth pointing out that you should use a different user name and password within the containerized environment. This is just an added security measure to prevent anybody or anything from having any potential insights into how they may be able to compromise your host machine.
Don't be like trash_dev. Don't reuse a single password.
Add your user and set the password.
useradd -m -G wheel <container_user_name>
passwd <container_user_name>
Then you will want to edit the /etc/sudoers file and uncomment the following line so all users in the wheel group have sudo privileges.
# %wheel ALL=(ALL:ALL) ALL
And now you can switch to your user by executing su - <container_user_name>.
From here on out, it is up to you to configure your containerized environment according to your needs and your specific distro.