Apache Guacamole and docker-compose

Guacamole is a really nifty piece of software to use, but can be somewhat annoying to initially set up. Here we bring up a basic installation (SSL and various MFA/LDAP auth add-ons are beyond the scope of this tutorial) using docker-compose.

downloading the images:

docker pull guacamole/guacamole
docker pull guacamole/guacd
docker pull mariadb/server

creating the database initialization script:

docker run --rm guacamole/guacamole /opt/guacamole/bin/initdb.sh --mysql > guac_db.sql

creating our initial docker-compose.yaml:

version: '3'
services:

  guacdb:
    container_name: guacdb
    image: mariadb/server:latest
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'MariaDBRootPSW'
      MYSQL_DATABASE: 'guacamole_db'
      MYSQL_USER: 'guacamole_user'
      MYSQL_PASSWORD: 'MariaDBUserPSW'
    volumes:
      - 'guacdb-data:/var/lib/mysql'

volumes:
  guacdb-data:

Bringing the db container up:

docker-compose up -d

Copying db initialization script into the container:

docker cp guac_db.sql guacdb:/guac_db.sql

Opening a shell and initializing the db:

docker exec -it guacdb bash
cat /guac_db.sql | mysql -u root -p guacamole_db
exit
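Alternatively, you can skip the copy-and-shell dance and pipe the script straight in from the host in one step (the root password comes from the compose file above):

```shell
# Feed the init script to mysql inside the running container in one go.
docker exec -i guacdb mysql -u root -p'MariaDBRootPSW' guacamole_db < guac_db.sql
```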

Shutting down db container:

docker-compose down

Expanding our docker-compose.yaml:

version: '3'
services:

  guacdb:
    container_name: guacdb
    image: mariadb/server:latest
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'MariaDBRootPSW'
      MYSQL_DATABASE: 'guacamole_db'
      MYSQL_USER: 'guacamole_user'
      MYSQL_PASSWORD: 'MariaDBUserPSW'
    volumes:
      - 'guacdb-data:/var/lib/mysql'

  guacd:
    container_name: guacd
    image: guacamole/guacd
    restart: unless-stopped

  guacamole:
    container_name: guacamole
    image: 'guacamole/guacamole:latest'
    restart: unless-stopped
    ports:
      - '8080:8080'
    environment:
      GUACD_HOSTNAME: "guacd"
      MYSQL_HOSTNAME: "guacdb"
      MYSQL_DATABASE: "guacamole_db"
      MYSQL_USER: "guacamole_user"
      MYSQL_PASSWORD: "MariaDBUserPSW"
    depends_on:
      - guacdb
      - guacd

volumes:
  guacdb-data:

Bringing everything up again:

docker-compose up -d

Logging in:

At this point you should be able to browse to http://my.docker.ip.address:8080/guacamole and log in with guacadmin/guacadmin.
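If you'd rather verify from the command line before opening a browser, a quick curl against the endpoint should print a 200 (or possibly a redirect code, depending on the version) once the stack is fully up:

```shell
# Print just the HTTP status code for the Guacamole web app.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/guacamole/
```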

P.S: Despite the application container having a dependency on guacdb and guacd in the compose file, you can still run into minor trouble after system reboots: bringing the containers up on reboot is handled by the docker daemon (not docker-compose), which is unaware of the dependency and will happily start all containers at once without waiting for the required dependencies to become healthy.

The "restart: unless-stopped" policy should bring guacamole right back up and let it connect successfully, but you might see signs of a previously failed container launch in the logs immediately after a reboot. If this concerns you, you can disable the container autostart and run docker-compose via cron on reboot to bring up your stack, or use some alternative orchestration tool.
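If you go the cron route, a small wrapper that waits for the database before starting the stack avoids the race entirely. A sketch, assuming the compose file lives in /opt/guacamole (the path and the wait_for helper name are my own, not part of the stack):

```shell
#!/bin/sh
# Hypothetical helper: poll a command until it succeeds,
# giving up after a fixed number of attempts (1 second apart).
wait_for() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    sleep 1
  done
}

# Example use from an @reboot cron job (path is an assumption):
#   wait_for 30 docker exec guacdb mysqladmin ping --silent \
#     && docker-compose -f /opt/guacamole/docker-compose.yaml up -d
```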

Installing Server 2019 Core on Proxmox

Notes on initial VM configuration

During the initial creation of the VM, make sure your hard drive Bus/Device is set to SCSI and that your SCSI Controller Type is set to either "VirtIO SCSI" or "VirtIO SCSI single". It might be obvious to some that the IDE and SATA bus options are slow, but you will also find a lot of guides recommending VirtIO Block, and that's a trap I fell into when first getting started.

Alas, while its raw performance is great, VirtIO Block does not support the discard=on option (make sure the discard checkbox is ticked), an option that makes your life A LOT easier by automatically reclaiming unused space on the Proxmox host as you delete data inside the guest OS from a thin-provisioned virtual disk.

It’s a bit counter-intuitive, but the SCSI single option uses 1 controller per virtual disk while regular SCSI uses 1 controller for all virtual disks. While the performance difference might be negligible for most users, the main difference between the two is that SCSI single allows the use of iothread and multiqueue features, which may be of use for some workloads.

Additionally, make sure that Qemu Agent is set to Enabled in the VM options, and that you have the latest VirtIO Windows driver .ISO image downloaded to the Proxmox host and mounted as a CD-ROM drive inside the Windows guest VM. While it may sound very tempting to go with the stable VirtIO driver .ISO image based on the name alone, I highly recommend going with the latest image instead: the stable release severely lags behind, and you are almost guaranteed to run into issues trying to use it with the very latest Windows OS versions.
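The same settings can also be applied from the Proxmox host shell instead of the GUI; a sketch, where the VM ID 101 and the local-lvm disk name are placeholders for your own:

```shell
# Run on the Proxmox host; VM ID 101 and vm-101-disk-0 are assumptions.
qm set 101 --scsihw virtio-scsi-single                            # VirtIO SCSI single controller
qm set 101 --scsi0 local-lvm:vm-101-disk-0,discard=on,iothread=1  # discard + iothread on the disk
qm set 101 --agent enabled=1                                      # enable the QEMU guest agent
```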

Initial graphical installation

As you start the installation process, you will inevitably encounter a point where the installer cannot find your virtual hard drive. This is due to Windows not including the drivers for VirtIO SCSI storage, so this is where you press the Load Driver button and browse to the appropriate folder on your VirtIO driver image (for example D:\vioscsi\2k19\ ) and load your storage driver. The Windows installer should now recognize your virtual disk and let you partition it.

If you chose to use VirtIO as your network card during the initial VM configuration instead of the default Intel E1000 option, you should be able to load the appropriate NetKVM driver at this point in a similar fashion.

Setting up drivers and the guest agent post-install:

So you’ve completed the install, rebooted, logged in and are staring at the command-line, now what? Launch powershell.exe and get a list of available drives with:

Get-PSDrive

Create a local driver directory and copy drivers from the .ISO to local storage:

mkdir c:\drivers 
Copy-Item D:\vioscsi\2k19\ -Destination C:\drivers\vioscsi\2k19 -Recurse
Copy-Item D:\NetKVM\2k19\ -Destination C:\drivers\NetKVM\2k19 -Recurse
Copy-Item D:\Balloon\2k19\ -Destination C:\drivers\Balloon\2k19 -Recurse
Copy-Item D:\vioserial\2k19\ -Destination C:\drivers\vioserial\2k19 -Recurse
Copy-Item D:\guest-agent\ -Destination C:\drivers\guest-agent -Recurse

And then install them:

pnputil -i -a C:\Drivers\NetKVM\2k19\amd64\*.inf
pnputil -i -a C:\Drivers\Balloon\2k19\amd64\*.inf
pnputil -i -a C:\Drivers\vioserial\2k19\amd64\*.inf

Set up the guest agent:

Set-Location C:\drivers\guest-agent
.\qemu-ga-x64.msi

Even if you don’t intend to actually use the balloon feature, you should still install the service as it’s required for the guest to properly report RAM use to the Proxmox host:

Copy-Item C:\drivers\Balloon\2k19\amd64 -Destination 'C:\Program Files\Balloon' -Recurse
Set-Location 'C:\Program Files\Balloon'
blnsvr.exe -i
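With the agent service installed, you can sanity-check it from the Proxmox host side (again, VM ID 101 is a placeholder):

```shell
# Run on the Proxmox host, not inside the guest.
qm agent 101 ping   # exits successfully once the guest agent responds
```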

At this point you should be more or less done with the basics and you will probably want to configure your network settings, set up PSRemoting and perhaps do other things that are beyond the scope of this little guide.

A few gotchas:

“Help! My entire Proxmox host is crashing and rebooting during heavy writes!”

Odds are you are using ZFS on the host, and during abnormally high write I/O (such as running a benchmarking utility inside a guest) the host is running out of RAM, causing a panic. This is normally not a cause for concern, as synthetic write benchmarks rarely approximate real use conditions. If you are still concerned, or are running into issues in an actual real-world scenario, consider adding more RAM, tuning your cache, or adding some swap.
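One common form of cache tuning here is capping the ZFS ARC so it leaves headroom for the VMs. A sketch, assuming an 8 GiB cap (pick a value appropriate for your host's RAM):

```shell
# Cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes); adjust to taste.
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u    # persist the setting for the next boot

# Or apply it immediately without rebooting:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```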

“Reboot / Shutdown refuses to work on Windows guests in the Proxmox GUI, WTF?”

You forgot to enable Qemu Agent in the VM options and/or didn’t install guest-agent.