Popular posts
For many years, I’ve run my Unifi network controller with Docker Compose, using the included MongoDB server.
But now it is time to change this: to externalize and upgrade it.
Previous situation
Until now, Unifi was deployed using the included MongoDB server.
```yaml
unifi:
  image: goofball222/unifi:10.0.160-ubuntu
  hostname: unifi
  user: unifi
  restart: always
  ports:
    - 3478:3478/udp   # STUN connection
    - 6789:6789       # throughput measurement from Android/iOS app
    - 8080:8080       # UAP/USW/USG to inform controller
    - 8443:8443       # controller GUI / API
    - 8880:8880       # HTTP portal redirect
    - 8843:8843       # HTTPS portal redirect
    - 10001:10001/udp # UBNT discovery broadcasts
  environment:
    - DB_MONGO_LOCAL=true
  volumes:
    - /srv/unifi/data:/usr/lib/unifi/data
    - /srv/unifi/log:/usr/lib/unifi/log
    - /srv/unifi/cert:/usr/lib/unifi/cert
```
Migration plan
- Check Unifi backup
- Stop the Unifi container
- Copy/move the MongoDB folder to the new location
- Start the MongoDB container, using the same version
- Update Unifi Docker Compose to use the external MongoDB
- Start the Unifi container
Check Unifi backup
```
# cd /srv
# ls -lh unifi/data/backup/
```
Stop the Unifi container
```
$ docker compose down unifi
```
Copy the data
```
# cd /srv
# cp -rp unifi/data/db mongounifi
```
New mongounifi service
Using the same version as in the Unifi container, to be upgraded later.
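A minimal sketch of what that external service could look like in Compose. The image tag below is a placeholder, not the post's actual version: match whatever MongoDB version the bundled server was running (e.g. check with `mongod --version` inside the old container), and point the volume at the data directory copied above.

```yaml
mongounifi:
  image: mongo:4.4        # placeholder tag: use the same version as the bundled server
  restart: always
  volumes:
    - /srv/mongounifi:/data/db
```

The `unifi` service then needs `DB_MONGO_LOCAL=false` and a connection URI pointing at `mongounifi`; the exact variable names depend on the image, so check its documentation.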
How to expose Prometheus metrics from a JSON blob returned by a server.
Here is a walkthrough, based on JSON retrieved from an EthSwarm node.
Retrieve the JSON from the EthSwarm status API
Here is a sample of the status JSON, from the API documentation:
```json
{
  "overlay": "36b7efd913ca4cf880b8eeac5093fa27b0825906c600685b6abdd6566e6cfe8f",
  "proximity": 0,
  "beeMode": "light",
  "reserveSize": 0,
  "reserveSizeWithinRadius": 0,
  "pullsyncRate": 0,
  "storageRadius": 0,
  "connectedPeers": 0,
  "neighborhoodSize": 0,
  "requestFailed": true,
  "batchCommitment": 0,
  "isReachable": true,
  "lastSyncedBlock": 0,
  "committedDepth": 0
}
```
Prometheus exporter package
ethswarm/status.go starts with the package declaration and imports.
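Before any metrics can be exposed, the status JSON has to be decoded into a Go struct. As a minimal, hedged sketch (this is not the post's actual `ethswarm/status.go`, and it uses only the standard library rather than the Prometheus client; the struct and function names are assumptions), the decoding step could look like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status mirrors a subset of the fields in the EthSwarm /status sample above.
// JSON tags are taken from the sample; the struct name is an assumption.
type status struct {
	Overlay        string `json:"overlay"`
	BeeMode        string `json:"beeMode"`
	ConnectedPeers int    `json:"connectedPeers"`
	IsReachable    bool   `json:"isReachable"`
	StorageRadius  int    `json:"storageRadius"`
}

// parseStatus decodes the raw JSON body returned by the status API.
func parseStatus(body []byte) (status, error) {
	var s status
	err := json.Unmarshal(body, &s)
	return s, err
}

func main() {
	sample := []byte(`{"overlay":"36b7efd913ca4cf880b8eeac5093fa27b0825906c600685b6abdd6566e6cfe8f","beeMode":"light","connectedPeers":0,"isReachable":true,"storageRadius":0}`)
	s, err := parseStatus(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("beeMode=%s reachable=%t peers=%d\n", s.BeeMode, s.IsReachable, s.ConnectedPeers)
}
```

In a real exporter, each numeric or boolean field would then be published as a Prometheus gauge.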
For some time now, every reboot has stalled for two minutes, waiting on a faulty systemd service: systemd-networkd-wait-online.service.
```
server# journalctl --boot -u systemd-networkd-wait-online.service
Jun 22 13:38:00 server systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 22 13:40:00 server systemd-networkd-wait-online[472]: Timeout occurred while waiting for network connectivity.
Jun 22 13:40:00 server systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jun 22 13:40:00 server systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
Jun 22 13:40:00 server systemd[1]: Failed to start systemd-networkd-wait-online.service - Wait for Network to be Configured.
```
The service is linked to the network-online systemd target.
```
server# ls -l /etc/systemd/system/network-online.target.wants/
total 4
lrwxrwxrwx 1 root root 42 Mar 23 12:02 networking.service -> /usr/lib/systemd/system/networking.service
lrwxrwxrwx 1 root root 60 Apr 26 18:26 systemd-networkd-wait-online.service -> /usr/lib/systemd/system/systemd-networkd-wait-online.service
```
Identify the problem
Reproduce the problem by running the command without any additional arguments: real time is 2 minutes and the return code is 1.
```
server# time /usr/lib/systemd/systemd-networkd-wait-online ; echo $?
Timeout occurred while waiting for network connectivity.

real	2m0.236s
user	0m0.004s
sys	0m0.011s
1
```
You have a small or mid-sized server and you want to install new software or a custom kernel on it. But building anything on it takes forever compared to your brand-new 8 or 12 core modern laptop with a fast NVMe SSD.
In addition, you don't want to pollute your server with all the build dependencies.
Docker is here to save the day, allowing you to create fast and disposable build environments.
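As a minimal, hedged sketch of such a disposable environment (the package list is an assumption suited to kernel builds; adjust it to whatever you compile), the build image could look like:

```dockerfile
# Disposable build environment: all build dependencies live in the image,
# never on the server itself.
FROM debian:bookworm-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential bc bison flex libssl-dev libelf-dev \
        fakeroot devscripts && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /build
```

You would then build it once (`docker build -t buildenv .`) and mount the source tree into throwaway containers, e.g. `docker run --rm -v "$PWD":/build buildenv make -j"$(nproc)" bindeb-pkg` for a kernel tree.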
```
-rw-r--r-- 1 fs fs 8.6M 2024-12-22 17:33 linux-headers-6.12.6-test_6.12.6-1_amd64.deb
-rw-r--r-- 1 fs fs  19M 2024-12-22 17:33 linux-image-6.12.6-test_6.12.6-1_amd64.deb
-rw-r--r-- 1 fs fs 285M 2024-12-22 17:33 linux-image-6.12.6-test-dbg_6.12.6-1_amd64.deb
-rw-r--r-- 1 fs fs 1.4M 2024-12-22 17:33 linux-libc-dev_6.12.6-1_amd64.deb
```
Here are two examples, with the Linux kernel image and the zfs-linux backport.
1.8 GB Docker image
As I encountered some Ruby problems with Vagrant on my Arch Linux laptop, I decided to use Docker as a workaround.
```
$ vagrant
/usr/lib/ruby/3.3.0/rubygems/specification.rb:2245:in `raise_if_conflicts': Unable to activate vagrant_cloud-3.1.1, because rexml-3.3.2 conflicts with rexml (~> 3.2.5) (Gem::ConflictError)
	from /usr/lib/ruby/3.3.0/rubygems/specification.rb:1383:in `activate'
	from /usr/lib/ruby/3.3.0/rubygems/core_ext/kernel_gem.rb:62:in `block in gem'
	from /usr/lib/ruby/3.3.0/rubygems/core_ext/kernel_gem.rb:62:in `synchronize'
	from /usr/lib/ruby/3.3.0/rubygems/core_ext/kernel_gem.rb:62:in `gem'
	from /opt/vagrant/embedded/gems/gems/vagrant-2.4.2/bin/vagrant:17:in `block in <main>'
	from /opt/vagrant/embedded/gems/gems/vagrant-2.4.2/bin/vagrant:16:in `each'
	from /opt/vagrant/embedded/gems/gems/vagrant-2.4.2/bin/vagrant:16:in `<main>'
```
So let's create a Docker image, using a Debian slim base image and just installing Vagrant and some Vagrant plugins.
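A hedged sketch of such an image (the HashiCorp APT repository is the official Vagrant source, but the plugin choice, working directory, and entrypoint are assumptions; pick the plugins you actually use):

```dockerfile
FROM debian:bookworm-slim
# Install Vagrant from the official HashiCorp APT repository
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl gnupg ca-certificates && \
    curl -fsSL https://apt.releases.hashicorp.com/gpg \
        | gpg --dearmor -o /usr/share/keyrings/hashicorp.gpg && \
    echo "deb [signed-by=/usr/share/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com bookworm main" \
        > /etc/apt/sources.list.d/hashicorp.list && \
    apt-get update && \
    apt-get install -y vagrant && \
    rm -rf /var/lib/apt/lists/*
# Example plugin (assumed); swap in the plugins you need
RUN vagrant plugin install vagrant-hostmanager
WORKDIR /work
ENTRYPOINT ["vagrant"]
```

Running it with the project directory mounted at `/work` keeps the host's Ruby environment entirely out of the picture.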
Introduction
BOSH is an open source project from Cloud Foundry that allows release engineering, software deployment, and application lifecycle management of large-scale, distributed systems.

I wanted to take a look at it, and fortunately it is possible to play with it in a local playground environment on VirtualBox using the bosh-lite instructions.