October 7, 2019

Technical Note: LXD Database and patch SQL

I deleted some unused ZFS storage pools without realizing that they were still used by LXD, so today after a machine reboot, LXD refused to start, and the following messages were output to lxd.log:

t=2019-10-07T23:02:43+0900 lvl=info msg="Initializing storage pools"
t=2019-10-07T23:02:43+0900 lvl=eror msg="Failed to start the daemon: ZFS storage pool \"juju-zfs\" could not be imported: "
t=2019-10-07T23:02:43+0900 lvl=info msg="Starting shutdown sequence"

LXD settings are stored in a dqlite database (distributed SQLite) at /var/snap/lxd/common/lxd/database/global/db.bin, so I went to confirm the stored settings.
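The session below was opened with the plain sqlite3 CLI (the path is the one used by the snap-packaged LXD; treat this as read-only inspection):

```shell
# Open LXD's database file with the sqlite3 CLI (snap package path)
sudo sqlite3 /var/snap/lxd/common/lxd/database/global/db.bin
```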

sqlite> .tables
certificates                        networks
config                              networks_config
images                              networks_nodes
images_aliases                      nodes
images_nodes                        operations
images_properties                   profiles
images_source                       profiles_config
instances                           profiles_config_ref
instances_backups                   profiles_devices
instances_config                    profiles_devices_config
instances_config_ref                profiles_devices_ref
instances_devices                   profiles_used_by_ref
instances_devices_config            projects
instances_devices_ref               projects_config
instances_profiles                  projects_config_ref
instances_profiles_ref              projects_used_by_ref
instances_snapshots                 schema
instances_snapshots_config          storage_pools
instances_snapshots_config_ref      storage_pools_config
instances_snapshots_devices         storage_pools_nodes
instances_snapshots_devices_config  storage_volumes
instances_snapshots_devices_ref     storage_volumes_config
sqlite> select * from storage_pools;
sqlite> select * from storage_pools_config;

It seems that the storage pool settings are stored in two tables: storage_pools and storage_pools_config.

I tried to delete the related records from the above tables and restarted the LXD process, but LXD still failed with the same errors. I went back to the database and confirmed that those records were still there even after I had deleted them. It seems that LXD recovers those records from its log files. I could have read the code to see how the recovery works, but that takes time, so I decided to look for database documentation in the LXD source code while creating a new topic to ask for help from the LXD community.

I skimmed through the LXD database documentation and found that I could create a patch.global.sql file to remove the unnecessary records, as these SQL statements are run at a very early stage of LXD startup. I created a file called patch.global.sql with statements to remove the unneeded settings, and started LXD.
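A sketch of what that looked like, assuming the pool to drop is the juju-zfs one from the error above and the snap-packaged LXD paths (adjust the pool name and paths to your setup):

```shell
# Write the patch file; LXD applies it early during startup.
sudo tee /var/snap/lxd/common/lxd/database/patch.global.sql <<'EOF'
-- Drop the config rows first, then the pool itself.
DELETE FROM storage_pools_config
 WHERE storage_pool_id IN (SELECT id FROM storage_pools WHERE name = 'juju-zfs');
DELETE FROM storage_pools WHERE name = 'juju-zfs';
EOF
sudo snap restart lxd
```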

And the LXD process started again, with all my in-development containers intact!

Lesson learned: before removing anything, look for all its usages.

October 3, 2019

Visualize Enviro pHat sensors data

Enviro pHAT is a low-cost environmental sensing board that lets you measure temperature, light, colour, motion, and analog sensors. The accompanying Python SDK makes writing a program to get data from the Enviro pHAT as easy as pie. The Enviro pHAT can be combined with a Raspberry Pi Zero to become a simple room-conditions monitoring board.

With Grafana and Prometheus, it looked easy to at least build a dashboard for the Enviro pHAT sensor data, so I decided to build one.

System Diagram

Though the Enviro pHAT provides an SDK to get data from the sensors, we need a way to send that data to the server, and we also need to think about the connection between the device and the server side. In a home environment I used Wi-Fi, as the Pi Zero supports Wi-Fi connections. Prometheus fetches data by scraping Prometheus clients, so I will add a Prometheus client on the device side for the Prometheus server to scrape. The data is stored in the Prometheus data store and visualized by Grafana.

System Diagram: Prometheus fetch data from RaspPi Zero over Wifi

Device Side

Soldering sensor board and setup OS

If you buy an un-soldered Enviro pHAT, you will probably need to solder it to the accompanying 2x20-pin female header. The soldering is not that difficult, and Pimoroni has a detailed guide for the task. Below are my Raspberry Pi Zero and the Enviro pHAT sensor board.

My RaspBerry Pi Zero
Enviro pHAT board
Enviro pHat

For the Pi Zero, you need to download Raspbian and write it to the SD card. Because we are going to connect to the Raspberry Pi over Wi-Fi, we need to set up Wi-Fi for it. After writing the OS image to the SD card, put the SD card into your PC, open the SD card's boot partition, and add the two following files.

wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
network={
        ssid="WIFI SSID"
        psk="WIFI SSID PASSWORD"
}

ssh (empty content)

The purpose of the ssh file is to tell Raspbian to enable the ssh server by default; wpa_supplicant.conf provides the Wi-Fi credentials.

Plug in the power supply. You should be able to find the Raspberry Pi's IP address through your home router (or ping the whole IP segment to see which address was assigned).

You should be able to ssh into the Raspberry Pi through its IP address. Perform the basic server setup:

$ ssh pi@$PI_IP_ADDRESS
pi@$PI_IP_ADDRESS's password: ****
Linux pizero 4.19.66+ #1253 Thu Aug 15 11:37:30 BST 2019 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Oct  3 05:50:11 2019 from

$ sudo apt update
$ sudo vim /etc/hostname
$ ...
$ sudo reboot 

Final result
Final result after soldering enviro phat, setting up OS for rasp pi zero, and plugging in the power supply

Install Enviro pHat SDK

$ curl https://get.pimoroni.com/envirophat | bash
$ pip3 install prometheus_client
$ git clone git@github.com:telescreen/raspizero-playground.git
$ cd raspizero-playground
$ python3 enviro-collectd.py -d -f enviro-collectd.log --pid enviro-collectd.pid

I wrote a program to export the environmental data through the Prometheus client: https://github.com/telescreen/raspizero-playground . After running the program as above, you should be able to confirm the exported Prometheus metrics.
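The exporter boils down to something like the sketch below. The metric names and the port are my assumptions here, not necessarily what the repository uses, and the envirophat import falls back to dummy readings when the board is absent:

```python
import time

from prometheus_client import Gauge, start_http_server

try:
    # Real sensors, available only when the Enviro pHAT SDK is installed
    from envirophat import weather, light

    def read_sensors():
        # Units depend on the SDK defaults; check the envirophat docs
        return {"temperature_c": weather.temperature(),
                "pressure_hpa": weather.pressure(),
                "light_level": light.light()}
except ImportError:
    # Fallback so the script can be tried without the hardware attached
    def read_sensors():
        return {"temperature_c": 25.0, "pressure_hpa": 1013.0, "light_level": 100.0}

# One Prometheus gauge per reading
GAUGES = {name: Gauge("envirophat_" + name, "Enviro pHAT " + name)
          for name in ("temperature_c", "pressure_hpa", "light_level")}

def update_once():
    """Read all sensors once and push the values into the gauges."""
    readings = read_sensors()
    for name, value in readings.items():
        GAUGES[name].set(value)
    return readings

if __name__ == "__main__":
    start_http_server(8000)  # port is an assumption; match your scrape job
    while True:
        update_once()
        time.sleep(15)
```

Prometheus then scrapes `http://<pi-address>:8000/metrics` on its normal schedule.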


Server Side

I have LXD running on my local machine, and I have Juju bootstrapped with the local cloud as the cloud environment. As the Juju charm store already provides prometheus2 and grafana charms, installing these middlewares is simply a matter of typing the following commands.

$ juju deploy prometheus2
$ juju deploy grafana
$ juju add-relation prometheus2:grafana-source grafana:grafana-source

It will take a while for Juju to download and install the charms and necessary software. After Juju finishes installing grafana and prometheus2, we should see both applications active in juju status.

$ juju status
Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  lxd         localhost/localhost  2.6.9    unsupported  16:19:07+09:00

App          Version  Status  Scale  Charm        Store       Rev  OS      Notes
grafana               active      1  grafana      jujucharms   32  ubuntu
prometheus2           active      1  prometheus2  jujucharms   11  ubuntu

Unit            Workload  Agent  Machine  Public address  Ports               Message
grafana/0*      active    idle   1      3000/tcp            Started grafana-server
prometheus2/0*  active    idle   0     9090/tcp,12321/tcp  Ready

Machine  State    DNS          Inst id        Series  AZ  Message
0        started  juju-63ab09-0  bionic      Running
1        started   juju-63ab09-1  bionic      Running

Next, we need to add the metrics endpoint exposed on the device to Prometheus's scrape jobs. I prepared a simple scrape-job setup.
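A minimal scrape-job config of the kind the prometheus2 charm accepts might look like this (the target address and port are placeholders for the Pi's endpoint):

```yaml
- job_name: envirophat
  scrape_interval: 15s
  static_configs:
    - targets: ['<PI_IP_ADDRESS>:8000']
```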


Add the scrape job with Juju using the following command:

$ juju config prometheus2 scrape-jobs=@scrape-job.yaml

After about a minute, the device's IP address should appear in the scrape targets with state UP.

Log into Grafana with the password retrieved by the following action. The remaining settings are performed through the Grafana UI and are quite intuitive, so I excluded them from this post.

$ juju run-action --wait grafana/0 get-admin-password
  id: 43e7d1e6-3251-437b-816d-d2aa182c5554
  results:
    password: NJBMMVNqW6ssKywL
  status: completed
  timing:
    completed: 2019-10-03 07:28:44 +0000 UTC
    enqueued: 2019-10-03 07:28:41 +0000 UTC
    started: 2019-10-03 07:28:44 +0000 UTC
  unit: grafana/0

Final Results

After about 3 hours, I could build the following dashboard for the data captured by the Enviro pHAT's sensors. The low light level is because I put the sensors in a dark corner of my room. My room is definitely cooler than the temperature the Enviro pHAT is reporting; the reading is probably affected by heat from the Pi Zero.

A storm is approaching my city, so I am expecting to see lower-than-usual air pressure. The pressure should rise back over a few days after the storm passes.

I was able to build a simple home-conditions monitoring system in a short time with open-source technology. After this project, I appreciate the importance of open-source software more and more.

Dashboard with temperature, air pressure and light information

October 1, 2019

Use MaaS to manage local computing resource

Besides managing bare-metal machines, MaaS (Metal as a Service) can also manage KVM guests, as long as those machines boot from the network via PXE. This feature allows us to efficiently manage local computing resources while, at the same time, leveraging Juju for automatic middleware deployment. In this blog post, I will detail the steps to manage KVM guest machines using MaaS.

System diagram

MaaS managed Physical Machine Diagram
MaaS Managed Machine: MaaS Virtual Machine creates other Virtual machines through KVM Pod. All VMs access the internet through physical machine's NIC using NAT.

The MaaS server will be deployed to a virtual machine running inside the physical machine. The reason for installing MaaS into a virtual machine instead of directly onto the physical machine is that we want the flexibility of a VM when we upgrade MaaS, take backups, and so on. The MaaS server will manage its "parent" physical machine's resources through a KVM pod.

All machines will access the Internet through the physical machine's NIC using NAT.

We could easily extend this diagram by adding more virtual NICs for more complex networks. We could also easily add more physical machines to MaaS, as long as each machine is reachable from the MaaS VM.

The Internet access for MaaS server is necessary because MaaS needs to fetch OS images from the remote repository.

Preinstall preparation

We will need to install KVM on the physical machine. The procedure to install KVM on Ubuntu Linux is described at https://help.ubuntu.com/community/KVM/Installation.

Because the MaaS server will manage network resources, we need to modify the "default" libvirt network to disable its DHCP. MaaS needs to manage DHCP on its network itself in order to boot machines and automatically install the OS.

To disable it, in the virt-manager UI, go to

Virt-Manager UI > Edit > Connection Details > [Virtual Networks] Tab

Remove the default network by stopping and deleting it, then re-add a new network named "default". Modify its IP range if you prefer, and remember to uncheck the "Enable DHCPv4" option. Accept the other options as defaults.

My default network is as below

Default Network with DHCP disabled and is NATed to the physical NIC
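If you prefer the command line, the same result can be achieved with `virsh net-edit default`: a network definition without a `<dhcp>` element disables DHCP. A minimal sketch (the addresses below are libvirt's defaults; adjust as needed):

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'/>
</network>
```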

With the network ready, we move on to installing MaaS.

Install MaaS Server

MaaS Installation

For the MaaS server, we will use a regular KVM management client (such as virt-manager) to create the MaaS VM. We also need to download an Ubuntu Server .iso from the Ubuntu homepage to install it.

The process of installing the MaaS VM's OS is the same as a normal Ubuntu Server installation.

MaaS Server Settings

Because we disabled DHCP, the MaaS VM will not be allocated an IP address, so we need to access the machine on its console and set up its network statically.

I logged into the machine's console and made the following modification to /etc/netplan/50-cloud-init.yaml

$ sudo vim /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        ens3:                      # interface name may differ on your VM
            dhcp4: false
            addresses: [<STATIC_IP>/24]
            gateway4: <GATEWAY_IP>
            nameservers:
                addresses: [<DNS_IP>]
$ sudo netplan apply

Other settings, like the hostname, can also be configured here. After setting everything up, reboot the machine so all settings are applied, and to confirm that the machine reboots without problems.

MaaS Installation

MaaS is easily installed using apt:

$ sudo apt install maas
  Applying metadataserver.0011_script_metadata... OK
  Applying metadataserver.0012_store_script_results... OK
  Applying metadataserver.0013_scriptresult_physicalblockdevice... OK
  Applying metadataserver.0014_rename_dhcp_unconfigured_ifaces... OK
  Applying metadataserver.0015_migrate_storage_tests... OK
  Applying metadataserver.0016_script_model_fw_update_and_hw_config... OK
  Applying metadataserver.0017_store_requested_scripts... OK
  Applying metadataserver.0018_script_result_skipped... OK
  Applying piston3.0002_auto_20151209_1652... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
Setting up maas (2.4.2-7034-g2f5deb8b8-0ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.19) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Processing triggers for rsyslog (8.32.0-1ubuntu4) ...
$ sudo maas init 
Create first admin account:
Username: admin
Email: ha@buihanotes.com
Import SSH keys [] (lp:user-id or gh:user-id):

After installing MaaS, confirm that you can open the MaaS web interface by accessing port 5240 on the MaaS server.

Two things worth doing after the MaaS installation are:

  1. Choose an OS image so MaaS can sync it from the remote repository.
  2. Synchronize your public key from GitHub or Launchpad so you can ssh into new virtual machines.

Setup KVM Pods

From MaaS version 2.6, the MaaS server has an additional option to install KVM automatically. If you have an existing machine, you should probably add the physical machine as a host manually so as not to break things.

To add the physical machine as a host, we add a KVM pod to MaaS. For the pod to be able to talk to the physical machine, we add a private ssh key for the maas user, allowing it to ssh into the physical machine.

$ cd /var/lib/maas
$ sudo mkdir .ssh && sudo chown maas. .ssh && sudo chmod 700 .ssh
$ sudo vim .ssh/id_rsa
## :wq
$ sudo chown maas .ssh/id_rsa && sudo chmod 600 .ssh/id_rsa

Open the MaaS web UI and add a pod with a virsh address as follows.


Because the physical host is reachable from the virtual machine at the NAT network's gateway address, we add that IP as the hypervisor endpoint. One thing to note here is that the $USERNAME should be able to communicate with the local hypervisor without sudo; in other words, the user on the physical machine should belong to the libvirt group.
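The virsh address takes the usual qemu+ssh form (the username and gateway IP below are placeholders for your setup):

```
qemu+ssh://<USERNAME>@<GATEWAY_IP>/system
```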

Add Virtual Machine as KVM Pods

Provision Virtual Machine

If every step works without problems, you should be able to see the pod summary with the number of virtual machines and their specifications.

In my case, I have a machine with 16 cores and 32 GB of RAM. I have 3 storage pools with different sizes and specifications.

Provisioning a machine is as simple as clicking the [Take action] button and choosing [Compose].

Compose a new Virtual machine by choosing its spec


In this post, I wrote about the steps to add a physical machine to a MaaS server, allowing MaaS to manage local computing resources and turning a local computer into a cloud-like environment. In upcoming posts, I will write in more detail about how to manage the virtual machines and how to combine Juju with local MaaS to rapidly deploy experimental Linux environments.