November 16, 2019

Perspectives

I find this picture intriguing.

To one chicken, the other chicken is on the other side.
To the viewer of this picture, we have two stupid chickens (or just one?).

I find it interesting because it is just one picture, yet we can perceive it differently depending on where we place our perspective.

October 7, 2019

Technical Note: LXD Database and patch SQL

I deleted some unused ZFS storage pools without realizing that they were used by LXD. Today, after a machine reboot, LXD refused to start up, and the following messages were output to lxd.log:

t=2019-10-07T23:02:43+0900 lvl=info msg="Initializing storage pools"
t=2019-10-07T23:02:43+0900 lvl=eror msg="Failed to start the daemon: ZFS storage pool \"juju-zfs\" could not be imported: "
t=2019-10-07T23:02:43+0900 lvl=info msg="Starting shutdown sequence"

LXD settings are stored in a dqlite database (distributed SQLite) at /var/snap/lxd/common/lxd/database/global/db.bin, so I opened that file with sqlite3 to confirm the stored settings.


sqlite> .tables
certificates                        networks
config                              networks_config
images                              networks_nodes
images_aliases                      nodes
images_nodes                        operations
images_properties                   profiles
images_source                       profiles_config
instances                           profiles_config_ref
instances_backups                   profiles_devices
instances_config                    profiles_devices_config
instances_config_ref                profiles_devices_ref
instances_devices                   profiles_used_by_ref
instances_devices_config            projects
instances_devices_ref               projects_config
instances_profiles                  projects_config_ref
instances_profiles_ref              projects_used_by_ref
instances_snapshots                 schema
instances_snapshots_config          storage_pools
instances_snapshots_config_ref      storage_pools_config
instances_snapshots_devices         storage_pools_nodes
instances_snapshots_devices_config  storage_volumes
instances_snapshots_devices_ref     storage_volumes_config
sqlite> select * from storage_pools;
1|lxd|zfs||1
2|juju-zfs|zfs||1
3|juju-btrfs|btrfs||1
sqlite> select * from storage_pools_config;
3|1|1|zfs.pool_name|lxd
4|1|1|source|lxd
5|1|1|volatile.initial_source|lxd
7|2|1|size|21GB
8|2|1|source|/var/snap/lxd/common/lxd/disks/juju-zfs.img
9|2|1|zfs.pool_name|juju-zfs
11|3|1|size|21GB
12|3|1|source|/var/snap/lxd/common/lxd/disks/juju-btrfs.img


It seems that the storage pool settings are stored in two tables: storage_pools and storage_pools_config.

I tried to delete the related records from the above tables and restarted the LXD process, but LXD still failed with the same errors. I went back to the database and confirmed that those records were still there even after I had deleted them. It seems that LXD recovers those records from its log files. I could have read the code to see how the recovery works, but that takes time, so I decided to look for database documentation in the LXD source code while creating a new topic to ask for help from the LXD community.

I skimmed through the LXD database documentation and found that I could create a patch.global.sql file to remove the unnecessary records, as these SQL statements are run at a very early stage of LXD startup. I created a file called patch.global.sql with statements to remove the unneeded settings and started LXD.
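For reference, the patch file looked roughly like the following. This is a reconstruction, not the exact file I kept: the pool ID 2 (juju-zfs) comes from the select output above, and with the snap the file goes in the LXD database directory (/var/snap/lxd/common/lxd/database/patch.global.sql). Repeat the deletes for any other broken pool IDs.

```sql
-- Hypothetical patch.global.sql: drop the pool whose backing zpool is gone.
-- ID 2 is juju-zfs, taken from the storage_pools select output above.
DELETE FROM storage_pools_config WHERE storage_pool_id = 2;
DELETE FROM storage_pools WHERE id = 2;
```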

And the LXD process started again, with all my in-development containers intact!

Lesson learned: before removing anything, look for all its usages.

October 3, 2019

Visualize Enviro pHAT sensor data

The Enviro pHAT is a low-cost environmental sensing board that lets you measure temperature, light, color, motion, and analog sensors. The accompanying Python SDK makes writing a program to get data from the Enviro pHAT as easy as pie. Combined with a Raspberry Pi Zero, the Enviro pHAT becomes a simple room-conditions monitoring board.

With Grafana and Prometheus, it looked easy to at least build a dashboard for the Enviro pHAT sensor data, so I decided to build one.

System Diagram


Though the Enviro pHAT provides an SDK to get data from the sensors, we need a way to send that data to the server. We also need to think about the connection between the device and the server side. In a home environment, I used Wi-Fi, as the Pi Zero supports Wi-Fi connections. Prometheus fetches data by scraping a Prometheus client, so I will add a Prometheus client on the device side for the Prometheus server to scrape. The data is stored in the Prometheus data store and visualized by Grafana.



System Diagram: Prometheus fetches data from the Raspberry Pi Zero over Wi-Fi


Device Side

Soldering the sensor board and setting up the OS

If you buy an unsoldered Enviro pHAT, you will probably need to solder it to the accompanying 2x20-pin female header. The soldering is not that difficult, and Pimoroni has a detailed guide for the task. Below are my Raspberry Pi Zero and the Enviro pHAT sensor board.

My Raspberry Pi Zero
Enviro pHAT board
Enviro pHat


For the Pi Zero, you need to download Raspbian and write it to the SD card. Because we are going to connect to the Raspberry Pi over Wi-Fi, we need to set up Wi-Fi for it. After downloading the OS image and writing it to the SD card, put the SD card into your PC, open the SD card directory, and add the following two files to the boot directory.

wpa_supplicant.conf
 
country=JP
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
        ssid="WIFI SSID"
        psk="WIFI SSID PASSWORD"
        key_mgmt=WPA-PSK
}

ssh
(empty file)

The purpose of the ssh file is to tell Raspbian to enable the SSH server by default.

Plug the power supply in. You should be able to find the Raspberry Pi's IP address through your home router (or ping the whole IP segment to see which address was assigned).

You should now be able to ssh into the Raspberry Pi through its IP address. Perform basic server setup:

$ ssh pi@$PI_IP_ADDRESS
pi@$PI_IP_ADDRESS's password: ****
Linux pizero 4.19.66+ #1253 Thu Aug 15 11:37:30 BST 2019 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Oct  3 05:50:11 2019 from 192.168.88.254

$ sudo apt update
$ sudo vim /etc/hostname
$ ...
$ sudo reboot 

Final result
Final result after soldering the Enviro pHAT, setting up the OS for the Raspberry Pi Zero, and plugging in the power supply

Install Enviro pHat SDK



$ curl https://get.pimoroni.com/envirophat | bash
$ pip3 install prometheus_client
$ git clone git@github.com:telescreen/raspizero-playground.git
$ cd raspizero-playground
$ python3 enviro-collectd.py -d -f enviro-collectd.log --pid enviro-collectd.pid


I wrote a program to export the Enviro data through the Prometheus client: https://github.com/telescreen/raspizero-playground . After running the program as above, you should be able to confirm the Prometheus data at

http://$PI_IP_ADDRESS:9090/
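The actual collector in the repository uses the prometheus_client library; the sketch below shows the same idea with only the standard library, so it runs anywhere. The metric names and dummy sensor values are my own examples, not the repository's exact contents; on the Pi, read_sensors would call the envirophat SDK instead.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensors():
    # On the Pi this would call the envirophat SDK, roughly:
    #   from envirophat import weather, light
    #   return {"enviro_temperature_celsius": weather.temperature(), ...}
    # Dummy values are returned here so the sketch runs anywhere.
    return {
        "enviro_temperature_celsius": 27.4,
        "enviro_pressure_hpa": 1002.6,
        "enviro_light_lux": 120.0,
    }

def render_metrics(readings):
    # Prometheus text exposition format: one "name value" line per sample.
    return "".join("{} {}\n".format(name, value) for name, value in readings.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_metrics(read_sensors()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run on the Pi, serve on the port the scrape job targets, e.g.:
# HTTPServer(("0.0.0.0", 9090), MetricsHandler).serve_forever()
```

Prometheus then scrapes this endpoint on each interval, so the device side stays a simple pull target with no outbound connections to manage.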


Server Side


I have LXD running on my local machine, and I have Juju bootstrapped with the local cloud as the cloud environment. As the Juju charm store already provides prometheus2 and grafana charms, installing these middlewares is simply a matter of the following commands.


$ juju deploy prometheus2
$ juju deploy grafana
$ juju add-relation prometheus2:grafana-source grafana:grafana-source

It will take a while for Juju to download and install the charms and necessary software. After Juju finishes installing grafana and prometheus2, we should see both applications active in juju status.


$ juju status
Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  lxd         localhost/localhost  2.6.9    unsupported  16:19:07+09:00

App          Version  Status  Scale  Charm        Store       Rev  OS      Notes
grafana               active      1  grafana      jujucharms   32  ubuntu
prometheus2           active      1  prometheus2  jujucharms   11  ubuntu

Unit            Workload  Agent  Machine  Public address  Ports               Message
grafana/0*      active    idle   1        10.4.88.50      3000/tcp            Started grafana-server
prometheus2/0*  active    idle   0        10.4.88.243     9090/tcp,12321/tcp  Ready

Machine  State    DNS          Inst id        Series  AZ  Message
0        started  10.4.88.243  juju-63ab09-0  bionic      Running
1        started  10.4.88.50   juju-63ab09-1  bionic      Running

Next, we need to add the data exposed by the device to Prometheus's scrape jobs. I prepared a simple scrape job setup at

https://github.com/telescreen/raspizero-playground/blob/master/scrape-job.yaml
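The file defines a Prometheus scrape job pointing at the device. A minimal version might look like the following; the job name, interval, and target address here are illustrative examples, not the repository file's exact contents.

```yaml
# Hypothetical scrape job; point the target at your Pi's address and exporter port.
- job_name: envirophat
  scrape_interval: 15s
  static_configs:
    - targets: ['192.168.88.218:9090']
```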

Add the scrape job through Juju with the following command:

$ juju config prometheus2 scrape-jobs=@scrape-job.yaml

After about a minute, the device's IP address should appear in the scrape targets with state UP.


Log into Grafana with the password retrieved by the following action. The remaining steps are performed through the Grafana UI and are quite intuitive, so I excluded them from this post.



$ juju run-action --wait grafana/0 get-admin-password
unit-grafana-0:
  id: 43e7d1e6-3251-437b-816d-d2aa182c5554
  results:
    password: NJBMMVNqW6ssKywL
  status: completed
  timing:
    completed: 2019-10-03 07:28:44 +0000 UTC
    enqueued: 2019-10-03 07:28:41 +0000 UTC
    started: 2019-10-03 07:28:44 +0000 UTC
  unit: grafana/0


Final Results

After about three hours, I was able to build the following dashboard for the data captured by the Enviro pHAT's sensors. The change in light is because I put the sensors in a dark corner of my room. My room is definitely cooler than the temperature the Enviro is reporting; the reading is probably affected by heat from the Pi Zero.

A storm is approaching my city, so I am expecting to see lower-than-usual air pressure. The pressure should climb back up a few days after the storm passes.

I was able to build a simple home-conditions monitoring system in a short time with open-source technology. I appreciate the importance of open-source software more and more after this project.

Dashboard with temperature, air pressure and light information

October 1, 2019

Use MaaS to manage local computing resources


Besides managing bare-metal machines, MaaS (Metal as a Service) can also manage KVM guests, as long as those machines boot from the network via PXE. This feature allows us to manage local computing resources efficiently while at the same time leveraging Juju for automatic middleware deployment. In this blog post, I will detail the steps to manage KVM guest machines using MaaS.

System diagram


MaaS managed Physical Machine Diagram
MaaS-Managed Machines: the MaaS virtual machine creates other virtual machines through a KVM pod. All VMs access the Internet through the physical machine's NIC using NAT.

The MaaS server will be deployed to a virtual machine running inside the physical machine. The reason for installing MaaS into a virtual machine instead of directly onto the physical machine is that we want the flexibility of VMs, for example when we want to upgrade the MaaS version or take backups. The MaaS server will manage its "parent" physical machine's resources through a KVM pod.

All machines will access the Internet through the physical machine's NIC using NAT.

We could easily extend this diagram by adding more virtual NICs for more complex networks. We could also easily add other physical machines to MaaS, as long as they are reachable from the MaaS VM.

Internet access for the MaaS server is necessary because MaaS needs to fetch OS images from the remote repository.

Preinstall preparation

We need to install KVM on the physical machine. The procedure to install KVM on Ubuntu Linux can be found at https://help.ubuntu.com/community/KVM/Installation.

Because the MaaS server will manage network resources, we need to modify the "default" network to disable its DHCP. MaaS needs to manage DHCP on its network in order to boot machines and automatically install the OS.

To disable it, in the virt-manager UI, go to

Virt-Manager UI > Edit > Connection Details > [Virtual Networks] Tab

Remove the default network by stopping and deleting it. Then re-add a new network and name it "default". Modify its IP range if you prefer (in my case, I use 10.23.23.0/24), and remember to uncheck the "Enable DHCPv4" option. Accept the defaults for the other options.

My default network is as below

Default Network with DHCP disabled and is NATed to the physical NIC

With the network ready, we can install MaaS.

Install MaaS Server

MaaS VM Creation

For the MaaS server, we need to use a normal KVM management client to create the MaaS VM. We also need to download an Ubuntu .iso from the Ubuntu Linux homepage to install it.

The process of installing the MaaS VM is similar to a normal Ubuntu Server installation.

MaaS Server Settings

Because we disabled DHCP, the MaaS VM will not be allocated an IP address, so we need to access the machine via its console and set up its network.

I logged into the machine's console and made the following modification to /etc/netplan/50-cloud-init.yaml.

$ vim /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens3:
            addresses:
            - 10.23.23.2/24
            gateway4: 10.23.23.1
            nameservers:
                addresses:
                - 10.23.23.1
        ens9:
            addresses:
            - 10.4.23.2/24
            dhcp4: false
    version: 2
$ sudo netplan apply

Other settings, like the hostname, can also be configured here. After setting up, reboot the machine so all settings are applied and to confirm that the machine reboots without problems.

MaaS Installation


MaaS is easily installed using apt.


$ sudo apt install maas
…
  Applying metadataserver.0011_script_metadata... OK
  Applying metadataserver.0012_store_script_results... OK
  Applying metadataserver.0013_scriptresult_physicalblockdevice... OK
  Applying metadataserver.0014_rename_dhcp_unconfigured_ifaces... OK
  Applying metadataserver.0015_migrate_storage_tests... OK
  Applying metadataserver.0016_script_model_fw_update_and_hw_config... OK
  Applying metadataserver.0017_store_requested_scripts... OK
  Applying metadataserver.0018_script_result_skipped... OK
  Applying piston3.0002_auto_20151209_1652... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
Setting up maas (2.4.2-7034-g2f5deb8b8-0ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.19) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Processing triggers for rsyslog (8.32.0-1ubuntu4) ...
 
$ sudo maas init 
Create first admin account:
Username: admin
Password:
Again:
Email: ha@buihanotes.com
Import SSH keys [] (lp:user-id or gh:user-id):

After installing MaaS, confirm that you can open the MaaS web interface by accessing the MaaS server on port 5240. In my case it is http://10.23.23.2:5240/MAAS/#/dashboard.

Two things worth doing after MaaS installation are:

  1. Choose OS images so MaaS can sync them from the remote repository.
  2. Import your public key from GitHub or Launchpad so you can ssh into new virtual machines.

Setup KVM Pods

From version 2.6, the MaaS server has an additional option to install KVM automatically. If you have an existing machine, you should probably add the physical machine as a host manually so as not to break things.

To add the physical machine as a host, we add a KVM pod to MaaS. In order for the pod to talk to the physical machine, we add a private SSH key for the maas user, allowing it to ssh into the physical machine.

$ cd /var/lib/maas
$ sudo mkdir .ssh && sudo chown maas. .ssh && sudo chmod 700 .ssh
$ sudo vim .ssh/id_rsa
## PRIVATE KEY ##
## :wq
$ sudo chown maas .ssh/id_rsa && sudo chmod 600 .ssh/id_rsa


Open MaaS Web UI and add pods with virsh address as follows

qemu+ssh://$USERNAME@10.23.23.1/system

Because the host's IP address as seen from the virtual machine is 10.23.23.1, we add that IP as the hypervisor endpoint. One thing to notice here is that $USERNAME should be able to communicate with the local hypervisor without sudo. In other words, that user on the physical machine should belong to the libvirt group.


Adding the physical machine as a KVM pod


Provision Virtual Machine

If every step works without problems, you should be able to see the pod summary with the number of virtual machines and their specifications.

In my case, I have a machine with 16 cores and 32 GB of RAM. I have 3 storage pools with different sizes and specifications.



Provisioning a machine is as simple as clicking the [Take action] button and choosing [Compose].

Compose a new Virtual machine by choosing its spec

Summary

In this post, I wrote about the steps to add physical machines to a MaaS server, allowing MaaS to manage local computing resources and turn a local computer into a cloud-like environment. In the next posts, I will write in more detail about how to manage virtual machines and how to combine Juju with local MaaS to rapidly deploy experimental Linux environments.

September 9, 2019

Technical Notes: How to change Juju user password

Juju User

Juju has an internal user framework which supports sharing of controllers and models. Using the Juju user feature, a sysadmin can separate users per controller, as in a multi-cloud deployment, or per model, as in a multi-system deployment. Each Juju user can have its own permissions at the application, model, or controller level. This separation of responsibilities allows multiple users to manage multiple infrastructures with clear borders of responsibility.

At the controller bootstrap phase, Juju automatically creates a new user named admin with superuser privileges. This user can perform all operations at the controller level. We can confirm the current user and its privileges by:

$ juju whoami
Controller:  kvm
Model:       k8s
User:        admin 

$ juju users
Controller: kvm

Name    Display name  Access     Date created  Last connection
admin*  admin         superuser  23 hours ago  just now  

Authentication

A user needs to submit a username and password in order to log into a controller. At the bootstrap phase, the admin password is automatically generated by Juju, and the admin user is logged in by default with that generated password. After login, all interaction with the controller is authenticated by "cookies", which are stored on the Juju client at ~/.local/share/juju/cookies/{controller_name}.json. This file stores all cookies needed for the current user to authenticate against the Juju controller. In my case, the file is:

$ ll ~/.local/share/juju/cookies/
total 16
drwx------ 2 telescreen telescreen 4096  9月  9 12:55 ./
drwx------ 5 telescreen telescreen 4096  9月  9 12:55 ../
-rw------- 1 telescreen telescreen 2998  9月  9 12:55 kvm.json


If we delete this file by accident, the current user will be logged out, and we will need to submit a password to re-authenticate against the Juju controller.

$ rm -rf ~/.local/share/juju/cookies/kvm.json
$ ll ~/.local/share/juju/cookies
total 12
drwx------ 2 telescreen telescreen 4096  9月  9 13:05 ./
drwx------ 5 telescreen telescreen 4096  9月  9 13:04 ../
$
$ juju models
please enter password for admin on kvm:
Controller: kvm

Model       Cloud/Region  Type  Status     Machines  Cores  Units  Access  Last connection
controller  kvm           maas  available         1      1  -      admin   just now
default     kvm           maas  available         0      -  -      admin   19 hours ago
k8s*        kvm           maas  available         5      5  16     admin   12 minutes ago



Because the password is generated automatically at the controller bootstrap phase and we do not know it, it is a good idea to change the password after bootstrap. We can change the current user's password by:

$ juju change-user-password
new password:
type new password again:
Your password has been changed.


Recover password


In case we forget the administrator password and happen to be logged out of Juju, we need to recover the password. The process of recovering the password is complex but doable.

Access the Database

First, we need to log into the Juju controller machine where Juju's MongoDB is running as the PRIMARY instance. The credential for login is stored at ~/.local/share/juju/ssh/. We can therefore log into the Juju controller machine by:
$ juju machines -m controller
Machine  State    DNS           Inst id  Series  AZ       Message
0        started  10.23.23.125  g6es7h   bionic  default  Deployed

$
$ ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@10.23.23.125
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Mon Sep  9 04:15:30 UTC 2019

  System load:  0.26              Processes:           83
  Usage of /:   66.1% of 7.27GB   Users logged in:     0
  Memory usage: 45%               IP address for eth0: 10.23.23.125
  Swap usage:   0%

 * Congrats to the Kubernetes community on 1.16 beta 1! Now available
   in MicroK8s for evaluation and testing, with upgrades to RC and GA

     snap info microk8s

 * Canonical Livepatch is available for installation.
   - Reduce system reboots and improve kernel security. Activate at:
     https://ubuntu.com/livepatch

0 packages can be updated.
0 updates are security updates.


Last login: Sun Sep  8 08:52:20 2019 from 10.23.23.1
ubuntu@many-shrew:~$



Next, we need to log into the MongoDB used by Juju. The credential for the MongoDB admin user is stored at /var/lib/juju/agents/machine-0/agent.conf.

$ mongo --sslAllowInvalidCertificates --ssl -u admin -p $(sudo sed -n '/^oldpassword:/s/^oldpassword: //p' /var/lib/juju/agents/machine-0/agent.conf) 127.0.0.1:37017/admin
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:37017/admin
2019-09-09T04:19:58.587+0000 W NETWORK  [thread1] SSL peer certificate validation failed: unable to get local issuer certificate
2019-09-09T04:19:58.587+0000 W NETWORK  [thread1] The server certificate does not match the host name. Hostname: 127.0.0.1 does not match CN: *
MongoDB server version: 3.6.3
Server has startup warnings:
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten]
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-09-08T04:07:02.872+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-09-08T04:07:02.872+0000 I CONTROL  [initandlisten]
juju:PRIMARY>
juju:PRIMARY> use juju
switched to db juju
juju:PRIMARY> db.users.find().pretty()
{
        "_id" : "admin",
        "name" : "admin",
        "displayname" : "admin",
        "passwordhash" : "il/ap8gXWEUp8x1T48bIwOxm",
        "passwordsalt" : "3YTbonyqTpmWCvtQ",
        "createdby" : "admin",
        "datecreated" : ISODate("2019-09-08T04:07:11Z"),
        "txn-revno" : NumberLong(7),
        "txn-queue" : [
                "5d75d082a20e030edc1539c8_953a034a"
        ]
}



Password Hash

According to [1], Juju users' passwords are hashed using PBKDF2 [2] with the following parameters:
  • Number of hash iterations: 8192
  • Hash algorithm: SHA-512
  • Salt length: 12 bytes [3]
  • Derived key length: 18 bytes
We can easily generate the hashed value for any password using the same parameters. For example, in Python we can generate a password hash by:
>>> import hashlib, binascii
>>> dk = hashlib.pbkdf2_hmac('sha512', b'test', b'4AIANdqIiPvpZUS2', 8192, 18)
>>> binascii.b2a_base64(dk)
b'iJW8NpnYPG4iwfzQoMkLfY2z\n'
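Putting the parameters together, a small stdlib-only helper can generate a salt/hash pair in Juju's format. The function name is my own; the logic follows the parameters above, with the salt being the base64 encoding of 12 random bytes (a 16-character string) that is itself fed to PBKDF2.

```python
import base64
import hashlib
import secrets

def juju_password_hash(password, salt=None):
    # Salt: base64 of 12 random bytes -> a 16-character string, per [3].
    if salt is None:
        salt = base64.b64encode(secrets.token_bytes(12)).decode()
    # PBKDF2-HMAC-SHA512, 8192 iterations, 18-byte derived key, per [1].
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(), salt.encode(), 8192, 18)
    return base64.b64encode(dk).decode(), salt

# Reproduces the interactive session above:
# juju_password_hash('test', '4AIANdqIiPvpZUS2')[0] == 'iJW8NpnYPG4iwfzQoMkLfY2z'
```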


Update password

Finally, we can recover the password by updating the "passwordhash" and "passwordsalt" fields using MongoDB's update command. Because MongoDB will overwrite the whole document, we need to include all the other fields with their old values from the earlier query.

db.users.update({_id: "admin"},
{
        "_id" : "admin",
        "name" : "admin",
        "displayname" : "admin",
        "passwordhash" : "iJW8NpnYPG4iwfzQoMkLfY2z",
        "passwordsalt" : "4AIANdqIiPvpZUS2",
        "createdby" : "admin",
        "datecreated" : ISODate("2019-09-08T04:07:11Z"),
        "txn-revno" : NumberLong(8),
        "txn-queue" : [
                "5d75d5f3a20e030edc1544bd_3d2483d7"
        ]
})

And we can log into the controller with the new password!

Reference

1. https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L65
2. https://en.wikipedia.org/wiki/PBKDF2
3. https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L49