November 16, 2019

Perspectives

I find this picture intriguing.

To one chicken, the other chicken is on the other side.
To the viewer of this picture, we see two stupid chickens (or is it just one?).

I find it interesting because it is just one picture, yet we can perceive it differently depending on where we place our perspective.

October 7, 2019

Technical Note: LXD Database and patch SQL

I deleted some unused zfs storage pools without realizing that they were still registered in LXD. After the machine rebooted, LXD refused to start, and the following messages were output to lxd.log:

t=2019-10-07T23:02:43+0900 lvl=info msg="Initializing storage pools"
t=2019-10-07T23:02:43+0900 lvl=eror msg="Failed to start the daemon: ZFS storage pool \"juju-zfs\" could not be imported: "
t=2019-10-07T23:02:43+0900 lvl=info msg="Starting shutdown sequence"

LXD settings are stored in a dqlite database (distributed SQLite) at /var/snap/lxd/common/lxd/database/global/db.bin, so I went to confirm the recorded settings.
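db.bin is a regular SQLite file, so (with the LXD daemon stopped) a copy of it can be inspected with the sqlite3 CLI. A sketch of the idea, assuming sqlite3 is installed:

$ sudo cp /var/snap/lxd/common/lxd/database/global/db.bin /tmp/db.bin
$ sudo sqlite3 /tmp/db.bin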


sqlite> .tables
certificates                        networks
config                              networks_config
images                              networks_nodes
images_aliases                      nodes
images_nodes                        operations
images_properties                   profiles
images_source                       profiles_config
instances                           profiles_config_ref
instances_backups                   profiles_devices
instances_config                    profiles_devices_config
instances_config_ref                profiles_devices_ref
instances_devices                   profiles_used_by_ref
instances_devices_config            projects
instances_devices_ref               projects_config
instances_profiles                  projects_config_ref
instances_profiles_ref              projects_used_by_ref
instances_snapshots                 schema
instances_snapshots_config          storage_pools
instances_snapshots_config_ref      storage_pools_config
instances_snapshots_devices         storage_pools_nodes
instances_snapshots_devices_config  storage_volumes
instances_snapshots_devices_ref     storage_volumes_config
sqlite> select * from storage_pools;
1|lxd|zfs||1
2|juju-zfs|zfs||1
3|juju-btrfs|btrfs||1
sqlite> select * from storage_pools_config;
3|1|1|zfs.pool_name|lxd
4|1|1|source|lxd
5|1|1|volatile.initial_source|lxd
7|2|1|size|21GB
8|2|1|source|/var/snap/lxd/common/lxd/disks/juju-zfs.img
9|2|1|zfs.pool_name|juju-zfs
11|3|1|size|21GB
12|3|1|source|/var/snap/lxd/common/lxd/disks/juju-btrfs.img


It seems that the storage pool settings are stored in two tables: storage_pools and storage_pools_config.

I tried to delete the related records from the above tables and restarted the LXD process, but LXD still failed with the same errors. I went back to the database and confirmed that those records were still there even after I had deleted them. It seems that LXD recovers those records from its log files. I could have read the code to see how the recovery works, but that takes time, so I decided to look for database documentation in the LXD source code while creating a new topic to ask for help from the LXD community.

I skimmed through the LXD database documentation and found that I could create a patch.global.sql file to remove the unnecessary records, as these SQL statements are run at a very early stage of LXD startup. I created a file called patch.global.sql with statements to remove the unneeded settings and started LXD.
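For reference, mine contained statements along these lines, removing the juju-zfs pool (id 2 in the output above). This is a sketch: double-check the foreign-key column name with .schema storage_pools_config in your schema version, and note that for snap installs the file goes into /var/snap/lxd/common/lxd/database/.

-- patch.global.sql: drop the broken juju-zfs pool (id 2 in storage_pools)
DELETE FROM storage_pools_config WHERE storage_pool_id = 2;
DELETE FROM storage_pools WHERE id = 2;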

And the LXD process started again with all my in-development containers!

Lesson learned: before removing anything, look for all its usages.

October 3, 2019

Visualize Enviro pHAT sensor data

The Enviro pHAT is a low-cost environmental sensing board that lets you measure temperature, light, color, motion, and analog signals. The accompanying Python SDK makes writing a program to get data from the Enviro pHAT as easy as pie. Combined with a Raspberry Pi Zero, the Enviro pHAT can become a simple room-condition monitoring board.

With Grafana and Prometheus, building at least a dashboard for the Enviro pHAT sensor data looked easy, so I decided to build one.

System Diagram


Though the Enviro pHAT provides an SDK to get data from the sensors, we need a way to get that data to the server, and we need to think about the connection between the device and the server side. At home I used Wifi, as the Pi Zero supports Wifi connections. Prometheus fetches data by scraping Prometheus clients, so I added a Prometheus client on the device side for the Prometheus server to scrape. The data is stored in the Prometheus data store and visualized with Grafana.



System Diagram: Prometheus fetches data from the Raspberry Pi Zero over Wifi


Device Side

Soldering sensor board and setup OS

If you buy an un-soldered Enviro pHAT, you will probably need to solder it to the accompanying 2x20-pin female header. The soldering is not that difficult, and Pimoroni has a detailed guide for the task. Below are my Raspberry Pi Zero and the Enviro pHAT sensor board.

My Raspberry Pi Zero
Enviro pHAT board


For the Pi Zero, you need to download Raspbian and write it to an SD card. Because we are going to connect to the Raspberry Pi over Wifi, we need to set up Wifi beforehand. After downloading the OS image and writing it to the SD card, put the card into your PC, open the SD card directory, and add the two following files to the boot directory.

wpa_supplicant.conf
 
country=JP
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
        ssid="WIFI SSID"
        psk="WIFI SSID PASSWORD"
        key_mgmt=WPA-PSK
}

ssh
(empty content)

The purpose of the ssh file is to tell Raspbian to enable the ssh server by default.

Plug the power supply in. You should be able to find the Raspberry Pi's IP address through your home router (or ping the whole IP segment to see which IP was assigned).

You should now be able to ssh into the Raspberry Pi through its IP address and perform basic server setup.

$ ssh pi@$PI_IP_ADDRESS
pi@$PI_IP_ADDRESS's password: ****
Linux pizero 4.19.66+ #1253 Thu Aug 15 11:37:30 BST 2019 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Oct  3 05:50:11 2019 from 192.168.88.254

$ sudo apt update
$ sudo vim /etc/hostname
$ ...
$ sudo reboot 

Final result
Final result after soldering the Enviro pHAT, setting up the OS on the Raspberry Pi Zero, and plugging in the power supply

Install Enviro pHat SDK



$ curl https://get.pimoroni.com/envirophat | bash
$ pip3 install prometheus_client
$ git clone git@github.com:telescreen/raspizero-playground.git
$ cd raspizero-playground
$ python3 enviro-collectd.py -d -f enviro-collectd.log --pid enviro-collectd.pid


I wrote a program to export the environmental data through the Prometheus client: https://github.com/telescreen/raspizero-playground. After running the program as above, you should be able to confirm the Prometheus data at

http://$PI_IP_ADDRESS:9090/
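The core of such an exporter is small. Below is a minimal sketch of the idea, not the exact enviro-collectd.py from the repository; it assumes the Pimoroni envirophat SDK and prometheus_client are installed, and the metric names are my own invention.

import time
from envirophat import light, weather
from prometheus_client import Gauge, start_http_server

# Hypothetical metric names, for illustration only
temperature = Gauge('envirophat_temperature_celsius', 'Temperature from the BMP280 sensor')
pressure = Gauge('envirophat_pressure_hpa', 'Air pressure from the BMP280 sensor')
light_level = Gauge('envirophat_light_level', 'Ambient light from the TCS3472 sensor')

if __name__ == '__main__':
    start_http_server(9090)  # expose /metrics on port 9090 for Prometheus to scrape
    while True:
        temperature.set(weather.temperature())
        pressure.set(weather.pressure(unit='hPa'))  # unit handling may differ across SDK versions
        light_level.set(light.light())
        time.sleep(5)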


Server Side


I have LXD running on my local machine, and Juju bootstrapped with the local cloud as the cloud environment. As the Juju store already provides prometheus2 and grafana charms, installing these middlewares is simply a matter of running the following commands.


$ juju deploy prometheus2
$ juju deploy grafana
$ juju add-relation prometheus2:grafana-source grafana:grafana-source

It will take a while for Juju to download and install the charms and necessary software. After Juju finishes installing grafana and prometheus2, we should see both applications active in juju status.


$ juju status
Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  lxd         localhost/localhost  2.6.9    unsupported  16:19:07+09:00

App          Version  Status  Scale  Charm        Store       Rev  OS      Notes
grafana               active      1  grafana      jujucharms   32  ubuntu
prometheus2           active      1  prometheus2  jujucharms   11  ubuntu

Unit            Workload  Agent  Machine  Public address  Ports               Message
grafana/0*      active    idle   1        10.4.88.50      3000/tcp            Started grafana-server
prometheus2/0*  active    idle   0        10.4.88.243     9090/tcp,12321/tcp  Ready

Machine  State    DNS          Inst id        Series  AZ  Message
0        started  10.4.88.243  juju-63ab09-0  bionic      Running
1        started  10.4.88.50   juju-63ab09-1  bionic      Running

Next, we need to add the data exposed on the device to Prometheus's scrape jobs. I prepared a simple scrape job setup at

https://github.com/telescreen/raspizero-playground/blob/master/scrape-job.yaml
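The file is just a list of Prometheus scrape jobs, roughly of this shape (not necessarily the exact content of the file above; the target is the device's IP address and exporter port):

- job_name: envirophat
  static_configs:
    - targets: ['192.168.88.218:9090']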

Add the scrape job with Juju using the following command

$ juju config prometheus2 scrape-jobs=@scrape-job.yaml

After about a minute, the device's IP address should appear among the scrape targets with state UP


Log into Grafana with the password retrieved by the following action. The remaining settings are performed through the Grafana UI and are quite intuitive, so I have excluded them from this post.



$ juju run-action --wait grafana/0 get-admin-password
unit-grafana-0:
  id: 43e7d1e6-3251-437b-816d-d2aa182c5554
  results:
    password: NJBMMVNqW6ssKywL
  status: completed
  timing:
    completed: 2019-10-03 07:28:44 +0000 UTC
    enqueued: 2019-10-03 07:28:41 +0000 UTC
    started: 2019-10-03 07:28:44 +0000 UTC
  unit: grafana/0


Final Results

After about 3 hours, I could build the following dashboard for the data captured by the Enviro pHAT's sensors. The light change is because I put the sensor in a dark corner of my room. My room is definitely cooler than the temperature Enviro is reporting; the reading is probably affected by heat from the Pi Zero itself.

A storm is approaching my city, so I expect to see lower-than-usual air pressure. The pressure should rise back a few days after the storm.

I was able to build a simple home-condition monitoring system in a short time with open source technology. After this project, I appreciate the importance of open source software more than ever.

Dashboard with temperature, air pressure and light information

October 1, 2019

Use MaaS to manage local computing resources


Besides managing bare metal machines, MaaS (Metal as a Service) can also manage KVM guests, as long as those machines boot from the network via PXE. This feature allows us to manage local computing resources efficiently while leveraging Juju for automatic middleware deployment. In this blog post, I will detail the steps to manage KVM guest machines using MaaS.

System diagram


MaaS managed Physical Machine Diagram
MaaS Managed Machines: the MaaS virtual machine creates other virtual machines through a KVM pod. All VMs access the internet through the physical machine's NIC using NAT.

The MaaS server will be deployed to a virtual machine running on the physical machine. The reason for installing MaaS into a virtual machine instead of directly onto the physical machine is the flexibility VMs give us when we want to upgrade the MaaS version, take backups, and so on. The MaaS server will manage its "parent" physical machine's resources through a KVM pod.

All machines will access the Internet through the physical machine's NIC using NAT.

We could easily extend this diagram by adding more virtual NICs for more complex networks. We could also add other physical machines to MaaS, as long as they are reachable from the MaaS VM.

Internet access for the MaaS server is necessary because MaaS needs to fetch OS images from the remote repository.

Pre-installation preparation

We need to install KVM on the physical machine. The procedure for installing KVM on Ubuntu Linux is described at https://help.ubuntu.com/community/KVM/Installation.

Because the MaaS server will manage network resources, we need to modify the "default" network to disable its DHCP. MaaS needs to manage DHCP on its network in order to boot machines and automatically install the OS.

To disable it, in the virt-manager UI, go to

Virt-Manager UI > Edit > Connection Details > [Virtual Networks] Tab

Remove the default network by stopping and deleting it, then re-add a new network named "default". Modify its IP range if you prefer (in my case, I use 10.23.23.0/24), and remember to uncheck the "Enable DHCPv4" option. Accept the other options as defaults.
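If you prefer the command line, the same definition can be edited with virsh net-edit default. With DHCP disabled, the XML looks roughly like this (the bridge name may differ on your machine):

<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <!-- no <dhcp> element inside <ip>: MaaS will provide DHCP on this network -->
  <ip address='10.23.23.1' netmask='255.255.255.0'/>
</network>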

My default network is as below

Default network with DHCP disabled, NATed to the physical NIC

With the network ready, we can install MaaS.

Install MaaS Server

MaaS VM Creation

For the MaaS server, we use a normal KVM management client to create the MaaS VM. We also need to download an Ubuntu .iso from the Ubuntu Linux homepage to install it.

The installation process is similar to installing Ubuntu Server.

MaaS Server Settings

Because we disabled DHCP, the MaaS VM will not be allocated an IP address, so we need to access the machine's console and set up its network.

I logged into the machine's console and made the following modification to /etc/netplan/50-cloud-init.yaml

$ vim /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens3:
            addresses:
            - 10.23.23.2/24
            gateway4: 10.23.23.1
            nameservers:
                addresses:
                - 10.23.23.1
        ens9:
            addresses:
            - 10.4.23.2/24
            dhcp4: false
    version: 2
$ netplan apply 

Other settings, like the hostname, can also be configured here. Afterwards, reboot the machine so all settings are applied, and to confirm that the machine reboots without problems.

MaaS Installation


MaaS is easily installed using apt


$ sudo apt install maas
…
  Applying metadataserver.0011_script_metadata... OK
  Applying metadataserver.0012_store_script_results... OK
  Applying metadataserver.0013_scriptresult_physicalblockdevice... OK
  Applying metadataserver.0014_rename_dhcp_unconfigured_ifaces... OK
  Applying metadataserver.0015_migrate_storage_tests... OK
  Applying metadataserver.0016_script_model_fw_update_and_hw_config... OK
  Applying metadataserver.0017_store_requested_scripts... OK
  Applying metadataserver.0018_script_result_skipped... OK
  Applying piston3.0002_auto_20151209_1652... OK
  Applying sessions.0001_initial... OK
  Applying sites.0001_initial... OK
  Applying sites.0002_alter_domain_unique... OK
Setting up maas (2.4.2-7034-g2f5deb8b8-0ubuntu1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.19) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Processing triggers for rsyslog (8.32.0-1ubuntu4) ...
 
$ sudo maas init 
Create first admin account:
Username: admin
Password:
Again:
Email: ha@buihanotes.com
Import SSH keys [] (lp:user-id or gh:user-id):

After installing MaaS, confirm that you can open the MaaS web interface by accessing port 5240 on the MaaS server. In my case it is: http://10.23.23.2:5240/MAAS/#/dashboard

Two things are worth doing after the MaaS installation:

  1. Choose the OS images so MaaS can sync them from the remote repository.
  2. Synchronize your public key from GitHub or Launchpad so you can ssh into new virtual machines.

Setup KVM Pods

Since MaaS version 2.6, the MaaS server has an additional option to install KVM automatically. If you have an existing machine, you should probably add the physical machine as a host manually so as not to break things.

To add the physical machine as a host, we add a KVM pod to MaaS. In order for the pod to talk to the physical machine, we add a private ssh key for the maas user, allowing it to ssh to the physical machine.

$ cd /var/lib/maas
$ sudo mkdir .ssh && sudo chown maas. .ssh && sudo chmod 700 .ssh
$ sudo vim .ssh/id_rsa
## PRIVATE KEY ##
## :wq
$ sudo chown maas .ssh/id_rsa && sudo chmod 600 .ssh/id_rsa
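Before touching the web UI, it is worth verifying that the key works. Assuming libvirt-clients is installed on the MaaS VM, you can run virsh as the maas user against the physical host:

$ sudo -H -u maas virsh -c qemu+ssh://$USERNAME@10.23.23.1/system list --all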


Open the MaaS web UI and add a pod with a virsh address as follows

qemu+ssh://$USERNAME@10.23.23.1/system

Because the host's IP address as seen from the virtual machine is 10.23.23.1, we add that IP as the hypervisor endpoint. One thing to note here is that $USERNAME should be able to communicate with the local hypervisor without sudo. In other words, the user on the physical machine should belong to the libvirt group.


Add Virtual Machine as KVM Pods


Provision Virtual Machine

If every step works without problems, you should be able to see the pod summary with the number of virtual machines and their specifications.

In my case, I have a machine with 16 cores and 32GB of RAM, and 3 storage pools with different sizes and specifications.



Provisioning a machine is as simple as clicking the [Take action] button and choosing [Compose]

Compose a new Virtual machine by choosing its spec

Summary

In this post, I described the steps to add physical machines to a MaaS server, allowing MaaS to manage local computing resources and turning a local computer into a cloud-like environment. In the next posts, I will write in more detail about how to manage virtual machines and how to combine Juju with local MaaS to rapidly deploy experimental Linux environments.

September 9, 2019

Technical Notes: How to change Juju user password

Juju User

Juju has an internal user framework that supports sharing controllers and models. Using the Juju user feature, a sysadmin can separate users per controller, as in multi-cloud deployments, or per system, as in multi-system deployments. Each Juju user can have its own permissions at the application, model, or controller level. This separation of responsibilities allows multiple users to manage multiple infrastructures with clear borders of responsibility.
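For example, creating a user and granting access looks like this (the user "alice" is hypothetical; "k8s" is a model from my environment shown below):

$ juju add-user alice          # prints a registration string for the new user
$ juju grant alice read k8s    # model-level read access on the k8s model
$ juju grant alice superuser   # controller-level administrative access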

At the controller bootstrap phase, Juju automatically creates a user named admin with superuser privileges. This user can perform all operations at the controller level. We can confirm the current user and its privileges with

$ juju whoami
Controller:  kvm
Model:       k8s
User:        admin 

$ juju users
Controller: kvm

Name    Display name  Access     Date created  Last connection
admin*  admin         superuser  23 hours ago  just now  

Authentication

A user needs to submit a username and password to log into a controller. At the bootstrap phase, the admin password is automatically generated by Juju, and the admin user is logged in by default with that generated password. After login, all communication with the controller is authorized via "cookies", which are stored on the Juju client at ~/.local/share/juju/cookies/{controller_name}.json. This file stores all cookies needed for the current user to authenticate against the Juju controller. In my case, the file is

$ ll ~/.local/share/juju/cookies/
total 16
drwx------ 2 telescreen telescreen 4096  9月  9 12:55 ./
drwx------ 5 telescreen telescreen 4096  9月  9 12:55 ../
-rw------- 1 telescreen telescreen 2998  9月  9 12:55 kvm.json


If we delete this file by accident, the current user is logged out and we need to submit a password to re-authenticate against the Juju controller.

$ rm -rf ~/.local/share/juju/cookies/kvm.json
$ ll ~/.local/share/juju/cookies
total 12
drwx------ 2 telescreen telescreen 4096  9月  9 13:05 ./
drwx------ 5 telescreen telescreen 4096  9月  9 13:04 ../
$
$ juju models
please enter password for admin on kvm:
Controller: kvm

Model       Cloud/Region  Type  Status     Machines  Cores  Units  Access  Last connection
controller  kvm           maas  available         1      1  -      admin   just now
default     kvm           maas  available         0      -  -      admin   19 hours ago
k8s*        kvm           maas  available         5      5  16     admin   12 minutes ago



Because the password is generated automatically at the controller bootstrap phase and we don't know it, it is a good idea to change the password after bootstrap. We can change the current user's password with

$ juju change-user-password
new password:
type new password again:
Your password has been changed.


Recover password


In case we forget the administrator password and happen to be logged out of Juju, we need to recover the password. The recovery process is involved but doable.

Access the Database

First, we need to log into the Juju controller machine where Juju's MongoDB runs as the PRIMARY instance. The credential for login is stored at ~/.local/share/juju/ssh/. We can therefore log into the Juju controller machine by
$ juju machines -m controller
Machine  State    DNS           Inst id  Series  AZ       Message
0        started  10.23.23.125  g6es7h   bionic  default  Deployed

$
$ ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@10.23.23.125
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Mon Sep  9 04:15:30 UTC 2019

  System load:  0.26              Processes:           83
  Usage of /:   66.1% of 7.27GB   Users logged in:     0
  Memory usage: 45%               IP address for eth0: 10.23.23.125
  Swap usage:   0%

 * Congrats to the Kubernetes community on 1.16 beta 1! Now available
   in MicroK8s for evaluation and testing, with upgrades to RC and GA

     snap info microk8s

 * Canonical Livepatch is available for installation.
   - Reduce system reboots and improve kernel security. Activate at:
     https://ubuntu.com/livepatch

0 packages can be updated.
0 updates are security updates.


Last login: Sun Sep  8 08:52:20 2019 from 10.23.23.1
ubuntu@many-shrew:~$



Next, we need to log into the MongoDB used by Juju. The credential of the MongoDB admin user is stored at /var/lib/juju/agents/machine-0/agent.conf

$ mongo --sslAllowInvalidCertificates --ssl -u admin -p $(sudo sed -n '/^oldpassword:/s/^oldpassword: //p' /var/lib/juju/agents/machine-0/agent.conf) 127.0.0.1:37017/admin
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:37017/admin
2019-09-09T04:19:58.587+0000 W NETWORK  [thread1] SSL peer certificate validation failed: unable to get local issuer certificate
2019-09-09T04:19:58.587+0000 W NETWORK  [thread1] The server certificate does not match the host name. Hostname: 127.0.0.1 does not match CN: *
MongoDB server version: 3.6.3
Server has startup warnings:
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten]
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-09-08T04:07:00.871+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-09-08T04:07:02.872+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-09-08T04:07:02.872+0000 I CONTROL  [initandlisten]
juju:PRIMARY>
juju:PRIMARY> use juju
switched to db juju
juju:PRIMARY> db.users.find().pretty()
{
        "_id" : "admin",
        "name" : "admin",
        "displayname" : "admin",
        "passwordhash" : "il/ap8gXWEUp8x1T48bIwOxm",
        "passwordsalt" : "3YTbonyqTpmWCvtQ",
        "createdby" : "admin",
        "datecreated" : ISODate("2019-09-08T04:07:11Z"),
        "txn-revno" : NumberLong(7),
        "txn-queue" : [
                "5d75d082a20e030edc1539c8_953a034a"
        ]
}



Password Hash

According to [1], Juju users' passwords are hashed using PBKDF2 [2] with the following parameters:
  • Number of hash iterations: 8192
  • Hash algorithm: SHA-512
  • Salt length: 12 bytes [3]
  • Derived key length: 18 bytes
We can easily generate the hashed value for any password using the same parameters. For example, in Python we can compute a password hash with
>>> import hashlib, binascii
>>> dk = hashlib.pbkdf2_hmac('sha512', b'test', b'4AIANdqIiPvpZUS2', 8192, 18)
>>> binascii.b2a_base64(dk)
b'iJW8NpnYPG4iwfzQoMkLfY2z\n'
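To generate a fresh salt instead of reusing the one above (assuming, as in the example, that the base64-encoded salt string itself is what gets fed to PBKDF2):

>>> import os, binascii, hashlib
>>> salt = binascii.b2a_base64(os.urandom(12)).strip()   # 12 random bytes -> 16-char base64 salt
>>> dk = hashlib.pbkdf2_hmac('sha512', b'my-new-password', salt, 8192, 18)
>>> binascii.b2a_base64(dk).strip()   # value for "passwordhash"
>>> salt                              # value for "passwordsalt"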


Update password

Finally, we can recover the password by updating the "passwordhash" and "passwordsalt" fields with MongoDB's update command. Because MongoDB will overwrite the whole document, we need to include all the other fields with their old values in our update.

db.users.update({_id: "admin"},
{
        "_id" : "admin",
        "name" : "admin",
        "displayname" : "admin",
        "passwordhash" : "iJW8NpnYPG4iwfzQoMkLfY2z",
        "passwordsalt" : "4AIANdqIiPvpZUS2",
        "createdby" : "admin",
        "datecreated" : ISODate("2019-09-08T04:07:11Z"),
        "txn-revno" : NumberLong(8),
        "txn-queue" : [
                "5d75d5f3a20e030edc1544bd_3d2483d7"
        ]
})

And we can log into the controller with the new password!

Reference

1. https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L65
2. https://en.wikipedia.org/wiki/PBKDF2
3. https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L49

July 25, 2019

Technical Notes: How to remove juju application in error state

I have been playing with Juju for application deployment recently as part of my current job. Most of the time, I just `juju deploy application` and Juju handles all the settings, no matter which cloud I am using. However, in some cases Juju refuses to follow my orders, which makes me really frustrated.

For example, the following deployment will definitely fail, because grafana charm revision 24 was not updated for changes in upstream grafana.

$ juju deploy grafana-24

The system gets stuck with the following juju status

$ juju status
Model       Controller  Cloud/Region  Version  SLA          Timestamp
experiment  stark-kvm   stark-kvm     2.6.5    unsupported  16:53:31+09:00

App      Version  Status  Scale  Charm    Store       Rev  OS      Notes
grafana           error       1  grafana  jujucharms   24  ubuntu

Unit        Workload  Agent  Machine  Public address  Ports  Message
grafana/0*  error     idle   0        10.4.23.120            hook failed: "install"

Machine  State    DNS          Inst id    Series  AZ       Message
0        started  10.4.23.120  tidy-tick  bionic  default  Deployed


When Juju is stuck at this stage, the hook error prevents us from removing the application. Worse, if you try `juju remove-application grafana` without the --force or --no-wait flags, any subsequent command will also fail. In other words, the application refuses to be removed until you resolve its internal errors.

My colleagues suggested 3 ways to resolve this issue:

1. Actually resolve the issue and run `juju resolve grafana/0`
2. Perform operational hack 1: replace the failing hook script with one that always returns a success status code (exit code 0); see the sketch after this list.
3. Perform operational hack 2: run `juju debug-hooks grafana/0`, wait for the hook context to load, and exit immediately. This sends positive feedback to the Juju controller, letting it perform the next action (which is our removal command).
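A sketch of operational hack 1 follows; the agent path assumes the default layout for unit grafana/0:

$ juju ssh grafana/0
$ # overwrite the failing hook with a no-op that always exits 0
$ sudo tee /var/lib/juju/agents/unit-grafana-0/charm/hooks/install <<'EOF'
#!/bin/bash
exit 0
EOF
$ sudo chmod +x /var/lib/juju/agents/unit-grafana-0/charm/hooks/install
$ exit
$ juju resolve grafana/0   # retry the hook; the no-op succeeds and removal can proceed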

I believe that users should be able to remove applications whenever they want, no matter what problem is occurring. It seems the Juju developers thought the same, as they triaged several related bugs. However, as of this writing the issue has not been fixed (my version, 2.6.5-bionic-amd64, still has it). Until it is fixed, there is no way around it but to really resolve the issue or to "hack" it.

June 20, 2019

Trading Review: No suitable stop loss and Premature Exit

I made two trading mistakes today as a beginner: no suitable stop loss and a premature exit. I want to discuss each in more detail to help my future self.

1. No suitable stop loss

Two weeks ago, on May 31, I observed a big sell-off of USD against JPY. The sell-off happened because the US and China could not agree on each other's trade terms. The USD price fell because traders worried that if the trade war dragged on, the US economy might get worse, so they sold the dollar to park their assets in a more stable currency, the JPY. I myself did not think the trade war would drag on [1]; the leaders of the two countries would act when they saw enough struggle from their domestic companies. I also observed that, historically, since Trump became POTUS he had never once tried to devalue the USD against other currencies to boost America's economy. For these reasons, I bet that USD would rise against JPY after a while, so I entered a USD/JPY position by buying 50,000 USD at the following distribution

USD$10,000 at 109,600
USD$20,000 at 109,250
USD$20,000 at 108,938

At that time, I thought the USD price would bounce back soon [2], so I did not put any stop loss orders on the above positions.

In the monthly FOMC meeting, the Fed decided to keep the FFR at 2.25 but presented a dovish view of the economic outlook ahead: no rate hike this year, at most one rate cut next year, while showing full support for economic expansion. This confirmed traders' assumptions so far but was more dovish than expected, which made traders worry more about the USD; the price fell to the next level and stayed at 107.650. Yesterday, when the USD was around 108.400, I thought about selling to cut my loss, but I refrained from doing so, fearing a minor loss. This morning, seeing Japanese traders dumping USD, I decided to get out of the position with a loss of around 65,000 JPY.

While my assumption at [1] might finally be right, as early this week Trump tweeted that he and Xi will talk about a trade deal at the G20, I entered the position too soon, trying to catch a falling knife. My first mistake was being too naive in thinking that the USD would bounce back soon. My second mistake was that I should have cut my loss last night. This is the second time I have carried a position over the FOMC; the last time I "enjoyed" a 250,000 JPY loss. I should not let it proceed to a third time. Finally, my biggest mistake as a beginner trader was not placing a stop loss after my entry.

I should learn these lessons: put a stop loss order with each buy order, cut losses sooner, and stay away from major events in future trading.

2. Premature Exit

I had been holding SoftBank Group stock for 2 weeks and enjoyed its price rise. At 7.8% profit, I decided it was time to sell, so I put in a sell order at 10,255. Five minutes later a notification came telling me the order was fully executed. I enjoyed a profit of 76,300 JPY, which is not bad at all. However, after my sell order, the SBG price continued to rise and stopped at 10,515 at the close. If I had waited until closing time, I would have enjoyed a profit of around 10%. I made a small mistake by exiting too soon, and it cost me 3% of profit, or around 30,000 JPY.

I should be more patient and confirm the downtrend before exiting in future trades.

By the way, looking at the weekly price graph, SoftBank seems about to rise again, so I will probably get in again tomorrow.


May 26, 2019

Technical Note: Dell XPS 9370 dynamic brightness control

Recently, while using Windows 10 Pro, I noticed that my XPS 9370 adapts its screen brightness to the content displayed on it. When I view websites with a bright color theme, the screen displays the content at its maximum brightness; when I view websites with a dark color theme, it automatically lowers its brightness accordingly.

This adaptation sounds good: lowering display brightness reduces power consumption and is easier on the eyes. However, my Dell laptop reacts too strongly to minor changes in content brightness. A large, noticeable brightness change often happens even for minor changes in color theme, and the problem is made worse by the latency of the adjustment. I switch between windows of multiple applications often enough to find the brightness adaptation and its lag annoying, so I decided to turn this feature off.

It turned out to be quite difficult to find where to turn it off. At first, I thought it was a Windows feature, so naturally I went to the Windows display settings to look for it. To my surprise, there was no option to adjust or disable "automatic brightness adaptation". A quick look at the power settings also turned up nothing. Somebody on the internet suggested looking in the Intel Graphics Settings; I checked, but found nothing there either. I desperately thought it was a hardware defect and there was nothing I could do.

Luckily, I was wrong this time. Googling "Dell XPS 9370 brightness" led me to Dell Support. Someone as annoyed as I was had bothered to open a Dell support ticket. It turns out that dynamic brightness control is a feature, and it can be turned off in the BIOS settings. I immediately restarted my computer, pressed [F2] to enter the BIOS settings, went to Power, and disabled the feature. The annoying brightness adaptation disappeared.

It is intriguing that this feature only manifests in Windows Pro; my dual-boot Ubuntu 19.04 shows no such problem. This proves that sometimes a lag in OS hardware support is good for end users :-)

May 14, 2019

Technical Memo: DynamicUser in systemd

Recently, I received a really interesting question from a customer. He had found in his file system two files (directories) belonging to a user that is not in /etc/passwd. Specifically, on Ubuntu Linux 18.04, these 2 files are

/var/lib/private/systemd
/var/lib/private/systemd/timesync

These two files belong to a user named systemd-timesync with UID/GID 62583, and this user is not in /etc/passwd.

Traditionally, a Linux user does not always need to be in /etc/passwd, because users can come from many remote sources, for example:
  • LDAP
  • NIS / NIS+ servers
  • a Windows Domain Controller
  • ...
All current users, both from /etc/passwd and from remote sources, can be queried with the getent command [1]

$ getent passwd

Unfortunately, our systemd-timesync user does not appear in the getent list.

A quick Google search points to an article [2], written by systemd developer Lennart Poettering, explaining the DynamicUser feature. It turns out that DynamicUser was introduced in systemd version 235. My client's environment runs Ubuntu 18.04 with systemd version 237, so his system supports the feature.

The DynamicUser feature tries to solve the problem of system-user litter on Linux systems. Modern Linux systems have many system users, whose purpose is to run system processes, and a smaller number of human users. The problem is that some packages, when removed from the system, do not remove their system users properly, leaving the system with orphaned users. Worse, files created by those system processes are left with the old users' UID/GID, and a user later assigned the same UID/GID gains total control over these files, which weakens system security. DynamicUser solves this by letting a process define a system user that only exists at run time; when a process with DynamicUser ends, the dynamic user is removed automatically. You can enable this feature by adding DynamicUser=yes to your systemd service file, and systemd will set up everything for you.
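A minimal, hypothetical unit showing the idea (the service name and command are made up):

# /etc/systemd/system/demo-dynamic.service
[Unit]
Description=Demo service running under a dynamic user

[Service]
ExecStart=/usr/bin/sleep infinity
DynamicUser=yes
# StateDirectory gives the transient user a writable /var/lib/demo-dynamic,
# which systemd backs with /var/lib/private/demo-dynamic -- the same pattern
# as the timesync directory above.
StateDirectory=demo-dynamic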

On Ubuntu 18.04, systemd-timesyncd uses this feature, which is why my client saw files with a systemd-timesync user that is not in /etc/passwd. Unfortunately, at the time of this post the feature seems buggy, so the Ubuntu developers decided to turn it off in later releases. In Ubuntu 19.04 (Disco Dingo) and later, systemd-timesyncd will not use DynamicUser, so we can expect to see the systemd-timesync user appear in /etc/passwd again.

References
  1. https://askubuntu.com/questions/504971/is-it-possible-for-users-not-to-be-in-passwd
  2. http://0pointer.net/blog/dynamic-users-with-systemd.html

April 24, 2019

A Review of Kubernetes Tokyo Meetup #18

In a time when every cool developer talks about infrastructure software, a meetup on infrastructure technology certainly gets a lot of attention. The Tokyo k8s meetup is one of the most popular communities in Tokyo, where k8s enthusiasts gather and share their knowledge about this rising infrastructure technology. This Monday, Tokyo Kubernetes Meetup #18 was held in Tokyo, and my colleague Wenhan and I fortunately had the chance to join.

Never before had I realized that k8s was becoming such a hot technology. The attention can be felt in the number of event registrations: the event hall's capacity was only 170 people, yet more than 500 people had registered by the time I did. The organizers had to run a ticket lottery and select participants at random. I was lucky enough to win a spot.

The number of registrations was 5 times the capacity of the event hall, so the organizers had to select participants by lottery



By the time I arrived at the event, it was filled with developers busily typing notes. The participants all looked quite young, mostly in their late twenties or early thirties, and mostly from young Internet companies such as Line, Mercari, CyberAgent, Mixi, DeNA, and so on. I recognized some frequent members of other communities, such as the Kafka and cloud computing communities. I also recognized a manager from KDDI; the last time I talked to him, he told me he was trying to build a cloud computing team inside the corporation but was having a hard time recruiting the right people and choosing the right technology stack. Perhaps he is still working on it.

Developers from technology company in Tokyo joining k8s

The agenda for the meetup comprised 2 parts:
The "keynote" presentations from community leaders
The lightning talks by volunteers

There was a short break between the two sessions, during which the event sponsor provided burgers and drinks for participants. I got 2 burgers and a coke.

   
Small burger and drinks provided by sponsors. I chose a coke for myself

The "key notes" presentations were made by 3 famous guys in k8s community. The topics were

  1. Introduction to kubebuilder/controller-runtime (30min) by Shingo Omura (@everpeace)
  2. Container-native ingress controller with kubebuilder/(Admission Webhook) (30min) by Masaya Aoyama (@amsy810)
  3. Operating Kubernetes manifests for Mercari's microservices (30min) by Masaki Ishiyama (@b4b4r07)

I attached links for each presentation above.

In summary, the first presentation laid out the interfaces of kubebuilder in detail and the steps to build a k8s controller using this SDK. The golang interfaces were quite clean, and I can somewhat imagine the steps I would follow to build a controller from this documentation. I think the presentation could have been better if the author had given listeners more of a bird's-eye view of the architecture before digging into details.

The second presentation was the one I found most interesting in this meetup. Aoyama, as usual, gave a simple and straightforward overview of container networking technology. He detailed the differences between traditional networking and container-native networking. I was absolutely surprised when I heard about the GKE L7 LoadBalancer's ability to recognize Pods' networks. I still remember thinking for a while about how to route traffic to a pod from host machines, and all I came up with was to NAT the pods' overlay network to the host's virtual network (the situation was more complicated from the host's viewpoint). The Alibaba Cloud LoadBalancer just doesn't provide Pod network awareness. This cloud-native network support in the GKE L7 LB makes me want to try that cloud out.
   
GKE's cloud-native L7 Load Balancer is aware of Pod networks and distributes load directly to containers without the traditional host NAT.
After explaining the new cloud-native GKE implementation, Aoyama presented his team's work over the last year in more detail. In a cloud environment, the k8s controllers are provided by the cloud provider, so we do not need to care about their implementation details. In an on-premise environment, however, support for cloud-native k8s is not yet mature: appliance high-performance load balancers still only support traditional physical hosts and networks. So the team built their own L7 load balancer based on nginx-ingress and calico.


L7 Load Balancer using Nginx-ingress / calico and custom built controller

The last presentation covered Mercari's experience with their k8s deployment pipeline. Mercari is a unicorn internet company with rising popularity among developers. Mercari maintains its high-growth services by separating its system into multiple microservices; currently it has over 100 microservices, maintained by more than 200 engineers. To handle daily work across such a huge number of microservices, engineers need well-defined workflows with a high degree of automation. In their infrastructure, they use gitops and Spinnaker for pipelines. They chose a monorepo style as the starting point for their pipeline because it is easy to begin with. They also detailed the organization of their repositories and a custom-made linter called stein used to validate deployment source code. I think Ishiyama provided a good lesson in cloud-native application management and operations.

Mercari deployment pipeline

Overall, I think all 3 presentations were very intriguing, and I will definitely join this event again. And yes, Tokyo k8s meetup #19 will be held on May 31, this time with a presentation from Joe Beda, one of the creators of k8s. I find the May 31 event promising: Joe now works for VMware, and I hope to learn more about VMware's involvement in the cloud-native world.

Technical Memo: Apparmor starting errors in Ubuntu Bionic 18.04

1. System Degradation Issue

As usual, after installing some modules and modifying some user privileges, I rebooted my home desktop computer. After a reboot I always check the system's startup health, and today was no different.

$ systemctl status
● stark
    State: degraded
     Jobs: 0 queued
   Failed: 1 units
    Since: Tue 2019-04-23 13:37:13 JST; 0h 4min ago
   CGroup: /
           ├─user.slice
           │ ├─user-1000.slice
           │ │ ├─user@1000.service
... 
 

2. What was the problem?

Hmm... I was quite surprised by this degradation of my system, so I looked for the unit that failed to start at boot time.

$ systemctl list-units --failed
  UNIT             LOAD   ACTIVE SUB    DESCRIPTION           
● apparmor.service loaded failed failed AppArmor initialization

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

$ systemctl status apparmor
● apparmor.service - AppArmor initialization
   Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2019-04-24 10:06:09 JST; 5min ago
     Docs: man:apparmor(7)
           http://wiki.apparmor.net/
  Process: 20197 ExecStop=/etc/init.d/apparmor stop (code=exited, status=0/SUCCESS)
  Process: 20208 ExecStart=/etc/init.d/apparmor start (code=exited, status=123)
 Main PID: 20208 (code=exited, status=123)
...

 4月 24 10:06:07 stark apparmor[20208]: Warning from /var/lib/snapd/apparmor/profiles/snap.lxd.migrate (/var/lib/snapd/apparmor/profiles/snap.lxd.migrate line 484): Unconfined exec qualifier (ux) allows some dangerous environment variables to be passed to the unconfined process; 'man 5 apparmor.d' for details.
 4月 24 10:06:09 stark apparmor[20208]:    ...fail!
 4月 24 10:06:09 stark systemd[1]: apparmor.service: Main process exited, code=exited, status=123/n/a
 4月 24 10:06:09 stark systemd[1]: apparmor.service: Failed with result 'exit-code'.
 4月 24 10:06:09 stark systemd[1]: Failed to start AppArmor initialization.

AppArmor failed to initialize at startup, which is quite surprising because I didn't modify anything apparmor-related.

Nothing special in the status output: the apparmor service complains about some unconfined exec qualifiers and dies. Warnings alone cannot make a process fail to start, because they are just warnings! So I looked at the apparmor journal logs.

$ journalctl -u apparmor
...
 4月 23 16:25:41 stark apparmor[14865]: AppArmor parser error  for /etc/apparmor.d/usr.sbin.dhcpd in /etc/apparmor.d/usr.sbin.dhcpd at line 70: Could not open 'dhcpd.d'

The error at line 70 of the dhcpd profile is the culprit that caused apparmor to fail to start. At that line is the following #include

  # allow packages to re-use dhcpd and provide their own specific directories
  #include <dhcpd.d>

It seems that this apparmor dhcpd profile looks for the dhcpd.d directory in /etc/apparmor.d/ but fails to find it. So in theory, if I create this directory, the apparmor parser will succeed and the problem will go away.

3. Temporary Solution

So I went and created the directory

$ sudo mkdir /etc/apparmor.d/dhcpd.d
$ sudo systemctl start apparmor
$ systemctl status apparmor
● apparmor.service - AppArmor initialization
   Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2019-04-24 10:50:18 JST; 5s ago
     Docs: man:apparmor(7)
           http://wiki.apparmor.net/

  Process: 20197 ExecStop=/etc/init.d/apparmor stop (code=exited, status=0/SUCCESS)
  Process: 11297 ExecStart=/etc/init.d/apparmor start (code=exited, status=0/SUCCESS)
 Main PID: 11297 (code=exited, status=0/SUCCESS)
...

$ systemctl status
● stark
    State: running
     Jobs: 0 queued
   Failed: 0 units

Ok, it works!

4. Why?

I googled around to see whether I'm the only person seeing this issue, and found a similar issue dating back to 2015. The reason at that time was that the Ubuntu developers forgot to add the empty dhcpd.d directory to the built package. However, that issue was fixed and released in 2.8.95~2430-0ubuntu5.2. My apparmor package version is 2.12-4ubuntu5.1 (apt-cache showpkg apparmor), so it should not be affected at all. I was curious whether my apt update had affected this, so I searched the apt logs for apparmor.

$ grep apparmor /var/log/apt/term.log
Skipping profile in /etc/apparmor.d/disable: usr.sbin.squid
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ... 

Two promising lines did appear, but neither relates to my issue.

At this point, I still need to finish my new-employee training plan, so I am leaving the issue as it is. If it reappears after some modification, I will definitely raise a bug ticket on Ubuntu Launchpad.

April 19, 2019

Technical Note: Roles, Grants for newly installed PostgreSQL

I'm familiar with MySQL, yet I have always wanted to use PostgreSQL for its feature richness. Due to the differences between the two databases, I always find it hard to remember the initial setup. Today it took me a while to navigate the official Postgres documentation again before I could create a role and grant permissions to it. I think it's useful to note those tasks for future reference.

Steps to setup Postgres
  1. Allow local role to access using password
  2. Create Role and Grant Permission 

Manual


1. Allow password authentication to local role


Postgres has a different role management mechanism from MySQL's. By default, local users connect to Postgres through a Unix socket without a password: Postgres delegates user authentication to Linux, which means a user connecting locally needs not only a role in Postgres but also a matching Linux account. For local development this is quite inconvenient, so I switch local user authentication to passwords as well.

The Postgres authentication mechanism is set in pg_hba.conf. On Ubuntu 19.04 (Disco) with Postgres installed from apt, the file is at /etc/postgresql/11/main/pg_hba.conf

Open the file with edit permission (which means using sudo on Ubuntu), look for the following line, and change METHOD from peer to md5
...
# "local" is for Unix domain socket connections only
---     local            all             all              peer
+++ local            all             all              md5
...

The details are explained in the file's comments. After editing and saving the file, restart Postgres.

$ sudo systemctl restart postgresql
$ systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2019-04-19 00:50:53 UTC; 40min ago
  Process: 5168 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 5168 (code=exited, status=0/SUCCESS)
Apr 19 00:50:53 disco systemd[1]: Starting PostgreSQL RDBMS...
Apr 19 00:50:53 disco systemd[1]: Started PostgreSQL RDBMS

2. Add Role and Grant Permission to Role

We need a role (user) and its permissions. The permissions include: permission to connect to the database, permission to use the public schema, permissions on all tables, and permissions on sequences.

 

2.1 Create Role and Database

I will try to create a role with name: tele

$ sudo su - postgres
$ psql
postgres=> CREATE ROLE tele WITH LOGIN;
postgres=> \password tele
Enter new password:  *****
Enter it again: *****
postgres=> CREATE DATABASE teledb; 

 

2.2 Grant Connect and Usage

postgres=> GRANT CONNECT ON DATABASE teledb to tele;
postgres=> GRANT USAGE ON SCHEMA public to tele;

 

2.3 Grant permissions on table and sequence

postgres=> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public to tele;
postgres=> GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public to tele;
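Note that GRANT ... ON ALL TABLES/SEQUENCES only covers objects that already exist. If tele should also receive privileges on tables created later by another role, default privileges are worth setting as well (see reference 3):

postgres=> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO tele;
postgres=> ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO tele;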


3. Confirmation


Verify that the user tele can log into the database using the username and password you set up.

$ psql -U tele -d teledb -W
password: *****
tele=>

Reference

1. https://gist.github.com/AtulKsol/4470d377b448e56468baef85af7fd614
2. https://www.postgresql.org/docs/11/user-manag.html
3. https://dba.stackexchange.com/questions/117109/how-to-manage-default-privileges-for-users-on-a-database-vs-schema/117661#117661
4. https://stackoverflow.com/questions/22483555/give-all-the-permissions-to-a-user-on-a-db


March 29, 2019

Technical Note: Sleep Issue on Dell XPS 9370 with Ubuntu 18.04 (Bionic Beaver)

I noticed that when I close the lid of my Dell XPS 9370, which runs Ubuntu Linux 18.04 (Bionic Beaver), the OS does not get into a deep sleep mode. It does turn off the display, and the hard disk and CPU seem to enter a low power mode. However, the OS consumes battery as in normal mode, and if I put my computer in my backpack in this sleep mode, the CPU keeps running and producing heat. This definitely reduces the life of my laptop, and I knew I needed to do something.

At first I thought this was a driver issue due to new hardware, but after a quick Google search I found that Arch Linux users and other Bionic Beaver users see the same issue. I think the answer by monty47 is so complete that I need no further explanation; I copy his solution here, including several references, for my personal educational purposes.

1. Root cause and Solution

The Linux kernel supports 4 system sleep states, though some states depend on platform support code. The 4 states are represented by strings that can be written to or read from /sys/power/state:
  • "mem"
  • "standby" - Power-On Suspend
  • "freeze" - Suspend-To-Idle
  • "disk" - Hibernation
The "mem" string represents the available sleep modes supported by the platform. The modes can be read from /sys/power/mem_sleep and can have the following values:
  • s2idle - Suspend-To-Idle
  • shallow - Power-on Suspend
  • deep - Suspend-To-Ram
"s2idle" seems to be always available, while "shallow" and "deep" depend on the underlying platform and supporting code (I have not traced which layer's code). For example, my Dell XPS 9370 supports only "s2idle" and "deep".

$ cat /sys/power/mem_sleep
[s2idle] deep

For a detailed explanation, please refer to System Power Management Sleep States by Wysocki. Only in the "deep" state is everything except memory put into a low-power state, and the kernel may pass control to the BIOS.

So, if I don't put my Dell XPS 9370 into the "deep" mem_sleep state, it stays in "s2idle", which is just a lightweight, pure-software mode. The CPU spends most of its time idle, but it still produces heat even in this mode (because I greedily chose a Core i7 chip).

To check which mode the computer uses by default, just put it to sleep using Fn+End and search for the suspend keyword in the journal.

$ cat /sys/power/mem_sleep
[s2idle] deep     # <-- currently in s2idle
$ # Fn + End to put the computer to sleep, then wake it up
$ journalctl -S "2019-03-28" | grep "PM: suspend" | tail -2
Mar 28 19:08:27 ha-xps kernel: PM: suspend entry (s2idle)
Mar 28 22:29:35 ha-xps kernel: PM: suspend exit


To fix this issue, either temporarily change mem_sleep to "deep" mode with

$ echo deep | sudo tee /sys/power/mem_sleep

or permanently add that mode to the kernel startup parameters.

$ sudo vim /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem_sleep_default=deep"
...
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
$ sudo reboot
$ # Confirm after rebooted
$ sudo journalctl -b 0 | grep "PM: suspend" | tail -2
Mar 28 19:08:27 ha-xps kernel: PM: suspend entry (deep)
Mar 28 22:29:35 ha-xps kernel: PM: suspend exit 


2. Known Related Issues

This solution comes with 2 consequences (or related issues).

  1. Suspend will fail if you encrypt your hard disk
  2. Bluetooth will not wake up after suspend. This means you cannot use a bluetooth mouse after your system wakes up.
I myself have issue No. 2 and need to reset bluetooth after wakeup. Restarting bluetooth through the UI does not seem to work for me, and I don't know why (perhaps it is a bug?).

$ sudo systemctl start bluetooth
$ bluetoothctl
...
[NEW] Characteristic
        /org/bluez/hci0/dev_D7_4A_86_5E_2D_C4/service0008/char0009
        00002a05-0000-1000-8000-00805f9b34fb
        Service Changed
[NEW] Descriptor
        /org/bluez/hci0/dev_D7_4A_86_5E_2D_C4/service0008/char0009/desc000b
        00002902-0000-1000-8000-00805f9b34fb
        Client Characteristic Configuration
Agent registered
...


3. Future Consideration

This issue has not been fixed in my current Bionic HWE release. I hope it will be fixed in an upcoming version.