tag:blogger.com,1999:blog-68266288408836905372024-03-14T16:52:15.286+09:00Bui Ha's NotesThere is no substitute for hard work. ― Thomas A. EdisonUnknownnoreply@blogger.comBlogger70125tag:blogger.com,1999:blog-6826628840883690537.post-41405362398443125032020-02-02T10:31:00.002+09:002020-02-02T10:31:53.756+09:00Technical Memo: Convert an ova image to qcow2 formatFrom the Wikipedia article on <a href="https://en.wikipedia.org/wiki/Open_Virtualization_Format" target="_blank">Open Virtualization Format</a>, I learnt that an ova file is just a tar archive of an OVF directory. So I opened a Windows 10 Developer Evaluation ova file to see what is inside it.<br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>tar -tf WinDev2001Eval.ova
WinDev2001Eval.ovf
WinDev2001Eval-disk001.vmdk
WinDev2001Eval.mf</pre>
<br />
So the ova file contains:<br />
<ul>
<li>A virtual machine definition (.ovf) file</li>
<li>A manifest (.mf) file with checksums</li>
<li>A disk image (.vmdk) file</li>
</ul>
As qemu-img supports vmdk as a disk image format, we can convert the vmdk file to any other format we like.<br />
<br />
<b>1. Convert .vmdk to .img</b><br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>qemu-img convert -O raw WinDev2001Eval-disk001.vmdk WinDev2001Eval.img</pre>
<br />
<br />
<b>2. Convert .vmdk to .qcow2 </b><br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>qemu-img convert -O qcow2 WinDev2001Eval-disk001.vmdk WinDev2001Eval.qcow2</pre>
<br />
<br />
All supported formats can be listed with <b>qemu-img -h</b>.<br />
<br />
The converted qcow2 image can then be used to create a new virtual machine with virt-manager.<br />
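The whole flow can be scripted end to end. A minimal sketch, assuming qemu-img is installed and the archive contains a single .vmdk:

```shell
# Sketch of the two steps above: unpack the .ova, locate the .vmdk
# inside, and convert it to qcow2. The file name matches the example
# in this post; nothing runs if the archive is not present.
OVA=WinDev2001Eval.ova
if [ -f "$OVA" ] && command -v qemu-img >/dev/null 2>&1; then
  tar -xf "$OVA"
  VMDK=$(tar -tf "$OVA" | grep '\.vmdk$')
  qemu-img convert -O qcow2 "$VMDK" "${VMDK%.vmdk}.qcow2"
fi
```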
<br />
I used to think that the .ova format belonged to VirtualBox (there are even many articles on the internet instructing you to install VirtualBox on Linux just to open an .ova file!). Actually, with a simple conversion, we can run an .ova-packaged image on the default, performant Linux KVM. The OVF format originated in 2007, and although more than a decade has passed since its creation, I am still amazed at how little I knew about it.<br />
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-65143138427146463252020-01-31T17:34:00.000+09:002020-02-02T10:32:24.465+09:00Technical Memo: Disk Encryption using Cryptsetup with Vault as Key Management ServiceOn Linux, full disk encryption can be achieved with various solutions: eCryptfs, dm-crypt, cryptsetup… While a step-by-step setup of disk/partition encryption is covered by the official documentation, integrating disk encryption with a key management solution like HashiCorp Vault is less obvious. Fortunately, there is a package named vaultlocker in the Ubuntu Universe repository that eases this integration. I decided to spend my free time making cryptsetup work with Vault.<br />
<br />
Note: a similar feature request was made to cryptsetup, but it was closed as being out of cryptsetup's scope.<br />
<br />
<h3>
1. Install Vault</h3>
As the Juju charm store has working Vault/MySQL charms, deploying Vault is as simple as the following commands:<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju deploy percona-cluster
<span style="color: #996633;">$ </span>juju deploy vault
<span style="color: #996633;">$ </span>juju relate vault:shared-db percona-cluster
</pre>
</div>
<br />
<br />
Vault needs to be initialized and unsealed before it can provide its services. I followed the steps below to initialize Vault:
<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju ssh vault/0
<span style="color: #996633;">$ </span><span style="color: #007020;">export </span><span style="color: #996633;">VAULT_ADDR</span><span style="color: #333333;">=</span>http://127.0.0.1:8200
<span style="color: #996633;">$ </span>vault operator init -key-shares<span style="color: #333333;">=</span>3 -key-threshold<span style="color: #333333;">=</span>2
Unseal Key 1: hALty7n4UHvYQAN89xy3SE7yb2SUYM+9DR/tEpPGIQDB
Unseal Key 2: b+OcCcpjHQ+eG/dZ1bcGuwbL9LM1p2xR0yEha5XnsvO4
Unseal Key 3: DGo4UAsb894lXw4musCeUp30jJqoi5Qiyg5YQDpSRRqT
Initial Root Token: s.k5awSEin8LFTHQXy3qXM8GFv </pre>
<pre style="line-height: 125%; margin: 0;">...</pre>
</div>
This gives me three keys, any two of which are enough to unseal Vault whenever it is sealed. As a freshly initialized Vault starts sealed, I use two of the keys to unseal it.<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>vault operator unseal hALty7n4UHvYQAN89xy3SE7yb2SUYM+9DR/tEpPGIQDB
<span style="color: #996633;">$ </span>vault operator unseal b+OcCcpjHQ+eG/dZ1bcGuwbL9LM1p2xR0yEha5XnsvO4
</pre>
</div>
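The two unseal commands generalize to any threshold. A small sketch (the key file name is my assumption, and the loop only runs when the vault CLI and the file are both present):

```shell
# Sketch: unseal with the first THRESHOLD keys from a key file,
# one key per line. Guarded so it is a no-op without the vault CLI
# or the (hypothetical) unseal.keys file.
THRESHOLD=2
KEYS_FILE=unseal.keys
if command -v vault >/dev/null 2>&1 && [ -f "$KEYS_FILE" ]; then
  head -n "$THRESHOLD" "$KEYS_FILE" | while IFS= read -r key; do
    vault operator unseal "$key"
  done
fi
```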
<br />
Now Vault should be unsealed, which can be confirmed with vault status:
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>vault status
Key Value
--- -----
Seal Type shamir
Initialized <span style="color: #007020;">true </span>
Sealed <span style="color: #007020;">false </span>
Total Shares 3
Threshold 2
Version 1.1.1
Cluster Name vault-cluster-44e9a806
Cluster ID 1087afa0-c7f1-01e4-abce-187b7b5f63a3
HA Enabled <span style="color: #007020;">false</span>
</pre>
</div>
<br />
Authorize Juju against Vault so that Juju actions can run. I created a short-lived token for this purpose.<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span><span style="color: #007020;">export </span><span style="color: #996633;">VAULT_TOKEN</span><span style="color: #333333;">=</span>s.k5awSEin8LFTHQXy3qXM8GFv
<span style="color: #996633;">$ </span>vault token create -ttl<span style="color: #333333;">=</span>10m
Key Value
--- -----
token s.7jChN3pD8cbLLVUSYrXqsBig
token_accessor gj76tBhiygYh1WsK1vamP20e
token_duration 10m
token_renewable <span style="color: #007020;">true </span>
token_policies <span style="color: #333333;">[</span><span style="background-color: #fff0f0;">"root"</span><span style="color: #333333;">]</span>
identity_policies <span style="color: #333333;">[]</span>
policies <span style="color: #333333;">[</span><span style="background-color: #fff0f0;">"root"</span><span style="color: #333333;">]</span>
<span style="color: #996633;">$ </span>juju status
Model Controller Cloud/Region Version SLA Timestamp
default nucctr nuc/default 2.7.1 unsupported 14:24:12+09:00
App Version Status Scale Charm Store Rev OS Notes
percona-cluster 5.7.20 active 1 percona-cluster jujucharms 282 ubuntu
vault 1.1.1 active 1 vault jujucharms 32 ubuntu
Unit Workload Agent Machine Public address Ports Message
percona-cluster/0* active idle 0 10.23.23.65 3306/tcp Unit is ready
vault/0* active idle 1 10.23.23.66 8200/tcp Unit is ready <span style="color: #333333;">(</span>active: <span style="color: #007020;">true</span>, mlock: enabled<span style="color: #333333;">)</span>
Machine State DNS Inst id Series AZ Message
0 started 10.23.23.65 needed-guinea bionic default Deployed
1 started 10.23.23.66 first-wahoo bionic default Deployed
</pre>
</div>
<br />
<br />
Reference: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-vault.html<br />
<br />
<h3>
2. Install VaultLocker and Configure Vault</h3>
On the test machine, install vaultlocker.<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo apt update
<span style="color: #996633;">$ </span>sudo apt install vaultlocker
</pre>
</div>
<br />
Vaultlocker needs credentials in order to talk to Vault: an AppRole, plus a policy that allows it to create secrets in Vault. So for vaultlocker to function, we need to create an AppRole and assign it a policy.<br />
<br />
<h4>
Create AppRole</h4>
I will create an AppRole named vaultluks with no timeout for its secret-id:<br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>vault auth <span style="color: #007020;">enable </span>approle
<span style="color: #996633;">$ </span>vault write -f auth/approle/role/vaultluks
<span style="color: #996633;">$ </span>vault <span style="color: #007020;">read </span>auth/approle/role/vaultluks/role-id
Key Value
--- -----
role_id 7cbb1c1c-2270-6120-04f6-c3b1c753191d
<span style="color: #996633;">$ </span>vault write -f auth/approle/role/vaultluks/secret-id
Key Value
--- -----
secret_id f657f357-4748-dd09-733c-58ff994bd985
secret_id_accessor 5eb5f63f-478b-4f41-a1c9-63f151991cf6 </pre>
<br />
<h4>
Create Secret Backend and Policy</h4>
For vaultlocker to write secrets to Vault, we need a secret backend and a policy that allows vaultlocker to access that backend.<br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>vault secrets <span style="color: #007020;">enable</span> -path<span style="color: #333333;">=</span>luks kv
<span style="color: #996633;">$ </span>vault secrets list
Path Type Accessor Description
---- ---- -------- -----------
charm-pki-local/ pki pki_c7e293a2 Charm created PKI backend
cubbyhole/ cubbyhole cubbyhole_ffd5c863 per-token private secret storage
identity/ identity identity_70188f48 identity store
luks/ kv kv_0ad49111 n/a
secret/ kv kv_7b586ad0 n/a
sys/ system system_0cc7f9f9 system endpoints used <span style="color: #008800; font-weight: bold;">for </span>control, policy and debugging
<span style="color: #996633;">$ </span>vim vaultluks.hcl
path <span style="background-color: #fff0f0;">"luks/*"</span> <span style="color: #333333;">{</span>
<span style="color: #996633;">capabilities</span> <span style="color: #333333;">=</span> <span style="color: #333333;">[</span><span style="background-color: #fff0f0;">"create"</span>, <span style="background-color: #fff0f0;">"read"</span>, <span style="background-color: #fff0f0;">"update"</span>, <span style="background-color: #fff0f0;">"delete"</span>, <span style="background-color: #fff0f0;">"list"</span><span style="color: #333333;">]</span>
<span style="color: #333333;">}</span>
<span style="color: #996633;">$ </span>vault policy write vaultluks vaultluks.hcl</pre>
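One caveat, which is my reading of the AppRole docs rather than something from the original steps: the policy must also be attached to the role, otherwise tokens obtained through an approle login carry only the default policy and cannot write under luks/*. A guarded sketch:

```shell
# Sketch: attach the vaultluks policy to the vaultluks AppRole and
# keep secret-ids non-expiring. Guarded so nothing runs unless a
# Vault token is configured and the vault CLI is available.
ROLE=vaultluks
POLICY=vaultluks
if [ -n "${VAULT_TOKEN:-}" ] && command -v vault >/dev/null 2>&1; then
  vault write "auth/approle/role/$ROLE" \
      token_policies="$POLICY" secret_id_ttl=0
fi
```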
<h4>
Configure Vaultlocker and Encrypt Disk</h4>
The url, approle and secret_id should be modified accordingly.<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo vim /etc/vaultlocker/vaultlocker.conf
<span style="color: #333333;">[</span>vault<span style="color: #333333;">]</span>
<span style="color: #996633;">url</span> <span style="color: #333333;">=</span> http://10.23.23.66:8200
<span style="color: #996633;">approle</span> <span style="color: #333333;">=</span> 7cbb1c1c-2270-6120-04f6-c3b1c753191d
<span style="color: #996633;">secret_id</span> <span style="color: #333333;">=</span> f657f357-4748-dd09-733c-58ff994bd985
<span style="color: #996633;">backend</span> <span style="color: #333333;">=</span> luks
<span style="color: #996633;">$ </span>sudo vaultlocker encrypt /dev/vdb2
DEBUG:urllib3.connectionpool:Starting new HTTP connection <span style="color: #333333;">(</span>1<span style="color: #333333;">)</span>: 10.23.23.66
DEBUG:urllib3.connectionpool:http://10.23.23.66:8200 <span style="background-color: #fff0f0;">"POST /v1/auth/approle/login HTTP/1.1"</span> 200 484
INFO:vaultlocker.dmcrypt:LUKS formatting /dev/vdb2 using UUID:bed01f5f-6766-4a3d-aa22-05703568da74
INFO:vaultlocker.dmcrypt:udevadm trigger block/add <span style="color: #008800; font-weight: bold;">for</span> /dev/vdb2
INFO:vaultlocker.dmcrypt:udevadm settle /dev/disk/by-uuid/bed01f5f-6766-4a3d-aa22-05703568da74
DEBUG:urllib3.connectionpool:http://10.23.23.66:8200 <span style="background-color: #fff0f0;">"PUT /v1/luks/vault/bed01f5f-6766-4a3d-aa22-05703568da74 HTTP/1.1"</span> 204 0
DEBUG:urllib3.connectionpool:http://10.23.23.66:8200 <span style="background-color: #fff0f0;">"GET /v1/luks/vault/bed01f5f-6766-4a3d-aa22-05703568da74 HTTP/1.1"</span> 200 866
INFO:vaultlocker.dmcrypt:LUKS opening bed01f5f-6766-4a3d-aa22-05703568da74
INFO:root:Enabling systemd unit <span style="color: #008800; font-weight: bold;">for </span>vaultlocker-decrypt@bed01f5f-6766-4a3d-aa22-05703568da74.service
Created symlink /etc/systemd/system/multi-user.target.wants/vaultlocker-decrypt@bed01f5f-6766-4a3d-aa22-05703568da74.service → /lib/systemd/system/vaultlocker-decrypt@.service.
<span style="color: #996633;">$ </span>lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 7.5G 0 disk
└─vda1 252:1 0 7.5G 0 part /
vdb 252:16 0 5G 0 disk
├─vdb1 252:17 0 1.9G 0 part
└─vdb2 252:18 0 3.1G 0 part
└─crypt-bed01f5f-6766-4a3d-aa22-05703568da74 253:0 0 3.1G 0 crypt
</pre>
</div>
<br />
<br />
Now vdb2 is encrypted and its key is stored in Vault under the luks/ path. Here bed01f5f-6766-4a3d-aa22-05703568da74 is the UUID of the partition, and it is also the key that vaultlocker uses to retrieve the encryption key.<br />
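Once the dm-crypt mapping exists, the mapped device behaves like any other block device. A sketch (the filesystem and mount point are my own choices, not part of the original setup):

```shell
# Sketch: put a filesystem on the mapped device and mount it.
# The UUID matches the example above; guarded so nothing runs
# unless the mapper device actually exists.
UUID=bed01f5f-6766-4a3d-aa22-05703568da74
MAPPED="/dev/mapper/crypt-$UUID"
if [ -b "$MAPPED" ]; then
  sudo mkfs.ext4 "$MAPPED"
  sudo mount "$MAPPED" /mnt
fi
```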
<br />
On the Vault server, you can confirm the stored secret. There should be a key with the same UUID as the device, whose value is the encryption key.<br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>vault kv list luks/
Keys
----
vault/
ubuntu@first-wahoo:~<span style="color: #996633;">$ </span>vault kv list luks/vault
Keys
----
0690f876-f297-4b09-a86c-d80f4a3c2ba1
bed01f5f-6766-4a3d-aa22-05703568da74
<span style="color: #996633;">$ </span>vault kv get luks/vault/bed01f5f-6766-4a3d-aa22-05703568da74
<span style="color: #333333;">=======</span> <span style="color: #996633;">Data</span> <span style="color: #333333;">=======</span>
Key Value
--- -----
dmcrypt_key LONGLONGKEY
</pre>
<br />
<br />
<br />
<h3>
How to use</h3>
Every time you want to open the partition, you can use:<br />
<br />
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo vaultlocker decrypt <span style="color: #008800; font-weight: bold;">${</span><span style="color: #996633;">UUID</span><span style="color: #008800; font-weight: bold;">}</span>
Or
<span style="color: #996633;">$ </span><span style="color: #007020;">echo</span> -n <span style="color: #996633;">$KEY</span> | sudo cryptsetup --batch-mode --key-file - open \</pre>
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;"> UUID</span><span style="color: #333333;">=</span>bed01f5f-6766-4a3d-aa22-05703568da74 \ </pre>
<pre style="line-height: 125%; margin: 0;"> crypt-bed01f5f-6766-4a3d-aa22-05703568da74 --type luks
where <span style="color: #996633;">$KEY</span> is a variable holding the value retrieved from Vault.</pre>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-84044362827507482132020-01-22T16:04:00.002+09:002020-01-22T16:07:12.183+09:00SSH Port ForwardingLocal and remote port forwarding are powerful features of ssh, yet I am often confused about their usage. Today I saw a great picture explaining many port forwarding options in detail. The author of the accepted answer drew such a great picture that I want to copy it here for my own future reference.<br />
<br />
<blockquote class="tr_bq">
<i>"a picture worth a thousand words"</i></blockquote>
<br />
Source:
https://unix.stackexchange.com/questions/115897/whats-ssh-port-forwarding-and-whats-the-difference-between-ssh-local-and-remot <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-LY14SEZL3Wc/XifzuPDGqhI/AAAAAAAAKSI/wrYx60DIlesUjUpOQ6rTkN66IJqrowvGwCLcBGAsYHQ/s1600/ssh%2Bport%2Bforwarding.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1009" data-original-width="972" height="400" src="https://1.bp.blogspot.com/-LY14SEZL3Wc/XifzuPDGqhI/AAAAAAAAKSI/wrYx60DIlesUjUpOQ6rTkN66IJqrowvGwCLcBGAsYHQ/s400/ssh%2Bport%2Bforwarding.png" width="385" /></a></div>
<a href="https://i.stack.imgur.com/4iK3b.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="ssh tunnel starting from remote" border="0" height="400" src="https://i.stack.imgur.com/4iK3b.png" width="385" /></a><br />
<br />
<br />
<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-85056180293923227212020-01-20T12:08:00.001+09:002020-01-20T12:08:42.107+09:00New NUCMy old desktop became sluggish as I deployed multiple test environments on it. As I also use various desktop applications for my daily work on the same machine, resource shortages sometimes froze all the applications, interfering with my workflow. I have always been a fan of small, low-energy NUC devices, so I took this chance to buy a new NUC to solve my resource issue.<br />
<br />
Before buying new device, I had some considerations for a NUC.<br />
<ul>
<li>Is it a good time to buy a NUC? If yes, what version should I buy?</li>
<li>Will it work with my 4K Dell monitor?</li>
<li>Will it generate too much heat?</li>
<li>Will it provide enough resources for my daily workflows? (I need a lot of Linux/Windows virtual machines)</li>
<li>How should I use it?</li>
</ul>
<br />
<b>Is it good timing?</b><br />
At CES 2020, Intel announced <a href="https://www.zdnet.com/article/ces-2020-intel-launches-nuc-9-extreme-kit-nuc-9-pro-workstation/" rel="nofollow" target="_blank">an impressive NUC 9</a>, and it made me wonder for a while whether I should wait for the release of this device. However, after checking price/performance ratios, I realized that waiting was not worthwhile: a newly released device carries a price premium that would easily exceed my budget, and I probably don't need a device with such high performance. So I persuaded myself to go with a <a href="https://www.amazon.co.jp/gp/product/B07J6T286L/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1" target="_blank">NUC 8</a>. Since NUC 8, the heat problem seems to have improved considerably, so I should have no trouble there. I am going to build software on this desktop, but for energy-saving purposes I went with a Core i5 model.<br />
<br />
<br />
<b>Will it work with my 4K monitor?</b><br />
I want the NUC to support 4K resolution at a refresh rate of at least 60Hz. I checked the specifications for my NUC at <a href="https://www.intel.sg/content/www/xa/en/support/articles/000005545/intel-nuc.html" target="_blank">Intel Nuc Tech Spec</a>; any model on the list should support 4K HDR. The model field on the product page showed BOXNUC8I5BEH as the device model, so this model supports 4K. All I needed was <a href="https://www.intel.sg/content/www/xa/en/support/articles/000029357/intel-nuc.html?countrylabel=Asia%20Pacific" target="_blank">a HDR 4K HDMI cable</a>, so I bought this NUC together with an HDMI cable from Amazon.<br />
<br />
<br />
<b>Does it generate heat?</b><br />
Memory does not generate much heat, so I bought two <a href="https://www.amazon.co.jp/gp/product/B07H42XM9S/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1" target="_blank">Silicon Power DDR4 laptop memory</a> modules, 16GB each. I wanted to buy memory from a better-known vendor like Kingston, but as my NUC's memory speed is 2400MHz and the only memory with that speed I could find at the time was from Silicon Power, I decided to buy these two. If they break, I can easily replace them, as memory is inexpensive now.<br />
<br />
<b>Resource Sufficiency?</b><br />
I don't think I will hit a performance bottleneck with CPU and memory. For the disk, I hesitated between a standard SATA SSD and an NVMe device. NVMe SSDs are known to generate more heat but offer higher read/write performance. For a NUC, a normal SATA drive should be fine; however, I wanted to see whether heat would become a bottleneck, so I went for <a href="https://www.amazon.co.jp/gp/product/B07JWK33MV/ref=ppx_yo_dt_b_asin_title_o03_s01?ie=UTF8&psc=1" target="_blank">an NVMe device.</a><br />
<br />
<br />
<b>Setup</b><br />
With some help from YouTube, I installed the SSD and RAM without any issue. I booted the device with a bootable USB stick containing Ubuntu 18.04 LTS plugged in. The OS installation went as usual with a few mouse clicks.<br />
<br />
After the device booted, I realized that by default Ubuntu drives the display at 4K but only at 30Hz, which is unacceptable to me. As 4K resolution is unreadable for my eyes even on a 27-inch display, I performed the following steps to use 2K resolution at 60Hz.<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>cvt 2560 1440 60
<span style="color: #888888;"># 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync </span>
</pre>
</div>
<br />
After that I added the following lines to ~/.xprofile:<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;">xrandr --newmode <span style="background-color: #fff0f0;">"2560x1440_60.00"</span> 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync<br />
xrandr --addmode DP-1 2560x1440_60.00<br />
</pre>
</div>
<br />
Here DP-1 is my display output. Now my display shows 2K resolution at 60Hz.<br />
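Since the output name (DP-1 here) differs between machines, the same steps can be scripted to detect the first connected output instead of hard-coding it. A sketch, guarded so it does nothing on a machine without a display:

```shell
# Sketch: find the connected output and apply the custom mode.
# The modeline values are the cvt output from above; the detection
# relies on xrandr printing "<name> connected ..." per output.
if [ -n "${DISPLAY:-}" ] && command -v xrandr >/dev/null 2>&1; then
  OUTPUT=$(xrandr | awk '/ connected/ {print $1; exit}')
  xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
  xrandr --addmode "$OUTPUT" 2560x1440_60.00
  xrandr --output "$OUTPUT" --mode 2560x1440_60.00
fi
```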
<br />
Other settings are as usual and will be topics for another blog post.<br />
<br />
<b>Conclusion</b><br />
I bought a new NUC device and I am totally satisfied with it. <br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-66709813536377913982019-11-16T11:37:00.002+09:002019-11-16T11:37:23.075+09:00PerspectivesI find this picture is intriguing.<div>
<br /></div>
<div>
To one chicken, the other chicken is on <b><u>the other side</u></b></div>
<div>
To the viewer of this picture, we have two stupid chickens (or one?).</div>
<div>
<br /></div>
<div>
I find it interesting because it's just one picture, yet we can perceive it differently depending on where we place our perspective.<br /><div>
<br /></div>
<div>
<div>
<img src="https://qph.fs.quoracdn.net/main-qimg-7eda8b8645a105db454437d213bf4162" /></div>
</div>
</div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-45940295574543437272019-10-07T23:55:00.000+09:002019-10-07T23:55:14.524+09:00Technical Note: LXD Database and patch SQLI deleted some unused zfs storage pools without realizing that they were used by LXD, so today after the machine rebooted, LXD refused to start and the following log messages were output to lxd.log:<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">t</span><span style="color: #333333;">=</span>2019-10-07T23:02:43+0900 <span style="color: #996633;">lvl</span><span style="color: #333333;">=</span>info <span style="color: #996633;">msg</span><span style="color: #333333;">=</span><span style="background-color: #fff0f0;">"Initializing storage pools"</span>
<span style="color: #996633;">t</span><span style="color: #333333;">=</span>2019-10-07T23:02:43+0900 <span style="color: #996633;">lvl</span><span style="color: #333333;">=</span>eror <span style="color: #996633;">msg</span><span style="color: #333333;">=</span><span style="background-color: #fff0f0;">"Failed to start the daemon: ZFS storage pool \"juju-zfs\" could not be imported: "</span>
<span style="color: #996633;">t</span><span style="color: #333333;">=</span>2019-10-07T23:02:43+0900 <span style="color: #996633;">lvl</span><span style="color: #333333;">=</span>info <span style="color: #996633;">msg</span><span style="color: #333333;">=</span><span style="background-color: #fff0f0;">"Starting shutdown sequence"</span>
</pre>
</div>
<br />
As lxd settings are stored in a <a href="https://dqlite.io/" target="_blank">dqlite database</a> (distributed sqlite) at <b>/var/snap/lxd/common/lxd/database/global/db.bin</b>, I went to confirm the stored records.<br />
<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;">sqlite> .tables
certificates networks
config networks_config
images networks_nodes
images_aliases nodes
images_nodes operations
images_properties profiles
images_source profiles_config
instances profiles_config_ref
instances_backups profiles_devices
instances_config profiles_devices_config
instances_config_ref profiles_devices_ref
instances_devices profiles_used_by_ref
instances_devices_config projects
instances_devices_ref projects_config
instances_profiles projects_config_ref
instances_profiles_ref projects_used_by_ref
instances_snapshots schema
instances_snapshots_config storage_pools
instances_snapshots_config_ref storage_pools_config
instances_snapshots_devices storage_pools_nodes
instances_snapshots_devices_config storage_volumes
instances_snapshots_devices_ref storage_volumes_config
sqlite> <span style="color: #008800; font-weight: bold;">select</span> * from storage_pools;
1|lxd|zfs||1
2|juju-zfs|zfs||1
3|juju-btrfs|btrfs||1
sqlite> <span style="color: #008800; font-weight: bold;">select</span> * from storage_pools_config;
3|1|1|zfs.pool_name|lxd
4|1|1|source|lxd
5|1|1|volatile.initial_source|lxd
7|2|1|size|21GB
8|2|1|source|/var/snap/lxd/common/lxd/disks/juju-zfs.img
9|2|1|zfs.pool_name|juju-zfs
11|3|1|size|21GB
12|3|1|source|/var/snap/lxd/common/lxd/disks/juju-btrfs.img
</pre>
</div>
<br />
<br />
It seems that the storage pool settings are stored in two tables: <b>storage_pools</b> and <b>storage_pools_config</b>.<br />
<br />
I tried to delete the related records from the above tables and restarted the lxd process, but lxd still failed with the same errors. I went back to the database and <u>confirmed that those records were still there even after I had deleted them</u>. It seems that lxd recovers those records from its log files. I could read the code to see how it recovers them, but that takes time, so I decided to look for database documentation in the lxd source code while creating <a href="https://discuss.linuxcontainers.org/t/lxd-does-not-start-after-a-storage-pool-was-accidentally-deleted/5892/2" target="_blank">a new topic to ask for help from the lxd community</a>.<br />
<br />
I skimmed through the <a href="https://linuxcontainers.org/lxd/docs/master/database" target="_blank">lxd database documentation</a> and found that I could create a <b>patch.global.sql</b> file to remove the unnecessary records, as its sql statements are run at a very early stage of lxd startup. I created a file called patch.global.sql with statements to remove the unneeded settings and started lxd.<br />
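A sketch of what such a patch file could look like: the pool ids 2 and 3 match the select output above, and the database path follows the snap layout, but verify the ids on your own system before deleting anything.

```shell
# Write the patch file; LXD executes these statements against the
# global database early at startup. Copying it into place needs
# root:  cp patch.global.sql /var/snap/lxd/common/lxd/database/
cat > patch.global.sql <<'EOF'
DELETE FROM storage_pools_config WHERE storage_pool_id IN (2, 3);
DELETE FROM storage_pools WHERE id IN (2, 3);
EOF
```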
<br />
And the lxd process started again with all my in-development containers!<br />
<br />
<b>Lesson learned</b>: before removing anything, look for all its usages.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-8203246849758227142019-10-03T16:45:00.000+09:002019-10-03T16:46:52.053+09:00Visualize Enviro pHat sensors data<a href="https://learn.pimoroni.com/tutorial/sandyj/getting-started-with-enviro-phat" target="_blank">Enviro pHat</a> is an low-cost environmental sensing boards that let you measure temperature, light color, motion and analog sensors. The accompanied python SDK makes writing program to get data from enviro pHat as easy as pie. Enviro pHat could be used in combination with RaspPi zero to become a simple room conditions monitoring sensing board.<b> </b><br />
<br />
With Grafana and Prometheus, it looked easy to at least build a dashboard for the Enviro pHat sensor data, so I decided to build one.<br />
<br />
<h3>
System Diagram</h3>
<br />
Though the Enviro pHat provides an SDK to read the sensors, we need a way to send that data to the server, and we also need to think about the connection between the device and the server. In a home environment I used Wi-Fi, as the Pi Zero supports it. Prometheus scrapes data from its clients, so I will run a Prometheus client on the device side for the Prometheus server to scrape. The data is stored in the Prometheus data store and visualized with Grafana.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-gDHRyhPrHbU/XZWXVdCDXvI/AAAAAAAAJho/ygAnNMIS1zwxFTY9SbIGUhUCe28lyGSngCK4BGAYYCw/s1600/Enviro%2BpHat%2BDashBoard.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://4.bp.blogspot.com/-gDHRyhPrHbU/XZWXVdCDXvI/AAAAAAAAJho/ygAnNMIS1zwxFTY9SbIGUhUCe28lyGSngCK4BGAYYCw/s1600/Enviro%2BpHat%2BDashBoard.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">System Diagram: Prometheus fetch data from RaspPi Zero over Wifi</td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
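On the server side, the scrape job from the diagram could look like the following fragment of prometheus.yml. The job name, target host, and port are assumptions of mine; 8000 is the default port of the Python prometheus_client HTTP server.

```yaml
# Hypothetical scrape job for the Pi Zero exporter.
scrape_configs:
  - job_name: 'envirophat'
    scrape_interval: 15s
    static_configs:
      - targets: ['raspberrypi.local:8000']
```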
<br />
<h3>
Device Side</h3>
<h4>
Soldering sensor board and setup OS</h4>
If you buy an un-soldered Enviro pHat, you probably need to solder it to the accompanying 2x20-pin female header. The soldering is not that difficult, and Pimoroni has <a href="http://learn.pimoroni.com/tutorial/sandyj/soldering-phats" target="_blank">a detailed guide</a> for the task. Below are my Raspberry Pi Zero and the Enviro pHat sensor board.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-UV_H_TwfvnA/XZWfFAPW55I/AAAAAAAAJiU/S4D_vUDaQOI0xTxQUPXWRf9qtNrZDctTgCKgBGAsYHg/s1600/IMG_20191003_154736.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://1.bp.blogspot.com/-UV_H_TwfvnA/XZWfFAPW55I/AAAAAAAAJiU/S4D_vUDaQOI0xTxQUPXWRf9qtNrZDctTgCKgBGAsYHg/s400/IMG_20191003_154736.jpg" width="300" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">My RaspBerry Pi Zero</td></tr>
</tbody></table>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://learn.pimoroni.com/static/repos/learn/sandyj/enviro_phat_board.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img alt="Enviro pHAT board" border="0" height="266" src="https://learn.pimoroni.com/static/repos/learn/sandyj/enviro_phat_board.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Enviro pHat</td></tr>
</tbody></table>
<br />
<br />
For the Pi Zero, you need to download Raspbian and write it to the SD card. Because we are going to connect to the Pi over Wi-Fi, we need to set up Wi-Fi in advance. After writing the OS image to the SD card, insert the card into your PC, open the card's boot partition, and add the following two files.<br />
<br />
<b>wpa_supplicant.conf</b><br />
<b> </b><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">country</span><span style="color: #333333;">=</span>JP
<span style="color: #996633;">ctrl_interface</span><span style="color: #333333;">=</span><span style="color: #996633;">DIR</span><span style="color: #333333;">=</span>/var/run/wpa_supplicant <span style="color: #996633;">GROUP</span><span style="color: #333333;">=</span>netdev
<span style="color: #996633;">update_config</span><span style="color: #333333;">=</span>1
<span style="color: #996633;">network</span><span style="color: #333333;">={</span>
    <span style="color: #996633;">ssid</span><span style="color: #333333;">=</span><span style="background-color: #fff0f0;">"WIFI SSID"</span>
    <span style="color: #996633;">psk</span><span style="color: #333333;">=</span><span style="background-color: #fff0f0;">"WIFI SSID PASSWORD"</span>
    <span style="color: #996633;">key_mgmt</span><span style="color: #333333;">=</span>WPA-PSK
<span style="color: #333333;">}</span>
</pre>
</div>
<br />
<b>ssh</b><br />
empty content<br />
<b> </b><br />
The purpose of the empty ssh file is to tell Raspbian to enable the SSH server by default.<br />
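As a concrete sketch, both files can be created from the command line. The mount point is an assumption: point <b>BOOT</b> at wherever your PC mounts the SD card's boot partition (often /media/$USER/boot).

```shell
# Point BOOT at the SD card's boot partition (e.g. /media/$USER/boot).
BOOT="${BOOT:-./boot}"
mkdir -p "$BOOT"

# An empty "ssh" file tells Raspbian to enable the SSH server on first boot.
touch "$BOOT/ssh"

# Wi-Fi credentials, picked up by Raspbian on first boot.
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=JP
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="WIFI SSID"
    psk="WIFI SSID PASSWORD"
    key_mgmt=WPA-PSK
}
EOF
```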
<br />
Plug the power supply in. You should be able to see the Raspberry Pi's IP address in your home router's client list (or ping the whole IP address segment to find which IP was assigned).<br />
<br />
You should now be able to SSH into the Raspberry Pi through its IP address and perform basic server setup:<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>ssh pi@<span style="color: #996633;">$PI_IP_ADDRESS</span>
pi@<span style="color: #996633;">$PI_IP_ADDRESS</span>'s password: ****
Linux pizero 4.19.66+ <span style="color: #888888;">#1253 Thu Aug 15 11:37:30 BST 2019 armv6l</span>
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Oct 3 05:50:11 2019 from 192.168.88.254
<span style="color: #996633;">$ </span>sudo apt update
<span style="color: #996633;">$ </span>sudo vim /etc/hostname
<span style="color: #996633;">$ </span>...
<span style="color: #996633;">$ </span>sudo reboot</pre>
</div>
<br />
<u><b>Final result</b></u><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-hZ-CtFN0Sls/XZWfFNF1S0I/AAAAAAAAJiU/DoEwqk5QB8cGswdffLUuUh09x7kOzB0qwCKgBGAsYHg/s1600/IMG_20191003_154434.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://1.bp.blogspot.com/-hZ-CtFN0Sls/XZWfFNF1S0I/AAAAAAAAJiU/DoEwqk5QB8cGswdffLUuUh09x7kOzB0qwCKgBGAsYHg/s640/IMG_20191003_154434.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Final result after soldering enviro phat, setting up OS for rasp pi zero, and plugging in the power supply</td></tr>
</tbody></table>
<br />
<h4>
Install Enviro pHat SDK</h4>
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>curl https://get.pimoroni.com/envirophat | bash
<span style="color: #996633;">$ </span>pip3 install prometheus_client
<span style="color: #996633;">$ </span>git clone git@github.com:telescreen/raspizero-playground.git
<span style="color: #996633;">$ </span>cd raspizero-playground
<span style="color: #996633;">$ </span>python3 enviro-collectd.py -d -f enviro-collectd.log --pid enviro-collectd.pid
</pre>
</div>
<br />
<br />
I wrote a program that exports the environment data through the Prometheus Python client: https://github.com/telescreen/raspizero-playground . After running the program as above, you should be able to see the Prometheus metrics at<br />
<br />
<b>http://$PI_IP_ADDRESS:9090/</b><br />
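The core of such an exporter is small. The sketch below shows the general shape, not the actual enviro-collectd.py: the metric names are my own illustrations, and placeholder readings are substituted when the Enviro pHAT SDK is not installed.

```python
import time

from prometheus_client import Gauge, start_http_server, generate_latest

try:
    # Real readings on the Pi, via the Enviro pHAT SDK.
    from envirophat import weather, light

    def read_sensors():
        return weather.temperature(), weather.pressure(), light.light()
except ImportError:
    # Placeholder values when the SDK/hardware is absent.
    def read_sensors():
        return 25.0, 101325.0, 120

# Metric names here are illustrative; the real script defines its own.
TEMPERATURE = Gauge('enviro_temperature_celsius', 'Temperature from the BMP280 sensor')
PRESSURE = Gauge('enviro_pressure_pascals', 'Barometric pressure')
LIGHT = Gauge('enviro_light_raw', 'Ambient light level')

def update_metrics():
    t, p, lux = read_sensors()
    TEMPERATURE.set(t)
    PRESSURE.set(p)
    LIGHT.set(lux)

if __name__ == '__main__':
    start_http_server(9090)  # the port Prometheus will scrape
    while True:
        update_metrics()
        time.sleep(15)
```

Prometheus then simply polls http://$PI_IP_ADDRESS:9090/ at its configured scrape interval.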
<br />
<br />
<h3>
Server Side</h3>
<br />
I have LXD running on my local machine, and Juju bootstrapped against it as a local cloud. Because the Juju charm store already provides prometheus2 and grafana charms, installing this middleware is just a matter of the following commands.<br />
<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju deploy prometheus2
<span style="color: #996633;">$ </span>juju deploy grafana
<span style="color: #996633;">$ </span>juju add-relation prometheus2:grafana-source grafana:grafana-source
</pre>
</div>
<br />
It will take a while for Juju to download and install the charms and the necessary software. After Juju finishes installing grafana and prometheus2, we should see both applications active in the Juju status output.<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju status
Model    Controller  Cloud/Region         Version  SLA          Timestamp
default  lxd         localhost/localhost  2.6.9    unsupported  16:19:07+09:00

App          Version  Status  Scale  Charm        Store       Rev  OS      Notes
grafana               active      1  grafana      jujucharms   32  ubuntu
prometheus2           active      1  prometheus2  jujucharms   11  ubuntu

Unit            Workload  Agent  Machine  Public address  Ports               Message
grafana/0*      active    idle   1        10.4.88.50      3000/tcp            Started grafana-server
prometheus2/0*  active    idle   0        10.4.88.243     9090/tcp,12321/tcp  Ready

Machine  State    DNS          Inst id        Series  AZ  Message
0        started  10.4.88.243  juju-63ab09-0  bionic      Running
1        started  10.4.88.50   juju-63ab09-1  bionic      Running
</pre>
</div>
<br />
Next, we need to add the metrics endpoint exposed on the device to Prometheus's scrape jobs. I prepared a simple scrape job definition at<br />
<br />
https://github.com/telescreen/raspizero-playground/blob/master/scrape-job.yaml<br />
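For reference, a minimal scrape job for the prometheus2 charm looks roughly like this (the job name and target address are assumptions; substitute your device's IP):

```yaml
- job_name: envirophat
  scrape_interval: 15s
  static_configs:
    - targets: ['$PI_IP_ADDRESS:9090']
```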
<br />
The scrape job can be added through Juju with the following command:<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju config prometheus2 scrape-jobs<span style="color: #333333;">=</span>@scrape-job.yaml
</pre>
</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-2KMDGBZgX3o/XZWi0dD5J1I/AAAAAAAAJik/lD_ai2wNGuwafoSbZPNBZi1gkGYiYhwzgCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-03%2B16-26-55.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="254" src="https://2.bp.blogspot.com/-2KMDGBZgX3o/XZWi0dD5J1I/AAAAAAAAJik/lD_ai2wNGuwafoSbZPNBZi1gkGYiYhwzgCK4BGAYYCw/s640/Screenshot%2Bfrom%2B2019-10-03%2B16-26-55.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">After 1 minute, the device IP address should appear in the scrape target with state UP</td><td class="tr-caption" style="text-align: center;"><br /></td><td class="tr-caption" style="text-align: center;"><br /></td></tr>
</tbody></table>
<br />
Log into Grafana with the admin password retrieved by the action below. The remaining steps (adding the Prometheus data source and building panels) are performed through the Grafana UI and are quite intuitive, so I have excluded them from this post.<br />
<br />
<br />
<!-- HTML generated using hilite.me --><br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju run-action --wait grafana/0 get-admin-password
unit-grafana-0:
  id: 43e7d1e6-3251-437b-816d-d2aa182c5554
  results:
    password: NJBMMVNqW6ssKywL
  status: completed
  timing:
    completed: 2019-10-03 07:28:44 +0000 UTC
    enqueued: 2019-10-03 07:28:41 +0000 UTC
    started: 2019-10-03 07:28:44 +0000 UTC
  unit: grafana/0
</pre>
</div>
<br />
<br />
<h3>
Final Results</h3>
After about three hours, I was able to build the following dashboard from the data captured by the Enviro pHAT's sensors. The drop in light is because I put the sensors in a dark corner of my room. My room is definitely cooler than the temperature the Enviro reports; the reading is probably affected by heat from the Pi Zero itself.<br />
<br />
A storm is approaching my city, so I am expecting to see lower-than-usual air pressure. The pressure should climb back up in the few days after the storm passes.<br />
<br />
I was able to build a simple home-conditions monitoring system in a short time with open-source technology. This project made me appreciate the importance of open-source software even more.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-MMXYKlzGGCM/XZWj15lgA_I/AAAAAAAAJiw/D149C87H7XY8kRuSa0hLK3Bsf68YGQxGQCK4BGAYYCw/s1600/Simple%2BGrafana%2BDashboard.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="420" src="https://3.bp.blogspot.com/-MMXYKlzGGCM/XZWj15lgA_I/AAAAAAAAJiw/D149C87H7XY8kRuSa0hLK3Bsf68YGQxGQCK4BGAYYCw/s640/Simple%2BGrafana%2BDashboard.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Dashboard with temperature, air pressure and light information</td></tr>
</tbody></table>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-7594703842213393372019-10-01T20:29:00.002+09:002019-10-08T11:53:35.065+09:00Use MaaS to manage local computing resource<span style="background-color: black;"></span><br />
Besides managing bare metal machines, MaaS (Metal as a Service) can also manage KVM guests as long as those machines are boot from network PXE. This feature allow us to efficiently manage local computing resources, while at the same time, leverage JuJu for automatically middle ware deployment. In this blog post, I will detail steps to manage KVM guest machines using MaaS.<br />
<br />
<h3>
System diagram </h3>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-TIpF98J9fls/XZMQ5jHPI6I/AAAAAAAAJgE/nldTUYktdJ42burwYeUu0UQVCov1KgTYQCK4BGAYYCw/s1600/MaaS%2BManaged%2BPhysical%2BMachine.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img alt="MaaS managed Physical Machine Diagram" border="0" src="https://3.bp.blogspot.com/-TIpF98J9fls/XZMQ5jHPI6I/AAAAAAAAJgE/nldTUYktdJ42burwYeUu0UQVCov1KgTYQCK4BGAYYCw/s1600/MaaS%2BManaged%2BPhysical%2BMachine.png" title="MaaS managed Physical Machine Diagram" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><b>MaaS Managed Machine</b>: MaaS Virtual Machine creates other Virtual machines through KVM Pod. All VMs access the internet through physical machine's NIC using NAT.</td></tr>
</tbody></table>
<br />
The MaaS server will be deployed into a virtual machine running inside the physical machine. The reason for installing MaaS into a VM instead of directly onto the physical machine is the flexibility VMs give us for upgrading the MaaS version, taking backups, and so on. The MaaS server will manage its "parent" physical machine's resources through a KVM pod.<br />
<br />
All machines will access the Internet through the physical machine's NIC using NAT.<br />
<br />
We could easily extend this diagram by adding more virtual NICs for more complex networks. We could also add other physical machines to MaaS, as long as they are reachable from the MaaS VM.<br />
<br />
Internet access for the MaaS server is necessary because MaaS needs to fetch OS images from the remote repository.<br />
<br />
<h3>
Preinstall preparation</h3>
We need to install KVM on the physical machine. The procedure for installing KVM on Ubuntu is described at <a href="https://help.ubuntu.com/community/KVM/Installation">https://help.ubuntu.com/community/KVM/Installation</a>.<br />
<br />
Because the MaaS server will manage network resources, we need to modify the "default" libvirt network to disable its DHCP. MaaS must run the DHCP server on its managed network in order to PXE-boot machines and automatically install the OS.<br />
<br />
To reconfigure the default network, in the virt-manager UI, go to<br />
<br />
Virt-Manager UI > Edit > Connection Details > [Virtual Networks] Tab<br />
<br />
Remove the default network by stopping and deleting it. Then re-add a new network and name it "default". Modify its IP range if you prefer (in my case, I use 10.23.23.0/24), and remember to uncheck the "Enable DHCPv4" option. Accept the defaults for the other options.<br />
<br />
My default network is as below<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-YojjnjJRpM8/XZMWfPB4lAI/AAAAAAAAJgQ/vcR_HMwmhTk_TUtMmJjXRH5VIcxxzvfhgCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-01%2B18-03-09.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://2.bp.blogspot.com/-YojjnjJRpM8/XZMWfPB4lAI/AAAAAAAAJgQ/vcR_HMwmhTk_TUtMmJjXRH5VIcxxzvfhgCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-01%2B18-03-09.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Default Network with DHCP disabled and is NATed to the physical NIC</td></tr>
</tbody></table>
<br />
With the network ready, we move on to installing MaaS.<br />
<h3>
Install MaaS Server</h3>
<h4>
MaaS VM Creation</h4>
For the MaaS server, we use a normal KVM management client to create the MaaS VM. We also need to download an Ubuntu Server .iso from the Ubuntu homepage to install it from.<br />
<br />
Creating the MaaS VM is just a normal Ubuntu Server installation. <br />
<h4>
MaaS Server Settings</h4>
Because we disabled DHCP, the MaaS VM will not be allocated an IP address, so we need to access the machine on the console and set up its network manually.<br />
<br />
I logged into the machine's console and made the following modification to /etc/netplan/50-cloud-init.yaml<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;">$ vim /etc/netplan/50-cloud-init.yaml
<span style="color: #888888;"># This file is generated from information provided by</span>
<span style="color: #888888;"># the datasource. Changes to it will not persist across an instance.</span>
<span style="color: #888888;"># To disable cloud-init's network configuration capabilities, write a file</span>
<span style="color: #888888;"># /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:</span>
<span style="color: #888888;"># network: {config: disabled}</span>
network:
    ethernets:
        ens3:
            addresses:
                - 10.23.23.2/24
            gateway4: 10.23.23.1
            nameservers:
                addresses:
                    - 10.23.23.1
        ens9:
            addresses:
                - 10.4.23.2/24
            dhcp4: false
    version: 2
$ netplan apply</pre>
</div>
<br />
Other settings, such as the hostname, can also be configured here. After setting everything up, reboot the machine so that all settings are applied and to confirm the machine comes back up cleanly.<br />
<br />
<h4>
MaaS Installation</h4>
<br />
MaaS is easily installed using apt.<br />
<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo apt install maas
…
Applying metadataserver.0011_script_metadata... OK
Applying metadataserver.0012_store_script_results... OK
Applying metadataserver.0013_scriptresult_physicalblockdevice... OK
Applying metadataserver.0014_rename_dhcp_unconfigured_ifaces... OK
Applying metadataserver.0015_migrate_storage_tests... OK
Applying metadataserver.0016_script_model_fw_update_and_hw_config... OK
Applying metadataserver.0017_store_requested_scripts... OK
Applying metadataserver.0018_script_result_skipped... OK
Applying piston3.0002_auto_20151209_1652... OK
Applying sessions.0001_initial... OK
Applying sites.0001_initial... OK
Applying sites.0002_alter_domain_unique... OK
Setting up maas <span style="color: #333333;">(</span>2.4.2-7034-g2f5deb8b8-0ubuntu1<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>libc-bin <span style="color: #333333;">(</span>2.27-3ubuntu1<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>systemd <span style="color: #333333;">(</span>237-3ubuntu10.19<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>ureadahead <span style="color: #333333;">(</span>0.100.0-20<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>ufw <span style="color: #333333;">(</span>0.36-0ubuntu0.18.04.1<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>dbus <span style="color: #333333;">(</span>1.12.2-1ubuntu1<span style="color: #333333;">)</span> ...
Processing triggers <span style="color: #008800; font-weight: bold;">for </span>rsyslog <span style="color: #333333;">(</span>8.32.0-1ubuntu4<span style="color: #333333;">)</span> ...
<span style="color: #996633;">$ </span>sudo maas init
Create first admin account:
Username: admin
Password:
Again:
Email: ha@buihanotes.com
Import SSH keys <span style="color: #333333;">[]</span> <span style="color: #333333;">(</span>lp:user-id or gh:user-id<span style="color: #333333;">)</span>:
</pre>
</div>
<br />
After installing MaaS, confirm that we can open MaaS web interface by access to MaaS Server port 5240. In my case it is: <a href="http://10.23.23.2:5240/MAAS/#/dashboard">http://10.23.23.2:5240/MAAS/#/dashboard</a><br />
<br />
Two things worth doing right after the MaaS installation are<br />
<br />
<ol>
<li>Choose an OS image so MaaS can sync it from the remote repository</li>
<li>Synchronize your public key from GitHub or Launchpad so that you can SSH into the new virtual machines.</li>
</ol>
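Both can be done in the web UI. As a sketch, the same steps can also be scripted with the MAAS CLI; the profile name and key path below are assumptions, and depending on your MaaS version the apikey subcommand may be <b>maas apikey</b> instead of <b>maas-region apikey</b>:

```shell
$ sudo maas-region apikey --username=admin > api-key
$ maas login admin http://10.23.23.2:5240/MAAS/api/2.0/ - < api-key
$ maas admin boot-resources import                          # start syncing OS images
$ maas admin sshkeys create key="$(cat ~/.ssh/id_rsa.pub)"  # register your public key
```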
<h3>
Setup KVM Pods</h3>
From MaaS version 2.6, MaaS has an additional option to install KVM on a machine automatically. If you have an existing machine, you should probably add it as a host manually so as not to break things.<br />
<br />
To add the physical machine as a host, we add a KVM pod to MaaS. For the pod to talk to the physical machine, we give the maas user a private SSH key that allows it to SSH into the physical machine.<br />
<br />
<div style="background: #ffffff; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span><span style="color: #007020;">cd</span> /var/lib/maas
<span style="color: #996633;">$ </span>sudo mkdir .ssh <span style="color: #333333;">&&</span> sudo chown maas. .ssh <span style="color: #333333;">&&</span> sudo chmod 700 .ssh
<span style="color: #996633;">$ </span>sudo vim .ssh/id_rsa
<span style="color: #888888;">## PRIVATE KEY ##</span>
<span style="color: #888888;">## :wq</span>
<span style="color: #996633;">$ </span>sudo chown maas .ssh/id_rsa <span style="color: #333333;">&&</span> sudo chmod 600 .ssh/id_rsa
</pre>
</div>
<br />
<br />
Open MaaS Web UI and add pods with virsh address as follows<br />
<br />
<b id="docs-internal-guid-01952013-7fff-b02b-e064-45df9de68a71" style="font-weight: normal;"><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre;">qemu+ssh://$USERNAME@10.23.23.1/system</span></b><br />
<br />
Because the host's IP address as seen from the virtual machine is 10.23.23.1, we use that IP as the hypervisor endpoint. One thing to note is that <b>$USERNAME</b> must be able to talk to the local hypervisor without sudo; in other words, that user on the physical machine should belong to the <b>libvirt</b> <b>group</b>.<br />
<br />
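Before adding the pod in the UI, it is worth verifying from the MaaS VM that the maas user can actually reach the hypervisor (a sketch; substitute your own username and host IP):

```shell
$ sudo -u maas virsh -c qemu+ssh://$USERNAME@10.23.23.1/system list --all
```

If this lists the host's VMs without prompting for a password, the pod connection string will work.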
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-fOBZya1WbYs/XZM21Xo8MnI/AAAAAAAAJgc/fgxUOfB91J8sHBfkmli5tYk0gowKJLujQCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-01%2B20-21-43.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="264" src="https://3.bp.blogspot.com/-fOBZya1WbYs/XZM21Xo8MnI/AAAAAAAAJgc/fgxUOfB91J8sHBfkmli5tYk0gowKJLujQCK4BGAYYCw/s640/Screenshot%2Bfrom%2B2019-10-01%2B20-21-43.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Add Virtual Machine as KVM Pods</td></tr>
</tbody></table>
<span style="font-weight: normal;"><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br /></span></span><b id="docs-internal-guid-01952013-7fff-b02b-e064-45df9de68a71" style="font-weight: normal;"><span style="background-color: transparent; color: black; font-family: "arial"; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: 700; text-decoration: none; vertical-align: baseline; white-space: pre;"></span></b>
<br />
<h3>
Provision Virtual Machine</h3>
If every step works without problems, you should be able to see the pod summary, with the number of virtual machines and their specifications.<br />
<br />
In my case, I have a machine with 16 cores and 32 GB of RAM, and 3 storage pools of different sizes and specifications. <br />
<br />
<a href="http://1.bp.blogspot.com/--nHX2GEvi9g/XZM3R1hEB1I/AAAAAAAAJgo/beX_uBf2IoQmFIMDi0orwE6ZX2UHk-d7QCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-01%2B20-23-06.png" imageanchor="1"><img border="0" height="520" src="https://1.bp.blogspot.com/--nHX2GEvi9g/XZM3R1hEB1I/AAAAAAAAJgo/beX_uBf2IoQmFIMDi0orwE6ZX2UHk-d7QCK4BGAYYCw/s640/Screenshot%2Bfrom%2B2019-10-01%2B20-23-06.png" width="640" /></a><br />
<br />
Provisioning a machine is as simple as clicking<b> [Take action] </b>button and choose <b>[Compose]</b><br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://3.bp.blogspot.com/-PAgDi5XMuIA/XZM3nFXZO0I/AAAAAAAAJg0/9Sa2jX38X9ItUB0_r9GYDQxeYcoEOqYBgCK4BGAYYCw/s1600/Screenshot%2Bfrom%2B2019-10-01%2B20-24-48.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="378" src="https://3.bp.blogspot.com/-PAgDi5XMuIA/XZM3nFXZO0I/AAAAAAAAJg0/9Sa2jX38X9ItUB0_r9GYDQxeYcoEOqYBgCK4BGAYYCw/s640/Screenshot%2Bfrom%2B2019-10-01%2B20-24-48.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Compose a new Virtual machine by choosing its spec</td></tr>
</tbody></table>
<br />
<h3>
Summary </h3>
In this post, I described the steps to add a physical machine to a MaaS server, allowing MaaS to manage local computing resources and turning a local computer into a cloud-like environment. In the next posts, I will write in more detail about how to manage virtual machines and how to combine Juju with local MaaS to rapidly deploy experimental Linux environments.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-11495203951651702802019-09-09T13:41:00.000+09:002019-09-09T13:41:51.595+09:00Technical Notes: How to change Juju user password<h3>
Juju User </h3>
Juju has an <a href="https://jaas.ai/docs/working-with-multiple-users" target="_blank">internal user framework</a> which supports sharing controllers and models. Using Juju's user feature, a sysadmin can separate users per controller (as in multi-cloud deployments) or per system (as in multi-system deployments). Each Juju user can have its own permissions at the application, model, or controller level. This separation of responsibilities allows multiple users to manage multiple infrastructures with clear boundaries.<br />
<br />
At controller bootstrap, Juju automatically creates a user named admin with superuser privileges. This user can perform all operations at the controller level. We can confirm the current user and its privileges with <br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ juju whoami
Controller: kvm
Model: k8s
User: admin
$ juju users
Controller: kvm
Name    Display name  Access     Date created  Last connection
admin*  admin         superuser  23 hours ago  just now</code></pre>
<br />
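For example, additional users can be created and granted scoped access like this (the user and model names are illustrative):

```shell
$ juju add-user alice          # prints a registration string for alice
$ juju grant alice read k8s    # read-only access to the k8s model
$ juju grant alice add-model   # controller-level permission to create models
```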
<h3>
Authentication </h3>
A user must submit a username and password to log in to a controller. At bootstrap, the admin password is automatically generated by Juju, and the admin user is logged in by default with that generated password. After login, all communication with the controller is authenticated via "cookies", stored on the Juju client at <b>~/.local/share/juju/cookies/{controller_name}.json</b>. This file holds all the cookies the current user needs to authenticate against Juju controllers. In my case, the file is<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ ll ~/.local/share/juju/cookies/
total 16
drwx------ 2 telescreen telescreen 4096 9月 9 12:55 ./
drwx------ 5 telescreen telescreen 4096 9月 9 12:55 ../
-rw------- 1 telescreen telescreen 2998 9月 9 12:55 kvm.json
</code></pre>
<br />
If we delete this file by accident, the current user is logged out, and we need to submit a password to re-authenticate against the Juju controller.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ rm -rf ~/.local/share/juju/cookies/kvm.json
$ ll ~/.local/share/juju/cookies
total 12
drwx------ 2 telescreen telescreen 4096 9月 9 13:05 ./
drwx------ 5 telescreen telescreen 4096 9月 9 13:04 ../
$
$ juju models
please enter password for admin on kvm:
Controller: kvm
Model Cloud/Region Type Status Machines Cores Units Access Last connection
controller kvm maas available 1 1 - admin just now
default kvm maas available 0 - - admin 19 hours ago
k8s* kvm maas available 5 5 16 admin 12 minutes ago
</code></pre>
<br />
<br />
Because the password generated at controller bootstrap is unknown to us, it is a good idea to change it right after bootstrap. We can change the current user's password with<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ juju change-user-password
new password:
type new password again:
Your password has been changed.
</code></pre>
<br />
<h3>
Recover password</h3>
<br />
If we forget the administrator password and happen to be logged out of Juju, we need to recover it. The process of recovering the password is involved but doable.<br />
<br />
<h4>
Access the Database</h4>
First, we need to log into the Juju controller machine on which the Juju MongoDB is running as the PRIMARY instance. The SSH key for login is stored under <b>~/.local/share/juju/ssh/</b>, so we can log into the controller machine with<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ juju machines -m controller
Machine State DNS Inst id Series AZ Message
0 started 10.23.23.125 g6es7h bionic default Deployed
$
$ ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@10.23.23.125
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-60-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Sep 9 04:15:30 UTC 2019
System load: 0.26 Processes: 83
Usage of /: 66.1% of 7.27GB Users logged in: 0
Memory usage: 45% IP address for eth0: 10.23.23.125
Swap usage: 0%
* Congrats to the Kubernetes community on 1.16 beta 1! Now available
in MicroK8s for evaluation and testing, with upgrades to RC and GA
snap info microk8s
* Canonical Livepatch is available for installation.
- Reduce system reboots and improve kernel security. Activate at:
https://ubuntu.com/livepatch
0 packages can be updated.
0 updates are security updates.
Last login: Sun Sep 8 08:52:20 2019 from 10.23.23.1
ubuntu@many-shrew:~$
</code></pre>
<br />
<br />
Next, we need to log into the MongoDB instance used by Juju. The credentials of the MongoDB admin user are stored in <b>/var/lib/juju/agents/machine-0/agent.conf</b><br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>$ mongo --sslAllowInvalidCertificates --ssl -u admin -p $(sudo sed -n '/^oldpassword:/s/^oldpassword: //p' /var/lib/juju/agents/machine-0/agent.conf) 127.0.0.1:37017/admin
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:37017/admin
2019-09-09T04:19:58.587+0000 W NETWORK [thread1] SSL peer certificate validation failed: unable to get local issuer certificate
2019-09-09T04:19:58.587+0000 W NETWORK [thread1] The server certificate does not match the host name. Hostname: 127.0.0.1 does not match CN: *
MongoDB server version: 3.6.3
Server has startup warnings:
2019-09-08T04:07:00.871+0000 I STORAGE [initandlisten]
2019-09-08T04:07:00.871+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-09-08T04:07:00.871+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-09-08T04:07:02.872+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-09-08T04:07:02.872+0000 I CONTROL [initandlisten]
juju:PRIMARY>
juju:PRIMARY> use juju
switched to db juju
juju:PRIMARY> db.users.find().pretty()
{
"_id" : "admin",
"name" : "admin",
"displayname" : "admin",
"passwordhash" : "il/ap8gXWEUp8x1T48bIwOxm",
"passwordsalt" : "3YTbonyqTpmWCvtQ",
"createdby" : "admin",
"datecreated" : ISODate("2019-09-08T04:07:11Z"),
"txn-revno" : NumberLong(7),
"txn-queue" : [
"5d75d082a20e030edc1539c8_953a034a"
]
}
</code></pre>
<br />
<br />
<h4>
Password Hash</h4>
According to [1], Juju user passwords are hashed using PBKDF2 [2] with the following parameters: <br />
<ul>
<li>Number of hash Iterations: 8192</li>
<li>Hash algorithm: SHA-512</li>
<li>Salt length: 12 bytes [3] </li>
<li>Derived key (hash) length: 18 bytes</li>
</ul>
We can easily generate the hash for any password using the same parameters. For example, in Python we can generate a password hash with<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>>>> import hashlib, binascii
>>> dk = hashlib.pbkdf2_hmac('sha512', b'test', b'4AIANdqIiPvpZUS2', 8192, 18)
>>> binascii.b2a_base64(dk)
b'iJW8NpnYPG4iwfzQoMkLfY2z\n'
</code></pre>
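The whole procedure (random salt plus hash) can also be sketched as a small script. This is my own sketch based on the parameters above; the helper names <code>new_salt</code> and <code>juju_password_hash</code> are mine, not Juju's. Note that the PBKDF2 salt input is the 16-character base64 <i>string</i> itself, which is why the snippet above passes b'4AIANdqIiPvpZUS2' directly.

```python
import base64
import hashlib
import os

def new_salt() -> str:
    # 12 random bytes, base64-encoded into a 16-character string,
    # matching the "passwordsalt" values seen in the users collection.
    return base64.b64encode(os.urandom(12)).decode()

def juju_password_hash(password: str, salt: str) -> str:
    # PBKDF2-HMAC-SHA512, 8192 iterations, 18-byte derived key,
    # base64-encoded; the base64 salt string is the actual salt input.
    dk = hashlib.pbkdf2_hmac('sha512', password.encode(), salt.encode(), 8192, 18)
    return base64.b64encode(dk).decode()

salt = new_salt()
print(salt, juju_password_hash('test', salt))
```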
<br />
<h4>
Update password</h4>
Finally, we can recover access by updating the "passwordhash" and "passwordsalt" fields with the MongoDB update command. Because this form of update overwrites the whole document, we need to include all the other fields with their old values.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: "andale mono" , "lucida console" , "monaco" , "fixed" , monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code>db.users.update({_id: "admin"},
{
"_id" : "admin",
"name" : "admin",
"displayname" : "admin",
"passwordhash" : "iJW8NpnYPG4iwfzQoMkLfY2z",
"passwordsalt" : "4AIANdqIiPvpZUS2",
"createdby" : "admin",
"datecreated" : ISODate("2019-09-08T04:07:11Z"),
"txn-revno" : NumberLong(8),
"txn-queue" : [
"5d75d5f3a20e030edc1544bd_3d2483d7"
]
})
</code></pre>
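As a side note, if we prefer not to restate the whole document, MongoDB's <b>$set</b> operator updates only the named fields and leaves the rest untouched. This is my own variation, not the original procedure; I have not verified whether Juju's transaction layer also requires bumping "txn-revno" the way the full-document update above does.

```
// Update only the two password fields; all other fields keep their values.
db.users.update({_id: "admin"},
    {$set: {
        "passwordhash" : "iJW8NpnYPG4iwfzQoMkLfY2z",
        "passwordsalt" : "4AIANdqIiPvpZUS2"
    }})
```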
<br />
Now we can log into the controller with the new password!<br />
<br />
<h3>
Reference</h3>
1. <a href="https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L65">https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L65</a><br />
2. <a href="https://en.wikipedia.org/wiki/PBKDF2">https://en.wikipedia.org/wiki/PBKDF2</a><br />
3. <a href="https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L49">https://github.com/juju/utils/blob/c746c6e86f4fb2a04bc08d66b7a0f7e900d9cbab/password.go#L49</a><br />
<br />
<h2>Technical Notes: How to remove juju application in error state (2019-07-25)</h2>
I have been playing with Juju for application deployment recently as part of my current job. Most of the time, I just need to run `<i>juju deploy application</i>` and Juju sorts out all the settings, no matter what cloud I am using. However, in some cases Juju refuses to follow my orders, which makes me really frustrated.<br />
<br />
For example, the following deployment will definitely fail because revision 24 of the grafana charm was not updated to follow changes in upstream Grafana.<br />
<br />
<i>$ juju deploy grafana-24</i><br />
<br />
The deployment will get stuck with the following juju status:<br />
<br />
<div style="background: #ffffff; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>juju status
Model Controller Cloud/Region Version SLA Timestamp
experiment stark-kvm stark-kvm 2.6.5 unsupported 16:53:31+09:00
App Version Status Scale Charm Store Rev OS Notes
grafana error 1 grafana jujucharms 24 ubuntu
Unit Workload Agent Machine Public address Ports Message
grafana/0* error idle 0 10.4.23.120 hook failed: <span style="background-color: #fff0f0;">"install"</span>
Machine State DNS Inst id Series AZ Message
0 started 10.4.23.120 tidy-tick bionic default Deployed
</pre>
</div>
<br />
<br />
When Juju gets stuck at this stage due to the hook error, we cannot remove the application. Worse, if you try <i>`juju remove-application grafana`</i> without the <b>--force</b> or <b>--no-wait</b> flags, any subsequent removal command will also fail. In other words, the application refuses to be removed until you resolve its internal errors.<br />
<br />
My colleagues suggested three ways to resolve this situation.<br />
<br />
1. Resolve the underlying issue and run <i>`juju resolve grafana/0`</i><br />
2. Operational hack 1: replace the failing hook script with one that always returns a success status code (exit code 0).<br />
3. Operational hack 2: run <i>`juju debug-hook grafana/0`</i>, wait for the hook context to load, and exit immediately. This sends positive feedback to the Juju controller, letting it perform the next action (our removal command).<br />
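For reference, this is what the forced removal mentioned above looks like; the <b>--force</b> and <b>--no-wait</b> flags are the ones discussed in the preceding paragraph:

```
$ juju remove-application grafana --force --no-wait
```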
<br />
I believe that users should be able to remove an application whenever they want, no matter what problem is occurring. It seems the Juju developers thought the same, as they triaged several related bugs. However, as of this writing the issue has not been fixed (my version, 2.6.5-bionic-amd64, still has it). Until it is fixed, there is no way around it but to really resolve the issue or to "hack" it.<br />
<br />
<h2>Trading Review: No suitable stop loss and Premature Quit (2019-06-20)</h2>
I made two trading mistakes today as a beginner: no suitable stop loss and premature exit. I want to discuss each in more detail to help my future self.<br />
<br />
<h4>
1. No suitable stop loss</h4>
Two weeks ago, on May 31, I observed a big sell-off of USD against JPY. The sell-off happened because the US and China could not agree on trade terms. The USD price fell because traders worried that if the trade war prolonged, the US economy might get worse, so they sold the dollar to park their assets in a more stable currency, the JPY. I myself did not think the trade war would prolong [1]; the leaders of the two countries would act once they saw enough struggle from their domestic companies. I also observed that historically, since Trump became POTUS, he had not once tried to devalue the USD against other currencies to boost America's economy. For the above reasons, I bet that USD would rise against JPY after a while, so I entered a USD/JPY position by buying USD 50,000 at the following distribution:<br />
<br />
USD$10,000 at 109.600<br />
USD$20,000 at 109.250<br />
USD$20,000 at 108.938<br />
<br />
At that time, I thought the USD price would bounce back soon [2], so <b>I did not put any stop loss orders on the above positions</b>. <br />
<br />
At the monthly FOMC meeting, the Fed decided to keep the FFR at 2.25 but presented a dovish view of the economic outlook ahead: no rate hike this year and at most one rate cut next year, while showing full support for economic expansion. This confirmed traders' assumptions so far but was more dovish than expected, which made traders worry more about the USD; the price fell to the next level and stayed at 107.650. Yesterday, when the USD was around 108.400, I thought about selling to cut my loss, but <b>somehow I refrained from doing it, fearing a minor loss</b>. This morning, seeing Japanese traders dump USD, I decided to get out of the position with a loss of around <i>65,000 JPY</i>.<br />
<br />
While my assumption at [1] may finally have proved right when, early this week, Trump tweeted that he and Xi would talk about the trade deal at the G20, I entered the position too soon, trying to catch a falling knife. My first mistake was being too naive in thinking the USD would bounce back soon. My second mistake was that I should have cut my loss earlier last night. This is the second time I have carried a position over an FOMC meeting; the last time I "enjoyed" a 250,000 JPY loss. I should not let it proceed to a third time. Finally, my biggest mistake as a beginner trader was not placing a stop loss after my entry.<br />
<br />
I should learn these lessons: <b>put a stop loss order with every buy order, cut losses sooner, and stay away from major events in future trading.</b><br />
<br />
<h4>
2. Premature Exit</h4>
I have been holding SoftBank Group stock for two weeks and enjoyed its price rise. At 7.8% profit, I decided it was time to sell, so I put in a sell order at 10,255. Five minutes later a notification told me the order had been fully executed. I took the profit of 76,300 JPY, which is not bad at all. However, after my sell order, the SBG price continued to rise and closed at 10,515. If I had waited until closing time, I would have enjoyed a profit of around 10%. <b>I made a small mistake by exiting too soon, and it cost me 3% of profit, or around 30,000 JPY.</b><br />
<br />
<b>I should be more patient and confirm the downtrend before exiting in future trades.</b><br />
<br />
By the way, looking at the weekly price graph, SoftBank seems set to rise again, so I will probably get in again tomorrow.<br />
<br />
<br />
<h2>Technical Note: Dell XPS 9370 dynamic brightness control (2019-05-26)</h2>
Recently, while using Windows 10 Pro, I noticed that my XPS 9370 adapts its screen brightness according to the content displayed on it. When I view websites with a bright color theme, the screen displays the content at its maximum brightness. However, when I view websites with a dark color theme, the screen automatically lowers its brightness accordingly.<br />
<br />
This adaptation is good in principle, because lowering display brightness reduces power consumption and is easier on the eyes. However, my Dell laptop reacts too strongly to minor changes in content brightness: a large, noticeable brightness jump happens even for small changes in color theme, and the adjustment lags. I switch between windows of multiple applications often enough to find this adaptation and its latency annoying, so I decided to turn the feature off.<br />
<br />
It turned out to be quite difficult to find where to turn this feature off. At first I assumed it was a Windows feature, so naturally I went to the Windows display settings. To my surprise, there was no option for adjusting automatic brightness adaptation. A quick look at the power settings turned up nothing either. Somebody on the internet suggested looking at the Intel Graphics Settings; I checked there and found nothing. I despaired, thinking it was a hardware quirk I could do nothing about.<br />
<br />
Luckily, I was wrong this time. Googling "Dell XPS 9370 brightness" led me to Dell Support. Someone was annoyed like me and bothered to open <a href="https://www.dell.com/community/XPS/XPS-13-9370-Disable-Dynamic-Brightness-Control/td-p/6123512" target="_blank">a Dell support ticket</a>. It turns out that dynamic brightness control is a feature and it can be turned off in the BIOS settings. I immediately restarted my computer, pressed [F2] to enter the BIOS settings, went to Power, and disabled the feature. The annoying brightness adaptation disappeared.<br />
<br />
It is intriguing that this feature only manifests on Windows 10 Pro; my dual-boot Ubuntu 19.04 shows no such problem. This proves that sometimes a lag in an OS's hardware support is good for end users :-)<br />
<br />
<h2>Technical Memo: DynamicUser in systemd (2019-05-14)</h2>
Recently, I received a really interesting question from a customer. He found in his file system two files (directories) belonging to a user that is not in <b>/etc/passwd</b>. Specifically, on Ubuntu Linux 18.04, these two files are<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;"></span>/var/lib/private/systemd
/var/lib/private/systemd/timesync
</pre>
<br />
These two files belong to a user named <i>systemd-timesync</i> with UID/GID 62583, and this user is not in <b>/etc/passwd</b>.<br />
<br />
Traditionally, a Linux user does not always need to be in <b>/etc/passwd</b>, because users can come from many remote sources, for example:<br />
<ul>
<li>LDAP</li>
<li>NIS / NIS+ server</li>
<li>Windows Domain Controller Server</li>
<li>... </li>
</ul>
All current users, both from <b>/etc/passwd</b> and from remote sources, can be queried with the <b>getent</b> command [1]: <br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>getent passwd
</pre>
</div>
<br />
Unfortunately, our <i>systemd-timesync</i> user does not appear in the <b>getent</b> output either.<br />
<br />
A quick Google search points to <a href="http://0pointer.net/blog/dynamic-users-with-systemd.html" target="_blank">an article</a> [2] written by systemd developer <a href="https://en.wikipedia.org/wiki/Lennart_Poettering" target="_blank">Lennart Poettering</a>, explaining the DynamicUser feature. It turns out that DynamicUser was introduced in systemd version 235. My client's environment runs Ubuntu 18.04 with systemd version 237, so his system supports the feature.<br />
<br />
The DynamicUser feature tries to solve the problem of user sprawl on Linux systems. Modern Linux systems have many system users, whose purpose is to run system processes, and a smaller number of human users. The trouble is that some packages, when removed from the system, do not remove their system users properly, leaving floating users behind. Worse, files created by system processes are left with the old users' UID/GID; a user later created with the same UID/GID gains total control over these files, which weakens system security. DynamicUser solves this by letting a service define a system user that exists only at runtime: when a service with DynamicUser ends, the dynamic user is removed automatically. We can enable the feature by adding <b>DynamicUser=yes</b> to a systemd service file, and systemd will set up everything for us.<br />
<br />
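A minimal service file using the feature might look like the following sketch. The unit name and the command it runs are hypothetical; <b>DynamicUser</b> and <b>StateDirectory</b> are the relevant settings. With DynamicUser enabled, StateDirectory is what places the service's data under /var/lib/private/, exactly as observed with the timesync files above.

```
# /etc/systemd/system/demo-dynamic.service  (hypothetical unit)
[Unit]
Description=Demo service running as a runtime-only dynamic user

[Service]
DynamicUser=yes
# Created as /var/lib/private/demo-dynamic, with /var/lib/demo-dynamic
# as a symlink to it, so the data survives while the user does not.
StateDirectory=demo-dynamic
ExecStart=/bin/sleep infinity
```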
In Ubuntu 18.04, <b>systemd-timesyncd</b> uses this feature, which is why my client saw files owned by a <i>systemd-timesync</i> user that is not in <b>/etc/passwd</b>. Unfortunately, at the time of this post the feature seems buggy, so the Ubuntu developers decided to turn it off in a later release. In Ubuntu 19.04 (Disco Dingo) and later, systemd-timesyncd does not use DynamicUser, so we can expect the <i>systemd-timesync</i> user to appear in /etc/passwd again.<br />
<br />
<b>References</b><br />
<ol>
<li>https://askubuntu.com/questions/504971/is-it-possible-for-users-not-to-be-in-passwd</li>
<li>http://0pointer.net/blog/dynamic-users-with-systemd.html</li>
</ol>
<br /></div>
<h2>A Review of Kubernetes Tokyo Meetup #18 (2019-04-24)</h2>
In a time when every cool developer talks about infrastructure software,
a meetup of infrastructure technology is certainly getting a lot of
attention. In Tokyo, the Tokyo k8s meetup is one of the most popular communities, where k8s enthusiasts gather and share their knowledge about this rising infrastructure technology. This Monday, Tokyo Kubernetes Meetup #18 was held, and my colleague <a href="http://wenhan.blog/" target="_blank">Wenhan</a> and I fortunately had the chance to join it.<br />
<br />
Never before had I realized that k8s was becoming such a hot technology. The attention can be felt from the number of registrations: the event hall's capacity was only 170 people, yet more than 500 people had registered (at the time I registered). The organizer had to hold a ticket lottery and select participants at random. I was <span style="color: #38761d;"><b>lucky enough</b></span> to win a spot for the event.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-pqJq1RHAKQA/XL_M-6aa1dI/AAAAAAAAH84/0ywS6tIgjx0azvRpPfNSu3HDhxcusFxFQCLcBGAs/s1600/Tokyo%2Bk8s%2Bmeetup%2B%252318%2Bregistration.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="689" data-original-width="1070" height="411" src="https://4.bp.blogspot.com/-pqJq1RHAKQA/XL_M-6aa1dI/AAAAAAAAH84/0ywS6tIgjx0azvRpPfNSu3HDhxcusFxFQCLcBGAs/s640/Tokyo%2Bk8s%2Bmeetup%2B%252318%2Bregistration.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The number of registrations was 5 times the capacity of the event hall. The organizer had to randomly select participants by lottery</td></tr>
</tbody></table>
<br />
When I arrived at the event, it was filled with developers busy typing notes. The participants all looked quite young, mostly in their late twenties or early thirties, and mostly from young internet companies such as Line, Mercari, CyberAgent, Mixi, and DeNA. I recognized some frequent members who also attend other communities, such as the Kafka and cloud computing communities. I also recognized a manager from KDDI; the last time we talked, he told me he was trying to build a cloud computing team inside the corporation but was having a hard time recruiting the right people and choosing the right technology stack. Perhaps he is still working on it.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-sBeWmMCrOrE/XL_PeQPbnFI/AAAAAAAAH9k/ULKh8aVcHHAM5Crt_HNlHul9ODT5k-XyACKgBGAs/s1600/IMG_20190422_191900.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://3.bp.blogspot.com/-sBeWmMCrOrE/XL_PeQPbnFI/AAAAAAAAH9k/ULKh8aVcHHAM5Crt_HNlHul9ODT5k-XyACKgBGAs/s640/IMG_20190422_191900.jpg" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Developers from technology company in Tokyo joining k8s</td></tr>
</tbody></table>
<br />
The agenda for the meetup comprised two parts:<br />
<ul>
<li>Keynote presentations from community leaders</li>
<li>Lightning presentations by volunteers</li>
</ul>
<br />
There was a short break between the two sessions, during which the event sponsor provided burgers and drinks for participants. I got two burgers and a coke.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/-Bu_PqjelGdU/XL_QgB5QHKI/AAAAAAAAH90/4QViUH9cLAIE7_sqt3L3T9fiNBMBq9P9ACKgBGAs/s1600/IMG_20190422_205214.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://3.bp.blogspot.com/-Bu_PqjelGdU/XL_QgB5QHKI/AAAAAAAAH90/4QViUH9cLAIE7_sqt3L3T9fiNBMBq9P9ACKgBGAs/s640/IMG_20190422_205214.jpg" width="640" /> </a></td><td style="text-align: center;"> </td><td style="text-align: center;"> </td></tr>
<tr><td class="tr-caption" style="text-align: center;">Small burger and drinks provided by sponsors. I chose a coke for myself</td></tr>
</tbody></table>
<br />
The keynote presentations were given by three well-known members of the k8s community. The topics were:<br />
<br />
<ol>
<li><a href="https://www.slideshare.net/pfi/kubernete-meetup-tokyo-18-kubebuildercontrollerruntime?ref=https://twitter.com/i/cards/tfw/v1/1120236914609541120?cardname=player&autoplay_disabled=true&earned=true&edge=true&lang=ja&card_height=130&scribe_context=%7B%22client%22%3A%22web%22%2C%22page%22%3A%22profile%22%2C%22section%22%3A%22permalink_overlay%22%2C%22component%22%3A%22tweet%22%7D&bearer_token=AAAAAAAAAAAAAAAAAAAAAPYXBAAAAAAACLXUNDekMxqa8h%252F40K4moUkGsoc%253DTYfbDKbT3jJPCEVnMYqilB28NHfOPqkca3qaAxGfsyKCs0wRbw" target="_blank">kubebuilder/controller-runtime 入門</a> (30min) by Shingo Omura <a href="https://twitter.com/everpeace" target="_blank">(@everpeace)</a> </li>
<li><a href="https://speakerdeck.com/masayaaoyama/k8sjp22-c-native" target="_blank">Container-native ingress controller with kubebuilder/(Admission Webhook)</a> (30min) by MasayaAoyama (<a href="https://twitter.com/amsy810" rel="nofollow">@amsy810</a>)</li>
<li><a href="https://speakerdeck.com/b4b4r07/kubernetes-manifests-management-and-operation-in-mercari" target="_blank">メルカリのマイクロサービスにおける Kubernetes manifest の運用</a> (30min) by Masaki Ishiyama (<a href="https://twitter.com/b4b4r07" rel="nofollow">@b4b4r07</a>)</li>
</ol>
<br />
I attached links for each presentation above.<br />
<br />
In summary, the first presentation laid out the interfaces of <a href="https://github.com/kubernetes-sigs/kubebuilder" target="_blank">kubebuilder</a> in detail and the steps to build a k8s controller with this SDK. The Go interface is quite clean, and I can roughly imagine the steps I would take to build a controller from this material. I think the presentation could have been better if the author had given listeners a bird's-eye overview of the architecture before digging into the details.<br />
<br />
The second presentation was the one I found most interesting in this meetup. Aoyama, as usual, gave a simple and straightforward overview of container networking technology. He detailed the difference between traditional networking and container-native networking. I was absolutely surprised to hear about the GKE L7 load balancer's ability to recognize Pod networks. I still remember thinking for a while about how to provide connectivity to a pod from host machines, and all I came up with was to NAT the pods' overlay network to the host's virtual network (the situation was more complicated from the host's viewpoint). Alibaba Cloud's load balancer simply did not provide Pod network awareness. This cloud-native network support in the GKE L7 LB made me want to try that cloud out. <br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-Q45TGAZHIxw/XL_WHHw2gZI/AAAAAAAAH-A/U57uJxijLZokiiNSh8Q9E7MjnRlY6X61ACLcBGAs/s1600/GKE%2BCloud%2BNative%2BL7%2BLoad%2BBalancer.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="809" data-original-width="1430" height="362" src="https://4.bp.blogspot.com/-Q45TGAZHIxw/XL_WHHw2gZI/AAAAAAAAH-A/U57uJxijLZokiiNSh8Q9E7MjnRlY6X61ACLcBGAs/s640/GKE%2BCloud%2BNative%2BL7%2BLoad%2BBalancer.png" width="640" /> </a></td><td style="text-align: center;"> </td><td style="text-align: center;"> </td></tr>
<tr><td class="tr-caption" style="text-align: center;">GKE Cloud-Native L7 Load Balancer are able to aware Pods Networks and distribute loads directly to containers without the necessary of traditional host's NAT.</td></tr>
</tbody></table>
After explaining the cloud-native GKE implementation, Aoyama presented his team's work over the last year in more detail. In a cloud environment, all k8s controllers are provided by the cloud provider, so we do not need to care about their implementation details. In an on-premise environment, however, support for cloud-native k8s is not yet mature: appliance high-performance load balancers still only support traditional physical hosts and networks. So the team built their own L7 load balancer based on <a href="https://github.com/kubernetes/ingress-nginx" target="_blank">nginx-ingress</a> and <a href="https://www.projectcalico.org/" target="_blank">calico</a>.<br />
<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-9mxAI1eA1a0/XL_hknAtMAI/AAAAAAAAH-M/X_HjLZdg_6Qgchm9P118azGpY0lHPMUxgCLcBGAs/s1600/L7%2BLoad%2BBalancer%2Busing%2Bnginx%2Bingress%2Band%2Bcustom%2Bbuilt%2Bcontroller.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="794" data-original-width="1425" height="356" src="https://2.bp.blogspot.com/-9mxAI1eA1a0/XL_hknAtMAI/AAAAAAAAH-M/X_HjLZdg_6Qgchm9P118azGpY0lHPMUxgCLcBGAs/s640/L7%2BLoad%2BBalancer%2Busing%2Bnginx%2Bingress%2Band%2Bcustom%2Bbuilt%2Bcontroller.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">L7 Load Balancer using Nginx-ingress / calico and custom built controller</td></tr>
</tbody></table>
<br />
The last presentation covered Mercari's experience with its k8s deployment pipeline. Mercari is a unicorn internet company with rising popularity among developers. Mercari maintains its fast-growing services by separating its system into micro-services; it currently runs over 100 micro-services, maintained by more than 200 engineers. To handle daily work across such a large number of micro-services, engineers need well-defined workflows with a high degree of automation. In their infrastructure they use GitOps and Spinnaker for pipelines, and they chose a monorepo style as the starting point because it is easy to begin with. They also detailed the organization of their repositories and their use of a custom-made linter called stein to validate deployment source code. I think Ishiyama provided a good lesson in cloud-native application management and operations.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-6EyOrUaBJ30/XL_irbj0AnI/AAAAAAAAH-U/sa890QXGPk4mGYH4yq-2NxnZ6JLomq0PgCLcBGAs/s1600/mercari%2Bpipeline.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="735" data-original-width="1380" height="340" src="https://2.bp.blogspot.com/-6EyOrUaBJ30/XL_irbj0AnI/AAAAAAAAH-U/sa890QXGPk4mGYH4yq-2NxnZ6JLomq0PgCLcBGAs/s640/mercari%2Bpipeline.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Mercari deployment pipeline</td></tr>
</tbody></table>
<br />
Overall, I think all three presentations were very intriguing, and I will definitely join this event again. The #19 Tokyo k8s meetup will be held on May 31, this time with a presentation by <a href="https://www.linkedin.com/in/jbeda/" target="_blank">Joe Beda</a>, one of the creators of k8s. I find the May 31 event promising because Joe now works for VMware, and I hope to learn more about VMware's involvement in the cloud-native world.<br />
<br />
<h2>Technical Memo: Apparmor starting errors in Ubuntu Bionic 18.04 (2019-04-24)</h2>
<h3>
1. System Degradation Issue </h3>
As usual, after installing some modules and modifying some user privileges, I rebooted my home desktop computer. I always check the system's startup health after a reboot, and today was no different.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>systemctl status
● stark
State: <span style="color: red;">degraded</span>
Jobs: 0 queued
Failed: 1 units
Since: Tue 2019-04-23 13:37:13 JST; 0h 4min ago
CGroup: /
├─user.slice
│ ├─user-1000.slice
│ │ ├─user@1000.service
... </pre>
<pre style="line-height: 125%; margin: 0;"> </pre>
</div>
<h3>
2. What was the problem?</h3>
Hmm... I was quite surprised by this degradation of my system, so I looked for the unit that failed to start at boot time.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>systemctl list-units --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● apparmor.service loaded failed failed AppArmor initialization
<span style="color: #996633;">LOAD</span> <span style="color: #333333;">=</span> Reflects whether the unit definition was properly loaded.
<span style="color: #996633;">ACTIVE</span> <span style="color: #333333;">=</span> The high-level unit activation state, i.e. generalization of SUB.
<span style="color: #996633;">SUB</span> <span style="color: #333333;">=</span> The low-level unit activation state, values depend on unit type.
1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use <span style="background-color: #fff0f0;">'systemctl list-unit-files'</span>.
<span style="color: #996633;">$ </span>systemctl status apparmor
● apparmor.service - AppArmor initialization
Loaded: loaded <span style="color: #333333;">(</span>/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled<span style="color: #333333;">)</span>
Active: failed <span style="color: #333333;">(</span>Result: <span style="color: #007020;">exit</span>-code<span style="color: #333333;">)</span> since Wed 2019-04-24 10:06:09 JST; 5min ago
Docs: man:apparmor<span style="color: #333333;">(</span>7<span style="color: #333333;">)</span>
http://wiki.apparmor.net/
Process: 20197 <span style="color: #996633;">ExecStop</span><span style="color: #333333;">=</span>/etc/init.d/apparmor stop <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
Process: 20208 <span style="color: #996633;">ExecStart</span><span style="color: #333333;">=</span>/etc/init.d/apparmor start <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>123<span style="color: #333333;">)</span>
Main PID: 20208 <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>123<span style="color: #333333;">)</span>
...
4月 24 10:06:07 stark apparmor<span style="color: #333333;">[</span>20208<span style="color: #333333;">]</span>: Warning from /var/lib/snapd/apparmor/profiles/snap.lxd.migrate <span style="color: #333333;">(</span>/var/lib/snapd/apparmor/profiles/snap.lxd.migrate line 484<span style="color: #333333;">)</span>: Unconfined <span style="color: #007020;">exec </span>qualifier <span style="color: #333333;">(</span>ux<span style="color: #333333;">)</span> allows some dangerous environment variables to be passed to the unconfined process; <span style="background-color: #fff0f0;">'man 5 apparmor.d'</span> <span style="color: #008800; font-weight: bold;">for </span>details.
4月 24 10:06:09 stark apparmor<span style="color: #333333;">[</span>20208<span style="color: #333333;">]</span>: ...fail!
4月 24 10:06:09 stark systemd<span style="color: #333333;">[</span>1<span style="color: #333333;">]</span>: apparmor.service: Main process exited, <span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>123/n/a
4月 24 10:06:09 stark systemd<span style="color: #333333;">[</span>1<span style="color: #333333;">]</span>: apparmor.service: Failed with result <span style="background-color: #fff0f0;">'exit-code'</span>.
4月 24 10:06:09 stark systemd<span style="color: #333333;">[</span>1<span style="color: #333333;">]</span>: Failed to start AppArmor initialization.
</pre>
</div>
<br />
<div style="line-height: 125%; margin: 0;">
<span style="background-color: white;">AppArmor seems to fail to initialize at startup. This is quite surprising, because I didn't modify anything AppArmor-related. I checked the status of the apparmor service to see what the problem might be.</span></div>
<div style="line-height: 125%; margin: 0;">
<br /></div>
Well, nothing special here: the <b>apparmor</b> service complains about some unconfined exec qualifiers and then dies. Warnings alone should not make a process fail to start, because they are just warnings. So I looked at the journal logs for apparmor.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>journalctl -u apparmor
...
4月 23 16:25:41 stark apparmor<span style="color: #333333;">[</span>14865<span style="color: #333333;">]</span>: AppArmor parser error <span style="color: #008800; font-weight: bold;"> for</span> /etc/apparmor.d/usr.sbin.dhcpd in /etc/apparmor.d/usr.sbin.dhcpd at line 70: Could not open <span style="background-color: #fff0f0;">'dhcpd.d'</span>
</pre>
</div>
<br />
The parser error at line 70 of the dhcpd profile is the culprit that caused AppArmor to fail to start. That line contains the following #include:<br />
<br />
<b> # allow packages to re-use dhcpd and provide their own specific directories</b><br />
<b> #include &lt;dhcpd.d&gt;</b><br />
<br />
It seems that AppArmor's dhcpd profile looks for a dhcpd.d directory under <b>/etc/apparmor.d/</b> but cannot find it. So, in theory, if I create this directory, the parser should succeed and the problem should disappear.<br />
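Before (re)creating anything, you can sanity-check which #include targets of a profile are actually missing on disk. The sketch below is my own rough approximation — <i>missing_includes</i> is a name I made up, and real AppArmor include handling (search paths, quoted includes, abstractions) is richer than this:

```shell
# List the "#include <...>" targets of an AppArmor profile that do not
# exist under the given base directory (normally /etc/apparmor.d).
# missing_includes is my own helper name; this is a rough sketch only.
missing_includes() {
    profile="$1"
    base="$2"
    sed -n 's/^[[:space:]]*#include <\(.*\)>.*/\1/p' "$profile" |
    while read -r inc; do
        [ -e "$base/$inc" ] || echo "$base/$inc"
    done
}

# Example: missing_includes /etc/apparmor.d/usr.sbin.dhcpd /etc/apparmor.d
```

On my machine this would print /etc/apparmor.d/dhcpd.d, matching the parser error above.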
<br />
<h3>
3. Temporary Solution</h3>
So I go ahead and create the directory:<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo mkdir /etc/apparmor.d/dhcpd.d
<span style="color: #996633;">$ </span>sudo systemctl start apparmor
<span style="color: #996633;">$ </span>systemctl status apparmor
● apparmor.service - AppArmor initialization
Loaded: loaded <span style="color: #333333;">(</span>/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled<span style="color: #333333;">)</span>
Active: active <span style="color: #333333;">(</span>exited<span style="color: #333333;">)</span> since Wed 2019-04-24 10:50:18 JST; 5s ago
Docs: man:apparmor<span style="color: #333333;">(</span>7<span style="color: #333333;">)</span>
http://wiki.apparmor.net/
Process: 20197 <span style="color: #996633;">ExecStop</span><span style="color: #333333;">=</span>/etc/init.d/apparmor stop <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
Process: 11297 <span style="color: #996633;">ExecStart</span><span style="color: #333333;">=</span>/etc/init.d/apparmor start <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
Main PID: 11297 <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
...
<span style="color: #996633;">$ </span>systemctl status
● stark
State: <span style="color: #38761d;"><b>running</b></span>
Jobs: 0 queued
Failed: 0 units
</pre>
</div>
<br />
Ok, it works!<br />
<br />
<h3>
4. Why?</h3>
I googled around to see whether I was the only person seeing this issue. I did find a <a href="https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1453088" target="_blank">similar issue dating back to 2015</a>. The cause at that time was that Ubuntu developers forgot to add the empty dhcpd.d directory to the built package. However, that issue was fixed and released in <b>2.8.95~2430-0ubuntu5.2</b>. My apparmor package version is <b>2.12-4ubuntu5.1</b> (apt-cache showpkg apparmor), so it should not be affected at all. I was curious whether my apt updates were related, so I searched the apt logs for apparmor.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>grep apparmor /var/log/apt/term.log
Skipping profile in /etc/apparmor.d/disable: usr.sbin.squid
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ...
</pre>
</div>
<br />
Two promising lines did appear, but neither is related to my issue.<br />
For now, I still need to finish my new-employee training plan, so I will leave this issue as it is. If the issue reappears after some modification, I will definitely raise a bug ticket on <a href="https://bugs.launchpad.net/ubuntu" target="_blank">Ubuntu Launchpad</a>.
<h2>Technical Note: Roles and Grants for a newly installed PostgreSQL (2019-04-19)</h2>
I'm familiar with MySQL, yet I have always wanted to use PostgreSQL for its feature richness. Because of the differences between the two databases, I always find it hard to remember the initial setup. Today it took me a while to re-navigate the official Postgres documentation before I could create a role and grant permissions to it, so I think it's useful to note these tasks for future reference.<br />
<br />
Steps to set up Postgres:<br />
<ol>
<li>Allow local role to access using password</li>
<li>Create Role and Grant Permission </li>
</ol>
<h2>
Manual</h2>
<br />
<h3>
1. Allow password authentication to local role</h3>
<br />
Postgres comes with role management mechanisms that differ from MySQL's. By default, local users connect to Postgres through a Unix socket without a password: Postgres delegates authentication to Linux, which means a user connecting locally needs not only a role in Postgres but also a matching Linux account. For local development this is quite inconvenient, so I switch local user authentication to password-based as well.<br />
<br />
The Postgres authentication mechanism is configured in pg_hba.conf. On Ubuntu 19.04 (Disco) with Postgres installed from apt, the file is stored at /etc/postgresql/11/main/pg_hba.conf.<br />
<br />
Open the file with edit permission (which means using sudo on Ubuntu), look for the following lines, and change the METHOD from peer to md5:
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.2em; width: auto;">
<pre style="line-height: 125%; margin: 0;">...
<span style="color: #888888;"># "local" is for Unix domain socket connections only</span>
--- <span style="color: #007020;">local </span>all all peer
+++ <span style="color: #007020;">local </span>all all md5
...
</pre>
</div>
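The same one-line change can also be scripted. A minimal sketch — <i>enable_local_md5</i> is my own helper name, and the regex assumes the stock "local all all peer" line layout:

```shell
# Switch local (Unix-socket) authentication from peer to md5 in a
# pg_hba.conf-style file. A sketch: enable_local_md5 is my own helper
# name, and the pattern assumes the stock "local all all peer" line.
enable_local_md5() {
    sed -i -E 's/^(local[[:space:]]+all[[:space:]]+all[[:space:]]+)peer[[:space:]]*$/\1md5/' "$1"
}

# Example (as root): enable_local_md5 /etc/postgresql/11/main/pg_hba.conf
```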
<br />
The options are explained in detail in the file's comments. After editing and saving the file, restart Postgres.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo systemctl restart postgresql
<span style="color: #996633;">$ </span>systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded <span style="color: #333333;">(</span>/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled<span style="color: #333333;">)</span>
Active: active <span style="color: #333333;">(</span>exited<span style="color: #333333;">)</span> since Fri 2019-04-19 00:50:53 UTC; 40min ago
Process: 5168 <span style="color: #996633;">ExecStart</span><span style="color: #333333;">=</span>/bin/true <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
Main PID: 5168 <span style="color: #333333;">(</span><span style="color: #996633;">code</span><span style="color: #333333;">=</span>exited, <span style="color: #996633;">status</span><span style="color: #333333;">=</span>0/SUCCESS<span style="color: #333333;">)</span>
Apr 19 00:50:53 disco systemd<span style="color: #333333;">[</span>1<span style="color: #333333;">]</span>: Starting PostgreSQL RDBMS...
Apr 19 00:50:53 disco systemd<span style="color: #333333;">[</span>1<span style="color: #333333;">]</span>: Started PostgreSQL RDBMS
</pre>
</div>
<br />
<h3>
2. Add Role and Grant Permission to Role</h3>
We need a role (a user) and its permissions: permission to connect to the database, usage on the public schema, privileges on all tables, and privileges on sequences.<br />
<h4>
</h4>
<h4>
2.1 Create Role and Database</h4>
I will create a role named tele and a database named teledb:<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>sudo su - postgres
<span style="color: #996633;">$ </span>psql
<span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> CREATE ROLE tele WITH LOGIN;
<span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> <span style="background-color: #fff0f0; color: #666666; font-weight: bold;">\p</span>assword tele
Enter new password: *****
Enter it again: *****
<span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> CREATE DATABASE teledb;
</pre>
</div>
<h4>
</h4>
<h4>
2.2 Grant Connect and Usage</h4>
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> GRANT CONNECT ON DATABASE teledb to tele;
<span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> GRANT USAGE ON SCHEMA public to tele;
</pre>
</div>
<h4>
</h4>
<h4>
2.3 Grant permissions on table and sequence</h4>
<div style="background: #ffffff; overflow: auto; padding: 0.2em 0.6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public to tele;
<span style="color: #996633;">postgres</span><span style="color: #333333;">=</span>> GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public to tele;</pre>
</div>
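One caveat worth noting: GRANT ... ON ALL TABLES/SEQUENCES covers only objects that already exist at the time of the grant. If new tables will keep being created later, default privileges can be set as well. A sketch — note that default privileges apply per creating role, so this must be run as the role that will own the new objects:

```sql
-- Make future tables and sequences created (by the current role) in the
-- public schema accessible to tele as well. A sketch: ALTER DEFAULT
-- PRIVILEGES applies to objects created by the role that runs it.
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO tele;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO tele;
```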
<br />
<br />
<h3>
3. Confirmation</h3>
<br />
Verify that the user tele can log in to the database using the username and password you have set up.<br />
<br />
<div style="background: #ffffff; overflow: auto; padding: .2em .6em; width: auto;">
<pre style="line-height: 125%; margin: 0;"><span style="color: #996633;">$ </span>psql -U tele -d teledb -W
password: *****
<span style="color: #996633;">tele</span><span style="color: #333333;">=</span>>
</pre>
</div>
<br />
<h2>
Reference</h2>
1. https://gist.github.com/AtulKsol/4470d377b448e56468baef85af7fd614<br />
2. https://www.postgresql.org/docs/11/user-manag.html<br />
3. https://dba.stackexchange.com/questions/117109/how-to-manage-default-privileges-for-users-on-a-database-vs-schema/117661#117661<br />
4. https://stackoverflow.com/questions/22483555/give-all-the-permissions-to-a-user-on-a-db <br />
<br />
<br />
<h2>Technical Note: Sleep Issue on Dell XPS 9370 with Ubuntu 18.04 Bionic Beaver (2019-03-29)</h2>
I noticed that when I close the lid of my Dell XPS 9370, which runs Ubuntu 18.04 (Bionic Beaver), the OS does not enter deep sleep mode. It does turn off the display, and the hard disk and CPU seem to enter a low-power mode. However, the OS drains the battery as if it were running normally, and if I forget and put the computer into my backpack in this sleep mode, the CPU keeps producing heat. This definitely shortens the life of my laptop, so I knew I needed to do something.<br />
<br />
At first, I thought this issue was driver-related due to the new hardware, but after a quick Google search I found that <a href="https://gist.github.com/greigdp/bb70fbc331a0aaf447c2d38eacb85b8f" target="_blank">Arch Linux</a> users and other <a href="https://askubuntu.com/questions/1029474/ubuntu-18-04-dell-xps13-9370-no-longer-suspends-on-lid-close" target="_blank">Bionic Beaver</a> users see the same issue. The answer by monty47 is so complete that it needs no further explanation. I copy his solution here, with several references, for my own educational purposes.<br />
<br />
<h3>
1. Root cause and Solution</h3>
The Linux kernel supports four system sleep states, though some of them depend on platform support code. The four states are represented by strings that can be written to or read from /sys/power/state:<br />
<ul>
<li>"<b>mem</b>" - meaning controlled by /sys/power/mem_sleep (see below)</li>
<li>"<b>standby</b>" - Power-On-Suspend</li>
<li>"<b>freeze</b>" - Suspend-To-Idle</li>
<li>"<b>disk</b>" - Hibernation</li>
</ul>
The "mem" represented available sleep modes supported by the platform. The modes could be referenced in<b> <span style="color: #444444;"><i>/sys/power/mem_sleep</i></span></b>, and could have following values<br />
<ul>
<li><b>s2idle</b> - Suspend-To-Idle</li>
<li><b>shallow</b> - Power-on Suspend</li>
<li><b>deep</b> - Suspend-To-Ram</li>
</ul>
"s2idle" is always available, while "shallow" and "deep" depend on the underlying platform and supporting code (I have not traced which layer implements this). For example, my Dell XPS 9370 supports only "s2idle" and "deep".<br />
<br />
<span style="color: #444444;"><span style="background-color: white;"><i>$ cat /sys/power/mem_sleep</i></span></span><br />
<span style="background-color: #eeeeee;"><span style="color: #444444;"><span style="background-color: white;"><i>[s2idle] deep</i></span></span> </span><br />
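The bracketed entry in that output marks the currently selected mode. A tiny sketch to extract it — <i>current_mem_sleep</i> is my own helper name:

```shell
# Print the currently selected sleep mode from a mem_sleep-style file,
# e.g. a file containing "[s2idle] deep" yields "s2idle".
# current_mem_sleep is my own helper name; this is just a sketch.
current_mem_sleep() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p' "$1"
}

# Example: current_mem_sleep /sys/power/mem_sleep
```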
<br />
For a detailed explanation, please refer to <a href="https://www.kernel.org/doc/Documentation/power/states.txt" target="_blank">System Power Management Sleep States</a> by Wysocki. Only in the "deep" state are all system components and devices except memory put into a low-power state, with the kernel possibly passing control to the BIOS.<br />
<br />
So, if I don't put my Dell XPS 9370 into the "deep" mem_sleep state, it stays in the default "s2idle", which is just a lightweight, purely software-based mode. Although the CPU spends most of its time idle, it still produces heat even in this mode (because I greedily chose a Core i7 chip).<br />
<br />
To check which mode the computer enters by default, just put it to sleep with <b>Fn+End</b> and search for the suspend keyword in the <i>journal</i>.<br />
<br />
<i><span style="color: #444444;">$ cat /sys/power/mem_sleep</span></i><br />
<i><span style="color: #444444;">[s2idle] deep # &lt;-- highly likely in s2idle by default</span></i><br />
<i><span style="color: #444444;">$ # Fn + End to put the computer to sleep, then wake it up<br />
$ journalctl -S "2019-03-28" | grep "PM: suspend" | tail -2<br />Mar 28 19:08:27 ha-xps kernel: PM: <b>suspend entry (s2idle)</b><br />Mar 28 22:29:35 ha-xps kernel: PM: suspend exit</span></i><br />
<br />
To fix this issue, either temporarily change mem_sleep to "deep":<br />
<br />
<span style="color: #444444;"><i>$ echo deep | sudo tee /sys/power/mem_sleep</i></span><br />
<br />
or permanently put that mode into the kernel startup parameters:<br />
<br />
<i><span style="color: #444444;">$ sudo vim /etc/default/grub<br />
...<br />
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash <b>mem_sleep_default=deep</b>"<br />
...<br />
$ sudo grub-mkconfig -o /boot/grub/grub.cfg<br />$ sudo reboot<br />
$ # Confirm after the reboot<br />
$ sudo journalctl -b 0 | grep "PM: suspend" | tail -2<br />Mar 28 19:08:27 ha-xps kernel: PM: <b>suspend entry (deep)</b><br />Mar 28 22:29:35 ha-xps kernel: PM: suspend exit</span></i><br />
<br />
<h3>
2. Known Related Issues</h3>
This solution comes with two consequences (or related issues):<br />
<br />
<ol>
<li>Suspend will fail if you encrypt your hard disk</li>
<li>Bluetooth will not wake up after suspend, which means you cannot use a Bluetooth mouse after the system resumes. </li>
</ol>
I have issue No. 2 myself, and I need to reset Bluetooth after resume. Restarting Bluetooth through the UI does not seem to work for me, and I don't know why (perhaps it is a bug?).<br />
<br />
<span style="color: #444444;"><i>$ sudo systemctl start bluetooth<br />
$ bluetoothctl<br />
...<br />
[NEW] Characteristic<br /> /org/bluez/hci0/dev_D7_4A_86_5E_2D_C4/service0008/char0009<br /> 00002a05-0000-1000-8000-00805f9b34fb<br /> Service Changed<br />[NEW] Descriptor<br /> /org/bluez/hci0/dev_D7_4A_86_5E_2D_C4/service0008/char0009/desc000b<br /> 00002902-0000-1000-8000-00805f9b34fb<br /> Client Characteristic Configuration<br />Agent registered<br />...</i></span><br />
<br />
<h3>
3. Future Consideration</h3>
This issue has not been fixed in my current Bionic <a href="https://wiki.ubuntu.com/Kernel/LTSEnablementStack" target="_blank">HWE release</a>.<br />
I hope it will be fixed in an upcoming version.<br />
<br />
<h2>My presentation at Developer Summit 2018 (2018-08-12)</h2>
Developer Summit is one of the biggest developer gatherings in Japan: a one-day event full of sessions where developers gain knowledge and experience and catch up on the year's technology trends. This season's topic was "society in the expansion of data". The full event timetable and speakers' profiles can be found on the event homepage (<a href="https://event.shoeisha.jp/devsumi/20180727">https://event.shoeisha.jp/devsumi/20180727</a>).<br />
<br />
This year, I was honored to be one of the speakers at this event (<a href="https://event.shoeisha.jp/devsumi/20180727/session/1752/">https://event.shoeisha.jp/devsumi/20180727/session/1752/</a>). My colleagues and I presented on Alibaba's big data platform and its applications. Throughout the presentation, I talked about the development history of the Apsara platform, the architecture of the Alibaba cloud platform, and the decisions behind the architecture of a project I'm responsible for. The content of our talks can be seen at <a href="https://techblog.sbcloud.co.jp/2018/07/27/devsumi2018/">https://techblog.sbcloud.co.jp/2018/07/27/devsumi2018/</a>.<br />
<br />
I'm really honored to have joined Developer Summit, and I'm looking forward to presenting at other developer events too.
<h2>Which kind of company do I want? (2014-11-19)</h2>
I think that if we can't clearly form a picture of the future by ourselves, we should follow a master, learn what he can see, and gradually create our own vision. <div>
<br /></div>
<div>
I want a knowledge company, which lives by knowledge.</div>
<div>
<br /></div>
<div>
Though I still don't have a clear image of what a knowledge-based company should be, I do have some "masters" of this kind in real life. </div>
<div>
<br /></div>
<div>
http://www.wolfram.com/ and http://fivethirtyeight.com/ are two of the companies that I want to imitate. </div>
<h2>Imagine future (2014-10-28)</h2>
A book I read recently says: you can't achieve things you don't desire.<br />
<br />
What future do I desire for?<br />
<br />
I couldn't answer this question. In short, I desire nothing.<br />
<br />
For a while, my life has been a random walk. I have lived without any specific purpose, following several interests without a definite goal.<br />
<br />
I would reason that I live this lifestyle because I can't imagine the future. I don't know what future I would like to live in, so I wander around looking for interests.<br />
<br />
And I realize that I should stop wandering around and start imagining the future. It's the shortest path I could take.<br />
<h2>A review of "Stress Test" (2014-10-26)</h2>
Morning is the time of random thoughts, and it's better to jail those thoughts somewhere rather than let them wander around and disappear. I have decided to make writing the first thing I do every day.<br />
<div>
<br /></div>
<div>
Yesterday, I went to a nearby bookstore and saw that the book "Stress Test: Reflections on Financial Crises" is already on the shelves. After reading Bill Gates's review of the book, I couldn't wait for it to reach bookstores, so I had already bought a Kindle version through Amazon. I have been reading it in every free moment because the content is so compelling and its author so impressive. I want to jot down several points I found interesting here.<br />
<br />
<br />
<a name='more'></a> </div>
<div>
<br /></div>
<div>
The first thing I learned from the book is that a financial crisis is a crisis of faith. In a crisis, everyone goes crazy, worrying about their money more than anything else. Nobody believes the assurances made by bankers or authorities; people just run for their money. In such turbulence, the right thing for a banker to do is not to persuade people to believe his statements, but to put a lot of money in the window, so that people can see money is available to them, understand they don't need to worry, and stop running. In a time of crisis, the wise action is to calm public outrage.</div>
<div>
<br /></div>
<div>
The second thing I learned from the book is about money. I grew up never questioning the existence of money. I spent money as a thing to spend and worked for money as a thing to work for, never asking where money comes from. After finishing the book, I became aware of the existence of so-called national central banks (the Federal Reserve in America) and their duties and responsibilities. I started to realize the importance of the IMF (International Monetary Fund) and its mission of taming global crises. I came to understand the intrinsic value of money: it is not merely a common tool of exchange but also the blood of an economy. Like blood, money brings oxygen to businesses, helping them operate normally, and banks are the hearts of the economy, keeping money flowing. If money stops flowing, businesses stop functioning and start laying off workers; people stop spending, sales decrease, and the whole economy falls into a vicious cycle that leads to recession. Money needs to flow for the overall system to function.</div>
<div>
<br /></div>
<div>
The third thing I learned from the book is about American politics. In a crisis, the politics is exposed: politicians fight not for their country but for their own profit. I used to think that America was the only country with the best institutions in the world, where great people work together for a great America, but I was wrong. Overall, Americans can be proud of their government because their presidents are always capable. But behind the scenes, Congress is the real stage of politics, where politicians use their power to control things, and the politicians there can be awful. I came to fully realize that even elected leaders won't care about ordinary people in turbulent times, so I should not expect other people to care about me. </div>
<div>
<br /></div>
<div>
The author has been under a lot of criticism, even after the crisis. He might be right or wrong, but to me, in a time of crisis, we need a leader who might be wrong but is strong rather than one who might be right but is weak. Without action, people can draw countless pictures of imaginary outcomes in which to judge right or wrong.</div>
<div>
<br /></div>
<div>
I recommend this book to people who want to understand the American housing bubble and the 2008 financial crisis, as well as the thoughts, debates, and policies made behind the scenes by American leaders.</div>
<h2>Nonsense arguments (2014-10-11)</h2>
I had an argument over web frameworks and their design philosophies. I was supporting micro-frameworks because of their lightness and performance predictability: in a micro-framework, I know where to put things and where to find them when I need them, and I keep overall control over queries, conventions, and so on. My friends, on the other side, argued that frameworks like Rails are great because they help us iterate faster, and that such frameworks have been designed and refined for a long time, so they embody good patterns. We had a really heated debate in which neither side wanted to withdraw its arguments.<br />
<br />
It was nonsense and a waste of time.<br />
<br />
After reading some software design books recently, I have come to the realization that we argued from different viewpoints and that every framework has its own merits. We shouldn't have derided each other like we did.<br />
<br />
And I should stick with one framework for a while, immersing myself in its world before judging anything.
<h2>How to do research (2014-10-02)</h2>
As noted in previous posts, I love reading, writing, and doing research. However, each time I consider returning to the academic world to pursue a PhD, my inner voice keeps yelling that I'm crazy. Like most graduate students, I know the agony of doing research: it's really hard to maintain enough energy to keep ourselves from burning out. I was stressed and burned out once.<br />
<br />
But recently, after working in industry for a while, I realized that I didn't get along well because I didn't know how to conduct research. I didn't know how to divide tasks into sub-tasks to conquer. I didn't know that writing down ideas and keeping a research notebook is important. I didn't know how to set milestones and strive for them. When I got stuck, I didn't bother to ask for help from my professors or friends; I even thought that asking others would show my weakness, which only showed my arrogance. I didn't try to visualize things in my head, which I think I'm good at, and break them down to a manageable size. I learned solely by reading, without trying to build a working toy product to see how things really work, so my pictures of research subjects were vague, unclear, and sometimes misleading. I didn't know how to work by myself. I always ask myself why I still haven't started a company, and now I have figured out the answer: it's because I don't have a clear goal and don't know how to pursue it by myself.<br />
<br />
Rarely can I find a simple but outstanding piece of writing, but each time I do, it's a real masterpiece. <a href="https://people.cs.umass.edu/~emery/misc/how-to.pdf">https://people.cs.umass.edu/~emery/misc/how-to.pdf</a>, a set of suggestions for graduate students of the MIT AI Lab, is one of those masterpieces. It's a simple, clearly documented recipe for surviving at the AI Lab; it presents reality and it opened my eyes. I bookmark it here in case I forget about it later.
<h2>Do what you love, Love what you do (2014-09-29)</h2>
I got a little distracted recently by business matters. I don't find myself suited to doing business. A friend told me I'd be better off doing research because I have a knack for it. I might acknowledge that I don't have business instincts; I don't often negotiate for my own profit.<br />
<br />
Recently I have found that I need money more than ever. My family is poor by a variety of standards, and one of my goals is to bring my whole family out of poverty. Besides, I also love coding. I miss it, too, since I haven't written code recently. I'm addicted to coding. I was wondering whether I could make enough money out of coding, and I was stuck.<br />
<br />
Then I came across an article about a Japanese genius. He spends whole days gazing at computer screens, looking for moving patterns. He trades with his own money and became a multimillionaire just by trading. He's a star. It's astonishing to learn that he doesn't even trade for the money; to him, trading itself is fulfilling. He doesn't need other people to encourage him to work. He does it without much deliberation, without fear of losing money. Trading by itself is full of pressure, and it deeply affects our emotions, yet B.N.F. somehow manages to work through it every day. Moreover, he lives a frugal life. He doesn't spend money for his own sake; his largest purchase is his own apartment, which holds almost nothing. He just does what he loves, and I find that intriguing. He is respectable, and he is simply amazing.<br />
<br />
http://ja.wikipedia.org/wiki/B%E3%83%BBN%E3%83%BBF<br />
<br />
I have just come to understand a simple truth. Money doesn't come to people who love money. When it does come, it sometimes brings troubles and misfortune. Money is mischievous. However, it is very generous to people who don't love it. It comes to people who don't work for it. In short, it comes to people who do what they love.<br />
<br />
I have become less stressed about making more money and more focused on doing what I love. Ultimately, we are happy when we do what we love.<br />
<br />
Sometimes it takes a long time to understand simple truths, but to me this understanding is precious.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6826628840883690537.post-63761655623152623242014-09-06T12:33:00.000+09:002014-09-06T12:33:10.271+09:00The unexotic underclasshttp://miter.mit.edu/the-unexotic-underclass/<br />
<br />
The impact of the article on me was so great that I decided to make its title the title of this post. Like the author, I have always thought that we, as educated citizens, are putting our mental energy into the wrong problems. As for me, I spend my days and nights watching over our video servers. Of course, our company's philosophy is to bring television to the internet and deliver content to a subculture community, which is great, but in reality it doesn't solve any problem that helps our civilization progress. What we're doing is entertaining people, bringing them fun. We live in a different world, bringing mass-produced products to people while forgetting those in the lower ranks.<br />
<br />
Do we need to care for people of the underclass? Their life is their choice anyway.Unknownnoreply@blogger.com0