April 24, 2019

A Review of Kubernetes Tokyo Meetup #18

In a time when every cool developer talks about infrastructure software, a meetup about infrastructure technology naturally draws a lot of attention. The Tokyo k8s meetup is one of the most popular communities in Tokyo, where k8s enthusiasts gather and share their knowledge about this rising infrastructure technology. This Monday, Tokyo Kubernetes Meetup #18 was held in Tokyo, and my colleague Wenhan and I were fortunate to have the chance to join it.

Never before had I realized that k8s was becoming such a hot technology. The attention it gets can be felt in the number of event registrations. The event hall's capacity was only 170 people, yet more than 500 people had registered (at the time I registered). The organizer had to hold a ticket lottery and select participants at random. I was lucky enough to win a spot.

The number of registrations was five times the capacity of the event hall, so the organizer had to select participants randomly by lottery

When I arrived at the event, the hall was already filled with developers busy typing notes. The participants all looked quite young, mostly in their late twenties or early thirties, and mostly from young Internet companies such as Line, Mercari, CyberAgent, Mixi, and DeNA. I recognized some frequent faces who also join other communities, such as the Kafka and cloud computing communities. I also recognized a manager from KDDI. The last time I talked to him, he told me he was trying to build a cloud computing team inside his corporation but was having a hard time recruiting the right people and choosing the right technology stack. Perhaps he is still working on it.

Developers from technology companies in Tokyo joining the k8s meetup

The agenda for the meetup comprised two parts:
  1. "Keynote" presentations from community leaders
  2. Lightning talks by volunteers

There was a short break between the two sessions, during which the event sponsors provided burgers and drinks for participants. I got two burgers and a coke myself.

Small burgers and drinks provided by the sponsors. I chose a coke for myself

The keynote presentations were given by three well-known figures in the k8s community. The topics were:

  1. Introduction to kubebuilder/controller-runtime (30min) by Shingo Omura (@everpeace)
  2. Container-native ingress controller with kubebuilder/(Admission Webhook) (30min) by Masaya Aoyama (@amsy810)
  3. Operating Kubernetes manifests for Mercari's microservices (30min) by Masaki Ishiyama (@b4b4r07)

I have attached links to each presentation above.

In summary, the first presentation laid out the interfaces of kubebuilder in detail, along with the steps to build a k8s controller using this SDK. The Go interfaces were quite clean, and I could roughly imagine the steps I would take to build a controller from this documentation. I think the presentation could have been better if the author had given listeners a bird's-eye overview of the architecture before digging into the details.

The second presentation was the one I found most interesting at this meetup. Aoyama, as usual, provided a simple and straightforward overview of container networking technology, detailing the difference between traditional networking and container-native networking. I was absolutely surprised when I heard about the GKE L7 load balancer's ability to recognize Pods' networks. I still remember thinking for a while about how to provide networking to a pod from the host machines, and all I could come up with was NATing the pods' overlay network to the host's virtual network (the situation was even more complicated from the host's viewpoint). The Alibaba Cloud load balancer simply does not provide Pod network awareness. This cloud-native network support in the GKE L7 LB made me want to try that cloud out.
The GKE cloud-native L7 load balancer is aware of Pod networks and can distribute load directly to containers without the traditional host-level NAT.
After explaining the new cloud-native GKE implementation, Aoyama presented his team's work from last year in more detail. In a cloud environment, all k8s controllers are provided by the cloud provider, so we don't need to care about their implementation details. In on-premise environments, however, support for cloud-native k8s has not yet matured: high-performance appliance load balancers still only support traditional physical hosts and networks. So the team built their own L7 load balancer based on nginx-ingress and Calico.

L7 load balancer built from nginx-ingress / Calico and a custom controller

The last presentation discussed Mercari's experience with its k8s deployment pipeline. Mercari is a unicorn Internet company with rising popularity among developers. Mercari maintains its high-growth services by separating its system into multiple microservices. Currently, Mercari has over 100 microservices, maintained by more than 200 engineers. To manage daily work across such a large number of microservices, engineers need well-defined workflows with a high degree of automation. For their infrastructure, they use GitOps and Spinnaker for pipelines. They chose a monorepo style as the starting point for their pipeline because it is easy to begin with. They also detailed how they organize their repositories and how they use a custom-made linter called Stein to validate deployment source code. I think Ishiyama provided a good lesson in cloud-native application management and operations.

Mercari deployment pipeline

Overall, I found all three presentations very intriguing, and I will definitely join this event again. And yes, Tokyo k8s meetup #19 will be held on May 31, this time with a presentation from Joe Beda, one of the creators of k8s. I find the May 31 event promising because Joe now works for VMware, and I hope to learn more about VMware's involvement in the cloud-native world.

Technical Memo: AppArmor startup errors on Ubuntu Bionic 18.04

1. System Degradation Issue

As usual, after installing some modules and modifying some user privileges, I rebooted my home desktop computer. After a reboot I always check the system's startup health, and today was no different.

$ systemctl status
● stark
    State: degraded
     Jobs: 0 queued
   Failed: 1 units
    Since: Tue 2019-04-23 13:37:13 JST; 0h 4min ago
   CGroup: /
           │ ├─user-1000.slice
           │ │ ├─user@1000.service

2. What was the problem?

Hmm... I was quite surprised by this degradation of my system, so I looked for the unit that failed to start at boot time.

$ systemctl list-units --failed
  UNIT             LOAD   ACTIVE SUB    DESCRIPTION           
● apparmor.service loaded failed failed AppArmor initialization

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

$ systemctl status apparmor
● apparmor.service - AppArmor initialization
   Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2019-04-24 10:06:09 JST; 5min ago
     Docs: man:apparmor(7)
  Process: 20197 ExecStop=/etc/init.d/apparmor stop (code=exited, status=0/SUCCESS)
  Process: 20208 ExecStart=/etc/init.d/apparmor start (code=exited, status=123)
 Main PID: 20208 (code=exited, status=123)

 4月 24 10:06:07 stark apparmor[20208]: Warning from /var/lib/snapd/apparmor/profiles/snap.lxd.migrate (/var/lib/snapd/apparmor/profiles/snap.lxd.migrate line 484): Unconfined exec qualifier (ux) allows some dangerous environment variables to be passed to the unconfined process; 'man 5 apparmor.d' for details.
 4月 24 10:06:09 stark apparmor[20208]:    ...fail!
 4月 24 10:06:09 stark systemd[1]: apparmor.service: Main process exited, code=exited, status=123/n/a
 4月 24 10:06:09 stark systemd[1]: apparmor.service: Failed with result 'exit-code'.
 4月 24 10:06:09 stark systemd[1]: Failed to start AppArmor initialization.

AppArmor seems to fail to initialize at startup. That's quite surprising, because I didn't modify anything AppArmor-related. Looking at the status output above, nothing special stands out: the service complains about an unconfined exec qualifier and then dies. But warnings alone cannot make a process fail to start, because they are only warnings! So I looked at AppArmor's journal logs.

$ journalctl -u apparmor
 4月 23 16:25:41 stark apparmor[14865]: AppArmor parser error  for /etc/apparmor.d/usr.sbin.dhcpd in /etc/apparmor.d/usr.sbin.dhcpd at line 70: Could not open 'dhcpd.d'

The error at line 70 of the dhcpd profile file is the culprit behind AppArmor's failure to start. That line is an #include pulling in a dhcpd.d directory; in the profile it sits right under this comment:

  # allow packages to re-use dhcpd and provide their own specific directories

It seems that AppArmor's dhcpd profile looks for a dhcpd.d directory in /etc/apparmor.d/ but fails to find it. So, in theory, if I re-create this directory, the AppArmor parser will parse successfully and the problem should go away.
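That theory can be checked directly before touching anything: apparmor_parser can parse a profile without loading it into the kernel via its -Q (--skip-kernel-load) flag. A minimal sketch, guarded so it degrades gracefully on machines without AppArmor:

```shell
# Dry-run the dhcpd profile through the AppArmor parser without loading
# it into the kernel (-Q = --skip-kernel-load). A zero exit status means
# the profile and everything it #includes parsed cleanly.
profile=/etc/apparmor.d/usr.sbin.dhcpd
if command -v apparmor_parser >/dev/null 2>&1 && [ -r "$profile" ]; then
    if apparmor_parser -Q "$profile" >/dev/null 2>&1; then
        result="profile parses cleanly"
    else
        result="profile fails to parse"
    fi
else
    result="apparmor_parser or profile not available on this machine"
fi
echo "$result"
```

If the parse succeeds after creating the missing directory, restarting the service should also succeed.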

3. Temporary Solution

So I went ahead and created the directory:

$ sudo mkdir /etc/apparmor.d/dhcpd.d
$ sudo systemctl start apparmor
$ systemctl status apparmor
● apparmor.service - AppArmor initialization
   Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2019-04-24 10:50:18 JST; 5s ago
     Docs: man:apparmor(7)

  Process: 20197 ExecStop=/etc/init.d/apparmor stop (code=exited, status=0/SUCCESS)
  Process: 11297 ExecStart=/etc/init.d/apparmor start (code=exited, status=0/SUCCESS)
 Main PID: 11297 (code=exited, status=0/SUCCESS)

$ systemctl status
● stark
    State: running
     Jobs: 0 queued
   Failed: 0 units

Ok, it works!

4. Why?

I googled around to see whether I was the only person seeing this issue, and I did find a similar issue dating back to 2015. The cause back then was that Ubuntu developers forgot to add the empty dhcpd.d directory to the built package. However, that issue was fixed and released in 2.8.95~2430-0ubuntu5.2. My apparmor package version is 2.12-4ubuntu5.1 (per apt-cache showpkg apparmor), so it should not be affected at all. I was curious whether one of my apt updates was related, so I searched the apt log for apparmor:

$ grep apparmor /var/log/apt/term.log
Skipping profile in /etc/apparmor.d/disable: usr.sbin.squid
Installing new version of config file /etc/apparmor.d/usr.lib.snapd.snap-confine.real ... 

Two promising lines did appear, but neither related to my issue.
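As another cross-check, dpkg can report whether the installed apparmor package even claims to ship any dhcpd-related paths (a hypothetical check, guarded for machines without the package):

```shell
# List every path the apparmor package installed and filter for dhcpd.
# If the package shipped something like /etc/apparmor.d/dhcpd.d, it
# would show up here.
if command -v dpkg >/dev/null 2>&1 && dpkg -s apparmor >/dev/null 2>&1; then
    listing=$(dpkg -L apparmor | grep dhcpd || echo "no dhcpd paths shipped by apparmor")
else
    listing="apparmor package not installed on this machine"
fi
echo "$listing"
```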
At this point I still needed to finish my new-employee training plan, so I left the issue as it is. If it reappears after some future modification, I will definitely file a bug ticket on Ubuntu Launchpad.

April 19, 2019

Technical Note: Roles, Grants for newly installed PostgreSQL

I'm familiar with MySQL, yet I have always wanted to use PostgreSQL for its feature richness. Because of the differences between the two databases, I always find it hard to remember the initial setup. Today it took me a while to navigate the official Postgres documentation again before I could create a role and grant permissions to it. I think it's useful to note these tasks for future reference.

Steps to set up Postgres:
  1. Allow local roles to authenticate with a password
  2. Create a role and grant permissions


1. Allow password authentication for local roles

Postgres's role management differs from MySQL's. By default, local users connect to Postgres through a Unix socket without a password: Postgres delegates authentication to Linux (peer authentication), which means a user connecting locally needs not only a Postgres role but also a matching Linux account. For local development this is quite inconvenient, so I switched local user authentication to passwords as well.

Postgres's authentication mechanism is configured in pg_hba.conf. On Ubuntu 19.04 (Disco) with Postgres installed from apt, the file lives at /etc/postgresql/11/main/pg_hba.conf.

Open the file with edit permission (which means using sudo on Ubuntu), look for the following line, and change METHOD from peer to md5:

# "local" is for Unix domain socket connections only
--- local            all             all              peer
+++ local            all             all              md5
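The same edit can be scripted with sed. A minimal sketch, demonstrated on a throwaway copy so it doesn't touch a real config (on a real Ubuntu box the target would be /etc/postgresql/11/main/pg_hba.conf, edited with sudo):

```shell
# Work on a temporary stand-in for pg_hba.conf
pg_hba=$(mktemp)
printf 'local   all             all                                     peer\n' > "$pg_hba"

# Switch the local-connection METHOD from peer to md5 in place
sed -i -E 's/^(local[[:space:]]+all[[:space:]]+all[[:space:]].*)peer[[:space:]]*$/\1md5/' "$pg_hba"

cat "$pg_hba"
```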

The explanation is written in detail in the file's comments. After editing and saving the file, restart Postgres:

$ sudo systemctl restart postgresql
$ systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2019-04-19 00:50:53 UTC; 40min ago
  Process: 5168 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 5168 (code=exited, status=0/SUCCESS)
Apr 19 00:50:53 disco systemd[1]: Starting PostgreSQL RDBMS...
Apr 19 00:50:53 disco systemd[1]: Started PostgreSQL RDBMS

2. Add Role and Grant Permission to Role

We need a role (user) and its permissions. The permissions include: permission to connect to the database, usage of the public schema, permissions on all tables, and permissions on sequences.


2.1 Create Role and Database

I will create a role named tele:

$ sudo su - postgres
$ psql
postgres=> CREATE ROLE tele WITH LOGIN;
postgres=> \password tele
Enter new password:  *****
Enter it again: *****
postgres=> CREATE DATABASE teledb; 


2.2 Grant Connect and Usage

postgres=> GRANT CONNECT ON DATABASE teledb to tele;
postgres=> GRANT USAGE ON SCHEMA public to tele;


2.3 Grant permissions on tables and sequences

Grant the role DML privileges on all existing tables in the public schema, plus usage of its sequences (needed for serial columns). Note that these statements cover existing objects only; tables created later need new grants (or ALTER DEFAULT PRIVILEGES).

postgres=> GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO tele;
postgres=> GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO tele;

3. Confirmation

Verify that the user tele can log in to the database using the username and password you set up:

$ psql -U tele -d teledb -W
password: *****

