k3s Setup

Starting with a 3-node cluster:

Addresses:
10.10.50.31
10.10.50.32
10.10.50.33

Using the directions found on the k3sup GitHub page:
https://github.com/alexellis/k3sup#download-k3sup-tldr
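
One note before starting: k3sup drives the whole install over SSH, so your public key should already be authorized on each node. If it isn't, copying the key over first (assuming the nodes still accept password auth) looks like this, with {user} as your SSH user:

ssh-copy-id {user}@10.10.50.31
ssh-copy-id {user}@10.10.50.32
ssh-copy-id {user}@10.10.50.33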

STEP1

Installing k3sup

I'm running this and all the following commands from my Mac and deploying against 3 Intel NUC devices with the addresses listed above.

sudo curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/ 

This installed fine for me on Ubuntu, but on macOS I ran into a permissions issue with /usr/local/bin/. Make sure your user has access to the file:

sudo chown {user}:admin /usr/local/bin/k3sup

Then run:

install k3sup /usr/local/bin/ 
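
To confirm the binary is installed and on your PATH, you can run:

k3sup version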

STEP2

Installing the first node and retrieving the kubeconfig

Let's install k3s and get a kubeconfig file, replacing {user} with your SSH user:

k3sup install --help
k3sup install --ip 10.10.50.31 --user {user}
(base) ➜  kube k3sup install --ip 10.10.50.31 --user {user}
Running: k3sup install
2021/05/13 09:51:03 10.10.50.31
Public IP: 10.10.50.31
[INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
Result: [INFO]  Finding release for channel v1.19
[INFO]  Using v1.19.10+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.19.10+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s
 Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.

Saving file to: /Users/user/Documents/Projects/kube/kubeconfig

# Test your cluster with:
export KUBECONFIG=/Users/user/Documents/Projects/kube/kubeconfig
kubectl config set-context default
kubectl get node -o wide

Update: Something I've noticed is an issue with the permissions of k3s.yaml on the primary node. If you experience this or want to head it off now, log in to the node you just pushed k3s to and run the following:

sudo chmod 644 /etc/rancher/k3s/k3s.yaml
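
If you are rebuilding from scratch instead, k3sup can pass that setting straight to the k3s installer so the file is created world-readable in the first place, e.g.:

k3sup install --ip 10.10.50.31 --user {user} --k3s-extra-args '--write-kubeconfig-mode 644'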

STEP3

Adding nodes

Here we add the 2nd of 3 nodes to our cluster:

k3sup join --ip 10.10.50.32 --server-ip 10.10.50.31 --user {user}

Once done, let's view all nodes:

(base) ➜  kube kubectl get node -o wide
NAME    STATUS   ROLES    AGE    VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
kub01   Ready    master   9m2s   v1.19.10+k3s1   10.10.50.31   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.4-k3s1
kub02   Ready    <none>   17s    v1.19.10+k3s1   10.10.50.32   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.4-k3s1
Now join the 3rd node:

k3sup join --ip 10.10.50.33 --server-ip 10.10.50.31 --user {user}

Finally, we can list all nodes and confirm the 3-node cluster is complete:

kubectl get node -o wide

NAME    STATUS   ROLES    AGE     VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
kub02   Ready    <none>   3m21s   v1.19.10+k3s1   10.10.50.32   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.4-k3s1
kub01   Ready    master   12m     v1.19.10+k3s1   10.10.50.31   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.4-k3s1
kub03   Ready    <none>   67s     v1.19.10+k3s1   10.10.50.33   <none>        Ubuntu 18.04.5 LTS   4.15.0-112-generic   containerd://1.4.4-k3s1
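
The ROLES column shows <none> for the agent nodes. That's purely cosmetic, but if you'd like them labelled as workers you can set the role label yourself:

kubectl label node kub02 node-role.kubernetes.io/worker=worker
kubectl label node kub03 node-role.kubernetes.io/worker=worker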

STEP4

Let's install the Kubernetes Dashboard

Install
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
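
Give it a minute, then check that the Dashboard pods are running (the namespace name comes from the manifest above):

kubectl get pods -n kubernetes-dashboard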

Access

To access the Dashboard from your local workstation, you must create a secure channel to your Kubernetes cluster. Run the following command:

kubectl proxy

>Starting to serve on 127.0.0.1:8001

To access the Dashboard, enter the following URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

You will be presented with a login like the following:
[Screenshot: Kubernetes Dashboard login screen]

Now we need to create an authentication token.

Reading from here: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

First we create a Service Account named admin-user in the kubernetes-dashboard namespace.

Open a new terminal and connect to the first k3s server; this way we don't lose the kubectl proxy.
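
In my case that's just:

ssh {user}@10.10.50.31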

Creating the Service Account:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

Error I got:

coffeeman@kub01:~$ cat <<EOF | kubectl apply -f -
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   name: admin-user
>   namespace: kubernetes-dashboard
> EOF
WARN[2021-05-13T10:30:05.182569677-05:00] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

This is my home network, so I just used chmod to open everything up and re-ran the above command with no issues (the more restrictive chmod 644 from earlier also works):

sudo chmod 777 /etc/rancher/k3s/k3s.yaml

A successful run looks like:

coffeeman@kub01:~$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF
serviceaccount/admin-user created

Creating a ClusterRoleBinding

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

A successful run looks like:

coffeeman@kub01:~$ cat <<EOF | kubectl apply -f -
> apiVersion: rbac.authorization.k8s.io/v1
> kind: ClusterRoleBinding
> metadata:
>   name: admin-user
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: cluster-admin
> subjects:
> - kind: ServiceAccount
>   name: admin-user
>   namespace: kubernetes-dashboard
> EOF
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
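
That binding grants the admin-user Service Account the built-in cluster-admin role, which is what lets the Dashboard see everything in the cluster. You can double-check it exists with:

kubectl get clusterrolebinding admin-user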

Now get the token we need for the login:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Output should look like this:

coffeeman@kub01:~$ kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVYTTRxeFNqcU9DMl9IYlhJTHlFYUtDYXlfN24wOUtyVFNWZ21TVmNFOVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXE2MnNuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkM2MzODY1Yy1lNmZiLTRiYWUtODQ0ZS02NDUxMDRhNjQxN2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RIENHnw3TwwzlrksstJyIqLgNYq74URRz5In7imVMk_XGyZZjtt3-yCgwek1_fsQimU5OCyjcEiSAXn-Tx3xZu9kesZSX9WjtcLs6443ZEo7dw3dUlkqWIoO8OlYtAXN3JQ7KhLERw0DiabmLj5FtjBXrAdiAVjdfo20J38F6yZ8jnvLzikP6-THGb8GAuJzf1EU109hxCctjmCxQdjH6BCd88KDwddFQuLPb_QR8UuN0FaM48IjJkL7GWXdqLEM5SW_L81OL8P7Do-u0g-Ki0a-1qZYEpX1BSBu5VKSXPrh7CfOWP4tIkqqU5T7KRlx277HN2lY07PbITyJUk4x1A

Copy the token and paste it into the token field:
[Screenshot: Dashboard login with the token entered]

This is the dashboard screen I'm greeted with:
[Screenshot: Kubernetes Dashboard overview]

I clicked on the site header icon and got a good overview of my cluster.
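
When you're done experimenting, the sample-user doc linked above also covers clean-up; removing the account and binding looks like:

kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user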

