In this tutorial, we will go through the deployment of the ioFog stack into an existing Kubernetes cluster.
First, we need a working Kubernetes cluster. To set up a cluster on Google Kubernetes Engine (GKE), follow the Creating a cluster tutorial. Alternative managed cluster providers will work just as well, as will custom installations of Kubernetes, e.g. Minikube.
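As a reference, a small GKE cluster suitable for this tutorial could be created with gcloud; the cluster name, zone, and node count below are only illustrative placeholders, not requirements.
# Illustrative example only: create a small three-node GKE cluster.
gcloud container clusters create iofog-cluster --zone us-central1-a --num-nodes 3
# Fetch credentials so kubectl talks to the new cluster.
gcloud container clusters get-credentials iofog-cluster --zone us-central1-a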
The core ioFog stack installed by Helm does not require any Agents to be set up. Agents are edge nodes where microservices are deployed. In order to leverage all ioFog capabilities, we will need to set up Agents. These can simply be small compute instances from Google Cloud Platform (GCP), Amazon Web Services (AWS), Packet, or any other provider.
In order to provision these Agents, ioFog needs SSH access.
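Before continuing, it is worth confirming that we can actually SSH into each Agent node; the user, key path, and IP address below are placeholders to be replaced with our own values.
# Placeholder user, key, and address: replace with your Agent's details.
ssh -i ~/.ssh/id_rsa iofog@203.0.113.10 'echo SSH access OK'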
ioFog also provides tools for setting up the infrastructure itself, i.e. a Kubernetes cluster and Agents. Please see the Platform tutorial for more details.
The tutorial requires Helm and kubectl to be installed for executing the deployment.
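A quick way to confirm that both tools are available is to print their client versions:
# Print client versions to confirm both tools are installed.
helm version --client
kubectl version --client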
From now on, we assume we have a running Kubernetes cluster and Agent nodes. We can verify that our Kubernetes cluster is working by running kubectl cluster-info. The output of a working cluster will look like this:
$ kubectl cluster-info
Kubernetes master is running at https://1.2.3.4
GLBCDefaultBackend is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
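We can also confirm that the worker nodes have registered with the cluster and are in the Ready state:
kubectl get nodes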
Now that our cluster is up and running, we have to prepare the cluster for Helm installation.
On RBAC-enabled Kubernetes clusters (e.g. GKE, AKS), it is necessary to create a service account for Tiller before initializing Helm itself. See the helm init instructions for more details.
In order to create the cluster role binding on GKE, our account needs the roles/container.admin role. If our account doesn't have the role, it can be granted using the following command or in the GCP Console.
gcloud projects add-iam-policy-binding $GCP_PROJECT --member=user:person@company.com --role=roles/container.admin
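To verify which roles our account currently holds on the project, we can inspect the project's IAM policy; the account email below is the same placeholder as above.
# List the roles bound to our account on the project (placeholder email).
gcloud projects get-iam-policy $GCP_PROJECT --flatten="bindings[].members" --filter="bindings.members:user:person@company.com" --format="value(bindings.role)"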
Then we can create a service account for Tiller and bind the cluster-admin role to it.
kubectl create serviceaccount --namespace kube-system tiller-svacc
kubectl create clusterrolebinding tiller-crb --clusterrole=cluster-admin --serviceaccount=kube-system:tiller-svacc
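Optionally, we can verify that both objects were created:
kubectl -n kube-system get serviceaccount tiller-svacc
kubectl get clusterrolebinding tiller-crb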
Now is the time to use our service account to initialize Helm.
helm init --service-account tiller-svacc --wait
Note that on Azure Kubernetes Service (AKS), we will also need to specify node selectors for Tiller.
helm init --service-account tiller-svacc --node-selectors "beta.kubernetes.io/os"="linux" --wait
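Once helm init finishes, Tiller should be running in the kube-system namespace. One way to confirm this is to check the Tiller pod (typically labelled name=tiller) and that helm version reports a server version:
# The Tiller pod is typically labelled name=tiller in kube-system.
kubectl -n kube-system get pods -l name=tiller
# Should print both the client and the server (Tiller) version.
helm version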
Add the ioFog Helm repository to our Helm repository index, then install the ioFog stack and Kubernetes services:
helm repo add iofog https://eclipse-iofog.github.io/helm
helm install --version 1.2.0 --name iofog --namespace iofog iofog/iofog
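After the installation completes, we can inspect the release and the pods created in the iofog namespace:
helm status iofog
kubectl -n iofog get pods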
If we want to have multiple instances of ioFog on the same Kubernetes cluster, it is necessary to tell Helm not to install custom resource definitions. This can be done by overriding the createCustomResource (default: true) variable.
helm install --version 1.2.0 --name iofog --namespace iofog --set createCustomResource=false iofog/iofog
Only use this option when the ioFog custom resource exists, either from another Helm installation or manual installation using iofogctl.
To check if the custom resource exists, run kubectl get crd iofogs.k8s.iofog.org. If the resource exists, we must use createCustomResource=false so that Helm does not try to create it again.
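For example, a second ioFog instance could then be installed into its own namespace while reusing the existing custom resource definition; the release and namespace name iofog2 below is an arbitrary example.
# "iofog2" is an arbitrary example release/namespace name.
helm install --version 1.2.0 --name iofog2 --namespace iofog2 --set createCustomResource=false iofog/iofog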
We can run a simple test suite on our newly deployed ioFog stack using helm:
helm test iofog
To see detailed output from the tests, we can check the test-runner logs using kubectl -n iofog logs test-runner. If we do not need to inspect the logs, running helm test --cleanup iofog will remove all test pods after the tests have run.
To uninstall the ioFog stack, simply delete the Helm release, where the release name refers to the --name argument used during installation.
helm delete --purge iofog
Note that due to Helm's handling of custom resource definitions, all such definitions are orphaned when a release is created and thus need to be deleted manually.
kubectl delete crd iofogs.k8s.iofog.org
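Afterwards we can confirm that nothing is left behind; the first command should report that the custom resource definition is not found, and no ioFog pods should remain in the namespace.
kubectl get crd iofogs.k8s.iofog.org
kubectl -n iofog get pods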