In this tutorial, we will go through a deployment of the ioFog stack into an existing Kubernetes cluster. The ioFog stack consists of basic services (Controller, Connector) and supplementary Kubernetes ioFog components (Operator, Kubelet). This is the foundation for establishing a complete Edge Compute Network (ECN) with Agents and microservices. See Core Concepts for more details on ioFog components.
The ioFog Helm chart allows users to easily deploy the ioFog stack onto an existing Kubernetes cluster.
First, we need a working Kubernetes cluster. We can simply set up a cluster on the Google Kubernetes Engine (GKE) by following the Creating a cluster tutorial. Using any other managed cluster provider works as well, as do custom installations of Kubernetes, e.g. Minikube.
ioFog also provides infrastructure tools to set up a Kubernetes cluster in case we don't have one available. Please see Platform Tools for more details.
The tutorial also requires Helm and kubectl to be installed in order to execute the deployment.
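A quick way to verify that both tools are available locally (the exact version output will differ):
helm version --client
kubectl version --client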
From now on, we assume we have a running Kubernetes cluster. We can verify that our Kubernetes cluster is working by running kubectl cluster-info. The output of a working cluster will look like this:
$ kubectl cluster-info
Kubernetes master is running at https://1.2.3.4
GLBCDefaultBackend is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://1.2.3.4/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Now that our cluster is up and running, we have to prepare the cluster for Helm installation.
On RBAC-enabled Kubernetes clusters (e.g. GKE, AKS), it is necessary to create a service account for Tiller before initializing Helm itself. See the helm init instructions for more details.
In order to create the cluster role binding on GKE, we need to have the roles/container.admin permission. If our account doesn't have the role, it can be added using the following command or in the GCP Console.
gcloud projects add-iam-policy-binding $GCP_PROJECT --member=user:person@company.com --role=roles/container.admin
Then we can create a service account for Tiller and bind the cluster-admin role to it.
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Now is the time to use our service account to initialize Helm.
helm init --service-account tiller --wait
Note that on Azure Kubernetes Service (AKS), we will also need to specify node selectors for Tiller.
helm init --service-account tiller --node-selectors "beta.kubernetes.io/os"="linux" --wait
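To confirm that Tiller is up before proceeding, we can check its pod in the kube-system namespace (the label selector below assumes the default labels that helm init applies to the tiller-deploy deployment):
kubectl get pods --namespace kube-system -l app=helm,name=tiller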
Next, add the ioFog Helm repository to our local Helm repository index so that we can install the ioFog stack and its Kubernetes services:
helm repo add iofog https://eclipse-iofog.github.io/helm
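If the repository was already added earlier, refreshing the local chart index ensures we see the latest chart versions:
helm repo update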
We can list all available versions of the ioFog Helm chart using helm search -l iofog/iofog. From Helm 2.16 onwards, only charts with production versions are listed by default. To list all versions, including development versions, use helm search -l --devel iofog. To install a specific version of ioFog, pass the --version <desired-version> parameter to helm install.
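For convenience, the two search commands as they would be typed (output is omitted here, since the available chart versions change over time):
helm search -l iofog/iofog
helm search -l --devel iofog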
Keep in mind that if there is already an existing ioFog stack on the cluster, a set of Custom Resource Definitions has probably already been created. In that case, you will need to disable deployment of these CRDs as described in Multiple Edge Compute Networks.
The final helm install command to install ioFog with CRDs then looks like this:
helm install \
--set controlPlane.user.email=user@domain.com \
--set controlPlane.user.password=any123password345 \
--version 1.3.0 \
--namespace my-ecn \
--name my-ecn \
iofog/iofog
The --name my-ecn argument refers to the Helm release name as shown below, while the --namespace my-ecn argument refers to the namespace taken by the Helm release in the target Kubernetes cluster.
To list all Helm releases (including deployed ioFog stacks), we can simply run helm list. The result should look like this:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
my-ecn 1 Tue Oct 1 21:34:42 2019 DEPLOYED iofog-1.3.0 1.3.0 my-ecn
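We can also verify the workloads created in the release namespace directly with kubectl (a quick sanity check; the exact pod names and counts will vary with the configuration):
kubectl get pods -n my-ecn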
The following is a complete list of all user-configurable properties for the ioFog Helm chart. All of the properties are optional and have defaults. Use --set property.name=value in helm install to parametrize the Helm release.
Property | Default value | Description |
---|---|---|
createCustomResources | true | See Multiple Edge Compute Networks |
controlPlane.user.firstName | First | First name of initial user in Controller |
controlPlane.user.surname | Second | Surname of initial user in Controller |
controlPlane.user.email | user@domain.com | Email (login) of initial user in Controller |
controlPlane.user.password | H23fkidf9hoibf2nlk | Password of initial user in Controller |
controlPlane.controller.replicas | 1 | Number of replicas of Controller pods |
controlPlane.controller.image | iofog/controller:1.3.1 | Controller Docker image |
controlPlane.controller.imagePullPolicy | Always | Controller Docker image pull policy |
controlPlane.kubeletImage | iofog/iofog-kubelet:1.3.0 | Kubelet Docker image |
controlPlane.loadBalancerIp | | Pre-allocated static IP address for Controller |
controlPlane.serviceType | LoadBalancer | Service type for Controller (one of LoadBalancer , NodePort or ClusterIP ) |
connectors.image | iofog/connector:1.3.0 | Connector Docker image |
connectors.serviceType | LoadBalancer | Service type for Connector (one of LoadBalancer , NodePort or ClusterIP ) |
connectors.instanceNames | ["first","second"] | Array of Connector instance names |
operator.replicas | 1 | Number of replicas of Operator pods |
operator.image | iofog/iofog-operator:1.3.0 | Operator Docker image |
operator.imagePullPolicy | Always | Operator Docker image pull policy |
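For example, a hypothetical installation that overrides a few of these defaults (the particular values are purely illustrative) could look like this:
helm install \
--set controlPlane.user.email=user@domain.com \
--set controlPlane.user.password=any123password345 \
--set controlPlane.controller.replicas=2 \
--set connectors.serviceType=NodePort \
--version 1.3.0 \
--namespace my-ecn \
--name my-ecn \
iofog/iofog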
Once the installation is complete, you will be able to connect to the ioFog Controller on K8s using iofogctl.
iofogctl connect --kube ~/.kube/config --name k8s-ctrl --email user@domain.com --pass any123password345 -n my-ecn
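To confirm the connection succeeded, we can ask iofogctl for an overview of the ECN resources in our namespace (a quick check; the Agent and microservice listings will be empty until something is deployed):
iofogctl get all -n my-ecn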
Once you are connected, you can use iofogctl to deploy edge Agents. Then, you can use kubectl or iofogctl to deploy microservices to your edge Agents. See Setup Your Agents and Introduction to iofogctl for more details.
If we want to have multiple instances of ioFog on the same Kubernetes cluster, it is necessary to tell Helm not to install the custom resource definitions. This can be done by overriding the createCustomResources (default: true) variable.
helm install \
--set createCustomResources=false \
--set controlPlane.user.email=user@domain.com \
--set controlPlane.user.password=any123password345 \
--version 1.3.0 \
--namespace second-ecn \
--name second-ecn \
iofog/iofog
Only use this option when the ioFog custom resources already exist, either from another Helm installation or from a manual installation using iofogctl.
To check if the custom resources exist, run kubectl get crd | grep iofog. If the resources exist, we must use createCustomResources=false so that Helm does not try to create them again.
To uninstall the ioFog stack, simply delete the Helm release, where the release name is the --name argument used during installation.
helm delete --purge my-ecn
Note that due to Helm's handling of custom resource definitions, all such definitions are orphaned once created and are not removed with the release, so they need to be deleted manually.
kubectl get crds | grep iofog | awk '{print $1}' | xargs kubectl delete crds