Kubernetes - Deploy Control Plane Using iofogctl

Every Edge Compute Network ('ECN') starts with a Control Plane that allows us to manage our ECN's resources.

In this guide, our Control Plane will deploy a single Controller instance.

Deploy a Control Plane on Kubernetes

Create a controlplane.yaml template like so:

echo "---
apiVersion: iofog.org/v3
kind: KubernetesControlPlane
metadata:
  name: alpaca-1
spec:
  iofogUser:
    name: Foo
    surname: Bar
    email: user@domain.com
    password: iht234g9afhe
  config: ~/.kube/config
  replicas:
    controller: 1
    nats: 2
  # database:
  #  provider: mysql/postgres
  #  user: 
  #  host: 
  #  port: 
  #  password: 
  #  databaseName: pot
  #  ssl: true/false 
  #  ca: base64 encoded string
  auth:
    url: https://example.com/
    realm: realm-name
    realmKey:
    ssl: external
    controllerClient: pot-controller
    controllerSecret: 
    viewerClient: ecn-viewer
  nats:
    enabled: true
    jetStream:
      storageSize: "10Gi"  # PVC and max_file_store
      memoryStoreSize: "1Gi"  # max_memory_store
      # storageClassName: ""
  images:
    # pullSecret: pull-secret
    operator: ghcr.io/eclipse-iofog/operator:3.7.2
    controller: ghcr.io/eclipse-iofog/controller:3.7.3
    router: ghcr.io/eclipse-iofog/router:3.7.0
    # nats: ghcr.io/eclipse-iofog/nats:2.12.4   # when NATS is enabled (spec.nats.enabled)
  services:
    controller:
      type: LoadBalancer/ClusterIP
      # annotations:
      #  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # externalTrafficPolicy:
    router:
      type: LoadBalancer/ClusterIP
      # annotations:
      #  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # externalTrafficPolicy:
    nats: # for NATS cluster, leaf, and MQTT ports
      type: LoadBalancer/ClusterIP
      # annotations:
      #  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # externalTrafficPolicy:
    natsServer: # for core NATS server and monitoring ports
      type: LoadBalancer/ClusterIP
      # annotations:
      #  service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # externalTrafficPolicy:
  # controller:
  #  ecnViewerUrl: https://
  #  https: true
  #  secretName:
  #  logLevel: info
  # ingresses:
  #  controller:
  #    annotations:
  #      # cert-manager.io/cluster-issuer: letsencrypt
  #      # nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
  #      # nginx.ingress.kubernetes.io/backend-protocol: "https"
  #    ingressClassName: nginx
  #    host: 
  #    secretName:
  #  router:
  #    address: 
  #    messagePort: 5671
  #    interiorPort: 55671
  #    edgePort: 45671" > /tmp/controlplane.yaml
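If the template contains characters the shell would otherwise expand (for example a $ in the password), a quoted heredoc is a safer way to write the file than echo. A minimal sketch covering only the required fields, using the same illustrative placeholder values as above:

```shell
# Write a minimal controlplane.yaml via a quoted heredoc ('EOF' disables
# shell expansion inside the body, so passwords are written verbatim).
cat <<'EOF' > /tmp/controlplane.yaml
---
apiVersion: iofog.org/v3
kind: KubernetesControlPlane
metadata:
  name: alpaca-1
spec:
  iofogUser:
    name: Foo
    surname: Bar
    email: user@domain.com
    password: iht234g9afhe
  config: ~/.kube/config
  replicas:
    controller: 1
EOF

# Quick sanity check that the file was written with the expected kind
grep -q 'kind: KubernetesControlPlane' /tmp/controlplane.yaml && echo "template OK"
```

The full template above can of course be pasted into the heredoc body in the same way.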

Make sure to specify the correct value for the config field. Note that iofogctl deploys to the Kubernetes namespace it is configured to use, either per command through the -n flag or persistently via iofogctl configure current-namespace .... In these examples we set neither, so the Control Plane ends up in the default namespace on the cluster. It is therefore recommended to use a dedicated namespace instead.
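Before deploying, it is worth checking that the path given in the config field actually exists on the machine running iofogctl. A minimal sketch, assuming the template's default path of ~/.kube/config:

```shell
# Verify the kubeconfig referenced by spec.config exists locally.
# Adjust KUBECONFIG_PATH if your template points elsewhere.
KUBECONFIG_PATH="$HOME/.kube/config"
if [ -f "$KUBECONFIG_PATH" ]; then
  echo "kubeconfig found at $KUBECONFIG_PATH"
else
  echo "no kubeconfig at $KUBECONFIG_PATH; fix spec.config before deploying"
fi
```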

Once we have edited the fields to our liking, we can go ahead and run:

iofogctl deploy -f /tmp/controlplane.yaml

Naturally, we can also use kubectl to see what is happening on the Kubernetes cluster.

kubectl get all

The next section covers how to do the same thing we just did, but on a remote host instead of a Kubernetes cluster. Since we have already deployed to Kubernetes, we can skip ahead.

Verify the Deployment

We can use the following commands to verify the Control Plane is up and running:

iofogctl get controllers
iofogctl describe controller alpaca-1
iofogctl describe controlplane