Quick Start On Minikube and Vagrant

Prerequisites

You will need iofogctl, minikube, Vagrant, and VirtualBox installed on your local machine.

Setting up minikube

Start minikube. Minikube will set up ~/.kube/config to point to your minikube cluster.

minikube start
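
The default VM size is usually sufficient, but you can size it explicitly; --cpus and --memory are standard minikube flags, and the values below are only illustrative. You can also confirm that kubectl now points at minikube:

minikube start --cpus 2 --memory 4096
kubectl config current-context   # should print: minikube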

Run the tunnel command in the background so that Kubernetes Services of type LoadBalancer can be assigned an external IP.

minikube tunnel &
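
The tunnel creates a network route from your host into the cluster. Once the ControlPlane below is deployed, you can confirm that the address assignment worked using standard kubectl:

kubectl get services --all-namespaces
# LoadBalancer services should show an address under EXTERNAL-IP instead of <pending>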

Deploying the ioFog ControlPlane

Deploy a KubernetesControlPlane that uses the kubeconfig file generated by minikube:

echo "---
apiVersion: iofog.org/v2
kind: KubernetesControlPlane
metadata:
  name: ecn
spec:
  config: ~/.kube/config
  iofogUser:
    name: Quick
    surname: Start
    email: user@domain.com
    password: q1u45ic9kst563art" > /tmp/platform.yaml
iofogctl deploy -f /tmp/platform.yaml

Resources are visible using iofogctl or kubectl:

iofogctl get all
kubectl get all

ioFog resources will be created in the same Kubernetes namespace as the one used by iofogctl.
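
For example, assuming your iofogctl version supports the global -n/--namespace flag, you can deploy into and inspect a namespace of your choosing (the name below is arbitrary):

iofogctl deploy -f /tmp/platform.yaml -n iofog-demo
iofogctl get all -n iofog-demo
kubectl get all -n iofog-demo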

Create a Vagrant VM to host an ioFog Agent

Here is an example of a minimal Ubuntu VM Vagrantfile:

VAGRANT_BOX = 'ubuntu/bionic64'
VM_NAME = 'iofog-demo'

Vagrant.configure("2") do |config|
  config.vm.box = VAGRANT_BOX
  config.vm.hostname = VM_NAME
  config.vm.provider "virtualbox" do |v|
    v.name = VM_NAME
    v.memory = 2048
  end
  # Host-only network so the VM is reachable from your machine
  config.vm.network "private_network", type: "dhcp"
  # Port forwarding for the ioFog Agent
  config.vm.network "forwarded_port", guest: 54321, host: 54321, auto_correct: true
  # Add a port forwarding rule for each microservice port you want to access
  # from your localhost, e.g. the ioFog tutorial deploys a web UI microservice
  # on port 10102:
  # config.vm.network "forwarded_port", guest: 10102, host: 10102, auto_correct: true
end

In the folder containing the Vagrantfile, start the VM:

vagrant up
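
You can check the state of the VM at any time with the standard Vagrant status command:

vagrant status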

Run vagrant ssh-config to find the private key file path.

 %> vagrant ssh-config

Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL

In this case, the private key is located at /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key.

Run vagrant ssh -c ifconfig | grep inet to find the IP address of the VM that is reachable from your host.

~/Work/Edgeworx/iofogctl %> vagrant ssh -c ifconfig | grep inet
inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
inet6 fe80::76:54ff:fe76:5875  prefixlen 64  scopeid 0x20<link>
inet 172.28.128.11  netmask 255.255.255.0  broadcast 172.28.128.255
inet6 fe80::a00:27ff:fe77:88e9  prefixlen 64  scopeid 0x20<link>
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
Connection to 127.0.0.1 closed.

In this case, the address to use is 172.28.128.11, which is assigned to the private_network interface and is reachable from your host. 10.0.2.15 is the VirtualBox NAT address and is not reachable from the host (as a general rule, 10.X.X.X addresses are private).

You can verify this by running ssh vagrant@<IP> -i <private_key_path>, which in this specific case translates to:

ssh vagrant@172.28.128.11 -i /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key

Deploy the Agent to the Vagrant instance:

echo "---
apiVersion: iofog.org/v2
kind: Agent
metadata:
  name: vagrant
spec:
  host: 172.28.128.11
  ssh:
    user: vagrant
    keyFile: /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key" > /tmp/agent.yaml
iofogctl deploy -f /tmp/agent.yaml -v
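
Once the deployment completes, you can verify that the Agent has registered with the ControlPlane; get and describe are standard iofogctl commands, and vagrant is the Agent name from the YAML above:

iofogctl get agents
iofogctl describe agent vagrant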

Congratulations, you are all set to deploy applications on your local minikube and Vagrant setup! Keep in mind that, as far as ioFog and iofogctl are concerned, there is no difference between this local setup and an actual production setup with a cloud-based Kubernetes cluster and an Agent running on a remote device.
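
As a next step, here is a minimal sketch of what an application deployment could look like, using the Application kind from the iofog.org/v2 API. The application name, microservice name, and image below are placeholders, not part of this guide:

echo "---
apiVersion: iofog.org/v2
kind: Application
metadata:
  name: demo-app
spec:
  microservices:
    - name: demo-msvc
      agent:
        name: vagrant # the Agent deployed above
      images:
        x86: edgeworx/healthcare-heart-rate:x86-v1 # placeholder image from the ioFog examples
      container:
        rootHostAccess: false
        ports: []" > /tmp/app.yaml
iofogctl deploy -f /tmp/app.yaml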