Run minikube start (with the optional --vm-driver= flag if needed).
Make sure your ~/.kube/config points to your minikube Kubernetes cluster.
Run minikube tunnel to enable Kubernetes LoadBalancer services.
Then create the following platform.yaml:
---
apiVersion: iofog.org/v1
kind: ControlPlane
metadata:
  name: ecn
spec:
  iofogUser:
    name: Quick
    surname: Start
    email: user@domain.com
    password: q1u45ic9kst563art
  controllers:
  - name: minikube-controller
    kube:
      config: ~/.kube/config
---
apiVersion: iofog.org/v1
kind: Connector
metadata:
  name: minikube-connector
spec:
  kube:
    config: ~/.kube/config
Deploy the Control Plane with iofogctl deploy -f platform.yaml.
You can then check the status of the deployment with iofogctl get all or kubectl get all.
VAGRANT_BOX = 'ubuntu/bionic64'
VM_NAME = 'iofog-demo'
VM_USER = 'vagrant'
REG_USER = 'John'

Vagrant.configure("2") do |config|
  config.vm.box = VAGRANT_BOX
  config.vm.hostname = VM_NAME
  config.vm.provider "virtualbox" do |v|
    v.name = VM_NAME
    v.memory = 2048
  end
  config.vm.network "private_network", type: "dhcp"
  # Port forwarding for the Agent
  config.vm.network "forwarded_port", guest: 54321, host: 54321, autocorrect: true
  # For each microservice port that you want to access from your localhost,
  # you need to add a port forwarding rule.
  # E.g. the ioFog tutorial deploys a web UI microservice on port 10102:
  # config.vm.network "forwarded_port", guest: 10102, host: 10102, autocorrect: true
end
Save this file as Vagrantfile, then run vagrant up to start the VM.
Use vagrant ssh-config to find the private key file path:

%> vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
The private key file path is given by IdentityFile, here /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key
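If you want to script this step, the IdentityFile path can be pulled out of the vagrant ssh-config output with a one-liner. The sketch below runs against a captured sample of the output shown above, since the real command needs a running VM:

```shell
# Sample `vagrant ssh-config` output (captured above). In practice, replace
# the sample with the real command:
#   vagrant ssh-config | awk '/IdentityFile/ {print $2}'
sample='Host default
  HostName 127.0.0.1
  User vagrant
  IdentityFile /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key'

# Print the second field of the IdentityFile line.
key_file=$(printf '%s\n' "$sample" | awk '/IdentityFile/ {print $2}')
echo "$key_file"
```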
Run vagrant ssh -c ifconfig | grep inet to find the box's public IP:

~/Work/Edgeworx/iofogctl %> vagrant ssh -c ifconfig | grep inet
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::76:54ff:fe76:5875 prefixlen 64 scopeid 0x20<link>
inet 172.28.128.11 netmask 255.255.255.0 broadcast 172.28.128.255
inet6 fe80::a00:27ff:fe77:88e9 prefixlen 64 scopeid 0x20<link>
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
Connection to 127.0.0.1 closed.
Use 172.28.128.11, as 10.0.2.15 is VirtualBox's NAT interface and is not reachable from your host (as a general rule, the addresses 10.X.X.X are private).
You can now connect to the box with ssh vagrant@<IP> -i <private_key_path>, which in this specific case translates to ssh vagrant@172.28.128.11 -i /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key
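The same selection logic can be automated: keep the first inet address that is neither loopback (127.*) nor the NAT network (10.*). This is a sketch against a sample of the output above; in practice, pipe in the real vagrant ssh -c ifconfig output:

```shell
# Sample `ifconfig | grep inet` output from the box (captured above).
sample='inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet 172.28.128.11 netmask 255.255.255.0 broadcast 172.28.128.255
inet 127.0.0.1 netmask 255.0.0.0'

# Skip loopback (127.*) and the NAT interface (10.*); keep the first match.
box_ip=$(printf '%s\n' "$sample" | awk '$1 == "inet" && $2 !~ /^127\./ && $2 !~ /^10\./ {print $2; exit}')
echo "$box_ip"
```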
Create the following agent.yaml:
---
apiVersion: iofog.org/v1
kind: Agent
metadata:
  name: local-agent
spec:
  host: 172.28.128.11
  ssh:
    user: vagrant
    keyFile: /Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key
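If you are scripting the previous steps, the same agent.yaml can be generated from the discovered values. This is a sketch; BOX_IP and KEY_FILE are illustrative variable names holding the IP and key path found above:

```shell
# Values discovered in the previous steps (illustrative variables; substitute
# the IP and key path you found on your machine).
BOX_IP='172.28.128.11'
KEY_FILE='/Users/pixcell/Work/Edgeworx/iofogctl/.vagrant/machines/default/virtualbox/private_key'

# Write agent.yaml with the discovered host and key path substituted in.
cat > agent.yaml <<EOF
---
apiVersion: iofog.org/v1
kind: Agent
metadata:
  name: local-agent
spec:
  host: ${BOX_IP}
  ssh:
    user: vagrant
    keyFile: ${KEY_FILE}
EOF

grep 'host:' agent.yaml
```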
Deploy the Agent with iofogctl deploy -f agent.yaml.
Congratulations, you are all set to deploy applications on your local minikube and Vagrant setup! Keep in mind that, as far as ioFog and iofogctl are concerned, there is absolutely no difference between this local setup and an actual production setup with a cloud-based Kubernetes cluster and an Agent running on a remote device.