Spring Boot + Vagrant Setup in Windows + Docker, KubeAdm and Helm Installation Steps

In this blog we will learn more about containerization. With the help of a working example we will learn how to set up Vagrant-managed Ubuntu VMs on Windows, and how to create a Docker image and deploy and run it locally. The main concern is that Docker Desktop is not open source on Windows. Hence, we configure a VirtualBox VM running Ubuntu, in which we can install Docker, kubeadm, and Helm as open-source tools. We will also walk step by step through installing kubeadm and Helm v3 into the Ubuntu VM.

  • Please find below the updated steps and the Vagrantfile for the local setup. Note that some commands may need to be prefixed with sudo.

Step 1: Download/Install Vagrant on your local machine (Windows)

First of all we need to download and install Vagrant on Windows. You can download the Vagrant installer from the following link.

Downloads | Vagrant by HashiCorp (vagrantup.com)


Step 2: Creating the VM setup

  • Right-click the Windows Start button and choose Apps and Features.
  • Click the Programs and Features link
  • Click Turn Windows feature on or off
  • Un-check the Hyper-V and Containers check boxes and click OK
  • Download VirtualBox from the VirtualBox website (e.g. VirtualBox-6.1.36-152435-Win.exe) and install it on Windows.

Step 3: Create a Vagrant file (Vagrantfile)

We need to create a Vagrantfile that installs the Ubuntu (ubuntu/xenial64) box in VirtualBox. It also maps guest ports to local host ports, e.g. guest port 80 to host port 8880:

# -*- mode: ruby -*-
# vi: set ft=ruby :

machines = {
  'vagrant' => {
    'box' => 'ubuntu/xenial64',
    # VM => HOST
    'local_ports' => {
      # Web
      '80' => '8880',
    },
  },
}

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  machines.each do |vm, specs|
    config.vm.define vm do |machine|

      # All Vagrant configuration is done here. The most common configuration
      # options are documented and commented below. For a complete reference,
      # please see the online documentation at vagrantup.com.

      # Every Vagrant virtual environment requires a box to build off of.
      # machine.vm.box = "ubuntu/trusty64"
      machine.vm.box = specs['box']
      config.vm.boot_timeout = 800

      # Disable automatic box update checking. If you disable this, then
      # boxes will only be checked for updates when the user runs
      # `vagrant box outdated`. This is not recommended.
      # machine.vm.box_check_update = false

      # Create a forwarded port mapping which allows access to a specific port
      # within the machine from a port on the host machine. In the example below,
      # accessing "localhost:8080" will access port 80 on the guest machine.
      # machine.vm.network "forwarded_port", guest: 80, host: 8080

      specs['local_ports'].each do |guest_port, host_port|
        machine.vm.network "forwarded_port", guest: guest_port, host: host_port
      end

      # Create a private network, which allows host-only access to the machine
      # using a specific IP.
      # machine.vm.network "private_network", ip: "192.168.33.10"

      # Create a public network, which generally matches bridged networking.
      # Bridged networks make the machine appear as another physical device on
      # your network.
      # machine.vm.network "public_network"

      # If true, then any SSH connections made will enable agent forwarding.
      # Default value: false
      # config.ssh.forward_agent = true

      # Share an additional folder to the guest VM. The first argument is
      # the path on the host to the actual folder. The second argument is
      # the path on the guest to mount the folder. And the optional third
      # argument is a set of non-required options.
      # machine.vm.synced_folder "../data", "/vagrant_data"
      machine.vm.synced_folder ".", "/vagrant", :mount_options => ["dmode=777","fmode=644"]

      # Provider-specific configuration so you can fine-tune various
      # backing providers for Vagrant. These expose provider-specific options.
      # Example for VirtualBox:
      #
      # machine.vm.provider "virtualbox" do |vb|
      #   # Don't boot with headless mode
      #   vb.gui = true
      #
      #   # Use VBoxManage to customize the VM. For example to change memory:
      #   vb.customize ["modifyvm", :id, "--memory", "1024"]
      # end
      #
      # View the documentation for the provider you're using for more
      # information on available options.
      machine.vm.provider :virtualbox do |vb|
        vb.memory = 12192
        vb.cpus = 2
      end

      # Provisioning with CFEngine, Puppet, Chef, shell scripts, etc. can also
      # be enabled here; see the Vagrant provisioning documentation for the
      # available options.
      #machine.vm.provision :shell,
      #  path: "vagrant-libs/bootstrap.sh"
    end
  end
end
  • Copy the Vagrantfile into a directory and bring the VM up using the 'vagrant up' command:

vagrant up

Step 4: How to increase vm.boot_timeout in the Vagrantfile

Sometimes the VM takes a while to come up, so we can increase the timeout by overriding the boot_timeout parameter in the Vagrantfile. E.g.

config.vm.boot_timeout=800

Step 5: User Creation and Providing Permissions

To create the user, first we need to log in to the Vagrant VM using vagrant ssh (e.g. from Git Bash):

vagrant ssh

Create a user (thebasictechinfo)

vagrant@ubuntu-xenial:~$ sudo su
root@ubuntu-xenial:/home/vagrant# useradd thebasictechinfo
root@ubuntu-xenial:/home/vagrant# passwd thebasictechinfo
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@ubuntu-xenial:/home/vagrant#

Providing Permissions to the new user (thebasictechinfo)
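
The original steps do not list the exact permission commands; a minimal sketch, assuming the new user should get sudo access (run as root, as above):

usermod -aG sudo thebasictechinfo
# After Docker is installed (Step 7), also allow the user to run docker:
# usermod -aG docker thebasictechinfo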

Clone your project into the Vagrant shared directory and build it

You can do this from Windows itself using git clone and a Maven build. Because the directory containing the Vagrantfile is synced into the VM, the project and its build output will be available under the /vagrant shared directory inside the VirtualBox VM, and you can then create a Docker image for it.
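
A minimal sketch, assuming a Maven-based Spring Boot project (the repository URL is a placeholder), run from the directory containing the Vagrantfile on Windows:

git clone https://github.com/<your-account>/hello-world.git
cd hello-world
mvn clean package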

Step 6: Create a SecureCRT session with IP 127.0.0.1 and port 2222

To connect to the Ubuntu VM we need to create a SecureCRT session. Log in using the username and password we created in the step above.
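
Any SSH client will work here; for example, from Git Bash on the host (Vagrant forwards guest port 22 to host port 2222 by default):

ssh -p 2222 thebasictechinfo@127.0.0.1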

Step 7: Docker Installation Steps in Ubuntu Virtual Box

We need to execute the following commands to install Docker in the Ubuntu VirtualBox VM.

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  OK

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

$ sudo apt-get update
  Reading package lists... Done

$ apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 5:20.10.7~3-0~ubuntu-xenial
  Version table:
     5:20.10.7~3-0~ubuntu-xenial 500
        500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
     5:20.10.6~3-0~ubuntu-xenial 500
        500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages     
     17.03.0~ce-0~ubuntu-xenial 500
        500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages

$ sudo apt-get install -y docker-ce
  Reading package lists... Done
  Building dependency tree       
  Reading state information... Done

$ sudo systemctl status docker
 Jun 13 07:26:53 ubuntu-xenial systemd[1]: Started Docker Application Container Engine.
 Jun 13 07:26:53 ubuntu-xenial dockerd[14682]: time="2022-06-13T07:26:53.174713066Z" level=info msg="API listen on /var/run/docker.sock"

Step 8: Docker Restart Steps

sudo systemctl enable docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker.service

Step 9: Validate Docker Login

sudo groupadd docker
sudo usermod -aG docker $USER
# Note: mode 777 is very permissive; after the usermod above, logging out and back in is usually enough
sudo chmod 777 /var/run/docker.sock
docker login -u thebasictechinfo@gmail.com -p thebasictechinfo

# Now we can build a docker image and push it to Docker Hub or a Git artifactory using the following simple commands
docker build -t hello-world:1.0.0 . --no-cache

# First you need to tag your image, then you can push it to your library or repository
docker tag <hello-world>:latest <URL>/<hello-world:1.0.0>
docker push <URL>/<hello-world:1.0.0>
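
The docker build above assumes a Dockerfile in the project root. A minimal sketch for a Spring Boot fat jar, written in the same heredoc style used later in this post (the jar name and base image are assumptions):

cat <<EOF > Dockerfile
FROM openjdk:11-jre-slim
COPY target/hello-world-1.0.0.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF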

Step 10: Docker Logout

We can use following command for docker logout.

docker logout

Step 11: Configure docker for kubeadm

We have to make some configuration changes to Docker for it to work with Kubernetes, or the kubeadm pre-flight checks will fail.

# Configure docker to use overlay2 storage and systemd
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m"},
    "storage-driver": "overlay2"
}
EOF

# Restart docker to load new configuration
sudo systemctl restart docker

# Add docker to start up programs
sudo systemctl enable docker

# Allow current user access to docker command line
sudo usermod -aG docker $USER

# Download cri-dockerd (this is the focal build; pick the .deb matching your Ubuntu release if one is available)
curl -sSLo cri-dockerd_0.2.3.3-0.ubuntu-focal_amd64.deb \
  https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.3/cri-dockerd_0.2.3.3-0.ubuntu-focal_amd64.deb

# Install cri-dockerd for Kubernetes 1.24 support
sudo dpkg -i cri-dockerd_0.2.3.3-0.ubuntu-focal_amd64.deb
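
To verify that cri-dockerd is running (the cri-docker service/socket unit names are what the .deb installs; worth confirming with systemctl list-unit-files if they differ):

sudo systemctl enable cri-docker.service cri-docker.socket
sudo systemctl status cri-docker.service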

Step 12: Install kubeadm, kubelet & kubectl

You need to ensure the versions of kubeadm, kubelet and kubectl are compatible.

# Add Kubernetes GPG key
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg \
  https://packages.cloud.google.com/apt/doc/apt-key.gpg

# Add Kubernetes apt repository
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Fetch package list
sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

# Prevent them from being updated automatically
sudo apt-mark hold kubelet kubeadm kubectl
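
To confirm the installed versions are consistent:

kubeadm version
kubectl version --client
kubelet --version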

Step 13: Ensure swap is disabled and initialize the cluster
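
First, the action in the step title: turn swap off now and keep it off across reboots (the /etc/fstab edit below is a common idiom, shown here as a sketch):

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab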

1: The kubeadm init command can fail with a containerd CRI error. Run the commands below (as root) to regenerate the default containerd configuration:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
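
With containerd fixed, the control plane can be initialized. A minimal sketch (the --pod-network-cidr value is an example; the CRI socket assumes the cri-dockerd install from Step 11):

sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr=10.244.0.0/16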

2: We need to remove the taint from the master node so that pods can be scheduled on it (kubeadm taints the control-plane node by default).

Check the taints on the master node (requires jq):

kubectl get nodes -o json | jq '.items[].spec'

Remove the taint:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Step 14: Configure kubectl

To be able to access the cluster we have to configure kubectl.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
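
To verify that kubectl can reach the cluster:

kubectl cluster-info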

Step 15: Install Helm v3 into the Ubuntu VM

To install our packages we will use Helm, so next we install Helm v3:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
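
To verify the installation:

helm version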

Step 16: kubectl create namespace

To create a namespace we can use the kubectl create namespace <namespace name> command. E.g.

kubectl create namespace localdev
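
To confirm the namespace exists:

kubectl get namespaces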

Step 17: Validate Helm charts and Helm templates

We can validate Helm charts and render their templates using the helm lint and helm template commands.

helm lint <hello-world>
helm template <hello-world>

Step 18: helm list -n localdev

We can list all the Helm releases in a specific namespace, e.g. localdev:
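
Using the namespace created in Step 16:

helm list -n localdev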

Step 19: helm install

We can install a Helm chart using the helm install command. The --dry-run flag will not deploy the package; it renders the charts and templates so that we can validate them first. E.g.

helm install hello-world <chart folder name> -n localdev --atomic --dry-run

helm install hello-world <chart folder name> -n localdev

Step 20: helm uninstall

We can uninstall a package using the helm uninstall command. E.g.

helm uninstall hello-world -n localdev

Step 21: kubectl get nodes -o wide

We can use the following kubectl command to list all the nodes in the cluster with extra detail. Note that nodes are cluster-scoped, so a namespace flag is not needed.

kubectl get nodes -o wide

Step 22: kubectl get all -n localdev

This kubectl command displays all the common resources (pods, services, deployments, replica sets, etc.) in a particular namespace.

Step 23: helm history and helm rollback

The helm history command lists all the revisions of your microservice's release, so that it can be rolled back to an older revision at any time.
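
Using the release name from the install step:

helm history hello-world -n localdev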

If a rollback is required:
helm rollback hello-world <revision no> -n localdev
Note: the revision number should be the one you want to roll back to, e.g. if revision 2 failed, then revision 1 (which was running fine earlier) is the target.

E.g.
helm rollback hello-world 1 -n localdev

You have successfully created a single-node Kubernetes cluster using kubeadm on Ubuntu 16.04, and the cluster has everything you need to install your application. Let me know if you run into any issues.

Thank you for your time. Keep learning 🙂