This post outlines some of my experiences with setting up Docker Swarm on Azure. It targets people who have some Docker experience, and who may already have deployed a Swarm cluster on-premises or on other cloud infrastructure, but haven’t looked at deploying it on Azure yet.

There are many ways to get Docker up and running on Azure, and choosing the right one isn’t always straightforward.

For example :

  • Option 1 : You can spin up some VMs and install Docker yourself
  • Option 2 : You can go to the Microsoft Marketplace and use a Docker CE Template
  • Option 3 : You can use the Azure Container Service, and use Swarm as your orchestrator
  • Option 4 : You can use the Docker Azure template provided by Docker

We’ll go over the different options, and explain why we decided to use the Azure template provided by Docker to set up our Swarm cluster on Azure.


Having the ability to run a Kernel-based Virtual Machine (KVM) on a hosted solution is a luxury you won’t easily find at most cloud providers (Amazon, Google, Azure). At my hosting provider, I can get a quad-core i7 server with 32 GB of RAM for less than 70 euros / month, a fraction of what you would pay at a cloud provider.

I thought it might be interesting to see what kind of virtualization options I had on CentOS, and stumbled upon KVM, libvirt and QEMU.
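As a first step, a quick way to see whether a CentOS box can use KVM at all is to check for the CPU virtualization flags. A minimal sketch (the package names in the comments are the usual CentOS ones, shown here but not run):

```shell
# Count logical CPUs exposing hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V); a count of 0 means KVM cannot use
# hardware acceleration and QEMU would fall back to slow pure emulation.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    msg="hardware virtualization available on $count logical CPUs"
else
    msg="no hardware virtualization flags found"
fi
echo "$msg"

# Installing the KVM/libvirt/QEMU stack on CentOS (not run here):
#   sudo yum install -y qemu-kvm libvirt virt-install
#   sudo systemctl enable --now libvirtd
```

If the count comes back as zero on a VM, remember that the hypervisor has to expose nested virtualization for the flags to show up inside the guest.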


In this post we’ll be looking at several ways to use NiFi to interact with HTTP resources, both from a client and from a server perspective.

NiFi comes with a set of core processors that let you interact with filesystems, MQTT brokers, Hadoop filesystems, Kafka, and more. It also comes bundled with a set of HTTP processors that you can use to either expose or consume HTTP-based resources.

We’ll be looking at the following processors that ship with NiFi:

For each processor we’re going to take a closer look at:

  • Flow definition (what a typical NiFi flow looks like with this processor)
  • Processor configuration (how the processor can be configured)
  • Scheduling (how the processor is scheduled)
  • Data flow (how data moves through the flow)
  • Thoughts and use-cases


This is a small tip on getting your networking service up and running in CentOS.

If you’ve ever wondered why you’re not getting an IP address on your CentOS VM in VirtualBox, even with the Bridged network adapter, then look no further.

Notice how you’re not getting an IP on the main enp0s3 interface.

[root@localhost ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:db:6f:94 brd ff:ff:ff:ff:ff:ff
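The usual culprit on a fresh CentOS minimal install is that the interface’s config file ships with ONBOOT=no, so the NIC is never brought up at boot and never requests a DHCP lease. A minimal sketch of the fix, demonstrated on a scratch copy of the file (on the VM itself you would edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 directly):

```shell
# Demonstrate the ONBOOT fix on a scratch copy; the real file lives
# at /etc/sysconfig/network-scripts/ifcfg-enp0s3 on the VM.
cfg=$(mktemp)
printf 'DEVICE=enp0s3\nBOOTPROTO=dhcp\nONBOOT=no\n' > "$cfg"

# Flip ONBOOT to yes so the interface is brought up at boot
# and requests a DHCP lease.
sed -i 's/^ONBOOT=no/ONBOOT=yes/' "$cfg"
result=$(grep '^ONBOOT' "$cfg")
echo "$result"

# On the real VM, apply the change with (not run here):
#   systemctl restart network     # or: ifup enp0s3
rm -f "$cfg"
```

After restarting the network service, `ip a s` should show an inet address on enp0s3.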


I’m sure most of you have experienced this scenario : a server is put online, and although you’ve secured it properly, you still see people trying to brute-force their way in by attempting to log in via SSH.

sshd[25808]: input_userauth_request: invalid user ubnt [preauth]
sshd[25808]: Received disconnect from 11:  [preauth]
sshd[25810]: Invalid user test from
sshd[25810]: input_userauth_request: invalid user test [preauth]
sshd[25810]: Received disconnect from 11:  [preauth]
sshd[25812]: Invalid user tech from
sshd[25812]: input_userauth_request: invalid user tech [preauth]
sshd[25812]: Received disconnect from 11:  [preauth]
sshd[25814]: Received disconnect from 11:  [preauth]

Although you’ve set up your server to only allow SSH key-based authentication (so nobody can log in with a password), people will still try to find their way in. You can dramatically reduce the number of these attacks by switching your SSH daemon to a non-standard port.

In this post, I’ll show you how to change that port.
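As a preview, the change itself is a one-line edit to the Port directive in /etc/ssh/sshd_config. A minimal sketch, demonstrated on a scratch copy of the file (2222 is an arbitrary example port; on CentOS you also have to tell SELinux and the firewall about the new port before restarting sshd, as shown in the comments):

```shell
# Demonstrate the Port change on a scratch copy of sshd_config;
# on a real server you would edit /etc/ssh/sshd_config directly.
conf=$(mktemp)
printf '#Port 22\nPermitRootLogin no\nPasswordAuthentication no\n' > "$conf"

# Uncomment the Port directive and switch it to 2222 (pick any
# unused port above 1024 on your own server).
sed -E -i 's/^#?Port .*/Port 2222/' "$conf"
result=$(grep '^Port' "$conf")
echo "$result"

# On a real CentOS box, the remaining steps would be (not run here):
#   semanage port -a -t ssh_port_t -p tcp 2222                    # allow the port in SELinux
#   firewall-cmd --permanent --add-port=2222/tcp && firewall-cmd --reload
#   systemctl restart sshd
rm -f "$conf"
```

Keep an existing SSH session open while you test the new port, so a typo in the config doesn’t lock you out.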