
Build your own cloud and deploy a PHP application

The Problems

Every PHP developer eventually struggles with this. You are writing an application, you test it locally, making things work the way you want, but then you have to run it on some test server.
If the test server is already up and running, it may just be a case of uploading your code and it works, but that is rarely the case. Often there are major differences between your local setup and the test server, configuration variables among them. The problem gets even bigger when multiple people work on the same application and everyone wants to test their changes. Then you need multiple instances of your database, multiple host names and more.

Let's say you manage to get past all that, and eventually your application goes to production. Then you have to deal with security and with upgrading the application with minimal (preferably no) downtime. You actually have to prepare for real traffic, SSL certificates, available hardware, scalability, replication, and much more; mostly things a developer does not want to deal with.
If you are lucky, there is a system admin who will deal with most of the server-related stuff, but they still don't do all of it.

If you are really lucky, you have a DevOps engineer, a platform engineer or an SRE team that deals with all of that. But if that is the case, this article may not be for you.

Steep learning curve

As a possible solution you can set up your application to work with Docker and Docker Compose, and it is not a bad one. Docker (or any container runtime, for that matter) addresses many of these problems, but Docker is so powerful that it can be a pain to learn. And even if you know most of the Docker basics, it doesn't actually solve any of the problems on its own. There is still a fair whack of setting up and managing of the Docker platform. You can easily rent a Docker platform that lets you deploy your containers anywhere, but you still have to build them, maintain them, and more. And there is still all the operations stuff on top: DNS, security, routing, networking, SSL, etc.
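
To make that concrete, here is a minimal sketch of what "setting up with Docker Compose" typically looks like for a PHP app. The image tag and port mapping are assumptions for illustration, not part of this tutorial:

$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    # php:7.2-apache is an assumed image tag; pick whatever fits your app
    image: php:7.2-apache
    ports:
      - "8080:80"          # host port 8080 -> Apache in the container
    volumes:
      - ./:/var/www/html   # mount your code into Apache's docroot
EOF
$ docker-compose up -d

And that is just one service on one machine; everything around it (DNS, SSL, scaling) is still on you.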

Now one might say: let's use Kubernetes, a container platform built on top of Docker (though it can also work with other container runtimes, such as CoreOS's rkt). Kubernetes is a step closer to what we want, but it has an even steeper learning curve. You can read up on it, but that is not what this article is about.

The main topic of this article is Openshift.

Openshift and OKD

Openshift is a platform built by Red Hat (the Linux company behind RHEL, CentOS and Fedora, which also sponsors many more projects).
Openshift is sold by Red Hat as a PaaS, a Platform as a Service. If PaaS is new to you, especially if you are not one of the lucky ones described above, don't worry.
You may have heard of SaaS, Software as a Service. SaaS is like any application you write, but instead of selling the application, you host it and rent out an instance, usually on a subscription or monthly invoice.
PaaS is very similar, but instead of renting software, you rent a platform. "Platform" is a widely used term, but in this case it means a single unified layer that runs just about any software you drop on top of it.

Besides SaaS and PaaS, there is also IaaS: Infrastructure as a Service. Think of AWS, DigitalOcean, Azure, etc. Usually when you rent a VPS somewhere, you are renting it on top of IaaS.

Now you may think: renting Openshift from Red Hat sounds expensive, a big buy-in that may not turn out profitable for a while. You may be right, but plenty of organisations rent from them, just as they buy RHEL (Red Hat Enterprise Linux) subscriptions for the operating systems on their servers.

In any case, for my side projects I do not have money to throw away either. Luckily for us poor developers, there is OKD (formerly called Openshift Origin). OKD is the upstream community edition of Openshift, like Fedora is for RHEL/CentOS. Everything that works on Openshift also works on OKD; you just miss out on a ton of support and training.
In fact, the latest stable version of Openshift is (at the time of writing) 3.9, while OKD's latest version is 3.11.
Most of the problems described above can be solved with OKD.

Note: During the rest of this article I refer a lot to Openshift, because that is the actual platform and it is referenced everywhere on the internet, even in the official OKD documentation.

But enough talking/reading. Let's get into some action and see what it is all about.

Requirements

This article assumes you have a few things installed. I recommend a VM (Virtual Machine) with the CentOS 7.4+ Desktop version (the latest version as of now is 7.6). All of this may equally work in WSL (the Windows Subsystem for Linux on Windows 10), on macOS, or on just about any Linux desktop.
I will continue this article assuming you have a CentOS 7 VM; if not, you will have to figure out the differences on your own, but it shouldn't be that hard.

On the VM, run the following to install Docker and oc (the Openshift client):

$ sudo yum -y update
$ sudo yum -y install docker-1.13.1
# Download the Openshift client tools and unpack them
$ wget -qO- https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz | tar xzv
$ cd openshift-origin-client-*
$ sudo cp kubectl /usr/bin/ && sudo cp oc /usr/bin/
# Start Docker now and on every boot
$ sudo systemctl start docker && sudo systemctl enable docker
# Allow Docker to talk to the insecure internal registry that `oc cluster up` creates
$ echo '{"insecure-registries": ["172.30.0.0/16"]}' | sudo tee /etc/docker/daemon.json > /dev/null && sudo systemctl restart docker

You should now have access to the oc command.
Let's give it a go:

$ oc
OpenShift Client

This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible
platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand.
...
Other clients for Openshift can be downloaded from https://github.com/openshift/origin/releases

We are now ready to get going.

Our first cluster

A cluster? What? Sounds complicated. Maybe not my cup of tea.
You are not wrong; it wasn't mine either. Luckily, if you just want to try it out, they made it simple enough.

To set up a simple dev cluster with OKD, you need to do all of the following. Get ready, here it comes:

$ oc cluster up --public-hostname=thephpcommunity.local

Phew, that was a lot of hard work. After one command and a bit of waiting, we have our own cluster up and running.
Yes, they made it that easy.
Now there is a bunch of flags you can use to configure it a bit more, but we'll just stick to the default configuration with a custom hostname (we need this since we use a VM).
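
If you are curious, you can list the flags yourself. The second command below is only a sketch: I believe this release has a --routing-suffix flag to set the default subdomain for routes, but double-check the help output before relying on it.

$ oc cluster up --help
# Hedged example: also set the default subdomain used for routes
$ oc cluster up --public-hostname=thephpcommunity.local --routing-suffix=apps.thephpcommunity.local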

Now, to verify that it all worked (and to give you a sneak peek at the GUI): since the easy local cluster is meant for development, the console IP is hardcoded to 127.0.0.1, and for this article it is too long-winded to fix. In the VM, open a browser and go to https://127.0.0.1:8443/. You can log in with the username developer and any password.

If you want to work from your host instead, look up how to do a proper (production-ready) install.
There is a great tutorial by Grant Shipley, one of the Openshift / Red Hat developers.
It includes Let's Encrypt signed SSL and everything, and it only takes about 30-45 minutes.

For the next few sections we will be using the command line; we will cover the GUI/console in a follow-up article.

Hello World

It begins where every tutorial ever starts; we all know this one, so let's create a hello world application.

$ mkdir phphelloworld && cd phphelloworld

This is our little app folder.
Let's get set up in oc:

$ oc login -u developer -p 123
$ oc new-project sandbox

Let's write a massively overcomplicated app:

$ echo '<?php echo "Hello PHP Community!"; ' > index.php

Next, we have to actually create the app in the cluster:

$ oc new-app . --name=phphello

Wow, a whole bunch of output. What happened is that, because there was no Openshift template or Dockerfile in our directory, Openshift recognised it as source code. Openshift comes with an intelligent feature called Source-to-Image (or S2I).
It detected that this was PHP code and spun up an Apache container for us.
S2I is capable of detecting all sorts of languages: Node.js, PHP, PHP with Composer, Go, .NET, Ruby, etc. Advanced users can even override the build scripts in a .s2i/ folder, as sketched below.
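
As a hedged sketch of what such an override can look like: a minimal .s2i/bin/assemble script that wraps the default build script. The /usr/libexec path follows the usual S2I convention, but check your builder image before copying this.

$ mkdir -p .s2i/bin
$ cat > .s2i/bin/assemble <<'EOF'
#!/bin/bash
# First run the builder image's default assemble step
# (path is the S2I convention; verify it in your builder image)
/usr/libexec/s2i/assemble
# Then run your own build steps, e.g. a composer install or cache warm-up
echo "custom assemble steps go here"
EOF
$ chmod +x .s2i/bin/assemble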

OK, great, so we have an app. How can I see it?
Well, be patient, my young padawan. Security is one of those things Openshift is driven by.
So our application might be all well and ready, but it is not "exposed" to the outside world yet.
Most deployments will never see the light of day; think of a php-mysql-apache-redis app.
Only Apache needs to be reachable, hence apps are not exposed by default.

A few resources used by an Openshift application:
– pod: this is where our application lives and breathes
– buildconfig: a template (created by S2I) that builds a container image for us (used in the pod)
– deploymentconfig: a template (created by S2I) that tells Openshift what resources are being deployed; this includes networking, what ports the pod wants to expose and what container image to use (from the build config)
– service: simply put, a tiny proxy in front of your app that exposes it to the rest of the project
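
You can ask oc to list all of these in one go, which is handy for getting your bearings:

$ oc get pod,bc,dc,svc
# or drill into a single resource for the full details
$ oc describe dc/phphello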

The last one, the service, is what we are interested in right now; you can sort of forget about the rest.
To see our deployed service, type:

$ oc get service

or

$ oc get svc

This should print something along the lines of:

NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
phphello   ClusterIP   172.30.111.161   <none>        8080/TCP,8443/TCP   42s

The empty EXTERNAL-IP indicates that it is not exposed. So let's expose it:

$ oc expose svc phphello
$ oc get route

Bah, that doesn't look great: some random domain, phphello-sandbox.127.0.0.1.nip.io, and it exposed http, not https. By default the hostname will be <app>-<project>-<defaultSubDomain>. Since we never configured that default subdomain, it looks a bit wonky, but then we didn't tell it about a domain either. Let's fix that. (We need to delete the route first, as by default a service can only have one domain assigned.)

$ oc delete route phphello
$ oc expose svc phphello --hostname=hello.thephpcommunity.local
$ oc get route

Much better: our own domain assigned. However, we are still not happy; we are still on http.

To enable https, we have to manually create a route and assign it to a service.
For https there are three options:
– passthrough: send the connection straight to the pod and let the pod deal with the certificate
– reencrypt: the secure connection ends at the router and is re-encrypted towards the pod using the internal certificates; in this case the router's global certificate is used (most used in production)
– edge: the secure connection ends at the router and the internal traffic is insecure
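
For completeness: an edge route can also carry your own certificate instead of the router's default one. A hedged sketch, assuming you have a certificate and key on disk:

$ oc create route edge --service phphello \
    --hostname=hello.thephpcommunity.local \
    --cert=tls.crt --key=tls.key   # file names are assumptions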

To keep things simple, we will be using edge. So let's do this:

$ oc delete route phphello
$ oc create route edge --service phphello --hostname=hello.thephpcommunity.local
$ oc get route

And there it is: an SSL-enabled endpoint for our one-line application.
Let's test that:

$ curl -k https://hello.thephpcommunity.local

Well, that worked. The smarter cookies among you might have noticed two things:
– curl -k: yes, since the SSL certificate is self-signed, we tell curl to ignore that
– we didn't add this domain to our /etc/hosts file. Correct: Openshift comes with its own built-in DNS server that is exposed to the host (well, the VM in this case), so any other application on the VM can now use this domain. Great for building multiple apps that communicate with each other.
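
Should the hostname not resolve on your machine for whatever reason, curl can pin it manually without touching /etc/hosts. A small sketch (127.0.0.1 assumes you are running this inside the VM):

$ # Point the hostname at 127.0.0.1 for this single request, bypassing DNS
$ curl -k --resolve hello.thephpcommunity.local:443:127.0.0.1 https://hello.thephpcommunity.local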

Scaling

Yes, the big one. Where do we even start with scaling in our normal apps? It is a lot of work to build it in from the start: loads of server config, load balancers and... well, let's forget all that. We have Openshift.

First off, let's change our code a bit, so we can see the Openshift behaviour:

$ echo '<?php echo "Hello, I am " . $_SERVER["SERVER_ADDR"] . "\n"; ' > index.php

$ oc start-build --from-dir=. bc/phphello

This rebuilds the image with S2I to pick up our code changes. S2I can pick up code changes automatically when used with repositories or other sources that support webhooks, but our local directory has no webhook, so we trigger the build by hand.
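
If you do build from a git repository, the webhook trigger URLs live on the build config; you can look them up like this:

$ oc describe bc/phphello
# Look for the 'Webhook GitHub' and 'Webhook Generic' URLs in the output;
# those are what you register with your repository host.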

Let's make sure our build finishes (it takes a couple of minutes):

$ oc logs -f bc/phphello

Now let's see if that worked:

$ curl -k https://hello.thephpcommunity.local

In theory, had we run curl in a loop, there would have been no downtime.
Openshift brings the new pod up, switches the traffic to the new pod, and only then terminates the old pod. (It also migrates connected sessions and so on, but that's a topic for another time.) You can run that curl a few times; the internal IP will always be the same.
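
If you want to see that for yourself, a small sketch: keep curl running in one terminal while you rebuild in another.

# Terminal 1: hit the endpoint once a second
$ while true; do curl -k https://hello.thephpcommunity.local; sleep 1; done
# Terminal 2: trigger a new build and watch the deployment roll out
$ oc start-build --from-dir=. bc/phphello
$ oc rollout status dc/phphello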

Let's have a look at the deploymentconfig:

$ oc get dc/phphello

The deployment config wants 1 pod, and currently there is 1 pod. Great, but I have too much traffic; my hello world is massively popular, almost as big as Facebook.
Let's get 3 pods instead:

$ oc scale --replicas=3 dc/phphello
$ oc get pod # a bit too much output, let's get only our app's pods
$ oc get pod -l app=phphello
$ oc get dc

So we can see the dc's desired and current counts were updated, and in the list of pods we see 3 running pods.
Let's curl our app again (5 times in a row):

$ for ((i=1;i<=5;i++)); do curl -k https://hello.thephpcommunity.local  ; done

Wow, three different IP addresses, in nice order. So that's that: Openshift, scaling and round-robin load balancing, with one command.

Now it wouldn't be Openshift if they left it there.

$ oc autoscale dc/phphello --min 1 --max 10 --cpu-percent=80

Bam! If CPU usage reaches 80%, Openshift automatically scales up another pod, to a maximum of 10 pods, and scales back down to a minimum of 1, according to built-in metrics and a graceful cooldown we don't have to care about.
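
Under the hood this creates a horizontal pod autoscaler resource (named after the deployment config), which you can inspect:

$ oc get hpa
$ oc describe hpa phphello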

What’s next

As you can imagine, a giant like Red Hat would not stop at a platform with just this.
We can build our own images (automated or not) and publish them to the built-in registry, which triggers a new deployment. We can create our own multi-container templates, say a MySQL cluster, some php-fpm resources and an nginx frontend. We can deploy templates built by other people. We can mount volumes (say, to share static content between multiple instances). It's too much to cover here.

In short, we have barely scratched the surface of Openshift. But I am certainly going to pick up some more topics around Openshift in the future.

Stay tuned, and happy coding.
