Tuesday, February 21, 2017

Virtualization, Docker, OpenShift, and Poker

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

Let's say you need to see how a poker website looks from CentOS, but you have a Windows box.  What do you do?  Probably install Hyper-V (or VMware Player), download a CentOS .iso image, and create a new VM from that ISO.  Contained within that VM is every library, every file, everything CentOS needs.

OK, let's say you invented a winning poker algorithm, and you want it to play 500 games of poker simultaneously.  You don't have enough hard drive space for 500 VMs - but you do have enough for 500 containers.  So on that CentOS VM you install Docker and download one thin CentOS container image, and Docker knows which files on the CentOS VM each container needs to run.  Docker makes it super easy to download an image: a single docker pull is all it takes.
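Here's a minimal sketch of that first step (the container name is just an example; a real poker bot would be your own image):

    docker pull centos                              # download the thin CentOS base image once
    docker run -d --name poker-1 centos sleep 3600  # start a container from it - it shares the VM's kernel and files
    docker ps                                       # confirm the container is running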

Now let's say you're making tons of money and want to get 10,000 containers playing poker.  That many containers won't fit on one computer - you need more computers.  So you buy a bunch of Windows computers, and on each you install Hyper-V, get a CentOS VM up, and install Docker.  You have your 10,000 poker games running...but then one computer dies, taking down its VM and 500 containers with it.  You lose all the chips you had in those 500 games.  What's more, people are starting to copy your algorithm, and you start losing!  You improve your algorithm, but how can you update the other 9,500 containers in time?

So you cluster all your computers with Hyper-V.  Good first step.  Then you install OpenShift on all the CentOS VMs.  Now when a server dies, OpenShift has already replicated each container to another VM, so you don't lose the poker game.  What's more, every time you update the algorithm, you can use Docker to build a new image and have OpenShift deploy it in place of the old algorithm container after each poker game.
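A rough sketch of that update flow - the image name, registry, and deployment config name (poker-bot) are all made up for illustration, and this assumes the deployment config already exists with the default triggers:

    # build and push the improved algorithm as a new image version
    docker build -t registry.example.com/poker/poker-bot:v2 .
    docker push registry.example.com/poker/poker-bot:v2
    # point the deployment config at the new image; OpenShift rolls it out across the cluster
    oc set image dc/poker-bot poker-bot=registry.example.com/poker/poker-bot:v2
    oc rollout status dc/poker-bot    # watch the rolling deployment replace the old containers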

Friday, February 17, 2017

OpenShift, Trident, Docker, and SolidFire: Part 1

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

We have a group of NetApp/SolidFire customers already live with OpenShift on SolidFire, which is very exciting but also a bit scary.  Scary because many of these clients went live without ever chatting with us!  This means they're running into issues like having to manually create hundreds of volumes, because they hadn't heard of our dynamic volume manager, Trident.

So we're partnering with Red Hat to get a local OpenShift lab implementation tricked out with all the best SolidFire has to offer.  The goal is to get OpenShift running, then move on to containerized Elasticsearch and MongoDB and all sorts of other fun stuff.

Note: YOU DO NOT need NDVP in order to install/use Trident.  We do so here only for experience and demonstration purposes.

Here's the basic layout of the lab:  
1) 3 RHEL servers running as VMs in VMware (1 master, 2 other nodes)
2) NetApp Docker Volume Plugin installed
3) SolidFire for persistent storage (great API, all flash performance)
4) Trident for the automatic volume management
5) OpenShift Enterprise (instructions) as our container platform, installed on the RHEL servers.
6) Docker as our container engine (one of several OSE requirements)


The instructions for each of these are actually really good, so I'll just elaborate on a few things for this specific workflow.
  • Start with the OSE requirements instructions.  You need to make sure you have the correct RHEL licensing to access the OSE repos or you'll hit a roadblock in a real hurry!
    • Once you get to "Configuring Docker Storage" I recommend you detour over to the NDVP instructions, where you see "iSCSI RHEL/CentOS." 
    • Complete those steps, then return to the "Configuring Docker Storage" instructions and finish the rest of the page.  I used option 1, presenting a LUN for Docker to use as the storage pool.
  • You can then install the NetApp Docker Volume Plugin (NDVP).  
    • Note that this is where the storage expertise comes in.  You'll need to know the management and storage IPs for the SolidFire, you'll need to set up iSCSI on the RHEL servers, and you'll need to present targets from the SolidFire to your RHEL servers. 
    • Create an access group with all the IQNs.
    • If you need help, I recommend this video.  You can find each server's iSCSI IQN with cat /etc/iscsi/initiatorname.iscsi and then add it to an access group on the SolidFire.
    • Don't forget to add the port (3260) to the iscsiadm discover command
    • sudo iscsiadm -m discoverydb -t st -p 172.21.40.X:3260 --discover
    • iscsiadm -m node -l to log into all available targets
    • fdisk -l will show whether the logins were successful and what device names the SolidFire volumes received
  • Once your RHEL server has logged into each target, you're able to run netappdvp --config=/etc/netappdvp/solidfire-san.json & (there's a sample config sketch after this list)
    • Don't forget the ampersand.  If you just run the command, it'll appear to hang but it's actually running.
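For reference, here's a minimal sketch of what that solidfire-san.json can look like.  The credentials, IPs, tenant name, and QoS numbers are all placeholders for our lab, and you should double-check the field names against the NDVP README for your version:

    {
        "version": 1,
        "storageDriverName": "solidfire-san",
        "Endpoint": "https://admin:password@<mgmt-vip>/json-rpc/7.0",
        "SVIP": "<storage-vip>:3260",
        "TenantName": "openshift-lab",
        "Types": [
            {
                "Type": "Bronze",
                "Qos": {
                    "minIOPS": 1000,
                    "maxIOPS": 2000,
                    "burstIOPS": 4000
                }
            }
        ]
    }

The Endpoint is the SolidFire management VIP (the API), while SVIP is the storage VIP your iSCSI traffic uses - that's why you need both IPs from step one above.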
NDVP Ready to Go!
And here we go, all three RHEL servers have the OSE prereqs and NDVP installed.  Here's our first NDVP-created volume!
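If you want to create one yourself from the command line, here's a quick sketch.  The plugin registers itself as "netapp" by default (unless you changed the volume driver name), and the volume name, size, and type option here are just examples - verify the supported options against the NDVP docs:

    docker volume create -d netapp --name ose-demo-vol -o size=10G   # NDVP creates the volume on the SolidFire
    docker volume ls                                                 # the new volume shows up under the netapp driver
    docker run --rm -it -v ose-demo-vol:/data centos df -h /data     # mount it in a throwaway container to confirm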