Tuesday, March 7, 2017

Trident in Action

Some screenshots of NetApp's dynamic storage provisioner for K8s!  In this case, we're using OpenShift on SolidFire.

Saturday, March 4, 2017

OpenShift, Docker, and Elasticsearch

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

There are already many good setup docs for Elasticsearch on OpenShift, like here and here.  So what I'm going to do is flesh out the concepts so those instructions make more sense.


OpenShift has the concept of a project.  This is how OpenShift provides multitenancy.

Next is the namespace, which is the underlying Kubernetes concept a project is built on; a project is essentially a namespace with some extra annotations.

Next is the image.  An image is a pre-packaged container template, usually with an application installed, like MongoDB or JBoss.  You can create new images by installing an application into a container and saving that container.  The image concept is analogous to a VM template: you keep it updated and deploy fresh containers from it.

A stream seems to be a set of evolving images: for example, when you download the latest CentOS, you're not asking for a specific version, just the latest version.  The stream is the set of successive images that you get the latest from.

Application.  This is the normal definition of an application; in the context of containers, though, it's good to think of the application as separate from any single container.

The catalog is a view of the images held in the registry.

The registry is merely what OSE calls the service that holds the collection of images.

A pod is the OSE unit of scheduling: one or more tightly coupled containers that are deployed together.  In the simple case, one pod is one container.

A deployment is the mechanism that manages where pods run, how many of them there are, and how they're replicated for a given application.

Persistent Volume Claim (PVC): In OpenShift, this is a request for storage.  It can sit unfulfilled.

Persistent Volume (PV): the cluster-side representation of a real LUN/export.  When a PVC is matched (bound) to a PV, the requesting pod gets its storage.
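As a sketch, a PVC is just a small piece of YAML asking for storage (the name and size here are placeholders):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: es-data-claim        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce          # single-node read/write
  resources:
    requests:
      storage: 10Gi          # ask for 10 GiB; the claim stays Pending until a PV matches
```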

Don't panic if you see the word "cartridge."  That's leftover OpenShift 2 terminology for roughly what images/containers are in OpenShift 3.

Some notes:
  • OpenShift's GUI is very foreign.  Take some time to get used to it.
  • Unless you're an old Linux admin, you'll need to take the Linux learning curve seriously.  Brush up on vi, ls, cat, curl, and wget.
  • The & symbol is your friend.  Any command that appears hung (but is actually still running), like "docker run", will give you your prompt back if you throw an '&' at the end.
  • I had no luck connecting to the OpenShift GUI via Chrome, but Firefox worked fine (remember: https and port 8443).
  • When deploying images from Docker into OpenShift, don't underestimate Pull Secrets.  OpenShift has to be allowed to pull the images from the Docker registry; this mechanism prevents unauthorized access to images.
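The ampersand trick looks like this in practice (using sleep as a stand-in for a long-running command such as "docker run"):

```shell
# Run a long-lived command in the background so the shell prompt returns
# immediately.  "sleep 30" stands in for something like "docker run ...".
sleep 30 &
BG_PID=$!                      # PID of the background job
echo "background job $BG_PID is running; prompt is free"
kill "$BG_PID"                 # stop the stand-in job when done
```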

Declarative State Storage

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

Anyone working on containers will have heard the term "declarative state" with regard to the number and type of containers.  Basically, it just means you have a platform that ensures you always have x number of containers.  In other words, whenever a container dies/fails for any reason, the platform will recreate one in its place.  You're declaring how you want things to be from now on.

Say you tell Kubernetes "I want 100 Apache webserver containers" on a cluster of 10 hardware servers.  It will spread 10 containers onto each server.  If you lose 2 servers, Kubernetes will re-create those 20 lost containers and spread them across the 8 remaining.*
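In Kubernetes terms, that declaration is just a replica count in a spec.  A minimal sketch (names are placeholders, and the exact API version depends on your Kubernetes release):

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apache-poker           # placeholder name
spec:
  replicas: 100                # the declared state: always 100 pods
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: httpd
          image: httpd:2.4     # stock Apache image
```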

This is very different from a VMware-type mindset.  In VMware you can say "deploy 100 VMs off this template," but VMware doesn't watch the VMs, count them, and restart them if they go down.  That's "imperative code," meaning "do what I tell you now": a single order rather than an ongoing state.

The existence of declarative state means storage has to change.  Previously, the storage layer would just create 10 LUNs, present them to the right servers, and be done.  If a server dies, storage doesn't automatically present those LUNs to a new server, or delete them.  It sits, static, until someone comes along and manually fixes it.

Luckily, Kubernetes 1.5 introduces a new concept: StatefulSets.  StatefulSets are all about enabling stability (i.e. persistence), and combined with NetApp's powerful Trident connector, they let you create Declarative State Storage.  Which is exciting, because as far as I know I coined the phrase :-)

Declarative state storage means when a container is deleted, its corresponding volume is deleted.  When a container dies and is recreated, its corresponding volume is connected automatically.  When an application is scaled from 10 to 100 containers, the volumes are provisioned automatically.
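A sketch of what that looks like: a StatefulSet can carry a volumeClaimTemplates section, so every pod it creates gets its own PVC, which a dynamic provisioner like Trident can then fulfill automatically.  Names, images, and sizes below are placeholders:

```yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: es-data                  # placeholder name
spec:
  serviceName: es-data
  replicas: 3
  selector:
    matchLabels:
      app: es
  template:
    metadata:
      labels:
        app: es
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:5 # placeholder image
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:          # one PVC per pod, created and bound automatically
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```

Scale the set from 3 to 100 replicas and 100 claims get created; delete a pod and its claim reattaches when the pod is re-created.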

More to come!

*In reality, Kubernetes doesn't keep a running synchronous copy of each container; it detects that the pods are gone and re-creates them elsewhere.  To lower the impact of an interruption, you typically run more replicas than you strictly need, so the survivors absorb the load while the lost pods are re-created.  Cool stuff.

Tuesday, February 21, 2017

Virtualization, Docker, OpenShift, and Poker

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

Let's say you need to see how a poker website looks from CentOS, but you have a Windows box.  What do you do?  Probably install Hyper-V (or VMware Player), download a CentOS .iso image, and create a new VM from that ISO.  Contained within that VM is every library, every file, everything CentOS needs.

OK, let's say you invented a winning poker algorithm, and you want it to play 500 games of poker simultaneously.  You don't have enough hard drive space for 500 VMs, but you do have enough for 500 containers.  So on that CentOS VM you install Docker and download one thin CentOS container image, and Docker knows which files on the CentOS VM each container needs to run.  Downloading an image is super easy: a single docker pull is all it takes.

Now let's say you're making tons of money and want 10,000 containers playing poker.  That many containers won't fit on one computer; you need more computers.  So you buy a bunch of Windows computers, and on each you install Hyper-V, get a CentOS VM up, and install Docker.  You have your 10,000 poker games running...but then one computer dies, taking down its VM and 500 containers with it.  You lose all the chips you had in those 500 games.  What's more, people are starting to copy your algorithm, and you start losing!  You improve your algorithm, but how can you update the remaining 9,500 containers in time?

So you cluster all your computers with Hyper-V.  Good first step.  Then you install OpenShift across the CentOS VMs.  Now when a server dies, OpenShift re-creates each lost container on another VM and you don't lose the poker games.  What's more, every time you update the algorithm, you can use Docker to build a new image and have OpenShift roll it out in place of the old algorithm container after each poker game.

Friday, February 17, 2017

OpenShift, Trident, Docker, and SolidFire: Part 1

***This is part of an ongoing series I call "Mode 1 Storage Guy goes to a Mode 2 World."  I'm not an expert (yet), YMMV.***

We have a group of NetApp/SolidFire customers already live with OpenShift on SolidFire, which is very exciting but a bit scary too.  It's a bit scary because many of these clients went live without ever chatting with us!  This means they're running into issues like having to manually create hundreds of volumes, because they hadn't heard of our dynamic volume manager, Trident.

So we're partnering with RedHat to get a local OpenShift lab implementation tricked out with all the best SolidFire has to offer.  The goal is to get OpenShift running, then move on to containerized Elasticsearch and MongoDB and all sorts of other fun stuff.

Note: YOU DO NOT need NDVP in order to install/use Trident.  We do so here only for experience and demonstration purposes.

Here's the basic layout of the lab:  
1) 3 RHEL servers running as VMs in VMware (1 master, 2 other nodes)
2) NetApp Docker Volume Plugin installed
3) SolidFire for persistent storage (great API, all flash performance)
4) Trident for the automatic volume management
5) OpenShift Enterprise (instructions) as our container platform, installed on the RHEL servers.
6) Docker as our container engine (one of OSE's several prerequisites)

The instructions for each of these are actually really good, so I'll just elaborate on a few things for this specific workflow.
  • Start with the OSE requirements instructions.  You need to make sure you have the correct RHEL licensing to access the OSE repos or you'll hit a roadblock in a real hurry!
    • Once you get to "Configuring Docker Storage" I recommend you detour over to the NDVP instructions, where you see "iSCSI RHEL/CentOS." 
    • Complete those steps, then continue with the "Configuring Docker Storage" instructions and complete through the rest of the page.  I used option 1, presenting a LUN for docker to use as the storage pool.
  • You can then install the NetApp Docker Volume Plugin (NDVP).  
    • Note that this is where the storage expertise comes in.  You'll need to know the management and storage IP's for the SolidFire, you'll need to setup iSCSI on the RHEL servers, and you'll need to present targets from the SolidFire to your RHEL servers. 
    • Create an access group with all the IQNs.
    • If you need help, I recommend this video.  You can find your iSCSI IQN with cat /etc/iscsi/initiatorname.iscsi and then create an access group on the SolidFire for it.
    • Don't forget to add the port (3260) to the iscsiadm discover command
    • sudo iscsiadm -m discoverydb -t st -p 172.21.40.X:3260 --discover
    • iscsiadm -m node -l to log into all available targets
    • fdisk -l will show whether your mounts were successful and their device names
  • Once your RHEL server has logged into each target, you're able to run netappdvp --config=/etc/netappdvp/solidfire-san.json &
    • Don't forget the ampersand.  If you just run the command, it'll appear to hang but it's actually running.
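For reference, the config file passed above is a small JSON document.  A sketch of what /etc/netappdvp/solidfire-san.json might look like, with the IPs, credentials, tenant name, and QoS tiers all placeholders (the exact field names follow the NDVP docs for the solidfire-san driver; double-check the README for your NDVP version):

```json
{
    "version": 1,
    "storageDriverName": "solidfire-san",
    "Endpoint": "https://admin:password@172.21.40.X/json-rpc/7.0",
    "SVIP": "172.21.41.X:3260",
    "TenantName": "docker",
    "Types": [
        {
            "Type": "Bronze",
            "Qos": { "minIOPS": 1000, "maxIOPS": 2000, "burstIOPS": 4000 }
        }
    ]
}
```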
NDVP Ready to Go!
And here we go, all three RHEL servers have the OSE prereqs and NDVP installed.  Here's our first NDVP-created volume!

Monday, January 16, 2017

Social Media Marketing

Here are a few things I've learned about social media marketing for a small business:

1) Facebook is the easiest and best platform by far.  However, you can expect your first couple of posts to get great exposure among your followers, and then a big drop-off.  It appears they incentivize you to pay for exposure by manipulating the algorithm; several others online have observed this as well.  The ability to target age, sex, and interests is very valuable.

2) Twitter has a ridiculously counter-intuitive UI.  It can easily take 1-2 hours to get a single campaign up and running!  The best part of Twitter ads is targeting people who follow a specific page: it's a ready-made demographic engine.

3) LinkedIn is almost as complex as Twitter, but I think it's a hidden gem that is underutilized by pubs/restaurants.  The ability to target employees of specific companies in particular is fantastic.  ROI is still to be determined...

4) Yelp is one of the most frustrating things in the world.  It's very expensive, low-ROI, and very hard to get them to transfer your business listing to you.  It also has an algorithm that (for some mysterious reason) hides good reviews but not bad ones, which can really damage your business.  My suspicion is they surface bad reviews to incentivize you to pay for a premium membership, which lets you pin a good review to the top.  I recommend this article on Yelp: http://marketingland.com/5-yelp-facts-business-owners-should-know-163054

5) Google My Business has just an awful UI.  It's almost as bad as Twitter's.  But you absolutely have to focus here: Google, and Google Maps, are the most important pieces of your online presence.  I'm still experimenting with AdWords and site analytics to see what kind of results you can get.

6) TripAdvisor is pretty solid.  I haven't done any advertising here yet, but overall they've got the essentials. 

Sunday, January 8, 2017

Trident for Kubernetes

Last week NetApp dropped a huge development in the emerging-tech market.  It’s called Project Trident, and it makes storage easier for Kubernetes.  Backstory: Kubernetes (also called k8s) started at Google; it’s software that manages containers.  Basically, you take a bunch of Linux servers with Docker installed and tie them together with k8s, and it manages which container should live where.  If a container dies, k8s replaces it with a new one, that kind of thing.  You can think of k8s as VMware for containers, except free and open source.

Most clients aren’t using plain k8s, but rather enterprise distributions of it like RedHat OpenShift or Apprenda, for reasons like security, support, and version management.  Trident is compatible with any version of k8s, which means it solves a big problem for RedHat.

Trident is similar to our vCenter plugin, only even smoother: it allows k8s to ask a storage array for an NFS share or iSCSI LUN instantly, plus it provides all sorts of LUN management abilities.  Better yet, it’s free, open source, and storage-vendor agnostic.