Easily login to Rancher containers locally

Sometimes managing containers through the Rancher web console can be tedious and painful, especially if you need to copy/paste things into or out of the terminal.  I recently discovered a nice little project on GitHub called Rancher SSH, which allows you to connect to a container running in your Rancher environment as if it were local to the machine you are working on, much like SSH, hence the name.

I am still playing around with the functionality, but so far it has been very nice and very easy to get started with.  You can install it either via Homebrew or with Golang.  I chose the Homebrew option.

brew install fangli/dev/rancherssh
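
If you would rather use Go instead of Homebrew, the project can most likely be fetched with go get (assuming a working Go toolchain; the fangli/rancherssh path below is my guess at the upstream repo based on the Homebrew tap, so double check it first).

go get github.com/fangli/rancherssh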

After it is finished installing (it might take a minute or two), you should have access to the rancherssh command from the CLI.  You might need to reload your shell to pick up tab completion for the command, but you should be able to run it and get some output.

rancherssh

In order to do anything useful with this tool, you will first need to create an API key for rancherssh in Rancher.  Navigate to the environment you’d like to create the key for and then click the API tab in Rancher.  Then click the “Add Environment API Key” button to bring up the dialog for creating a new key.

add api key

After you create your key, make note of the Access key (username) and Secret key (password).  You will need these to configure rancherssh in the step below.  First, create a file called config.yml somewhere that is easy to remember and populate it similar to the following, updating the endpoint, access key and secret key.

endpoint: https://your.rancher.server/v1
user: access_key
password: secret_key

That’s pretty much it.  As long as the endpoint matches your environment, you should now be able to connect to a container in your Rancher environment.  You’ll need to run the rancherssh command from the same directory where you created your config.yml file, but otherwise it should just work.

rancherssh my-stack_container_1

Optionally you can provide all of the configuration information to the CLI and just skip the config file completely.

rancherssh --endpoint="https://your.rancher.server/v1" --user="access_key" --password="secret_key" my-test-container_1

There is one last thing to mention.  rancherssh provides a nice fuzzy matching mechanism for connecting to containers.  For example, if you can’t remember which containers are available to a stack in Rancher you can run a pattern to match the stack, and rancherssh will tell you which containers are running in the stack and allow you to choose which one to connect to.

rancherssh %my-stack%

If there are multiple containers this command will allow you to pick which one to connect to.

Searching for container %my-stack%
We found more than one containers in system:
[1] my-stack_container_1, Container ID 1i91308 in project 1a216, IP Address 10.42.154.115
[2] my-stack_container_2, Container ID 1i94034 in project 1a216, IP Address 10.42.119.103
[3] my-stack_container_3, Container ID 1i94036 in project 1a216, IP Address 10.42.146.57

I didn’t have any issues at all getting started with this tool and would definitely recommend checking it out, especially if you do a lot of work in your Rancher containers.  It is fast, easy to use, and really useful for the times when the Rancher UI is too cumbersome.

Read More

Bootstrap servers to a Rancher environment

If you’re not familiar already, Rancher is an orchestration and scheduling tool for containers.  I have written a little bit about Rancher in the past but haven’t covered many of the specifics of how to manage a Rancher environment.  One cool thing about Rancher is its “single pane of glass” approach to managing servers and containers, which allows users and admins to quickly and easily manage complicated environments.  In this post I’ll be covering how to quickly and automatically add servers to your Rancher environment.

One of the manual steps that can (and in my opinion should) be automated is the server bootstrapping process.  The Rancher web interface allows users to add hosts across different cloud providers (AWS, Azure, GCE, etc.) and, importantly, to add a custom host.  This custom host registration is the piece that allows us to automate the host addition process by exposing a registration token via the Rancher API.  One important thing to note if you are going to be adding hosts automatically is that you will need to actually create the necessary entries in the environment that you bootstrap servers to.  So, for example, if you create a new environment you will need to either hit the API programmatically or navigate to Infrastructure -> Add Host in the web interface to populate the necessary tokens and entries.

Once you have populated the API with the needed values, you will need to create an API token that allows the bootstrapping server(s) to connect to the Rancher server and add themselves.  If you haven’t done this before, navigate to API -> Add Environment API Key in the environment you’d like to allow access to, name it, and make a note of the key that gets generated.

rancher api

That’s pretty much all of the prep work you need to do to your Rancher environment for this method to work.  The next step is to make a script to bootstrap a server when it gets created.  The logic for this bootstrap process can be boiled down to the following snippet.

#!/bin/bash

# Internal IP of this host, used to register it with a unique address
INTERNAL_IP=$(ip add show eth0 | awk '/inet/ {print $2}' | cut -d/ -f1 | head -1)
# Rancher server URL, API key (access_key:secret_key), environment ID and agent version
SERVER="https://example.com"
TOKEN="access_key:secret_key"
PROJID="unique_environment"
AGENT_VER="v1.0.1"

# Ask the Rancher API for the registration URL for this environment
RANCHER_URL=$(curl -su $TOKEN $SERVER/v1/registrationtokens?projectId=$PROJID | head -1 | grep -nhoe 'registrationUrl[^},]*}' | egrep -hoe 'https?:.*[^"}]')

# Start the Rancher agent container, which registers this host with the server
docker run \
  -e CATTLE_AGENT_IP=$INTERNAL_IP \
  -e CATTLE_HOST_LABELS='your=label' \
  -d --privileged --name rancher-bootstrap \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:$AGENT_VER $RANCHER_URL

The script is pretty straightforward.  It attempts to gather the internal IP address of the server being created so that it can add it to the Rancher environment with a unique name.  Note that there are a number of variables that need to be set to reflect your environment: one for the DNS name of the Rancher server, one for the token that was generated in the step above, and one for the project ID, which can be found by navigating to the environment and looking at the /env/xxxx portion of the URL.
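
If you’d rather not dig the project ID out of the URL, you should also be able to list the environments and their IDs through the same v1 API using the key from earlier (a rough sketch; the exact response format may vary between Rancher versions).

curl -su access_key:secret_key https://example.com/v1/projects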

After we have all the needed information and have updated the script, we can curl the Rancher server for the registration URL (this won’t work if you didn’t populate the API in the steps above or if your keys are invalid).  Finally, we start a Docker container with the agent version set (check your Rancher server version and match it) along with the URL obtained from the curl command.

The final step is to get the script to run when the server is provisioned.  There are many ways to do this, and this step will vary depending on a number of different factors, but in this post I am using cloud-init for CoreOS on AWS.  Cloud-init injects the script into the server and creates a systemd service that runs the script the first time the server boots, starting the Rancher agent so that the host gets picked up by the Rancher server and its environment.

Here is the logic to run the script when the server is booted.

coreos:

  units:
  - name: rancher-agent.service
    command: start
    content: |
      [Unit]
      Description=Rancher Agent
      After=docker.service
      Requires=docker.service
      After=network-online.target
      Requires=network-online.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/etc/rancher-agent

The full version of the cloud-init file can be found here.
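
For reference, the bootstrap script itself can be delivered through the same cloud-config with a write_files entry along these lines (a minimal sketch; the full file linked above is the real reference, and the exact path and permissions may differ).

write_files:
  - path: /etc/rancher-agent
    permissions: "0700"
    owner: root
    content: |
      #!/bin/bash
      # bootstrap script from the snippet above goes here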

After you provision your server and give it a minute to warm up and run the script, check your Rancher environment to see if your server has popped up.  If it hasn’t, the first place to look is the newly created server itself.  Run docker logs -f rancher-agent to get information about what went wrong.  Usually the problem is pretty obvious.

A brand new server looks something like this.

bootstrapped server

I typically use Terraform to provision these servers, but I feel like covering Terraform here is a little bit out of scope.  You can imagine some really interesting possibilities with auto scaling groups and load balancers that can come and go as your environment changes, which is one of the beauties of disposable infrastructure as well as infrastructure as code.

If you are interested in seeing how this Rancher bootstrap process fits in with Terraform let me know and I’ll take a stab at writing up a little piece on how to get it working.

Read More

Useful Vim Plugins

This post is mostly a reference for folks who are interested in adding a little bit of extra polish and functionality to the stock version of Vim.  The plugin system in Vim is a little bit confusing at first but is really powerful once you get past the initial learning curve.  I know this topic has been covered a million times, but a centralized reference for how to set up each plugin is a little bit harder to find.

Below I have highlighted a sample list of my favorite Vim plugins.  I suggest you try as many plugins as you can to figure out what suits your needs and workflow best.  The following plugins are the most useful to me, but they certainly won’t be the best for everybody, so use this post as a reference for getting started with plugins and try some out to decide which ones work best for your own environment.

Vundle

This is a package manager of sorts for Vim plugins.  Vundle allows you to download, install, search and otherwise manage plugins for Vim in an easy and straight forward way.

To get started with Vundle, put the following configuration at THE VERY TOP of your vimrc.

set nocompatible              " be iMproved, required
filetype off                  " required
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#rc()
"" let Vundle manage Vundle
Bundle 'gmarik/Vundle.vim'
...

Then you need to clone the Vundle project into the path specified in the vimrc above.

git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim

Now you can install any other defined plugins from within Vim by running :BundleInstall.  This should trigger Vundle to start downloading/updating its list of plugins based on your vimrc.

To install additional plugins, update your vimrc with the plugins you want to install, similar to how Vundle installs itself as shown below.

"" Example plugin
Bundle 'flazz/vim-colorschemes'

Color Schemes

Customizing the look and feel of Vim is a very personal experience.  Luckily there are a lot of options to choose from.

The vim-colorschemes plugin allows you to pick from a huge list of custom color schemes that users have put together and published.  As illustrated above you can simply add the repo to your vimrc to gain access to a large number of color options.  Then to pick one just add the following to your vimrc (after the Bundle command).

colorscheme xoria256

Next time you open up Vim you should see color output for the scheme you like.

Syntastic

Syntastic is a fantastic syntax highlighter and linting tool and is easily the best syntax checker I have found for Vim.  Syntastic offers support for tons of different languages and styles and even offers support for third party syntax checking plugins.

Here is how to install and configure Syntastic using Vundle.  The first step is to add Syntastic to your vimrc,

" Syntax highlighting
Bundle 'scrooloose/syntastic'

There are a few basic settings that also need to get added to your vimrc to get Syntastic to work well.

" Syntastic statusline
 set statusline+=%#warningmsg#
 set statusline+=%{SyntasticStatuslineFlag()}
 set statusline+=%*
 " Sytnastic settings
 let g:syntastic_always_populate_loc_list = 1
 let g:syntastic_auto_loc_list = 1
 let g:syntastic_check_on_open = 1
 let g:syntastic_loc_list_height=5
 let g:syntastic_check_on_wq = 0
 " Better symbols
 let g:syntastic_error_symbol = 'XX'
 let g:syntastic_warning_symbol = '!!'
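
Optionally, Syntastic also lets you choose which checkers run for a given filetype.  For example, to lint Python with flake8 (assuming flake8 is installed on your system), something like this in your vimrc should do it.

" Use flake8 for Python files
let g:syntastic_python_checkers = ['flake8']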

That’s pretty much it.  Having a syntax highlighter and automatic code linter has been a wonderful boon for productivity.  I have saved myself so much time chasing down syntax errors and other bad code.  I definitely recommend this tool.

YouCompleteMe

This plugin is an autocompletion tool that adds tab completion to Vim, giving it a really nice IDE feel.  I’ve only tested YCM out for a few weeks now but have to say it doesn’t seem to slow anything down very much at all, which is nice.  An added bonus to using YCM with Syntastic is that they work together so if there are problems with the functions entered by YCM, Syntastic will pick them up.

Here are the installation instructions for Vundle.  The first thing you will need to do is add a Vundle reference to your vimrc.

"" Autocomplete
Bundle 'Valloric/YouCompleteMe'

Then, in Vim, run :BundleInstall – this will download the git repo for YouCompleteMe.  Once the repo is downloaded you will need a few other tools installed to get things working correctly.  Check the official documentation for installation instructions for your system.  On OS X you will need Python, cmake, MacVim and clang support.

xcode-select --install
brew install cmake

Then, to install YouCompleteMe.

cd ~/.vim/bundle/YouCompleteMe
git submodule update --init --recursive  # not needed if you use Vundle
./install.py --clang-completer

vim-better-whitespace

Highlights pesky whitespace automatically.  This one is really useful to just have on in the background to help you catch whitespace mistakes.  I know I leave a lot of stray trailing whitespace, so having this running is really nice.

To install it.

"" Whitespace highlighting
Bundle 'ntpeters/vim-better-whitespace'

That’s it.  Vundle should handle the rest.
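
As a small bonus, the plugin also provides a command (at least in the versions I have used) for cleaning up existing files, so you can strip trailing whitespace on demand from within Vim.

:StripWhitespace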

ctrlp / nerdtree

These tools are useful for file management and traversal.  These plugins become more powerful when you work with a lot of files and move around different directories a lot.  There is some debate about whether to use nerdtree or the built-in netrw.  Nonetheless, it is still worth checking out different file browsers to see how they work.
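
Both can be installed through Vundle like the other plugins in this post.  The repository paths below are the ones I believe are current (ctrlp in particular has moved over the years), so double check them before adding to your vimrc.

"" File navigation
Bundle 'ctrlpvim/ctrlp.vim'
Bundle 'scrooloose/nerdtree'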

Check out Vim Unite for a sort of hybrid file manager that does fuzzy finding like ctrlp but with additional functionality, like the ability to grep files from within Vim using a mapped key.

Bonus – Shellcheck

This is a shell and Bash linting tool that integrates with Vim and is great.  Bash is notoriously difficult to read and debug, and the shellcheck tool helps out with that a lot.

Install shellcheck on your system and Syntastic will automatically pick it up and do its linting whenever you save a file.  I have been writing a lot of Bash lately and shellcheck has been a godsend for catching mistakes, especially in Vim since it runs all the time.
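
Installing it is usually just a package away, for example with Homebrew on OS X or apt on Debian/Ubuntu.

brew install shellcheck
# or on Debian/Ubuntu
sudo apt-get install shellcheck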

By combining a good syntax highlighter with a solid understanding of Bash, you should be that much more productive once you get used to having a built-in syntax and style checker for your scripts.

Read More

Fixing docker-machine shared folder performance issues

It is a known issue that vboxsf (VirtualBox Shared Folders) has performance problems.  This ugly fact becomes a problem if you use docker-machine with the default VirtualBox driver to mount volumes, both on Windows and OS X, especially when mounting directories with a large number of files (~17k and above).  Linux does not suffer from this performance problem since it can run Docker natively and does not require docker-machine.

There are various issues floating around GitHub referencing this problem, most of which remain unresolved.

Unfortunately there is currently no proper fix for the vboxsf performance issues on OS X and Windows.  In fact, I reached out to the VirtualBox developers around a year ago asking what the deal was and was basically told that fixing shared folder performance was not a high priority issue for their dev team.

After hearing the unsettling news, I set out to find a good way to deal with shared volumes.  I stumbled across a few different approaches to solving the problem but most of them ended up being glitchy (at the time) or overly complicated.  There is a nice write up that mentions many of the tools that I tried myself.

Having tried most of the methods out there, easily the best workaround I have found is to use NFS file shares to mount the “Users” directory using a tool called docker-machine-nfs.  It is easy to install and run and most importantly it just works out of the box, which is exactly what most folks are looking for.

Sadly this method does not work on Windows.  And as far as I can tell there is not a good workaround to this problem if you are running docker-machine on Windows.  It does sound like some folks may have had some success using Samba, but I have not attempted to get fast volumes working on Windows so I can’t say for sure what the best approach is.

To install docker-machine-nfs

curl -s https://raw.githubusercontent.com/adlogix/docker-machine-nfs/master/docker-machine-nfs.sh |
  sudo tee /usr/local/bin/docker-machine-nfs > /dev/null && \
  sudo chmod +x /usr/local/bin/docker-machine-nfs

To run it

Make sure you already have created a docker-machine VM and verify that it is running.  Then run the following command.

docker-machine-nfs <machine-name>

And that’s pretty much it…  It requires admin access to do the NFS mounting, so you might need to punch in your password; other than that you can pretty much follow along with what the output is doing.
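
For reference, an end-to-end run looks roughly like the following (assuming a machine named dev created with the VirtualBox driver).

docker-machine create --driver virtualbox dev
docker-machine-nfs dev
eval $(docker-machine env dev)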

There are a few caveats to be aware of.

I have noticed that on newer versions of docker-machine, if you run the script too quickly after creating the VM, docker-machine ends up having issues communicating with the Docker daemon.  The workaround (for now) is just to wait ~30 seconds for the docker-machine VM to boot fully before running the command to mount NFS.

There is also currently an issue on the docker-machine side in version 0.5.5 and above that breaks docker-machine-nfs on the first run, which is described here.  The workaround for that issue is to modify the script and add a “sleep 20”, as per the comments in the issue.  The author appears to have brought the issue up with the docker-machine developers, so it should be fixed properly in the near future.

Read More

Change CoreOS default toolbox

This is a little trick that allows you to override the default base OS in the CoreOS “toolbox”.  The toolbox is a neat way to debug and troubleshoot issues on CoreOS from inside a container without having to do any of the work of setting up that container yourself.

The toolbox OS defaults to Fedora, which we’re going to change to Ubuntu.  The toolbox reads its custom configuration from the .toolboxrc file, located at /home/core/.toolboxrc by default.  To keep things simple we will only change the few pieces of the config needed to get the toolbox to behave how we want; more can be overridden, but we don’t really need anything else.

TOOLBOX_DOCKER_IMAGE=ubuntu
TOOLBOX_DOCKER_TAG=14.04

That’s pretty cool, but what if we want this config file to be in place on all servers?  We don’t want to manually write it for every server we log in to.

To fix this issue we will add a simple configuration to the user-data file that gets fed into the CoreOS cloud-config when the server is created.  You can find more information about the CoreOS cloud-configs here.

The bit in the cloud config that needs to change is the following.

write_files:
  - path: /home/core/.toolboxrc
    owner: core
    content: |
      TOOLBOX_DOCKER_IMAGE=ubuntu
      TOOLBOX_DOCKER_TAG=14.04

If you are already using cloud-config then this change should be easy: just add the bit starting with - path to your existing write_files section.  New servers using this config will have the desired toolbox defaults.

This approach gives us an automated, reproducible way to clone our custom toolbox config to every server that uses cloud-config to bootstrap itself.  Once the config is in place simply run the “toolbox” command and it should use the custom values to pull the desired Ubuntu image.

Then you can run your Ubuntu commands and debugging tools from within the toolbox.  Everything else will be the same; we just use Ubuntu now as our default toolbox OS.  Here is the post that gave me the idea to do this originally.
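
For example, a quick session might look something like this (the packages here are just illustrative; install whatever Ubuntu tooling you need).

toolbox
# now inside the Ubuntu container
apt-get update && apt-get install -y tcpdump htop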

Read More