Templated Nginx configuration with Bash and Docker

Shoutout to @shakefu for his Nginx and Bash wizardry in figuring a lot of this stuff out.  I’d like to take credit for this, but he’s the one who got a lot of it working originally.

Sometimes it can be useful to template Nginx configuration files with environment variables so you can fine tune various aspects of Nginx without editing files by hand.  A recent example I worked on was a scenario where I set up an Nginx proxy with a very bare bones configuration.  As part of the project, I wanted a quick and easy way to update some of the major Nginx settings, like the port it listens on, the server name, the upstream servers, etc.

It turns out that there is a quick and dirty way to template basic Nginx configurations using Bash, which ended up being really useful, so I thought I would share it.  There are a few caveats to this method, but it is definitely worth the effort if you have a simple setup or a setup that requires periodic changes.  I stuck the configuration into a Dockerfile so that it can easily be updated and ported around – by using the nginx:alpine image as the base image, the total size all said and done is around 16MB.  If you’re not interested in the Docker bits, feel free to skip them.

The first part of using this method is to create a simple configuration file that will be used to substitute in some environment variables.  Here is a simple template that is useful for changing a few Nginx settings.  I called it nginx.tmpl, which will be important for how the template gets rendered later.

events {}

http {
  error_log stderr;
  access_log /dev/stdout;

  upstream upstream_servers {
    server ${UPSTREAM};
  }

  server {
    listen ${LISTEN_PORT};
    server_name ${SERVER_NAME};
    resolver ${RESOLVER};
    set ${ESC}upstream ${UPSTREAM};

    # Allow injecting extra configuration into the server block
    ${SERVER_EXTRA_CONF}

    location / {
       proxy_pass ${ESC}upstream;
    }
  }
}

The configuration is mostly straightforward.  We are basically just taking this configuration file and inserting a few templated variables, denoted by the ${VARIABLE} syntax, which are environment variables that get substituted into the configuration when the container bootstraps.  There are a few “tricks” that you may need if your configuration starts to get more complicated.  The first is the ${ESC} variable.  Nginx uses ‘$’ for its own variables, which is the same character the template uses, so the ${ESC} variable gives us a way to escape that $ and use Nginx variables alongside templated variables.
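
To see the escaping in action, here is a minimal sketch showing how one templated line from above renders, using the envsubst tool that does the rendering (more on that below) with ESC set to a literal ‘$’:

export ESC='$' UPSTREAM=icanhazip.com:80
echo 'set ${ESC}upstream ${UPSTREAM};' | envsubst
# prints: set $upstream icanhazip.com:80;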

The other interesting thing that we discovered (props to shakefu for this magic) was that you can basically jam arbitrary server block level configurations into an environment variable.  We do this with the ${SERVER_EXTRA_CONF} in the above configuration and I will show an example of how to use that environment variable later.

Next, I created a simple Dockerfile that provides default values for some of the templated variables.  The Dockerfile also copies the templated configuration into the image and does some Bash magic for rendering the template.

FROM nginx:alpine

ENV LISTEN_PORT=8080 \
  SERVER_NAME=_ \
  RESOLVER=8.8.8.8 \
  UPSTREAM=icanhazip.com:80 \
  UPSTREAM_PROTO=http \
  ESC='$'

COPY nginx.tmpl /etc/nginx/nginx.tmpl

CMD /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

There are some things to note.  First, not every variable in the template needs to be declared in the Dockerfile; if a variable isn’t set, it renders as an empty string in the configuration and simply doesn’t do anything.  Some variables do need sane defaults, and for those you can just add the defaults to the Dockerfile and rebuild.
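
As a quick local illustration of that behavior (a sketch using the envsubst tool described below), an unset variable simply renders as nothing:

unset SERVER_EXTRA_CONF
echo '${SERVER_EXTRA_CONF}' | envsubst
# prints an empty line – the unset variable just disappears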

The other interesting thing is how the template gets rendered.  envsubst is a small utility from the GNU gettext package (conveniently available in the nginx:alpine base image) that substitutes the values of environment variables into files.  In the Dockerfile, this tool gets executed as part of the default command, taking the template as input and writing out the final configuration.

/bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf

Nginx gets started in a slightly silly way so that daemon mode can be disabled (we want Nginx running in the foreground), and if startup fails, the rendered configuration gets printed to help track down errors in it.

&& nginx -g 'daemon off;' || cat /etc/nginx/nginx.conf"

To quickly test the configuration, you can create a simple docker-compose.yml file with a few of the desired environment variables, like I have below.

version: '3'
services:
  nginx_proxy:
    build:
      context: .
      dockerfile: Dockerfile
    # Only test the configuration
    #command: /bin/sh -c "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && cat /etc/nginx/nginx.conf"
    volumes:
      - "./nginx.tmpl:/etc/nginx/nginx.tmpl"
    ports:
      - 80:80
    environment:
    - SERVER_NAME=_
    - LISTEN_PORT=80
    - UPSTREAM=test1.com
    - UPSTREAM_PROTO=https
    # Override the resolver
    - RESOLVER=4.2.2.2
    # The following would add an escape if it isn't in the Dockerfile
    # - ESC=$$

Then you can bring up the Nginx server.

docker-compose up

The configuration doesn’t get rendered until the container runs, so to test just the configuration, you can use the commented out command in the docker-compose file, which renders the template and then prints the result so you can make sure it looks right.
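
Another option (a sketch, assuming the compose file above) is a one-off run that renders the template, validates it with nginx -t, and prints the result without actually starting the proxy:

docker-compose run --rm nginx_proxy /bin/sh -c \
  "envsubst < /etc/nginx/nginx.tmpl > /etc/nginx/nginx.conf && nginx -t && cat /etc/nginx/nginx.conf"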

If you are interested in injecting additional configuration, you can use ${SERVER_EXTRA_CONF} as alluded to above by assigning the extra configuration to that environment variable.  Below is an arbitrary snippet that enables long polling against Nginx, which basically means that Nginx will hold existing connections open for longer.

error_page 420 = @longpoll;
if ($arg_wait = "true") { return 420; }

location @longpoll {
  # Proxy requests to upstream
  proxy_pass $upstream;
  # Allow long lived connections
  proxy_buffering off;
  proxy_read_timeout 900s;
  keepalive_timeout 160s;
  keepalive_requests 100000;
}

The above snippet is a perfectly valid environment variable as far as the container is concerned; it just looks a little weird to the eye.
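
For example, one way (a sketch, using a hypothetical extra.conf file containing the snippet and a hypothetical image tag) to hand the snippet to the container is to read it into the environment variable at run time:

docker build -t nginx-proxy .
docker run --rm -p 8080:8080 \
  -e SERVER_EXTRA_CONF="$(cat extra.conf)" \
  nginx-proxy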

[Screenshot: nginx proxy environment variables]

That’s all I’ve got for now.  This minimal templated Nginx configuration is handy for testing simple web servers, especially proxies, and is also easy to port around using Docker.


Fix the JenkinsAPI No valid crumb error

If you are working with the Python based JenkinsAPI library you might run into the No valid crumb was included in the request error.  The error below will probably look familiar if you’ve run into this issue.

Traceback (most recent call last):
 File "myscript.py", line 47, in <module>
 deploy()
 File "myscript.py", line 24, in deploy
 jenkins.build_job('test')
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/jenkins.py", line 165, in build_job
 self[jobname].invoke(build_params=params or {})
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/job.py", line 209, in invoke
 allow_redirects=False
 File "/usr/local/lib/python3.6/site-packages/jenkinsapi/utils/requester.py", line 143, in post_and_confirm_status
 response.text.encode('UTF-8')
jenkinsapi.custom_exceptions.JenkinsAPIException: Operation failed. url=https://jenkins.example.com/job/test/build, data={'json': '{"parameter": [], "statusCode": "303", "redirectTo": "."}'}, headers={'Content-Type': 'application/x-www-form-urlencoded'}, status=403, text=b'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>\n<title>Error 403 No valid crumb was included in the request</title>\n</head>\n<body><h2>HTTP ERROR 403</h2>\n<p>Problem accessing /job/test/build. Reason:\n<pre> No valid crumb was included in the request</pre></p><hr><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.4.z-SNAPSHOT</a><hr/>\n\n</body>\n</html>\n'

It is good practice to enable additional security in Jenkins by turning on the “Prevent Cross Site Request Forgery exploits” option in the security settings, so if you see this error it is actually a good sign.  The screenshot below shows this security feature in Jenkins.

[Screenshot: enabling CSRF protection in the Jenkins security settings]

The Fix

This error threw me off at first, but it didn’t take long to find a quick fix.  There is a CrumbRequester class (in jenkinsapi.utils.crumb_requester) that you can use to create the crumbed auth requester.  You can use the following example as a guideline in your own code.

from jenkinsapi.jenkins import Jenkins
from jenkinsapi.utils.crumb_requester import CrumbRequester

JENKINS_USER = 'user'
JENKINS_PASS = 'pass'
JENKINS_URL = 'https://jenkins.example.com'

# We need to create a crumb for the request first
crumb = CrumbRequester(username=JENKINS_USER, password=JENKINS_PASS, baseurl=JENKINS_URL)

# Now use the crumb to authenticate against Jenkins
jenkins = Jenkins(JENKINS_URL, username=JENKINS_USER, password=JENKINS_PASS, requester=crumb)

...

The code looks very similar to creating a normal Jenkins authentication object; the only difference is that we create and pass in a crumbed requester, rather than just a username/password combination.  Once the crumbed authentication object has been created, you can continue writing your Python code as you would normally.  If you’re interested in learning more about crumbs and CSRF, the Jenkins documentation covers it, or just search for CSRF for more background.

This issue was slightly confusing/annoying, but I’d rather deal with an extra few lines of code and know that my Jenkins server is secure.


Remote Jenkins builds using Github auth

Having the ability to call Jenkins jobs remotely is pretty slick; it adds extra flexibility and opens up some interesting applications.  For example, you could use remote builds to trigger a job from a chat app or from some other web application.  I have chosen to write a quick Bash script as a proof of concept, but this could easily be extended or written in a different language via one of the Jenkins language specific libraries.

The instructions below assume that you are using a Jenkins freestyle job, as I haven’t experimented much with remote builds for pipelines yet.

The first step is to enable remote builds for the Jenkins job that will be triggered.  There is an option in the job for “Trigger builds remotely” which allows the job to be called from a script.

[Screenshot: the “Trigger builds remotely” option in the Jenkins job configuration]

The authentication token can be any arbitrary string you choose.  Also note the URL shown with that option – you will need it later as part of the script that calls this job.

With the authentication token configured and the Jenkins URL recorded, you can begin writing the script.  The first step is to populate some variables for kicking off the job.  Below is an example of how you might do this.

jenkins_url="https://jenkins.example.com"
job_name="my-jenkins-job"
job_token="xxxxx"
auth="username:token"
my_repo="some_git_repo"
git_tag="abcd123"

Be sure to fill in these variables with the correct values.  job_token should match the string you entered above in the Jenkins job, and auth should be your GitHub username/token combination.  If you are not familiar with GitHub tokens, the GitHub documentation on personal access tokens explains how to set them up.

As part of the script, you will want to create a Jenkins “crumb” using your GitHub credentials, which is used to prevent cross-site request forgery (CSRF) attacks.  Here’s what the creation of the crumb looks like (borrowed from a Stackoverflow post).

crumb=$(curl -s --user "$auth" "$jenkins_url"'/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Once you have your variables configured and your crumb all set up, you can test out the Jenkins job.

curl -X POST -H "$crumb" "$jenkins_url/job/$job_name/build?token=$job_token" \
  --user $auth \
  --data-urlencode json='
  {"parameter":
    [
      {"name":"parameter1", "value":"test1"},
      {"name":"parameter2", "value":"test2"},
      {"name":"git_repo", "value":"'$my_repo'"},
      {"name":"git_tag", "value":"'$git_tag'"}
    ]
  }'

In the example above, I am using several Jenkins parameters as part of the build, and the json name/value pairs correspond to those parameters.  Notice that for a few of the values I am using Bash variables – make sure those are wrapped in single quotes so the json stays correctly escaped.  The syntax is slightly different for those values, but it adds flexibility to the job configuration and allows the script to be called with dynamic values.
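
To take the dynamic values a step further, the hard coded variables at the top of the script could be replaced with positional arguments, something like this (hypothetical) sketch:

# Take the repo and tag from the command line, falling back to defaults
my_repo="${1:-some_git_repo}"
git_tag="${2:-$(git rev-parse --short HEAD)}"  # assumes the script runs inside a git checkout

The script could then be invoked with, say, ./trigger-job.sh some_git_repo abcd123 (script name is just an example), using whatever values the caller needs.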

If you call this script now, it should kick off a Jenkins job for you with all of the values you have provided.
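
To quickly confirm the build was actually kicked off, one option (a sketch, assuming jq is installed) is to query the Jenkins JSON API for the last build using the same variables:

curl -s --user "$auth" "$jenkins_url/job/$job_name/lastBuild/api/json" | jq -r '.result'
# prints null while the build is still running, SUCCESS once it finishes cleanly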


Curl on Windows using a Docker wrapper

Does the Windows built-in version of “curl” confuse or intimidate you?  Maybe you come from a Linux or Unix background and yearn for some of your favorite go-to tools.  Newer versions of Powershell include a cmdlet for interacting with the web called Invoke-WebRequest, which is useful but is not a great drop-in replacement for those with experience in non-Windows environments.  The Powershell cmdlets are a move in the right direction toward unifying CLI experiences, but there are still many folks, myself included, who have become attached to curl over the years.  It is worth noting that a Windows compatible version of curl has existed for a long time; however, dealing with the zip file has always been a nuisance, just as using SSH has always been a hassle on Windows.  It has always been possible to use the *nix equivalent tools, it is just clunky.

I found a low effort solution for adding curl to my Windows CLI flow that acts as a nice middle ground between learning Invoke-WebRequest and installing the curl binaries directly, and I’d like to share it.  This alias trick is a simple way to use curl for working with APIs and other web testing in Windows environments without getting tangled up in managing versions and dealing with vulnerabilities.  Just pull the latest Docker image to update curl to the newest version, and don’t worry about its implementation across different systems.

Prerequisites are light.  First, make sure to have the Docker for Windows app installed (stable or beta are both fine) as well as a semi-recent version of Powershell.

Next step: if you haven’t set up a Powershell profile yet, there are lots of links and resources about how to do it (I even wrote about it recently), so I am skipping that step here.  Start by adding the following snippet to your Powershell profile (by default located at C:\Users\<user>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1) and saving.

# Curl alias using docker
function Docker-Curl {
   docker run --rm byrnedo/alpine-curl $args
}

# Aliases
New-Alias dcurl Docker-Curl

Then reload your Powershell profile (for example by running “. $PROFILE” or opening a new terminal) and run the curl alias that was just created.

dcurl -h
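
As a slightly more realistic smoke test, the alias behaves just like regular curl, for example fetching your public IP from the same icanhazip.com endpoint used in the Nginx post above:

dcurl -s https://icanhazip.com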

One issue you might notice from the snippet above is that the Docker image is not an “official” image.  If this bothers you (security concerns, etc.), it is really easy to create your own, secure image.  There are lots of examples of how to create minimal images with curl pre-installed.  Just be aware that your custom image will need to be maintained and occasionally rebuilt/published to guard against future vulnerabilities.  For brevity, I have skipped this process, but here’s a quick idea of what a custom image might look like.
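
As a rough sketch, a bare bones custom image really only needs a couple of lines (the image name and base tag here are just examples):

FROM alpine
# Install curl and nothing else
RUN apk add --no-cache curl
ENTRYPOINT ["curl"]

Build it with docker build -t my-curl . and swap it in for byrnedo/alpine-curl in the Powershell function above.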

Optional

To update curl, just run the docker pull command.

docker pull byrnedo/alpine-curl

Now you have the best of both worlds.  The built-in Invoke-WebRequest cmdlet provided by Powershell is available, as well as the venerable curl command.

My number one reason for using curl in a container is that curl has been around for such a long time (fewer bugs and edge cases) and it can be used for nearly any web related task.  It is also much handier for those with a *nix background to reach for curl rather than digging around in unfamiliar Powershell docs for similar functionality.  Having the ability to run some of my favorite tools in an easy, reproducible way on Windows has been a refreshing experience while sliding back into the Windows world.


Backup Route 53 zones

We have all heard about DNS catastrophes.  I just read a horror story on reddit the other day, where an Azure root DNS zone was accidentally deleted with no backup.  I experienced a similar disaster a few years ago – a simple DNS change managed to knock out internal DNS for an entire domain, which contained hundreds of records.  Reading the post hit close to home, uncovering some of my own past anxiety, so I began poking around for solutions.  Immediately, I noticed that backing up DNS records is usually skipped over as part of the backup process.  Folks just tend to never do it, for whatever reason.

I did discover, though, that backing up DNS is easy.  So I decided to fix the problem.

I wrote a simple shell script that dumps out all Route 53 zones for a given AWS account to json files and uploads them to an S3 bucket.  The script is only a handful of lines, which is perfect because it doesn’t take much effort to potentially save your bacon.

If you don’t host DNS in AWS, the script can be modified to work for other DNS providers (assuming they have public APIs).

Here’s the script:

#!/usr/bin/env bash

set -e

# Dump route 53 zones to a text file and upload to S3.

BACKUP_DIR=/home/<user>/dns-backup
BACKUP_BUCKET=<bucket>
# Use full paths for cron
CLIPATH="/usr/local/bin"

# Dump all zones to a file and upload to s3
function backup_all_zones () {
  local zones
  # Enumerate all zones
  zones=$($CLIPATH/aws route53 list-hosted-zones | jq -r '.HostedZones[].Id' | sed "s/\/hostedzone\///")
  for zone in $zones; do
    echo "Backing up zone $zone"
    $CLIPATH/aws route53 list-resource-record-sets --hosted-zone-id $zone > $BACKUP_DIR/$zone.json
  done

  # Upload backups to s3
  $CLIPATH/aws s3 cp $BACKUP_DIR s3://$BACKUP_BUCKET --recursive --sse
}

# Create backup directory if it doesn't exist
mkdir -p $BACKUP_DIR
# Backup up all the things
time backup_all_zones

Be sure to update the <user> and <bucket> in the script to match your own environment settings.  Dumping the DNS records to json is nice because it allows for a more programmatic way of working with the data.  This script can be run manually, but is much more useful if run automatically.  Just add the script to a cronjob and schedule it to dump DNS periodically.
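
For example, a crontab entry along these lines (the script path here is just a placeholder) would dump and upload the zones every night at 2am:

0 2 * * * /home/<user>/dns-backup/backup-route53.sh >> /home/<user>/dns-backup/backup.log 2>&1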

For this script to work, the aws cli and jq need to be installed.  The installation is skipped in this post, but it is trivial – refer to the aws cli and jq documentation for instructions.

The aws cli needs to be configured with an API key that has read access to Route 53 and the ability to write to S3.  Details are skipped for this step as well – consult the AWS documentation on IAM permissions for help with setting up API keys.  Another, simplified approach is to reuse a pre-existing key with admin credentials (not recommended).
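
Once the key is configured, a quick sanity check (a sketch) is to make sure the CLI can both list zones and write to the bucket:

# Should print the names of the zones the key can see
aws route53 list-hosted-zones --query 'HostedZones[].Name'
# Should upload a small test object (clean it up afterwards)
echo test > /tmp/r53-test.txt
aws s3 cp /tmp/r53-test.txt s3://<bucket>/r53-test.txt --sse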
