I have created a GitHub project that has basic instructions for getting started. You can take a look over there to see how all of this works and to get ideas for your own setup.
I used the following links as reference for my approach to Dockerizing Sentry.
If you have existing configurations, it is probably a good idea to start from those. You can check my GitHub repo for what a basic configuration looks like. If you are starting from scratch, or are using version 7.1.x or above, you can use the “sentry init” command to generate a skeleton configuration to work from.
For this setup to work you will need the following prebuilt Docker images/containers. I suggest using something simple like docker-compose to stitch the containers together.
- redis – https://registry.hub.docker.com/_/redis/
- postgres – https://registry.hub.docker.com/_/postgres/
- memcached – https://hub.docker.com/_/memcached/
- nginx – https://hub.docker.com/_/nginx/
NOTE: If you are running this on OS X you may need to do some trickery and grant special permissions at the host (Mac) level, e.g. create the ~/docker/postgres directory and give it the correct permissions (I just used 777 recursively for testing; make sure to lock it down if you put this in production).
I wrote a little script in my GitHub project that takes care of creating all of the directories on the host OS that are needed for data to persist. The script also generates a self-signed cert to use for proxying Sentry through Nginx. Without the certificate, the statistics pages in the Sentry web interface will be broken.
To run the script, run the following command and follow the prompts. Also make sure you have docker-compose installed beforehand, since it is needed to run all of the commands.
The certs that get generated are self-signed, so you will see the red lock in your browser. I haven’t tried it yet, but I imagine using Let’s Encrypt to create the certificates would be very easy. Let me know if you have had any success generating Nginx certs for Docker containers; I might write a follow-up post.
After setting up the directories and creating the certificates, the first step in getting up and running is to add the Sentry superuser to Postgres (version 9.4 or above). To do this, you will need to fire up the Postgres container.
docker-compose up -d postgres
Then to connect to the Postgres DB you can use the following command.
docker-compose run postgres sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
Once you are logged in to the Postgres container you will need to set up a few Sentry DB related things.
First, create the role.
CREATE ROLE sentry superuser;
And then allow it to login.
ALTER ROLE sentry WITH LOGIN;
Create the Sentry DB.
CREATE DATABASE sentry;
When you are done in the container, \q will drop you out of the psql shell.
After you’re done configuring the DB components, you will need to “prime” Sentry by running it a first time. This will probably take a little while, because docker-compose will need to build and pull all of the other needed Docker images.
docker-compose build
docker-compose up
You will quickly notice, if you try to browse to the Sentry URL (e.g. the IP/port of your Sentry container, or the docker-machine IP if you’re on OS X), that there are errors in the logs and the site returns 503s.
Repair the database (if needed)
To fix this, if this is the first time you have run through the setup, you will need to run the following command to repair the DB.
docker-compose run sentry sentry upgrade
The default Postgres database username and password are both sentry in this setup. As part of the setup, the upgrade prompt will ask you to create a new user and password; make note of what those are. You will definitely want to change these credentials if you use this outside of a test or development environment.
After upgrading/preparing the database, you should be able to bring up the stack again.
docker-compose up -d && docker-compose logs
Now you should be able to get to the Sentry URL and start configuring. To manage usernames/passwords, you can visit the /admin URL and set up the accounts.
The Sentry server should come up and allow you in, but it will likely need more configuration. Using the power of docker-compose, it is easy to add in any custom configurations you have. For example, if you need to adjust Sentry-level configuration, all you need to do is edit the file at ./sentry/sentry.conf.py and then restart the stack to pick up the changes. Likewise, if you need to make changes to Nginx or Celery, just edit the configuration file and bump the stack with “docker-compose up -d”.
I have attempted to configure as many sane defaults as possible in the base config to make the configuration steps easier. You will probably want to check some of the following settings in the sentry/sentry.conf.py file.
- SENTRY_ADMIN_EMAIL – For notifications
- SENTRY_URL_PREFIX – This is especially important for getting stats working
- SENTRY_ALLOW_ORIGIN – Where to allow communications from
- ALLOWED_HOSTS – Which hosts can communicate with Sentry
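As a sketch, the settings above might look like this in sentry/sentry.conf.py. All of the values here are hypothetical placeholders; substitute your own hostnames and addresses.

```python
# sentry/sentry.conf.py (fragment) -- hypothetical example values

# Where admin notifications get sent
SENTRY_ADMIN_EMAIL = 'admin@example.com'

# Must match the public URL Sentry is served from (no trailing slash);
# getting this wrong is the usual cause of broken statistics pages
SENTRY_URL_PREFIX = 'https://sentry.example.com'

# Which origins may talk to the Sentry API; tighten this in production
SENTRY_ALLOW_ORIGIN = '*'

# Standard Django setting: hostnames allowed to reach the app
ALLOWED_HOSTS = ['sentry.example.com', 'localhost']
```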
If you have the SENTRY_URL_PREFIX set up correctly you should see something similar when you visit the /queue page, which indicates statistics are working.
If you want to set up any kind of email alerting, make sure to check out the mail server settings.
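Sentry uses the standard Django mail settings for this. A minimal sketch, assuming an external SMTP server (hostname and credentials here are placeholders):

```python
# sentry/sentry.conf.py (fragment) -- hypothetical SMTP values

EMAIL_HOST = 'smtp.example.com'   # your mail relay
EMAIL_PORT = 587
EMAIL_HOST_USER = 'sentry'
EMAIL_HOST_PASSWORD = 'change-me'
EMAIL_USE_TLS = True

# The From: address Sentry alerts will appear to come from
SERVER_EMAIL = 'sentry@example.com'
```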
docker-compose.yml example file
The following configuration shows how the Sentry stack should look. The meat of the logic is in this configuration, but since docker-compose is so flexible, you can modify it to use custom commands, different ports, or any other settings you may need to make Sentry work in your own environment.
# Caching
redis:
  image: redis:2.8
  hostname: redis
  ports:
    - "6379:6379"
  volumes:
    - "/data/redis:/data"

memcached:
  image: memcached
  hostname: memcached
  ports:
    - "11211:11211"

# Database
postgres:
  image: postgres:9.4
  hostname: postgres
  ports:
    - "5432:5432"
  volumes:
    - "/data/postgres/etc:/etc/postgresql"
    - "/data/postgres/log:/var/log/postgresql"
    - "/data/postgres/lib/data:/var/lib/postgresql/data"

# Customized Sentry configuration
sentry:
  build: ./sentry
  hostname: sentry
  ports:
    - "9000:9000"
    - "9001:9001"
  links:
    - postgres
    - redis
    - celery
    - memcached
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"

# Celery
celery:
  build: ./sentry
  hostname: celery
  environment:
    - C_FORCE_ROOT=true
  command: "sentry celery worker -B -l WARNING"
  links:
    - postgres
    - redis
    - memcached
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"

# Celerybeat
celerybeat:
  build: ./sentry
  hostname: celerybeat
  environment:
    - C_FORCE_ROOT=true
  command: "sentry celery beat -l WARNING"
  links:
    - postgres
    - redis
  volumes:
    - "./sentry/sentry.conf.py:/home/sentry/.sentry/sentry.conf.py"

# Nginx
nginx:
  image: nginx
  hostname: nginx
  ports:
    - "80:80"
    - "443:443"
  links:
    - sentry
  volumes:
    - "./nginx/sentry.conf:/etc/nginx/conf.d/default.conf"
    - "./nginx/sentry.crt:/etc/nginx/ssl/sentry.crt"
    - "./nginx/sentry.key:/etc/nginx/ssl/sentry.key"
The Dockerfiles for each of these components are fairly straightforward. In fact, the same Dockerfile can be used for the Sentry, Celery and Celerybeat services.
# Kombu breaks in 2.7.11
FROM python:2.7.10

# Set up sentry user
RUN groupadd sentry && useradd --create-home --home-dir /home/sentry -g sentry sentry
WORKDIR /home/sentry

# Sentry dependencies
RUN pip install \
    psycopg2 \
    mysql-python \
    supervisor \
    # Threading
    gevent \
    eventlet \
    # Memcached
    python-memcached \
    # Redis
    redis \
    hiredis \
    nydus

# Sentry
ENV SENTRY_VERSION 7.7.4
RUN pip install sentry==$SENTRY_VERSION

# Set up directories
RUN mkdir -p /home/sentry/.sentry \
    && chown -R sentry:sentry /home/sentry/.sentry \
    && chown -R sentry /var/log

# Configs
COPY sentry.conf.py /home/sentry/.sentry/sentry.conf.py

#USER sentry
EXPOSE 9000/tcp 9001/udp

# Making sentry commands easier to run
RUN ln -s /home/sentry/.sentry /root

CMD sentry --config=/home/sentry/.sentry/sentry.conf.py start
Since the customized Sentry config is rather lengthy, I will point you to the GitHub repo again. There are a few values that you will need to provide, but they should be pretty self-explanatory.
Once the configs have all been put into place, you should be good to go. A bonus piece would be to add an Upstart service that takes care of managing the stack if the server gets rebooted or the containers manage to get stuck in an unstable state. This is fairly easy to do, and many other guides and posts have been written about how to accomplish it.
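For reference, a minimal sketch of such an Upstart job might look something like the following. The paths here are assumptions; adjust them to wherever your docker-compose.yml actually lives.

```
# /etc/init/sentry.conf -- hypothetical Upstart job
description "Sentry Docker stack"

start on filesystem and started docker
stop on runlevel [!2345]

respawn

# Directory containing the docker-compose.yml (assumed path)
chdir /opt/sentry
exec /usr/local/bin/docker-compose up
```

The respawn stanza is what gives you the restart-on-failure behavior; Upstart will relaunch the stack if the docker-compose process dies.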