Production hosting is managed by the Shields ops team:
| Component | Subcomponent | People with access |
| --- | --- | --- |
| shields-production-us | Full access | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
| shields-production-us | Access management | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
| Compose.io Redis | Account owner | @paulmelnikow |
| Compose.io Redis | Account access | @paulmelnikow |
| Compose.io Redis | Database connection credentials | @calebcartwright, @chris48s, @paulmelnikow, @pyvesb |
| Zeit Now | Team owner | @paulmelnikow |
| Zeit Now | Team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| Raster server | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| shields-server.com redirector | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| Cloudflare (CDN) | Account owner | @espadrine |
| Cloudflare (CDN) | Access management | @espadrine |
| Cloudflare (CDN) | Admin access | @calebcartwright, @chris48s, @espadrine, @paulmelnikow, @PyvesB |
| OpenStreetMap (for Wheelmap) | Account owner | @paulmelnikow |
| DNS | Read-only account access | @espadrine, @paulmelnikow, @chris48s |
| Sentry | Error reports | @espadrine, @paulmelnikow |
Shields has mercifully little persistent state:
- The GitHub tokens we collect are saved on each server in a cloud Redis database. They can also be fetched from the GitHub auth admin endpoint for debugging.
- The server keeps the regular-update cache in memory. It is neither persisted nor inspectable.
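To illustrate the second point, here is a minimal sketch (not Shields' actual implementation) of an in-memory regular-update cache: entries live only in process memory, so nothing survives a restart and there is no external way to inspect them. The key name and TTL are invented for illustration.

```javascript
// Illustrative in-memory cache with a time-to-live; nothing is persisted.
function makeCache(ttlMs) {
  const entries = new Map()
  return {
    get(key, compute) {
      const hit = entries.get(key)
      if (hit && Date.now() - hit.at < ttlMs) return hit.value
      const value = compute()
      entries.set(key, { value, at: Date.now() })
      return value
    },
  }
}

const cache = makeCache(60_000)
const first = cache.get('npm-downloads', () => 42) // computed: 42
const second = cache.get('npm-downloads', () => 99) // fresh hit: still 42
```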
To bootstrap the configuration process, the script that starts the server sets a single environment variable. With that variable set, the server (using `config`) reads these files:
- `local-shields-io-production.yml`. This file contains secrets, which are checked in with a deploy commit.
- `shields-io-production.yml`. This file contains non-secrets, which are checked in to the main repo.
- `default.yml`. This file contains defaults.
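The layering of those three files can be sketched as follows. This is an illustration of the merge order only, not Shields' actual configuration; the keys and values are invented.

```javascript
// Deep-merge sketch: defaults are overridden by the checked-in non-secrets,
// which are overridden by the secrets deployed alongside the code.
function merge(base, override) {
  const out = { ...base }
  for (const [key, value] of Object.entries(override)) {
    out[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? merge(out[key] || {}, value)
        : value
  }
  return out
}

const defaults = { port: 80, redis: { url: null } } // default.yml
const nonSecrets = { redis: { url: 'redis://example.com' } } // shields-io-production.yml
const secrets = { redis: { password: 'hunter2' } } // local-shields-io-production.yml

const config = [nonSecrets, secrets].reduce(merge, defaults)
// config.redis → { url: 'redis://example.com', password: 'hunter2' }
```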
Sitting in front of the three servers is a Cloudflare Free account which provides several services:
- Global CDN, caching, and SSL gateway for img.shields.io
- Analytics through the Cloudflare dashboard
- DNS resolution for shields.io
Cloudflare is configured to respect the servers' cache headers.
Both the badge server and frontend are served from Heroku. After merging a commit to master, Heroku should create a staging deploy. Check this has deployed correctly in the shields-staging pipeline and review http://shields-staging.herokuapp.com/. If we're happy with it, "promote to production", which deploys what's on staging to the production app.
DNS is registered with DNSimple.
Logs can be retrieved from Heroku.
Error reporting is one of the most useful tools we have for monitoring
the server. It's generously donated by Sentry. We bundle
`raven` into the application, and the Sentry DSN is configured via
`local-shields-io-production.yml` (see documentation).
The canonical and only recommended domain for badge URLs is
`img.shields.io`. Currently it is possible to request badges on both
`img.shields.io` and `shields.io`, i.e. both https://img.shields.io/badge/build-passing-brightgreen and https://shields.io/badge/build-passing-brightgreen will work. However:
- We never show or generate the `img.`-less URL format on https://shields.io/
- We make no guarantees about the `img.`-less URL format. At some future point we may remove the ability to serve badges on the bare domain (without `img.`) without any warning.

`img.shields.io` should always be used for badge URLs.
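For illustration, a hypothetical helper (not part of Shields) that rewrites a bare-domain badge URL onto the canonical img.shields.io domain described above:

```javascript
// Rewrite shields.io badge URLs to the canonical img.shields.io domain.
function canonicalBadgeUrl(url) {
  const parsed = new URL(url)
  if (parsed.hostname === 'shields.io') parsed.hostname = 'img.shields.io'
  return parsed.toString()
}

canonicalBadgeUrl('https://shields.io/badge/build-passing-brightgreen')
// → 'https://img.shields.io/badge/build-passing-brightgreen'
```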
Overall server performance and requests by service are monitored using Prometheus and Grafana.
Request performance is monitored in two places: