Booting Nelson locally is fairly straightforward. First, obtain a Personal Access Token for GitHub. Once you have this, add it to your `~/.bash_profile` as the `GITHUB_TOKEN` environment variable.
In addition, you are required to add the following environment variables:

```
# these can be randomly assigned strings
export NELSON_GITHUB_SECRET="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
export NELSON_GITHUB_TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# the value here should be the PAT that you got from GitHub
export GITHUB_TOKEN="XXXXXXXXXXXXXXXXXX"
```
Next, add a line to your `/etc/hosts` such that you have a `nelson.local` domain pointing to your local loopback interface. This looks like:
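A minimal sketch of such an entry (assuming the standard `127.0.0.1` loopback address), appended to `/etc/hosts`:

```
127.0.0.1    nelson.local
```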
This is the bare minimum required to run Nelson. You can then instruct Nelson to boot up by using the following command:
Nelson will then boot up and be running on `http://nelson.local:9000`. Be aware that unless you have correctly configured a development OAuth application on GitHub for your local Nelson, any activity related to GitHub will be rejected. Configuring the OAuth application is covered in the Operator Guide.
By default, the local development configuration assumes you’re running Nomad on the loopback address, `127.0.0.1`. If you wish to point to a remote Nomad cluster, then you must set the following environment variables:
```
export NOMAD_ADDR=XXXXXXXX
export NELSON_NOMAD_DOCKER_HOST=XXXXXXXXXXXX
# the following are required if the docker registry that the
# remote cluster uses requires authentication
export NELSON_NOMAD_DOCKER_USERNAME=XXXXXXXXXXXX
export NELSON_NOMAD_DOCKER_PASSWORD=XXXXXXXXXXXX
```
Technically, you can run Nelson without its other system dependencies running locally, though some functionality will of course not work. If the feature you’re working on doesn’t need those systems, be aware that your logs will contain errors reporting that those dependencies are unreachable. To remove these errors, install and run the following locally before booting Nelson.
Do be aware that you could also run these dependencies as containers, but that can often become tricky with bridge networking. This is certainly possible to overcome, but it’s more hassle than most people want when getting set up.
Install Consul with `brew install consul`, or by downloading and installing it from here. Next, modify the Nelson config at `<project-dir>/etc/development/http/http.dev.cfg`: in the `consul` config, update `endpoint` to be:

```
datacenters.<yourdc>.consul.endpoint = "http://127.0.0.1:8500"
```

Then, run the Consul binary with `consul agent -dev`.
Install Vault with `brew install vault`, or by downloading and installing it from here. Modify the Nelson config at `<project-dir>/etc/development/http/http.dev.cfg`: in the `vault` config, update `endpoint` to be:

```
endpoint = "http://127.0.0.1:8200"
```

Then, run the Vault binary with `vault server -dev`.
To run tests, you must have `promtool` available on your path. Developers on a Mac may run this script to fetch `promtool` and install it. If you prefer to install this binary manually, then please fetch it from the Prometheus site and install it at your favourite location on your path.
There are a few conventions at play within the Nelson codebase:
- JSON responses from the API should use
- Any functions that are going to be called for user or event actions should reside in the `Nelson` object and have a `NelsonK[A]` return value (where `A` is the type you want to return). Functions in this object are meant to assemble results from various other sub-systems (e.g. `Storage`) into something usable by clients of the API.
If you’re running Docker on OS X, it’s possible that your boot2docker/docker-machine VM does not have enough entropy to run the container, as we’re using `SecureRandom`. If this is the case, then you can run the following container to augment the random number generation:
```
docker pull harbur/haveged
docker run --privileged -d harbur/haveged
```
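Before reaching for the `haveged` container, you can check whether the VM is actually entropy-starved. This is a sketch assuming a Linux guest (run it via `docker-machine ssh` on OS X); the ~1000 threshold mentioned in the comment is a common rule of thumb, not taken from the source:

```shell
# Print the kernel's available entropy pool (Linux only).
# Values consistently below ~1000 suggest SecureRandom may block.
cat /proc/sys/kernel/random/entropy_avail
```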
Nelson can be started with the following command, assuming you’re in the root of the Nelson source tree:
The `nelson.env` file looks like this (but with valid values, naturally):

```
NELSON_SECURITY_ENCRYPTION_KEY=xxxxxxxxxxxx
NELSON_SECURITY_SIGNATURE_KEY=xxxxxxxxxxxx
NELSON_GITHUB_SECRET=xxxxxxxxxxxx
NELSON_GITHUB_CLIENT_ID=xxxxxxxxxxxx
NELSON_DOMAIN=xxxxxxxxxxxx
GITHUB_USER=xxxxxxxxxxxx
GITHUB_TOKEN=xxxxxxxxxxxx
```
The session encryption, signing, and verification keys can be generated by running `bin/generate-keys`. The below is an example. Run this yourself! Do not use these! Do not share your signing key or encryption key!
```
export NELSON_SECURITY_ENCRYPTION_KEY=WWD4N/4oxgPmGlai/MW4Hw==
export NELSON_SECURITY_SIGNATURE_KEY=YNhJUcF8ggQ7HoWkmGqaxw==
```
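Note that each example key above is 24 base64 characters, i.e. 16 random bytes. `bin/generate-keys` is the supported way to produce these, but as a hedged sketch of the shape of value required, an equivalent using `openssl`:

```shell
# Generate a random 16-byte (128-bit) key, base64-encoded,
# matching the shape of the example keys above.
openssl rand -base64 16
```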
Nelson’s primary data store is an H2 database. This deliberately doesn’t scale past a single machine, and was an intentional design choice to limit complexity in the early phases of the project. With that being said, H2 is very capable, and for most users this will work extremely well. If Nelson were reaching the point where H2 on SSD drives were a bottleneck, you would be doing many thousands of deployments a second, which is exceedingly unlikely.
If you start to contribute to Nelson, then it’s useful to understand the data schema, which is as follows:
As can be seen from the diagram, Nelson has a rather normalized structure. The authors have avoided denormalizing this schema where possible; as Nelson is not in the runtime hot path, the system does not suffer serious performance penalties from such a design. In short, it will be able to scale far in excess of the query and write load Nelson actually receives.
Upon receiving notification of a release event on GitHub, Nelson converts this into events published to its internal event stream (called `Pipeline`). `Pipeline`, and the messages on it, are not durable: if Nelson is processing a message (or has messages queued because of contention or existing backpressure) and anything halts the JVM process (an outage, an upgrade, and so on), those messages will be lost.
Nelson does not have a high-availability data store. As mentioned above in the database section, this is typically not a problem, but should be a consideration. In the future, the authors may consider upgrading Nelson so it can cluster, but the expectation is that scaling-up will be more cost-effective than scaling-out for most users. Nelson will currently eat up several thousand deployments a minute, which is larger than most organizations will ever reach.
The Nelson CLI is useful for debugging the Nelson API locally. Particularly useful is the client’s `--debug-curl` flag; you can read about it in the client’s documentation. One option that you need to pay attention to for local usage is the `--disable-tls` flag on the `login` subcommand. To login to a local Nelson instance, you should run the following:
```
nelson login --disable-tls nelson.local:9000
```
It’s important to note that to use the API locally, a change to the development config at `<project-dir>/etc/development/http/http.dev.cfg` is needed. Add the following line inside the

```
organization-admins = [ "<your-github-handle-here>" ]
```
This ensures that when you login via the UI, you are specified as an admin and are not limited in the operations you can perform locally.
There are a couple of options for testing documentation locally. First, you need to install Hugo, which is a single, native binary that just needs to be present on your path.
The most convenient method for viewing documentation locally is to run via SBT using the following command:
This will open your default web browser with the documentation site, which is handy for locally viewing the docs. It does not, however, support dynamic reloading of pages when the source changes. Luckily, this is supported by Hugo, and can easily be run locally with a script:
```
cd docs/src/hugo
hugo server -w -b 127.0.0.1 -p 4000
```
Hugo will automatically refresh the page when the source files change, which can be very helpful when one is iterating on the documentation site over time.