
How the PhD Student Portal is deployed at Sanger

Currently, deployment is done with docker-compose; this repository (https://github.com/wsi-cogs/deploy) contains the relevant docker-compose configuration files.
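
For orientation, the deploy repository is laid out roughly as follows (inferred from the build paths and submodules mentioned below; the exact contents may differ):

  deploy/
    development/   docker-compose configuration for the development instance
    testing/       docker-compose configuration for the testing/live instance
    frontend/      git submodule (wsi-cogs/frontend)
    backend/       git submodule (wsi-cogs/backend)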

Bringing up a fresh server instance

  • Create the FCE instance
    • For development, m2.tiny is fine: 1 CPU is enough for dev (having 2 makes things go noticeably faster), 10GB of RAM is far more than needed, and 30GB of disk is plenty – m1.tiny would probably also work, if you can get Docker not to keep an unboundedly large cache of old images
    • For production, multiple CPUs are a good idea – m1.small or m1.medium
    • Use a source image with Docker preinstalled (e.g. bionic-WTSI-docker_b5612)
    • Add the instance to cogs-security-group (allows inbound SSH, and TCP on ports 800x)
  • Add a volume
    • It will contain a couple of config files, the Postgres database, and uploaded project reports
    • For development, any size will be fine
    • For production: all development databases combined are currently under 200MiB, and the maximum size of each upload (one per student per rotation) is currently capped at 30MiB; assuming 15 students uploading in each rotation, this is under 1.5GiB per year
    • The database containing live data from one rotation (the largest thing in there is probably the project abstracts) is currently at 80MiB
  • Format the volume (fdisk /dev/vdb: n, accept the defaults, w; then mkfs.ext4 /dev/vdb1)
  • Mount the volume at /cogs: sudo mount /dev/vdb1 /cogs
  • Make directories /cogs/config, /cogs/postgres, and /cogs/uploads, and inside /cogs/postgres and /cogs/uploads make a directory called dev or test (depending on what kind of instance you are bringing up) – a consolidated command sketch follows this list
  • Copy the configuration file from elsewhere (e.g. the Zeta backup) or write a new one based on the example in wsi-cogs/backend, and put it in /cogs/config/dev.yaml or /cogs/config/test.yaml
  • Clone wsi-cogs/deploy into ~/
    • git clone --recursive https://github.com/wsi-cogs/deploy will get the submodules as well

    • For whatever reason, docker-compose complains if you have the docker-compose files outside your home directory, e.g. with the deploy repo at /cogs/deploy:

      ERROR: build path /cogs/deploy/frontend either does not exist, is not accessible, or is not a valid URL.
      

      If this worked, everything could be stored on a volume, but it doesn’t, so currently the code has to live on the instance itself.

  • cd ~/deploy/development && docker-compose up --build (or, for a test/live environment, cd ~/deploy/testing && ...)
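
For reference, here is a consolidated sketch of the volume and checkout steps above, assuming a development instance and that the volume appears as /dev/vdb; the config source path is a placeholder, not a real location:

$ sudo fdisk /dev/vdb                 # n, accept the defaults, w
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mkdir /cogs
$ sudo mount /dev/vdb1 /cogs
$ sudo mkdir -p /cogs/config /cogs/postgres/dev /cogs/uploads/dev
$ sudo cp /path/to/backup/dev.yaml /cogs/config/dev.yaml    # placeholder source
$ git clone --recursive https://github.com/wsi-cogs/deploy ~/deploy
$ cd ~/deploy/development && docker-compose up --build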

Updating an existing instance

$ cd ~/deploy
$ git pull --recurse-submodules=on-demand
$ git submodule update --init --recursive
$ cd development # or testing, depending on the type of instance
$ docker-compose up --build
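
Note that docker-compose up attaches to the containers' output, so the services will typically stop when the SSH session ends (unless it is run under something like tmux). If the rebuilt services should keep running after logging out, the last step can instead be run detached (a standard docker-compose option, not something specific to this setup):

$ docker-compose up --build -d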

Instances in use

Nominally, the idea is to have three separate instances of the app:

  • "development": used only by developers of the application, developer-mode enabled (which enables some UI on the frontend for manually poking the database), not behind PageSmith (therefore must not be available outside Sanger)
  • "testing": used for (sporadic) user testing, developer-mode disabled, accessible via a friendly URL, probably behind PageSmith
  • "production" (aka "live"): the actual production instance of the application, used (eventually) by real students with real data, developer-mode disabled, accessible at https://student-portal.sanger.ac.uk, behind PageSmith

However, there’s currently no configuration for "production" – the live instance is just using the "testing" config.

Currently, there are two separate FCE instances (on OpenStack Eta, in the "cogs" project), cogs and cogs-dev, hosting the testing/live and development environments respectively. The testing/live instance is accessible via PageSmith at https://student-portal.sanger.ac.uk, and the development instance via its floating IP. (Instances configured to be accessible via PageSmith expect to receive a PageSmith authentication cookie and so are not accessible via floating IP.)

The "cogs" OpenStack project also has an instance called cogs-dev-old; this can be deleted when its resources are needed for something else (it contains nothing of interest, everything on it has been pushed to GitHub).

The volumes currently in use are as follows:

  • cogs-test: mounted at /cogs on cogs (testing/live data)
  • cogs-20190520-volume: mounted read-only at /mnt on cogs (old data from Zeta, used for reference)
  • cogs-dev: mounted at /cogs on cogs-dev (development data)
  • cogs-zeta-data: not currently in use (old data from Zeta – unmodified copy)
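
If the mounts need to survive a reboot, entries along these lines could be added to /etc/fstab on each instance (the device names are assumptions – check with lsblk or blkid; using UUIDs would be more robust):

# on cogs (testing/live)
/dev/vdb1  /cogs  ext4  defaults     0  2
/dev/vdc1  /mnt   ext4  defaults,ro  0  2

# on cogs-dev (development)
/dev/vdb1  /cogs  ext4  defaults     0  2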