FRONT-END CI/CD AUTOMATION. PART 2

CD includes the versioning, release and deployment processes. Let's go through them step by step.

Versioning and Release

What for

Versioning solves many problems, including the compatibility issues that come up when developing libraries and frameworks. Here we focus on the problems that arise when developing applications for end users. This is what versioning gives you:

1) Stable revision markers. They make it easy to find the latest stable revision if you need to roll the application back (for example, when a critical bug makes it into production).

2) A naming convention. You can discuss releases without calling them “the one where we added the profile” or “the one where we fixed registration”: version numbers are compact and unambiguous, they let you write more accurate changelogs and investigate and reproduce bugs more efficiently.

 

How


1) Semantic Versioning (SemVer) – a convention for assigning version numbers. It is one of many, but it is the one used for versioning npm packages, so it combines conveniently with the version field in package.json.

2) npm version, yarn version – commands that bump your application's version. They automatically update the version in package.json, create a commit with a corresponding message, and add a tag named after the new version.

Deployment

Deployment is delivering and uploading files to the place they will be served from. How deployment happens depends heavily on how your application is hosted. It can be one of many options: for example, an AWS S3 Bucket / AWS CloudFront / another of the many AWS services, Heroku / Dokku, a VPS / VPH.

What for

Obviously, if we do not upload our application to the server it will be served from, people will not be able to use it.

The process needs to be automated: spend time once writing a script and you save a lot of time and nerves later, while also reducing the risk of human error.

How


Deployment is simply uploading files to another server. The only difference is in the protocol used:

– SSH – with some caveats, you can think of it as a push to a remote (in the literal sense of far away) repository.

– HTTP – a simple method familiar to front-end developers: each file is sent in the body of a corresponding HTTP request.

– FTP – the oldest of these protocols; you can find a Node.js client for it, but you may have to sweat a bit setting it up.

The file upload operation can be reduced to a single npm script that runs a Node.js file. Most providers offer APIs with Node.js clients (AWS, for example).

Summary

By analogy with CI, we end up with a few simple npm scripts that launch more complex and responsible processes.

Pipelines

In the context of computer science, the word pipeline literally means an assembly line, and that image describes the situation well.

What for

If we take a simplified analogy with a car, first we need to assemble a working engine, the wheels and the axles. Then connect them so that the engine spins the wheels. Then mount the body on top of it all so that the driver does not get wet in the rain.

Processes have interdependencies and an order. For example, it makes no sense to deploy the application if the tests fail. A simplified pipeline for our application looks like this: linting and tests – versioning – build – deploy.

This is where pipelines come into play, as a tool that describes and runs this assembly line of CI/CD processes.

How


1) Gitlab

2) Bitbucket

3) GitHub & Azure Pipelines

4) Jenkins and many others.

Almost everything on this list is repository hosting, except Jenkins (which we added for completeness, to make it clear that such tools are not necessarily part of a repository hosting service).

Below are some examples of how this looks in GitLab Pipelines. We chose GitLab for several reasons: we have close working experience with the service; a free GitLab account provides a pipeline allowance that is enough to practice on a pet project (the same goes for a standalone GitLab server); and it gives a good general understanding of how pipelines are configured. By analogy with GitLab, it was not hard to understand what Bitbucket Pipelines offers.

GitLab CI / CD

What does it look like? A pipeline is launched for each pushed commit. Below you can see a list of pipelines that were run for different commits.


Fig. 1. Successfully completed pipelines

A pipeline consists of stages. Stages, in turn, consist of tasks (jobs). Below you can see the expanded structure of a pipeline. The Setup, Code_quality and subsequent columns are stages. Each block with a green icon is a separate job.

Fig. 2. Pipeline decomposition

If one of the jobs fails, the pipeline stops. This is where the benefit of bundling repository hosting with pipelines is clearly visible: if the pipeline for the last commit in a merge request fails, the request cannot be merged. This keeps code that fails linting or tests, for example, out of stable branches.


Fig. 3. A pipeline that failed because linting failed.

.gitlab-ci.yml

How to set it up: the pipeline is described in the .gitlab-ci.yml file, which must be located in the root folder of the repository.

image: node:8

variables:
 REACT_APP_ENV_NAME: $CI_ENVIRONMENT_NAME

stages:
 - setup
 - code_quality
 - testing
 - semver
 - deployment

Lines 1–11 of .gitlab-ci.yml

image – specifies the Docker container in which the pipeline should run. In short, Docker is a technology that provides a predictable runtime environment. In this case, we want to run on a nominal Linux with version 8 of Node.js installed.

variables – lets you explicitly define environment variables for the duration of the pipeline. In our example, we take a built-in variable that contains the name of the environment the pipeline is running for and copy it into a variable that will be available inside the built application. Here this was done to integrate with the error tracking system, Sentry.

stages – describes the order of the stages. We install dependencies, lint scripts and styles, then test, and after that we can deploy. It is an array of string values that are used to label jobs. The same stages are shown in Fig. 2.

Jobs & scripts

dependencies:installation:
 stage: setup
 cache:
   paths:
     - node_modules/
 script:
   - yarn --prefer-offline --no-progress --non-interactive --frozen-lockfile
 tags:
   - web-ci
lint:scripts:
 stage: code_quality
 cache:
   paths:
     - node_modules/
 script:
   - yarn run lint:scripts:check --max-warnings 0
 only:
   changes:
     - src/**/*.{ts,tsx}
 tags:
   - web-ci
lint:styles:
 stage: code_quality
 cache:
   paths:
     - node_modules/
 script:
   - yarn run lint:styles:check
 only:
   changes:
     - src/**/*.{css,scss}
 tags:
   - web-ci

unit:testing:
 stage: testing
 cache:
   paths:
     - node_modules/
 only:
   changes:
     - src/**/*.{ts,tsx}
 script:
   - yarn test
 tags:
   - web-ci

Lines 13–60 of .gitlab-ci.yml

jobs – at the root level of the file come the job names, and under each one, its description. A job's key parameter is stage: the binding of the job to a stage, which determines after which jobs it will run.

script – the set of commands executed while the job runs. For dependencies:installation we see just one command, yarn, with arguments that tell it not to download anything that is already in the cache.

Linting scripts and styles works in a similar way. Note that both linting jobs are bound to the same stage, which means they will run in parallel where possible.

only and except allow you to define when a job should run and when it should not. For example, we see that script linting runs only on changes to .ts and .tsx files, and style linting only on changes to CSS and SCSS files.

In the same way, you can make deployment jobs available only for the master branch.
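For example, a deploy job restricted to master might look like this (a sketch; the job name and deploy script are illustrative):

```yaml
deploy:production:
  stage: deployment
  script:
    - yarn run build
    - yarn run deploy
  only:
    - master
```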

Versioning

Versioning is one of the trickier tasks when building a pipeline. The pipeline runs for a particular commit, while versioning itself creates a new commit, in which the version in package.json is changed and a new tag is attached. We will have to push to the repository from the pipeline, and in this way one pipeline will trigger another.

.semver_script: &semver_script
 stage: semver
 when: manual
 only:
   - master
 except:
   refs:
     - /^v\d+\.\d+\.\d+$/
 tags:
   - web-ci
 script:
   - mkdir -p ~/.ssh && chmod 700 ~/.ssh
   - ssh-keyscan $CI_SERVER_HOST >> ~/.ssh/known_hosts && chmod 644 ~/.ssh/known_hosts
   - eval $(ssh-agent -s)
   - ssh-add <(echo "$SSH_PRIVATE_KEY")
   - git remote set-url --push origin git@$CI_SERVER_HOST:$CI_PROJECT_PATH.git
   - git config --local --replace-all user.email "noreply@yourmail.com"
   - git config --local --replace-all user.name "Gitlab CI"
   - git checkout $CI_COMMIT_REF_NAME
   - git reset --hard origin/$CI_COMMIT_REF_NAME
   - npm version $SEMVER_LEVEL
   - git push -u origin $CI_COMMIT_REF_NAME --tags

semver:minor:
 <<: *semver_script
 variables:
   SEMVER_LEVEL: minor

semver:patch:
 <<: *semver_script
 variables:
   SEMVER_LEVEL: patch

Lines 62–93 of .gitlab-ci.yml

This fragment is more complex. It describes two similar jobs: one increments the minor version, the other the patch version. The script performs the operations needed to push from the pipeline back to its own repository:

  • Adding a private SSH key that is stored in environment variables and which has push access to the repository.
  • Adding a repository host to the list of known hosts.
  • Configuring a git user with a name and email, which is also required to be able to commit and push.

To avoid copying this fragment for the minor and patch versions, we use a YAML feature called an anchor. Features like this make YAML an excellent format for describing configuration.
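Stripped of everything else, the anchor mechanism looks like this (job and stage names here are illustrative): an & defines the anchor once, and <<: * merges its keys into each job that references it.

```yaml
.common: &common
  stage: example
  tags:
    - web-ci

job:one:
  <<: *common          # expands to the keys of `.common`
  script:
    - echo "one"

job:two:
  <<: *common
  script:
    - echo "two"
```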

Deployment and environment variables


Fig. 4. Gitlab’s web interface for managing environments

Fig. 4 shows GitLab's web interface for creating and editing deployment environments. Once created here, they can be used in .gitlab-ci.yml.

Below is a fragment of the deployment configuration, using the upload of build results to an AWS S3 Bucket as an example. It also uses a YAML anchor to avoid code duplication.

.deploy_script: &deploy_script
  cache:
    paths:
      - node_modules/
  stage: deployment
  script:
    - yarn run build
    - yarn run deploy
  tags:
    - web-ci
deploy:dev:
  <<: *deploy_script
  variables:
    AWS_S3_HOST_BUCKET_NAME: $AWS_S3_HOST_BUCKET_NAME__DEV
    REACT_APP_API_BASE: $REACT_APP_API_BASE__DEV
  environment:
    name: dev
    url: http://$AWS_S3_HOST_BUCKET_NAME.s3-website.us-east-1.amazonaws.com/
  only:
    - develop
deploy:qa:
  <<: *deploy_script
  when: manual
  variables:
    AWS_S3_HOST_BUCKET_NAME: $AWS_S3_HOST_BUCKET_NAME__QA
    REACT_APP_API_BASE: $REACT_APP_API_BASE__QA
  environment:
    name: qa
    url: http://$AWS_S3_HOST_BUCKET_NAME.s3-website.us-east-1.amazonaws.com/
  only:
    refs:
      - /^v\d+\.\d+\.\d+$/
    changes:
      - package.json

Lines 95–131 of .gitlab-ci.yml

Notice how the environment variables are used. The yarn run build and yarn run deploy commands use variable names without postfixes, whose values are set at the level of a particular job from the postfixed variables.
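The mapping can be sketched in Node.js (variable names are taken from the config above; the value and the simulated env object are made-up examples, since in CI the mapping is done by GitLab itself):

```javascript
// Sketch: GitLab maps the postfixed value onto the plain name at the
// job level, so the deploy script only ever reads the plain one.
// Here we simulate the job's `variables:` block by hand.
const env = {
  AWS_S3_HOST_BUCKET_NAME__DEV: "my-app-dev", // stored in GitLab's settings
};

// What `AWS_S3_HOST_BUCKET_NAME: $AWS_S3_HOST_BUCKET_NAME__DEV` does:
env.AWS_S3_HOST_BUCKET_NAME = env.AWS_S3_HOST_BUCKET_NAME__DEV;

// What `yarn run deploy` sees:
console.log(env.AWS_S3_HOST_BUCKET_NAME); // → my-app-dev
```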

Fig. 5. Gitlab’s web interface for managing environment variables

Fig. 5 shows the web interface where environment variables can be defined. They will be available inside the pipeline when it runs. Here you can define backend API addresses, API keys for the services you use (for example, a Google API key), SSH keys for versioning, and other data that is not safe to commit.

 
