How Stitch Automates Code Quality Checks using GitHub Actions

10 June 2022, by Jethro Muller

When reviewing code, developers often check code quality manually. At Stitch, we found that we could get reviews done faster and catch more errors by using GitHub Actions to automate code quality checks. Here’s how we did that using freely available tools and why automating code quality adds value to your team and product.

Code quality is a cornerstone of a successful project. The quality of a piece of code often determines how easy it is to understand, and understanding the code is essential to changing, fixing, and reviewing it.

Most code quality checks are done and enforced during the code review process by the reviewers themselves. Different reviewers care about different things, which makes the code quality standard subjective and the process error-prone and time-consuming.

That’s only one side of the equation: The developer whose code is reviewed is also affected by subjectivity. They can perceive overly zealous reviews as a personal attack. It’s also possible that different people are held to different standards because of interpersonal relationships. This applies even more to new developers in a team or a company.

Fortunately, there is a better way. It is possible to automate many of the code quality checks you currently do manually during code review. At Stitch, we’ve already automated many of the code quality checks we’ve decided on. We’re always looking to add new checks or improve what we have to make the code review process more accurate and as frictionless as possible.

Why code quality is important to Stitch

At Stitch, we’re a small team, although we are growing quickly. Each developer’s time is incredibly valuable, so we need to be able to move fast while ensuring we’re building a high-quality product that can grow as the company does.

Ensuring our code quality stays high is important for a few reasons. High-quality code is simpler to maintain. Having code that is easy to read and understand makes it much simpler to change the code with confidence.

In addition to this, Stitch is an API fintech company that helps businesses more easily launch, optimise, and scale financial products across Africa. Stitch solutions dramatically reduce the effort required for businesses to connect to their users’ financial accounts, to enable instant, secure bank-to-bank payments and access financial data. Dealing with people's financial data means we need to be reliable and secure. Automated code quality checks give us an automated way to validate our code, scripts, and configuration files so we can be as reliable as possible while still iterating rapidly.

Developer confidence

Maintaining high code quality consistently is an important part of building developer confidence. There is more confidence in the code if there is a set way of doing things because everyone is on the same page about how things should work and what to expect within the code.

Reducing uncertainty and improving confidence are essential to allow developers to move fast while minimising the risk of breaking things. This means we can spend more time building out product features instead of fixing easily preventable bugs.

Developer sanity

Developers are often under pressure to produce code that works perfectly. Realistically, this isn’t possible, but aiming for perfection is often a side-effect of working in an industry that deals with people’s finances.

Automated checks reduce the burden on the developer writing the code and the developers reviewing their code. Static code analysis or proper automated testing can detect many classes of bugs. Automated testing can reduce the review burden by detecting these issues before the code is even put up for review. Doing the same with code style and quality metrics can further reduce the review burden and improve reviewers’ confidence in the code they’re reviewing. This allows them to focus on reviewing the business logic in the code changes instead of spending time considering code style or structure.

Having standards in place makes code easier to understand too. The fewer variants you have to account for in each code review, the more battle-tested and well-known the variants you do support can be. Investing in code quality as early as possible makes maintaining code easier.

Every new check we’ve put in place has removed the need for us to check for those issues manually, thereby freeing up valuable developer time.

Enforced standards reduce complexity

It’s important to understand that code quality is subjective.

The subjectivity of the reviews means disagreements are bound to happen. These disagreements are huge wastes of time, as they don’t generally contribute to the quality of the code. An enforced, agreed-upon standard removes the chance of them happening.

Set standards also mean that if you ever need to change them, it should be a simple operation. Hunting down all the different versions of how something is done just to make a simple update or refactor wastes time. For example, suppose you’re tasked with moving a module in a NodeJS project into a shared package, but every file that uses it does its imports in a different, unenforced way. Instead of a single find-and-replace operation, this becomes a much bigger task of first discovering everything you need to change.
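
With a single enforced import style, that update really can be one command. A minimal sketch, using hypothetical package names:

# One enforced import style means one find-and-replace.
# The package names here are hypothetical examples.
grep -rl "from '@stitch/legacy-utils'" src/ \
  | xargs sed -i "s|@stitch/legacy-utils|@stitch/shared-utils|g"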

Consistency builds confidence

Having one standard also reduces onboarding time for new developers joining the team, and makes your codebase more consistent. This all feeds back into building developer confidence in your codebase. We haven’t fully achieved this at Stitch yet because we’re still grappling with defining what we consider good quality code. However, we have many checks in place to ensure we enforce the consistencies we have decided on. This is discussed in more detail below.

Entering a new codebase can be daunting, especially as it grows. At Stitch, we’re already at the point where different teams don’t often work on the same code anymore.

This separation could be used as a reason to neglect standardisation or unified code quality checks. However, having these standards is especially important for fast-growing teams. It’s a way to develop and train developers who can easily move between teams and leverage their experience within the company, no matter what team they’re on. Knowing what’s expected of you at the company level is useful, and it allows every team to contribute to the company-wide knowledge of good code quality standards.

Incorporating knowledge new developers bring and the knowledge gained by each team in their own sections of the codebase is a helpful way to democratise and decentralise the discovery of new code quality metrics. It also ensures that we maintain the quality we expect at a company level instead of at a team level. Our developers can be confident in their code no matter where they’re making changes in the repository.

Automated code quality checks at Stitch

At Stitch, we use GitHub Actions to run our builds and automated code quality checks. We found that it keeps the code and checks close together, and provides easy access for us to write custom checks that report on pull requests directly. Many code quality checks already exist as GitHub Actions in the GitHub Marketplace. The direct integration allows us to make the checks a requirement before merging is allowed. We've enforced this for many of our checks.
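
That enforcement lives in branch protection rules. As a rough sketch, the same configuration can be scripted with the GitHub CLI; the check name run-tests and the branch main below are placeholder assumptions:

# Sketch: require the "run-tests" check to pass before merging into main.
# The check name and branch are placeholders for your own setup.
gh api --method PATCH \
  repos/{owner}/{repo}/branches/main/protection/required_status_checks \
  -f "contexts[]=run-tests"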

Automated code quality checks

The following five code quality checks run against every pull request made in our main monorepo. To get to this point, we’ve added new workflows over time to cover more and more of the checks that can be automated. It’s entirely possible to take an iterative approach to adding these to your repositories: every check you add is more time you save during review.

Checking GitHub workflows

We have a GitHub workflow that checks our other GitHub workflows for configuration issues and general mistakes, and runs shellcheck against any embedded shell scripts. It uses a linter called actionlint. This is the check I’d recommend adding first because of how simple it is to set up. It lets you add or modify workflows with confidence because it ensures you follow shell scripting best practices and the requirements of a GitHub workflow file.

This is what our actionlint workflow looks like:

name: Lint Github Workflows

on:
  pull_request:
    paths:
      - '.github/**/*.yml'

concurrency:
  group: lint-workflows-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint-workflows:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Install shellcheck
        run: sudo apt-get install -y shellcheck

      - name: Install actionlint
        env:
          ACTIONLINT_RELEASE_URL: "https://github.com/rhysd/actionlint/releases/download/v1.6.8/actionlint_1.6.8_linux_amd64.tar.gz"
        run: curl -L "${ACTIONLINT_RELEASE_URL}" | tar xvz actionlint

      - name: Check workflow files with actionlint
        run: ./actionlint -ignore 'SC2129:' -color

That’s all there is to it: you install actionlint and run it from your repository’s root directory, and it checks everything in your .github folder, doing type and syntax checking on any local actions or GitHub workflow files.
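
If you want the same feedback before pushing, actionlint also publishes a Docker image you can run locally. A minimal sketch:

# Run actionlint against the current repository without installing it.
docker run --rm -v "$(pwd):/repo" --workdir /repo rhysd/actionlint:latest -color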

This has saved and continues to save us from breaking any of our workflows. It’s especially important because we do our test and production deployments using GitHub Actions. This makes sure that any changes we make to those workflow files are correct and won’t prevent us from releasing our code.

Linting Dockerfiles

Docker is one of the most popular ways to package services for deployment, and Stitch uses it for all of our services. To make sure we’re following best practices, Hadolint gives us peace of mind by checking for many common mistakes in Dockerfiles.

name: Lint Dockerfiles
on:
  pull_request:
    paths:
      - "src/*/Dockerfile"

concurrency:
  group: docker-lint-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint-dockerfiles:
    runs-on: ubuntu-latest
    container: hadolint/hadolint:v2.8.0-alpine
    defaults:
      run:
        shell: sh

    steps:
      - name: Install Git
        run: apk add --no-cache git

      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Add ${{ github.base_ref }} branch
        run: git branch -f --create-reflog ${{ github.base_ref }} origin/${{ github.base_ref }}

      - name: Run Hadolint
        run: git diff ${{ github.base_ref }} --name-only --diff-filter=d 'src/*/Dockerfile' | xargs hadolint -t error

Again, this is a simple process that can save us from breaking any of our critical infrastructure. Hadolint checks for violations of best practices and general correctness in each Dockerfile in our repository.
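
To give a flavour of what it catches, here’s a hypothetical Dockerfile snippet containing two of the most common violations Hadolint reports:

# Hypothetical snippet - Hadolint would flag both instructions below.

# DL3006: always tag the version of an image explicitly
FROM node

# DL3008: pin versions in apt-get install
RUN apt-get install -y curl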

Linting our TypeScript packages

We use TypeScript for almost all of our packages. Part of maintaining a high level of code quality is enforcing a consistent style across all of them. To do this, we use ESLint, which prevents malformed or inconsistently styled code from being added to the repository.

These automated checks are great because they allow for strictness without hurting anyone’s feelings. Doing the same checks manually would result in pull requests with tens of review comments just on code style.

Another benefit of doing this automatically is that GitHub supports automated reporting for popular tools on each PR using GitHub’s Checks API. This makes it even easier to get feedback from your code quality tools, which exist for many popular languages. GitHub themselves provide a GitHub Action, super-linter, that combines multiple linters into one easy-to-install package.

name: ESLint

on:
  pull_request:

concurrency:
  group: npm-lint-${{ github.ref }}
  cancel-in-progress: true

jobs:
  eslint:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Use Node.js v16.x
        uses: actions/setup-node@v2
        with:
          node-version: 16.x

      - name: Use Cached Dependencies
        uses: actions/cache@v2
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      - name: Install dependencies
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          npm install -g npm@8.1.4
          # Install all project dependencies
          npm run installAll -- --ci --silent
      - name: Run Typescript linting
        run: npm run lint

That workflow installs the dependencies for all the packages in our monorepo, and then runs ESLint on them. This check enforces the rules we’ve set up internally to ensure consistent code style across all our packages.
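
If you’d rather not maintain a bespoke linting setup, the super-linter action mentioned earlier can be dropped in with very little configuration. A minimal sketch, assuming main is your default branch:

name: Super-Linter

on:
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - uses: github/super-linter@v4
        env:
          # Only lint files changed in this pull request
          VALIDATE_ALL_CODEBASE: false
          DEFAULT_BRANCH: main
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}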

Running tests

Tests are an important part of enforcing code quality. Tests can be used to validate the correctness of your code, validate the public API of your services, and make sure that you aren't introducing any code that will break once deployed. This works in tandem with the linting workflow to ensure no bad code is allowed into your main branch.

This also helps promote confidence in the code for both the developer writing the code and the developer reviewing the code by ensuring there is some baseline of correctness enforced by your test suite. Additionally, it ensures that any new code doesn’t violate the conditions you enforce with your tests.

name: Run Tests
on:
  pull_request:

concurrency:
  group: tests-${{ github.ref }}
  cancel-in-progress: true

jobs:
  run-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Add ${{ github.base_ref }} branch
        run: git branch -f --create-reflog ${{ github.base_ref }} origin/${{ github.base_ref }}

      - name: Use Node.js v16.x
        uses: actions/setup-node@v2
        with:
          node-version: 16.x

      - name: Cache Node.js modules
        id: cache
        uses: actions/cache@v2
        with:
          path: ./src/package/node_modules/
          key: ${{ runner.OS }}-node16-${{ hashFiles('./src/package/package-lock.json') }}

      - name: Install dependencies
        if: steps.cache.outputs.cache-hit != 'true'
        run: npm ci

      - name: Run Tests
        run: npm test

The build will fail if the tests don’t pass. This means we can automatically enforce that all of our code has to pass the existing tests before it gets merged.

Coverage reporting

Ensuring existing tests are enforced is important, but it’s also essential to include tests for any new code added. This is a process we’re still establishing.

Our interim approach has been to identify the key areas that are most important to test and do coverage checking on just those files. This was a conscious decision to target specific areas of our code that we knew were most prone to failure. Obviously, not covering your code runs the risk of allowing bugs through, but it’s often a tradeoff you’ll have to make per project. Sometimes covering a subset of your codebase is enough to keep things sane and moving quickly.

Coverage checks are another area that can give developers confidence in the code they write and review. It’s an assurance that some amount of the code was run through automated code checks. Knowing that certain parts of the code aren’t tested is a good way to help direct your review efforts or encourage writing automated tests to target the uncovered code.

Giving visibility to everyone involved in the review process is one of the key focuses of Stitch’s current coverage reporting workflow. The next stage is to aggressively require a set percentage of coverage on specific files. The percentage you set depends on what you think you can tolerate as a team. Aiming for 100% is what I've done historically, but that puts a burden on the developers because they have to write tests to cover all new code completely. I’d recommend starting at around 75% to cover most of the code and then increasing that over time.
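
With diff-cover, enforcing a threshold like this is a one-flag change via its --fail-under option. A sketch of the command, assuming a 75% target and main as the comparison branch:

# Fail the check if less than 75% of the changed lines are covered.
diff-cover coverage/cobertura-coverage.xml \
  --compare-branch=main \
  --fail-under=75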

This workflow runs only the tests affected by the code changes and generates a coverage file that the diff-cover tool can use:

name: Coverage Check
on:
  pull_request:

concurrency:
  group: coverage-check-${{ github.ref }}
  cancel-in-progress: true

jobs:
  run-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Add ${{ github.base_ref }} branch
        run: git branch -f --create-reflog ${{ github.base_ref }} origin/${{ github.base_ref }}

      - name: Use Node.js v16.x
        uses: actions/setup-node@v2
        with:
          node-version: 16.x

      - name: Cache Node.js modules
        id: cache
        uses: actions/cache@v2
        with:
          path: ./src/package/node_modules/
          key: ${{ runner.OS }}-node16-${{ hashFiles('./src/package/package-lock.json') }}

      - name: Install dependencies
        if: steps.cache.outputs.cache-hit != 'true'
        run: npm ci

      - name: Get Coverage
        run: npm test -- --changedSince=${{ github.base_ref }} --collectCoverageFrom="src/**/*.ts" --coverageReporters=cobertura

      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'

      - name: Generate Diff Coverage
        run: |
          # stop-gap until Jest adds a way of doing this for us
          python -m pip install diff-cover
          # Get diff coverage
          DIFF_COVERAGE="$(diff-cover coverage/cobertura-coverage.xml --compare-branch=${{ github.base_ref }})"

          echo 'DIFF_COVERAGE<<EOF' >> "$GITHUB_ENV"
          echo "$DIFF_COVERAGE" >> "$GITHUB_ENV"
          echo 'EOF' >> "$GITHUB_ENV"

      - name: Report Coverage
        uses: actions/github-script@v5
        with:
          script: |
            const diffCoverage = process.env.DIFF_COVERAGE;
            const [_, coverageInfo, summary] = diffCoverage.split('-------------').map(s => s.trim()).filter(s => s.length !== 0);
            // Include a smaller message if there is no coverage information for the diff
            const reportBody = !summary ? `\`\`\`\n${coverageInfo}\n\`\`\`` : `\`\`\`
            ${summary}
            \`\`\`

            <details>
            <summary>Detailed Coverage Report</summary>

            \`\`\`
            ${coverageInfo}
            \`\`\`
            </details>`;
            const templatedMessage = `## 🔬 Coverage Report:\n\n${reportBody}`;
            const {data: existingComments} = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number
            });

            const githubActionComments = existingComments.filter((comment) => comment.user.login === 'github-actions[bot]' && comment.body.includes('Coverage Report'));
            if (githubActionComments.length) {
              const latestComment = githubActionComments.reverse()[0]

              if (latestComment.body === templatedMessage) {
                core.info('Diff comment with the same message already exists - not posting.')
                return
              }
            }

            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: templatedMessage
            });

Once we have the coverage data, we use one of the most versatile GitHub Actions features: github-script. github-script is an action made by GitHub themselves that lets you run JavaScript code directly from the workflow file, with access to the GitHub API to do things like adding a comment to the pull request, as we did in the workflow above.

The result of this is a nicely formatted report on each pull request that we can use to determine where we’re missing test coverage.

Three more code quality checks I want to implement

As much as we’ve done to automatically enforce our expected level of code quality, there is still more we want to do. The following three checks are ones I plan to implement in the future.

Spell checking

Spelling mistakes are common in code because not all code editors provide good spell checking. These mistakes are generally harmless, but they can create confusion and issues.

For example, you want to change all instances of the variable calculatedResponse to processedResponse. However, you’ve made the common error of writing calulatedResponse in the code. As a result, when searching for the places you need to change within the code, you’ll end up missing the misspellings.

Another place this is important is in any public code. Public code is a way of presenting yourself and your company to the world. Poorly written code littered with spelling mistakes is unlikely to inspire confidence.

A tool I’ve used to add this to my projects in the past is codespell. It doesn’t do general spell checking, which is meant for normal prose and can generate false positives when run against code. Instead, it checks against a list of commonly misspelt words.
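
Wiring codespell into a workflow should be as small a job as the other checks. A sketch, with the paths to check as placeholder assumptions:

name: Spell Check

on:
  pull_request:

jobs:
  codespell:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-python@v2
        with:
          python-version: '3.x'

      - name: Run codespell
        run: |
          python -m pip install codespell
          # 'src/' and the skip list are illustrative placeholders
          codespell src/ --skip='*.lock,node_modules'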

Language usage checking

When writing documentation, whether internal or for public consumption, it’s important to consider how your writing comes across. It’s possible to make certain groups of people feel unwelcome in your community by the way you write.

A useful tool for preventing this is Alex.js. It catches potentially insensitive, offensive, or condescending writing before you publish it. Making developers feel welcome is key to developing a strong and invested team, and it encourages a diverse and welcoming community.
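
Alex.js runs from the command line via npm, so a workflow for it can follow the same pattern. A sketch, assuming your prose lives in a docs folder:

name: Language Check

on:
  pull_request:
    paths:
      - 'docs/**'

jobs:
  alex:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-node@v2
        with:
          node-version: 16.x

      - name: Run alex on documentation
        # 'docs/' is a placeholder for wherever your writing lives
        run: npx alex docs/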

Interdependent code checks

For many companies, there are non-trivial dependencies between files that are important to maintain.

For example, if you have an ORM and change your models, the appropriate migration files must exist. Another example: if you change the file that defines your public API and you publish a changelog for it, you’re required to add a changelog entry. Without automation, these checks add overhead to every pull request because people have to verify manually that the constraints have been met.

In many cases, including on previous projects, I’ve found it easy to enforce these requirements using GitHub workflows. I wrote a GitHub Action called file_changes to help with this process.

If we flesh out the simple changelog example, the workflow could look like this:

name: Changelog Check
on:
  pull_request:
    paths:
      - "src/package/public-api/**"

concurrency:
  group: changelog-check-${{ github.ref }}
  cancel-in-progress: true

jobs:
  check-changelog:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Add ${{ github.base_ref }} branch
        run: git branch -f --create-reflog ${{ github.base_ref }} origin/${{ github.base_ref }}

      - name: Check for missing changelog change
        run: | 
          if [ -z "$(git --no-pager diff ${{ github.base_ref }} --name-only -- 'CHANGELOG.md')" ]; then
            echo "Missing CHANGELOG.md change!"
            exit 1
          fi

The above workflow uses GitHub Actions to check for the existence of a change to CHANGELOG.md in the current pull request. If there have been no changes to the CHANGELOG.md file, it exits with an error which will fail the check.

Using workflows makes it possible to check almost anything automatically. Each check is another thing reviewers and developers don’t have to remember to check themselves.

Positive effects of automated code quality checks

There are four main benefits to automated code quality checks that I’ve observed from using them at Stitch and my previous company:

  1. More thorough code screening allows reviewers to focus on the logic in the code instead of looking out for syntax and style errors.
  2. Automated code quality has the power to give developers confidence in their code, which means they can write code faster without the need for constant manual testing.
  3. Along with developers, reviewers are also more likely to approve a pull request if they have automated code reviews they can trust.
  4. New and existing contributors don’t have to worry about hidden requirements and dependencies.

What we’ve learnt

As we’ve tackled implementing automated code quality checks, we’ve learnt that they aren’t magic. They aren’t broken if they don’t catch 100% of the issues in your code. Like testing, they’re a tool to help you build better code, but they can’t solve all your problems.

It’s important to foster a culture of personal responsibility and take pride in writing good quality code. Another essential organisational requirement is that everyone participates in code review. Maintaining a high level of code quality is a team effort.

Computers are great at removing the drudgery of enforcing the guidelines you develop as a team. If a computer can do a specific code quality check, it should be part of your automation suite. The time and effort you’ll save over time are immense.

Whenever we find ourselves repeating the same process in review or fixing the same problems repeatedly, we try to automate the process so we never have to deal with it again.

These benefits are easily accessible using existing tools that integrate with many continuous integration pipelines. The same results can be achieved using any CI platform.

Jethro Muller is a senior full-stack developer at Stitch. He primarily works in TypeScript on server-side NodeJS code. He enjoys ORM query optimisation, building pipelines and tooling, optimising workflows, and playing with AI.
