Creating a Next.js CI/CD pipeline with load testing and end-to-end automated integration testing
I've created some small CI/CD pipelines before with simple objectives, such as checking whether my code builds, or matrix builds across different Python versions. However, I wanted to create something more official, something I can actually use and that resembles real-life CI/CD pipelines. So for this project, I'm creating a pipeline for my personal website that:
- Uses GitLab runners with the Docker executor to run my pipeline
- Does linting and static code analysis with SonarQube
- Does end-to-end automated browser testing with Playwright, checking multiple browsers and recording images and videos of the process
- Does load testing of both the frontend and backend with k6, orchestrated by Docker Compose
- Profiles frontend performance metrics, SEO, network timing, etc. with Lighthouse
- Deploys to Vercel if everything looks OK
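As a rough sketch, the stages in a `.gitlab-ci.yml` for this plan might be laid out as follows — the stage names and comments here are illustrative placeholders, not my final config:

```yaml
# Illustrative stage layout for the pipeline described above
stages:
  - build        # check that next build succeeds
  - test         # linting, SonarQube analysis, Playwright e2e
  - performance  # k6 load tests, Lighthouse profiling
  - deploy       # push to Vercel, only if everything passed
```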
Future additions:
- Easily host on AWS using Terraform for quick build/tear-down of resources to save costs
- Matrix building for older versions of browsers
- View all testing and profiling information on the web with Grafana
This is a work-in-progress project, and this page will be continually updated!
Currently accomplished features:
- Using GitLab runners with the Docker executor
- Running build check
- Running linting
- Running static code analysis with SonarQube
SonarQube
I've never used SonarQube before, so I had to do some research and prep to figure out what steps I needed to take to use it in my CI/CD pipeline. Here is what I learned about the new technology.
Issues
- There are two different services: the Sonar Scanner CLI and the SonarQube server
- The Sonar Scanner CLI depends on the SonarQube server, and requires a token https://docs.sonarsource.com/sonarqube-server/latest/analyzing-source-code/scanners/npm/using/
- The SonarQube token documentation seems to promote using the web GUI, which is pretty worthless for a pipeline https://docs.sonarsource.com/sonarqube-server/10.1/user-guide/user-account/generating-and-using-tokens/#generating-a-token
- After some research, I found out that you can use the web API with passcode authentication by setting the environment variable SONAR_WEB_SYSTEMPASSCODE before starting the container https://docs.sonarsource.com/sonarqube-server/latest/extension-guide/web-api/
And with that, I can generate the token with curl, do a little grep and regex, and I'm good:
```shell
curl -v --request POST --url 'http://localhost:9000/api/user_tokens/generate' --user 'admin:admin' --header 'X-Sonar-Passcode: test' --data 'name=My Token' | grep -oP '"token":"\K[^"]*'
```
In my Docker Compose file, I change localhost to the sonarqube service name, and then I'm able to use the Scanner CLI with my project.
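For reference, a minimal sketch of what that Compose setup could look like — the image tags, passcode value, and volume layout here are illustrative assumptions, not my exact files:

```yaml
services:
  sonarqube:
    image: sonarqube:community
    environment:
      # Enables passcode (X-Sonar-Passcode) authentication on the web API
      SONAR_WEB_SYSTEMPASSCODE: test
    ports:
      - "9000:9000"

  scanner:
    image: sonarsource/sonar-scanner-cli
    depends_on:
      - sonarqube
    environment:
      # Inside the Compose network, the server is reachable by its service name
      SONAR_HOST_URL: http://sonarqube:9000
    volumes:
      # Mount the project source for analysis
      - .:/usr/src
```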
UPDATE: Apparently you don't even need the X-Sonar-Passcode header, despite what the docs say. You just need the --user 'admin:admin' basic-auth credentials and you will be authenticated. Makes my life easier.
```shell
curl -v --request POST --url 'http://localhost:9000/api/user_tokens/generate' --user 'admin:admin' --data 'name=My Token' | grep -oP '"token":"\K[^"]*'
```
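The grep at the end pulls the token out of the JSON response. Here's the pattern run against a hypothetical sample payload (the token value is made up), which is a quick way to sanity-check it without a running server:

```shell
# Hypothetical sample of the JSON shape /api/user_tokens/generate returns
response='{"login":"admin","name":"My Token","token":"squ_abc123","createdAt":"2026-01-01T00:00:00+0000"}'

# Same extraction as in the pipeline: \K drops the '"token":"' prefix from the
# match, then [^"]* grabs everything up to the closing quote (needs GNU grep for -P)
token=$(printf '%s' "$response" | grep -oP '"token":"\K[^"]*')
echo "$token"
```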
Now to get the SonarQube analysis into GitLab
Luckily, SonarQube is able to export its data in GitLab's SAST format, which lets me easily see all the issues and vulnerabilities in the GitLab project itself. The SonarQube docs, however, focus on integration with a long-running, always-accessible SonarQube instance. My use case is much different from the intended one: I run an ephemeral SonarQube instance and analyze against that, as I don't want to support the infrastructure of maintaining a long-running instance.
I was wondering if I could just use the web API to export the data and hand it to GitLab with artifacts: reports: sast: sast-report.json in my GitLab config file. However, it seemed somewhat hacky, so I wanted to see if there was a more official way to do this. To my surprise, that's exactly how they do it:
```yaml
vulnerability-report:
  stage: vulnerability-report
  script:
    - 'curl -u "${SONAR_TOKEN}:" "${SONAR_HOST_URL}/api/issues/gitlab_sast_export?projectKey=<projectKey>&branch=${CI_COMMIT_BRANCH}&pullRequest=${CI_MERGE_REQUEST_IID}" -o gl-sast-sonar-report.json' # Replace <projectKey> with your project key
  allow_failure: true
  only:
    - merge_requests
    - master
    - main
    - develop
  artifacts:
    expire_in: 1 day
    reports:
      sast: gl-sast-sonar-report.json
  dependencies:
    - sonarqube-check
```
Getting this process into GitLab was kind of annoying, as I had been testing the setup with Docker Compose. I had a couple of options for getting it into my pipeline:
- Rewrite my Docker Compose file into the GitLab config file
- Create a Docker-in-Docker runner to use Docker Compose with
- Create a Docker-out-of-Docker runner
- Use a shell executor for my runner
The latter three options would make my life much more convenient, since I'd be able to reuse my Docker Compose files, but they all come with security concerns. Because my runners are all running on my local computer, I'd like to avoid them for now. Once my Dockerfiles get more complex, though, I would consider one of these approaches via a VM runner in the cloud.
