CI/CD for node modules with GitHub and Azure Pipelines
I’ve been spending some time lately exploring what I can do with Azure DevOps, which is essentially Microsoft’s re-branded VSTS. I think it has a lot to offer for open source projects. I got pretty excited by the notion of using it to publish to npm, which sounds great, but for how I wanted to use it I found the documentation somewhat confusing. The workflow I wanted to implement was:
- Run CI builds on pull requests in GitHub to automatically run linting and tests
- Run that same CI build, at a minimum, when changes are pushed to `master` or a new tag is created
- When a new tag is created and that build is successful, automatically publish from the resulting artifacts of that build to npm
The first two on the list I found to be pretty straightforward, but the third was less so. After a bit of trial and error, and some research into the relationship between what is referred to as a Build pipeline versus a Release pipeline, I came up with something that, for the most part, implements the flow I had in mind. This guide will take you through the steps I took to get that flow working. It may be somewhat opinionated, though, so feel free to modify it for your own usage as you see fit. My hope is that you will find this walk-through insightful in implementing Azure Pipelines into your workflow, and that you can avoid some of the pitfalls I ran into along the way.
The example repo - hello-node-pipelines
For this walk-through, I created a fairly simple Hello World node project, hello-node-pipelines. To make it a little more interesting, though, I wrote it in TypeScript so I could also have more of a build step. That way we can see the code being transformed as the build artifact is built and then used in publishing.
Step 1. Get your repo ready for Azure Pipelines
Before we set up our CI/CD pipelines in Azure Pipelines, there are a couple of prerequisite things we should do first. Although not strictly necessary, I think they make the setup process go a bit more smoothly.
Install a unit test reporter that can output JUnit xml files
Since Azure Pipelines expects JUnit xml files (I feel like this is some legacy baggage from the VSTS days, but oh well), you will want to have your tests use a reporter that can generate that output. In my hello-node-pipelines project I’m using jest, so all I need to do is:

- Install the reporter: `npm i jest-junit --save-dev`. See example here
- Add a `test:ci` script passing additional parameters into `npm test`. See example here
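For reference, a sketch of what the relevant `package.json` pieces might look like with jest and jest-junit (the version ranges here are illustrative, not prescriptive):

```json
{
  "scripts": {
    "test": "jest",
    "test:ci": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "devDependencies": {
    "jest": "^23.6.0",
    "jest-junit": "^5.2.0"
  }
}
```

The `--reporters=jest-junit` flag is what makes jest write out the `junit.xml` file that the pipeline will pick up later.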
Create an Azure Pipelines configuration file in your repo
Pretty much any integration between Azure Pipelines and GitHub is going to require this. You’ll want to create a file named
azure-pipelines.yml in the root of your repo. Although the Azure Pipelines integration app can create one for you, I think it’s better to have it ready beforehand.
Here’s what mine looks like:
```yaml
pool:
  vmImage: 'Ubuntu 16.04'
steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'
- task: Npm@1
  inputs:
    command: install
- script: npm run lint
  displayName: 'Run linting'
- script: npm run build
  displayName: 'Transpile TS files'
- script: npm run test:ci
  displayName: 'Run Unit Tests'
- task: PublishTestResults@2
  inputs:
    testRunner: JUnit
    testResultsFiles: ./junit.xml
  condition: succeededOrFailed()
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
    includeRootFolder: false
- task: PublishBuildArtifacts@1
```
A quick run-down of what this config will do:
- Use an Ubuntu 16.04 VM as the agent and ensure that the latest 10.x version of node is installed
- Install node modules
- Run the linting script
- Run build so that the TypeScript files can be transpiled into .js
- Run the CI version of the test script. This will instruct the test runner to output the results in JUnit format, which Azure Pipelines expects.
- Publish test results using the results file generated from the previous unit test run.
- Take the current working directory and make a .zip compressed archive, placing it in `$(Build.ArtifactStagingDirectory)`. This is done for performance, as publishing a large number of individual artifact files is very time consuming.
- Finally, publish the build artifacts. This makes the artifacts available to other pipelines.
Set up .npmignore and do an initial publish to npm
It’s important to have a .npmignore file so that npm knows to exclude certain files when publishing. Here’s what mine looks like:
```
src
.*
*.test.js
*.js.map
junit.xml
azure-pipelines*
tsconfig.json
```
I found out the hard way that if you try to publish a package through Azure Pipelines that has not yet been published to npm, you get some funky errors complaining about being unable to publish a private package. That is because, for some reason, npm decided that new packages should be published as private by default. You can avoid this by manually publishing your package once to initialize it, with this command:
```
# This is assuming your local npm is authenticated to publish to npm
npm publish --access public
```
And that’s it. Once you’ve got all changes committed to your master branch and pushed to GitHub, you’re ready to set up the project in Azure Pipelines.
Step 2. Configuring your project in Azure Pipelines
2.1 Create a new project in Azure Pipelines
Assuming you’ve already got an account set up in Azure DevOps and have an organization created, you should just be able to add a project within that organization by clicking the Create Project button.
Install the Azure Pipelines app in your repo as part of creating a new pipeline
Depending on if this is your first setup or not, you may need to go through an OAuth flow to authorize the Azure Pipelines app to access your GitHub repos.
- Go to the Builds section of your newly created project, and click New pipeline
- You will be asked where your code is; click GitHub
- If this is your first time setting up Azure Pipelines, click the option to Authorize with OAuth; otherwise click Install our app from the GitHub Marketplace
- After going through the necessary authorizations, select the GitHub org and repositories you want to install Azure Pipelines on. (I’m personally a fan of only installing it where I know I will use it)
After you’ve done these steps, you may need to re-authenticate with your account, and there will be some initialization before moving on to the next step of setting up your new build pipeline.
Finish creating the new build pipeline
Next, select the repository you wish to associate with the pipeline. In the Template section you will see the same yaml file we created earlier.
Run, and watch your first build go through! (actual build time for this pipeline was 59 seconds)
Set up branch protections and Pull Request checks in GitHub
Now that you’ve got your build set up, you can go ahead and set up checks and branch protections in GitHub so that whenever PRs are opened, the build can run against any pending changes to make sure they’re good. (Depending on the needs of your project, you may or may not want to use additional options in your branch protection rules)
- In GitHub, go to the Settings section of your repo
- Select the Branches tab
- Under Branch protection rules, click Add rule
- In Apply rule to, enter `master` so that it applies to the master branch
- Check Require pull request reviews before merging, Require branches to be up to date before merging, and Require status checks to pass before merging
- You will also see your new build show up as a status check. Check it so that it is required to pass as part of a PR as well
- Click Create to save the rule
Now, if we open a PR with changes that cause tests to fail, the PR should show the failed status check as well.
Add a personal access token to your Azure Pipelines project
Since publishing requires authentication to npm, we will need to set up what is called a
Service connection in your project. To do so, go to
Project settings, then in the
Pipelines section go to
Service connections. You will then want to click
New service connection and select
npm from the dropdown.
You will want to select the option for Authentication Token and then fill it in with the following:
- Connection name: Give this a meaningful name, you will reference this in your pipeline config later
- Registry URL: https://registry.npmjs.org
- Personal Access Token: A token you create on your npm account that has publish access. If you’ve not yet made one, please follow their instructions here.
With all the necessary info, click
OK to continue.
Step 3. Publishing to NPM
We’re just about there. Now that we have a build pipeline in place, we want to add in the ability to actually publish to NPM. There are a couple of different approaches we can take, both with their pros and cons.
| Approach | Pros | Cons |
| --- | --- | --- |
| Within Build pipeline | Simpler approach, easier to implement. Good for small scale projects | Does not leverage the features available to Release pipelines. Checks could be bypassed |
| Using Release pipelines | Can define pre-deploy conditions like gates and approvals. Makes for a nice separation between CI and CD | Takes a little more effort to set up |
Option 1: Publishing within the Build pipeline
If your workflow is simple, and you are confident that it will only ever be you publishing to NPM, then you could simply include the publish task as part of your CI build, with a condition so that it only executes when you want or need it to. All we need to do is add this to our azure-pipelines.yml after the existing steps:
```yaml
- task: Npm@1
  inputs:
    command: publish
    publishEndpoint: '<name of service connection goes here>'
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/tags/'))
  displayName: 'Publish to NPM'
```
You could also take out the ArchiveFiles@2 and PublishBuildArtifacts@1 steps if you so wish, as these are mostly used for making the resulting files from the build available to a Release pipeline.
If you don’t want to use tags to drive publishing to NPM, but would instead publish whenever commits are merged to master, for example, then you would change your condition to `and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))` instead. A fair warning if you do, though: you had better be sure that any merge into master includes a version bump, otherwise the publish step will fail. A custom task could probably be written to check such a condition, but I feel that is outside the scope of this guide.
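If you did want such a guard, a rough sketch of a pipeline step might look like the following. This is hypothetical and untested; it assumes node and npm are available on the agent, and uses `npm view` to ask the registry whether the current version already exists:

```yaml
# Hypothetical guard step: fail fast if the version in package.json has
# already been published, so the later npm publish step never errors out.
- script: |
    PKG=$(node -p "require('./package.json').name")
    VER=$(node -p "require('./package.json').version")
    if npm view "$PKG@$VER" version > /dev/null 2>&1; then
      echo "Version $VER is already published; bump the version first."
      exit 1
    fi
  displayName: 'Check that the version was bumped'
```

You would place this before the publish task so the build fails with a clear message instead of a publish error.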
Option 2: Create a Release pipeline
From the Releases section, click New pipeline. You will be prompted to select a template, but for this it is best to click Empty job so that we can customize as needed. Close the initial Stage 1 dialog for now; we’ll come back to that later.

In the Artifacts section, click the Add button and select your project’s build pipeline. Default version should be set to Specify at the time of release creation.
Going back to the
Stage 1 section of the release pipeline, add an
Extract Files task. For the
Destination folder you will want to enter
$(System.DefaultWorkingDirectory)\extracted. Leave all of the other defaults as is.
Now add an
npm task, using the following:
- Display name: npm publish
- Command: publish
- Working folder with package.json: $(System.DefaultWorkingDirectory)\extracted
- Registry location: External npm registry
- External registry: Reference the name of the npm service connection you created earlier. This can be selected from a dropdown, so if you don’t remember exactly what you called it, that is OK.
Go back to the Artifacts section and click what looks like a lightning bolt. Set Continuous deployment trigger to Enabled. Then add a Branch build filter where Build branch is set to `refs/tags/*`. This will allow continuous deployment to trigger anytime a new tag is created in your repo. (There is a Build tags option, but apparently it has nothing to do with tags as they are used in git repos, so ignore it)
Now click where it says
New release pipeline to give your release a more meaningful name, then click
OK in the confirmation dialog.
Optionally, you might want to change the format that the pipeline will use to create release names. To do so, go to the
Options section of the pipeline. I specify mine as
Release-$(Date:yyyyMMddhhmmss)-$(Build.SourceBranch) so that it names it based off of time and the tagged version that will be used.
Tag your first release
Now, to see our release pipeline in action, in my demo project I went ahead and updated the version and pushed a new tag by doing the following. Note that using the `npm version` command in this way will automatically create a new tag for the version we want to publish.
```
npm version patch -m "Bump to version %s"
git push
git push --tags
```
Once the build has completed we should see a new release created under the release pipeline automatically and then we can watch its status by drilling down into that release.
And we are done! Going back to the npm site we can verify that our package was indeed published successfully!
While I think the implementation I’ve illustrated here works rather well, and I’m pleased with the results, it is certainly not without flaws. Some of these are due to the nature of me just exploring and learning Azure Pipelines, but some were inherited from the sample code provided by Microsoft. In Microsoft’s defense, I believe they were originally thinking that the artifact would be something like a shippable web application that you would deploy into something like a Docker container to run, and the use case of publishing to npm was more of an afterthought.
One problem is that the resulting zip compressed artifact includes both node_modules and the .git directory, which, if we’re only going to publish the package to npm, is really just wasted space. I could probably make use of `npm pack` to instead generate a tarball containing only what I need to publish, but I may go back and do that at another time.
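As a sketch of that idea (hypothetical and untested), the ArchiveFiles@2 step in the build could be swapped for something like this, so the published artifact is just the tarball npm would actually ship:

```yaml
# Hypothetical replacement for the ArchiveFiles@2 step: let npm build a
# tarball containing only the files that would actually be published.
- script: npm pack
  displayName: 'Pack npm tarball'
- task: CopyFiles@2
  inputs:
    Contents: '*.tgz'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
```

The release pipeline would then extract and publish from the tarball instead of the full working directory.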
Another potential problem that I think I alluded to here and there is that my flow drives the release off of the action of tagging within a repo. This can be a problem because anyone with write access can create a tag anywhere (or even delete one), so I think it is important to have a check in place to gate releases in the event someone does something disruptive. Utilizing pre-deployment conditions can be a great way to protect against that. One good check would be to compare the commit hash that master currently points at with the tag’s commit hash; if they match, allow automated deployment to proceed.
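One way to sketch that check as a pipeline step (hypothetical; it assumes the agent has fetched both the tag and origin/master, and that `$(Build.SourceBranch)` expands to something like `refs/tags/v1.0.1`):

```yaml
# Hypothetical pre-deployment check: only deploy when the tagged commit
# is the same commit that master currently points at.
- script: |
    TAG_SHA=$(git rev-parse "$(Build.SourceBranch)^{commit}")
    MASTER_SHA=$(git rev-parse origin/master)
    if [ "$TAG_SHA" != "$MASTER_SHA" ]; then
      echo "Tag does not match master; refusing to deploy."
      exit 1
    fi
  displayName: 'Verify tag points at master'
```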
I think there are certainly a lot of possibilities, and this only scratches the surface. Thank you for reading, and feel free to leave me feedback with what you think in the comments below.