This is yet another entry in the note-to-self series, this time to remind my Future Self how to push code from Bitbucket to a remote server automatically (or manually, if needed). Feel free to leave a comment below if I missed anything!
Bitbucket Pipelines is, as the Atlassian team defines it, “an integrated CI/CD service, built into Bitbucket. It allows you to automatically build, test and deploy your code, all of this based on a configuration file in your repository.”
As a reminder, CI stands for Continuous Integration. It refers to the process of automating the build and testing of code every time a developer commits a change to the central repository. CD, on the other hand, stands for Continuous Deployment. It’s the process where any code that passes the automated testing phase (CI) is automatically released into the staging/production environment.
Before we go any further though let’s discuss why CI/CD are needed in the first place:
- Manual deployments can be time-consuming: someone has to manually deploy to a staging/production environment and test that the changes didn’t break anything, time that could be invested in more pressing tasks.
- Manual deployments are very prone to errors: because of human intervention in these processes, important actions in a release can be missed accidentally (we’re human after all), faults that occur during the release may not be spotted, the incorrect versions of software can be shipped, etc.
- If something breaks during deployment, you’ll find yourself rushing to put everything back together again: a couple of inadvertent keystrokes from a distracted/tired developer can lead to disaster very quickly.
In other words, with CI/CD we can:
- Automate the deployment process: no more FTPing/SSHing into the server to upload code changes. Just push a commit to the repo and the service will do the rest for you.
- Guarantee that deployed code has been tested and works as intended: we can write tests that run automatically whenever we push code to the repo, validating that it works and meets the established standards. If it doesn’t, no code gets deployed to the server.
- Launch updates with no delays: patches/fixes/changes we push into the repository can be pushed into the staging/production server automatically, no need for manual approval.
- Make developers happier: they can work on important projects / tasks and stop worrying about time-consuming manual tasks. Also, they can rest at ease knowing that errors are caught immediately, not during/after deployment.
In Bitbucket’s case these CI/CD processes are collectively known as Pipelines.
For now let’s focus on the Continuous Deployment side of things. I may update this article later on (or maybe even write a separate article) to explain how to implement CI with Bitbucket in our workflow.
Deploying to Server with Bitbucket’s Pipelines
To have our repositories deploy code to the staging/production environment we’re going to use Atlassian’s ftp-deploy pipe. Before we do that, though, we first need to enable the Pipelines feature for our repository.
The Pipelines feature isn’t enabled by default (at least at the time of writing). To enable it:
- Go to your repository.
- Click on Repository Settings.
- Under Pipelines, click on Settings.
- Set Enable Pipelines to on.
Note that you need administrative privileges to access the Repository Settings screen. If your Bitbucket account doesn’t have admin permissions then you may need to ask the owner/administrator of the repository to help with this (or ask them to make you an administrator so you can configure the pipeline yourself).
Setting Repository Variables
Next, we’re going to create some repository variables so we can use them with ftp-deploy, one for each piece of our FTP credentials: the FTP username, the FTP password and the server host (in this article I’ll assume they’re named FTP_USERNAME, FTP_PASSWORD and FTP_HOST, but the names are up to you).
Head to Repository Settings > Pipelines > Repository variables and register each of these variables using the FTP credentials provided by your hosting provider. Make sure to mark the password as Secured so it’s masked in the pipeline logs.
Now it’s time to set up our bitbucket-pipelines.yml file. This file will hold the pipeline builds configuration for our repository.
Our bitbucket-pipelines.yml file needs to be placed at the root of our repository and it should look something like this:
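A minimal configuration along these lines is sketched below. The repository variable names (FTP_USERNAME, FTP_PASSWORD, FTP_HOST) and the pipe version number are assumptions on my part; check the ftp-deploy pipe’s page for the latest version and adjust to the variable names you registered earlier:

```yaml
image: atlassian/default-image:2

pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          script:
            # Pipe version is an assumption; use the latest listed on the pipe's page
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME       # repository variables registered earlier
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/public_html'
                DELETE_FLAG: 'true'
```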
With this example configuration, every time someone pushes a commit or merges into the master branch the pipeline will deploy automatically. To learn what each line does I highly suggest checking out the official documentation; however, I’ll try to explain the main bits here.
The variables section in our example holds information about our server, where things should get deployed to and how:
- USER: Refers to the FTP username, we set this up as a repository variable before.
- PASSWORD: Refers to the FTP password for our username. We also set this up as a repository variable earlier.
- SERVER: Refers to the server host. Like the two previous variables, we also registered a repository variable for our host already.
- REMOTE_PATH: Where code should be deployed to. In our example that would be the /public_html folder on our remote server (note the absence of the trailing slash here, that’s intended.)
- DELETE_FLAG: When set to ‘true’ our pipeline will delete everything in REMOTE_PATH before deploying our new code. If you don’t want that to happen simply set DELETE_FLAG to ‘false’ (single quotes included.)
- EXTRA_ARGS: Additional arguments passed to the lftp command (see LFTP docs for more details).
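As an example of EXTRA_ARGS, you could pass lftp mirror flags to keep certain paths from being uploaded. This is just a sketch, and the exclude patterns are assumptions you’d adapt to your own project:

```yaml
                # lftp mirror flags; --exclude takes a regular expression
                EXTRA_ARGS: '--exclude=^\.git/ --exclude=^node_modules/'
```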
The reason we set up the repository variables early in the process should be apparent by now: it keeps our FTP credentials out of the repository, where they would otherwise be publicly exposed. You may argue that we should have created a variable for REMOTE_PATH as well, but I don’t think it’s really necessary. I’ll leave that up to you.
When you’re ready, commit this file to your repository and with that we’re done. Bitbucket will now automatically push changes to our server via Pipelines every time we push a commit / merge into the master branch. You can of course use a different branch (e.g. staging); just replace master with the name of your branch in your bitbucket-pipelines.yml file.
Working with Multiple Branches
Let’s say for example that we have a project that has two branches: master, where our stable/production-ready code lives; and staging, where we test new features/fixes before pushing them into master.
We can set up individual pipelines for our branches in bitbucket-pipelines.yml so that our repository deploys changes automatically to our Production environment when pushing to the master branch, or to Staging when pushing to the staging branch, like so for example:
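A sketch of such a two-branch setup follows. As before, the repository variable names (including the STAGING_* set) and the pipe version are assumptions, and I’m assuming the two environments use different FTP credentials:

```yaml
image: atlassian/default-image:2

pipelines:
  branches:
    master:
      - step:
          name: Deploy to Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/public_html'
                DELETE_FLAG: 'true'
    staging:
      - step:
          name: Deploy to Staging
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                # Separate credentials for the staging server (hypothetical names)
                USER: $STAGING_FTP_USERNAME
                PASSWORD: $STAGING_FTP_PASSWORD
                SERVER: $STAGING_FTP_HOST
                REMOTE_PATH: '/public_html'
                DELETE_FLAG: 'true'
```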
In this instance, and as seen in the sample bitbucket-pipelines.yml above, setting up separate repository variables for each pipeline might be a good idea, especially if our environments (Production and Staging in our example) are hosted on separate servers or under different paths.
Running Pipelines Manually
There are some instances where we don’t want pipelines to deploy automatically after every commit. Fortunately Bitbucket also allows us to create custom pipelines that can be triggered manually, giving us full control over when code gets deployed to our server.
Using the example configuration file from before, we only need to make two changes so our pipeline can be run manually: replace the branches keyword with custom to tell Bitbucket that the pipelines defined below it are to be triggered manually, then replace master with whatever name we want to give our pipeline (e.g. deploy-to-production). The rest of our configuration file remains exactly the same.
Here’s how our bitbucket-pipelines.yml file should look after these changes:
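A sketch of the manually triggered version, under the same assumptions as before (repository variable names and pipe version are mine, and deploy-to-production is just one possible pipeline name):

```yaml
image: atlassian/default-image:2

pipelines:
  custom:                      # pipelines under "custom" only run when triggered manually
    deploy-to-production:      # the name you'll pick in the Run Pipeline dialog
      - step:
          name: Deploy to production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/public_html'
                DELETE_FLAG: 'true'
```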
Now, to run our pipeline manually:
- In Bitbucket, select your repository and go to Pipelines.
- Click on Run Pipeline.
- Select a branch, the pipeline we want to run, and finally click on Run.
We can do more with custom pipelines, like scheduling triggers for example. For more details check out this article by the Atlassian team: Run pipelines manually.
Questions? Suggestions? Tips? Leave a comment below!