Tips for how to use 3 Musketeers to supercharge your Developer Experience

3 Musketeers is a pattern popularized by Frederic L that lets you test, build, and deploy your apps from anywhere, the same way.

Note: Frederic's website was previously available via 3musketeers.io but is now https://3musketeersdev.netlify.app/

The key benefits include being able to have a mix of Linux, Mac, and Windows workstations. You also get consistency with pipeline/CI tools, which run the same steps in the same way.

More Info:

Examples that also include docs in their READMEs:

More Examples:

When to use 3 musketeers

It's important to understand the goals of Consistency, Control and Confidence, and the part each tool plays in the pattern.

Can't I just skip this and use my tools directly?

The benefits of using these tools together might not be obvious if you've only worked on a single project, in a small team, or if you haven't seen a project grow in complexity over time.

You might also be new to tools such as make, compose and docker, and as such the idea of introducing them into your pipeline might seem unnecessary. This might be especially true if your team is working with legacy applications and you don't have any container presence in your organization or team.

Only your organization and your team can decide if the goals of Consistency, Control and Confidence are worth the investment in pipeline standardization.

These patterns and tools truly shine when:

  • Developers start to seek "fast feedback" by "shifting left" - i.e. wanting to run some tests early (before they commit code)
  • Working in larger teams
  • With different underlying development machines - or with developers who install different versions of software (how much time did a new developer spend getting a working build env?)
  • Across multiple teams in large organizations
  • With CI/CD agents managed by other teams where changes to agents are not easy and you don't have permission/flexibility (did someone upgrade a build agent and break how java works for you? how would you know?)
  • When having to share code between vendors
  • The more complex the application gets, or as more changes are made to the application and its build steps (having to refactor or share build steps either within your team or between teams)
  • Testing your pipeline code (how many times have you done 20 check-ins of code just to test one bit of functionality "on the build server"?)

The 3 tools

As the name implies, 3 Musketeers is made up of 3 tools: https://3musketeersdev.netlify.app/about/tools.html

Docker

Docker is the most important musketeer of the three.

Many tasks such as testing, building, running, and deploying can all be done inside a lightweight Docker container.

The portability of Docker ensures you can execute the same tasks, the same way, on different environments like MacOS, Linux, Windows, and CI/CD tools.

Example: Docker can be invoked directly from make, or via a docker-compose command in make.
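For illustration, here's a minimal sketch of both styles in a Makefile (the target names are placeholders for this example; the second target assumes an alpine service like the one defined in docker-compose.yml later on):

# Makefile
# invoking docker directly...
echo-docker:
	docker run --rm alpine echo 'Hello, World!'

# ...or going through a service defined in docker-compose.yml
echo-compose:
	docker-compose run --rm alpine echo 'Hello, World!'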

Make

Make is a cross-platform build tool to test and build software and it is used as an interface between the CI/CD server and the application code.

A single Makefile per application defines and encapsulates all the steps for testing, building, and deploying that application.

Of course other tools like rake or ant can be used to achieve the same goal, but having make pre-installed in many OS distributions makes it a convenient choice.

# Makefile
echo:
	docker-compose run --rm alpine echo 'Hello, World!'

Compose

Docker Compose, or simply Compose, manages Docker containers in a very neat way.

It allows multiple Docker commands to be written as a single one, which allows our Makefile to be a lot cleaner and easier to maintain.

Testing also often involves container dependencies, such as a database, which is an area where Compose really shines. No need to create the database container and link it to your application code container manually — Compose takes care of this for you.

# docker-compose.yml
version: '3'
services:
  alpine:
    image: alpine
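For the database point above, here is a hedged sketch of what a test setup with a dependency might look like (the service names, images and variables are illustrative assumptions, not part of the pattern):

# docker-compose.yml
version: '3'
services:
  app:
    image: golang:1.22      # whatever toolchain your tests need
    working_dir: /app
    volumes:
      - .:/app
    environment:
      DB_HOST: db           # tests reach the database by its service name
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: test

A make target running `docker-compose run --rm app go test ./...` would then bring the database up automatically before the tests start.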

Can't I choose my own musketeers?

Who says make, compose, docker are the perfect combo?

The key to Consistency, Control and Confidence is not only in the specific combination of make, compose and docker, but in the agreement of your team and your organization to use a standard that's easy for everyone to follow.

Consider:

  • Do most of your development teams have easy access to make, compose, docker - or can you agree/reach a state where this is possible?
  • Do you want to replace make with something else? (zeus, python, shell) Sure thing, just agree as a team on what would be easier to install and maintain
  • Do you want to start with Make and Docker and introduce Compose only when the complexity demands it?

Q & A

Are there pre-reqs? What if my workload or app isn't docker based?

This pattern doesn't operate at your workload or application/deployment architecture level; it's all about the CI steps (like build, test, publish, deploy etc).

The workload could be anything; that scope is outside of the pattern. It could be the case that your workload is containerized, but that's unrelated to the higher-level pattern and how the local/CI steps are run.

You need to agree on having Make, Compose and Docker available on developer workstations, and for the CI tool.

Make is difficult to use, and hard to learn

make is a powerful tool that can do a lot of things. However, the intent is to use make as a simple interface between whoever is running the build (you, or your CI/CD tool) and your build steps.

As you should only be using the most basic features of make, it shouldn't be difficult to implement. If you're having a lot of difficulty, consider whether you're overcomplicating your task.

Consider some real-life simple Makefiles (easy to read, maintain and run):

You could consider using multiple makefiles or consider another language if you have complex logic that demands it.

I really don't want to use docker compose - I'm not even deploying anything / I'm using k8s

Don't confuse using docker-compose during your build/test steps with deploying your application in production. With 3 Musketeers we're looking at the patterns we use to interface with our build tool (like building and testing your application in your pipeline).

People often use compose files (with 3 Musketeers) to avoid having long and messy Makefiles.

To Compose or Not to Compose?

If your use of docker is simple for a particular task, feel free to skip the compose file.

https://3musketeersdev.netlify.app/guide/patterns.html#docker

Make calls Docker directly instead of Compose. Everything that is done with Compose can be done with Docker. Using Compose helps to keep the Makefile clean.

When you have a multi-line docker invocation, or 5 lines of docker commands in a single task, consider if compose could simplify things for you.
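As an assumed example of the kind of target this describes (the image, flags and env vars are placeholders, not a recommendation):

# Makefile - every flag lives in the recipe and gets copy-pasted between targets
test:
	docker run --rm \
		-v $(PWD):/app \
		-w /app \
		-e NODE_ENV=test \
		-e CI=true \
		node:20 \
		npm test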

https://3musketeersdev.netlify.app/guide/patterns.html#task-management-tool shows an example of how to implement an "npm" command "cleanly".

Notice that the compose file does all the volume mounting and sets the workdir (and would handle all the env injection etc).
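A hedged sketch of that shape, moving the flags from the earlier target into compose (the service name, image and settings are assumptions for illustration):

# docker-compose.yml
version: '3'
services:
  node:
    image: node:20
    working_dir: /app
    volumes:
      - .:/app
    environment:
      NODE_ENV: test
      CI: "true"

# Makefile - the recipe shrinks back to a single readable line
test:
	docker-compose run --rm node npm test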

Consider an even more complex scenario that involves multiple services, logs, naming of containers, starting and stopping containers, and cleanup when done.

How will I manage container versions/instances?

Without container images in your CI pipeline you may have managed binaries and software through an artifact store like Artifactory. You would use a similar approach for containers too.

Your organization will need to decide how teams can pull images (directly from the Internet, or through Artifactory as a proxy etc), and have a model for developers to be able to maintain and update docker images. Often docker images that need customization will need their own pipelines and independent lifecycle management (assuming you need to extend the features of a docker image maintained somewhere like Docker Hub).
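For example, pulling through an internal proxy rather than straight from Docker Hub might look like this in a compose file (the registry hostname is a made-up placeholder):

# docker-compose.yml
version: '3'
services:
  node:
    # pulled via an internal registry/remote proxy instead of docker.io
    image: artifactory.example.com/dockerhub-remote/node:20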

This concern also exists without the use of 3 Musketeers, if your organization uses docker images anywhere in any capacity (e.g. for application deployment or in any other part of the organization). As long as docker images aren't banned in your organization, and you have any need to modify or update a docker image anywhere, you'll have to solve for this.

Won't I have to do more scripting to do things like export the results of scan details and test results out to another location?

Yes, there's an extra layer with docker that introduces volume mounts, a more careful look at permissions and ownership, and the need to store/save files outside the container run. That's an overhead and a bit of a learning journey which can be painful to begin with, especially without docker knowledge. Generally, once you get those patterns working well you have fewer issues moving forward (and you can repeat/share the patterns), and improving your docker skills may be useful if you need them for your app development too.

This question ends up being one about tradeoffs: you get more consistency and reuse across Linux/Mac/Windows and your CI tool, you make life easier if you ever move CI tools (think about moves to GitLab, Azure, GitHub etc), and setup is a breeze for new devs joining the company or the team.

The key advice here is to use small containers that have a single binary and a single purpose. When they do an operation that produces an output, it should be available via a volume mount that allows easy access from either your workstation or CI tool; after that it's functionally the same as having run it without docker.
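A hedged sketch of that advice as a make target (the scanner image, its arguments and the report path are all hypothetical placeholders):

# Makefile
scan:
	mkdir -p reports
	docker run --rm \
		-v $(PWD):/src:ro \
		-v $(PWD)/reports:/reports \
		example.com/tools/scanner:1.0 --report /reports/scan.txt /src

After the run, reports/scan.txt sits on the workstation or CI agent like any other file, ready to be published or uploaded.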

Alternative: In theory you could cheat the pattern a little here by using native container image steps in a tool like GitLab/GitHub to reduce the work you need to do to mount files and export them, but it does make it less portable for local usage or if you ever move CI tools.

Alternative: If you were to not use containers at all you wouldn't be able to run everywhere as easily and as consistently, which would arguably shift some toil back to developers.

Whose responsibility is it?

3musketeers will not stop you from writing bad pipelines or bad code.

3musketeers will not write your build steps for you.

3musketeers will not replace your build process or your build tool.

3musketeers will not deploy your application for you.

3musketeers will not store metadata or artifacts during your build process.

For example, if your project, team or organization creates a docker container to provide some sort of functionality (e.g. a sonar scanner image), this docker image should have its own:

  • documentation
  • development lifecycle
  • versioning / latest tags in git and in its artifact store
  • versions should be immutable - never override a version with new code
  • if the container is run without the appropriate environment variables, it should set sensible defaults where it can, and fail with explicit error messages where it can't set a default (see the sketch after this list)
  • requirements about connectivity should be well documented (and properly tested, with proper errors when there's a lack of connectivity)
  • auth should be clearly documented and driven by variables
  • if volume mounts are required or files are output, this should be clearly articulated
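A hedged sketch of the environment-variable bullet, written as a container entrypoint script (the variable names follow the sonar scanner example but are otherwise illustrative):

#!/bin/sh
# entrypoint.sh (sketch) - fail fast with a clear error when required config is missing
: "${SONAR_HOST_URL:?SONAR_HOST_URL must be set, e.g. https://sonar.example.com}"
: "${SONAR_TOKEN:?SONAR_TOKEN must be set - see this image's README}"
exec sonar-scanner "$@"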

Making changes:

  • scripts/containers should allow you to override config files and/or supply additional config files or variables to override defaults (thus reducing the need to change the container/script itself each time)
  • if your container is really a packaging mechanism for a script, consider how the script will be used and whether it's better baked into the container (i.e. each change to the script should cause a new version of the container) or whether you'd like users to maintain the script themselves and simply call the container with the script mounted into it (see the sketch below)
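A hedged sketch of the two options from the last bullet, as make targets (the image name and paths are placeholders):

# Makefile
# option 1: the script is baked into the image - each change to it ships a new image version
scan-baked:
	docker run --rm example.com/tools/scanner:1.0

# option 2: the team maintains the script and mounts it over the image's default at run time
scan-mounted:
	docker run --rm \
		-v $(PWD)/scripts/scan.sh:/usr/local/bin/scan.sh:ro \
		example.com/tools/scanner:1.0 scan.sh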