GitHub Actions custom Docker image

Deploy a custom container to App Service using GitHub Actions

GitHub Actions gives you the flexibility to build an automated software development workflow. With the Azure Web Deploy action, you can automate your workflow to deploy custom containers to App Service using GitHub Actions.

A workflow is defined by a YAML (.yml) file in the `.github/workflows` path in your repository. This definition contains the various steps and parameters that make up the workflow.

For an Azure App Service container workflow, the file has three sections:

| Section | Tasks |
|---|---|
| Authentication | 1. Retrieve a service principal or publish profile. 2. Create a GitHub secret. |
| Build | 1. Create the environment. 2. Build the container image. |
| Deploy | 1. Deploy the container image. |

Prerequisites

Generate deployment credentials

The recommended way to authenticate with Azure App Service for GitHub Actions is with a publish profile. You can also authenticate with a service principal, but the process requires more steps.

Save your publish profile credential or service principal as a GitHub secret to authenticate with Azure. You'll access the secret within your workflow.

A publish profile is an app-level credential. Set up your publish profile as a GitHub secret.

  1. Go to your app service in the Azure portal.

  2. On the Overview page, select Get Publish profile.

    Note

    As of October 2020, Linux web apps need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` before downloading the file. This requirement will be removed in the future. See Configure an App Service app in the Azure portal to learn how to configure common web app settings.

  3. Save the downloaded file. You'll use the contents of the file to create a GitHub secret.

You can create a service principal with the az ad sp create-for-rbac command in the Azure CLI. Run this command with Azure Cloud Shell in the Azure portal or by selecting the Try it button.
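A sketch of that command, with placeholders for the subscription ID, resource group name, and app name (the `--sdk-auth` flag makes the CLI emit the JSON credentials object):

```shell
az ad sp create-for-rbac \
  --name "myApp" \
  --role contributor \
  --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \
  --sdk-auth
```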

In the example, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later.

Important

It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific App Service app and not the entire resource group.

Configure the GitHub secret for authentication

In GitHub, browse your repository, select Settings > Secrets > Add a new secret.

To use app-level credentials, paste the contents of the downloaded publish profile file into the secret's value field. Name the secret `AZURE_WEBAPP_PUBLISH_PROFILE`.

When you configure your GitHub workflow, you use the `AZURE_WEBAPP_PUBLISH_PROFILE` secret in the deploy Azure Web App action. For example:
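A minimal sketch of that step (the secret name `AZURE_WEBAPP_PUBLISH_PROFILE` and app name are assumptions; `azure/webapps-deploy@v2` is the major version current at the time of writing):

```yaml
- name: Deploy to Azure Web App
  uses: azure/webapps-deploy@v2
  with:
    app-name: 'my-app'
    publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
```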

In GitHub, browse your repository, select Settings > Secrets > Add a new secret.

To use user-level credentials, paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret a name like `AZURE_CREDENTIALS`.

When you configure the workflow file later, you use the secret for the `creds` input of the Azure Login action. For example:
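A sketch of the login step (the secret name `AZURE_CREDENTIALS` is an assumption matching the previous step):

```yaml
- name: Log in to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZURE_CREDENTIALS }}
```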

Configure GitHub secrets for your registry

Define secrets to use with the Docker Login action. The example in this document uses Azure Container Registry for the container registry.

  1. Go to your container registry in the Azure portal or Docker Hub and copy the username and password. You can find the Azure Container Registry username and password in the Azure portal under Settings > Access keys for your registry.

  2. Define a new secret for the registry username named `REGISTRY_USERNAME`.

  3. Define a new secret for the registry password named `REGISTRY_PASSWORD`.

Build the Container image

The following example shows part of a workflow that builds a Node.js Docker image. Use Docker Login to log in to a private container registry. This example uses Azure Container Registry, but the same action works for other registries.

You can also use Docker Login to log in to multiple container registries at the same time. That variant includes two new GitHub secrets for authentication with docker.io. The example assumes that there is a Dockerfile at the root level of the repository.
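A sketch of the build steps (the registry name, image name, and secret names are placeholders; `azure/docker-login@v1` is the login action Azure's docs use for ACR):

```yaml
- uses: actions/checkout@v2

- name: Log in to Azure Container Registry
  uses: azure/docker-login@v1
  with:
    login-server: myregistry.azurecr.io
    username: ${{ secrets.REGISTRY_USERNAME }}
    password: ${{ secrets.REGISTRY_PASSWORD }}

- name: Build and push the image
  run: |
    docker build . -t myregistry.azurecr.io/myapp:${{ github.sha }}
    docker push myregistry.azurecr.io/myapp:${{ github.sha }}
```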

Deploy to an App Service container

To deploy your image to a custom container in App Service, use the `azure/webapps-deploy` action. This action has seven parameters:

| Parameter | Explanation |
|---|---|
| app-name | (Required) Name of the App Service app |
| publish-profile | (Optional) Applies to Web Apps (Windows and Linux) and Web App Containers (Linux). Multi-container scenario not supported. Publish profile (*.publishsettings) file contents with Web Deploy secrets |
| slot-name | (Optional) An existing slot other than the production slot |
| package | (Optional) Applies to Web App only: path to a package or folder. *.zip, *.war, *.jar, or a folder to deploy |
| images | (Required) Applies to Web App Containers only: the fully qualified container image name(s), for example 'myregistry.azurecr.io/nginx:latest'. For a multi-container app, multiple image names can be provided (multi-line separated) |
| configuration-file | (Optional) Applies to Web App Containers only: path of the Docker Compose file, either fully qualified or relative to the default working directory. Required for multi-container apps |
| startup-command | (Optional) The startup command, for example dotnet run or dotnet filename.dll |
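Putting it together, a deploy step might look roughly like this (the app name, registry, and image names are placeholders):

```yaml
- name: Deploy to App Service
  uses: azure/webapps-deploy@v2
  with:
    app-name: 'my-app'
    images: 'myregistry.azurecr.io/myapp:${{ github.sha }}'
```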

Next steps

You can find our set of Actions grouped into different repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.

Source: https://docs.microsoft.com/en-us/azure/app-service/deploy-container-github-action

Configure GitHub Actions

This page guides you through the process of setting up a GitHub Actions CI/CD pipeline with Docker containers. Before setting up a new pipeline, we recommend that you take a look at Ben's blog on CI/CD best practices.

This guide contains instructions on how to:

  1. Use a sample Docker project as an example to configure GitHub Actions
  2. Set up the GitHub Actions workflow
  3. Optimize your workflow to reduce the number of pulls against Docker Hub and the total build time, and finally,
  4. Push only specific versions to Docker Hub.

Set up a Docker project

Let’s get started. This guide uses a simple Docker project as an example. The SimpleWhaleDemo repository contains an Nginx alpine image. You can either clone this repository, or use your own Docker project.

SimpleWhaleDemo

Before we start, ensure you can access Docker Hub from any workflows you create. To do this:

  1. Add your Docker ID as a secret to GitHub. Navigate to your GitHub repository and click Settings > Secrets > New secret.

  2. Create a new secret with the name `DOCKER_HUB_USERNAME` and your Docker ID as value.

  3. Create a new Personal Access Token (PAT). To create a new token, go to Docker Hub Settings and then click New Access Token.

  4. Let’s call this token simplewhaleci.

    New access token

  5. Now, add this Personal Access Token (PAT) as a second secret into the GitHub secrets UI with the name `DOCKER_HUB_ACCESS_TOKEN`.

    GitHub Secrets

Set up the GitHub Actions workflow

In the previous section, we created a PAT and added it to GitHub to ensure we can access Docker Hub from any workflow. Now, let’s set up our GitHub Actions workflow to build and store our images in Hub. We can achieve this by creating two Docker actions:

  1. The first action enables us to log in to Docker Hub using the secrets we stored in the GitHub Repository.
  2. The second one is the build and push action.

In this example, let us set the push flag to `true` as we also want to push. We'll then add a tag to specify that the image always goes to the latest version. Lastly, we'll echo the image digest to see what was pushed.

To set up the workflow:

  1. Go to your repository in GitHub and then click Actions > New workflow.
  2. Click set up a workflow yourself and add the following content:

First, we will name this workflow:

Then, we will choose when we run this workflow. In our example, we are going to do it for every push against the main branch of our project:

Now, we need to specify what we actually want to happen within our action (what jobs). We are going to add our build job and select that it runs on the latest Ubuntu instance available:

Now, we can add the steps required. The first one checks out our repository under $GITHUB_WORKSPACE, so our workflow can access it. The second uses our PAT and username to log in to Docker Hub. The third is the builder: the action uses BuildKit under the hood through a simple Buildx action, which we will also set up.
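Assembled, the workflow might look roughly like this (the secret names and the `simplewhale` image name are assumptions matching the demo repository; the action versions are the releases current when this guide was written):

```yaml
name: CI to Docker Hub

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKER_HUB_USERNAME }}/simplewhale:latest

      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}
```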

Now, let the workflow run for the first time and then tweak the Dockerfile to make sure the CI is running and pushing the new image changes:

CI to Docker Hub

Optimizing the workflow

Next, let’s look at how we can optimize the GitHub Actions workflow through build cache. This has two main advantages:

  1. Build cache reduces the build time, as it will not have to re-download all of the images, and
  2. it also reduces the number of pulls we complete against Docker Hub. We need to use GitHub cache to take advantage of this.

Let us set up a Builder with a build cache. First, we need to set up cache for the builder. In this example, let us add the path and keys to store this under using GitHub cache for this.

And lastly, after adding the builder and build cache snippets to the top of the Actions file, we need to add some extra attributes to the build and push step. This involves:

Setting up the builder to use the output of the Buildx step, and then using the cache we set up earlier for it to store to and retrieve from.
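The cache setup and the amended build step might look like this (the `/tmp/.buildx-cache` path and key names follow the common GitHub cache pattern for Buildx; image and secret names are assumptions carried over from earlier):

```yaml
- name: Cache Docker layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-

- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: ./
    push: true
    tags: ${{ secrets.DOCKER_HUB_USERNAME }}/simplewhale:latest
    builder: ${{ steps.buildx.outputs.name }}
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache
```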

Now, run the workflow again and verify that it uses the build cache.

Push tagged versions to Docker Hub

Earlier, we learnt how to set up a GitHub Actions workflow for a Docker project and how to optimize the workflow by setting up a builder with a build cache. Let's now look at how we can improve it further. We can do this by adding the ability to have tagged versions behave differently from all commits to master. This means only specific versions are pushed, instead of every commit updating the latest version on Docker Hub.

You can consider this approach to have your commits go to a local registry to then use in nightly tests. By doing this, you can always test what is latest while reserving your tagged versions for release to Docker Hub.

This involves two steps:

  1. Modifying the GitHub workflow to only push commits with specific tags to Docker Hub
  2. Setting up a GitHub Actions file to store the latest commit as an image in the GitHub registry

First, let us modify our existing GitHub workflow to only push to Hub if there’s a particular tag. For example:
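A trigger along these lines restricts pushes to tagged commits (the tag pattern is illustrative):

```yaml
on:
  push:
    tags:
      - "v*.*.*"
```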

This ensures that the main CI will only trigger if we tag our commits with a version tag. Let's test this. For example, run the following command:
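For instance (the tag name here is only an example):

```shell
git tag -a v1.0.2 -m "release v1.0.2"
git push origin v1.0.2
```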

Now, go to GitHub and check your Actions

Push tagged version

Now, let’s set up a second GitHub action file to store our latest commit as an image in the GitHub Container Registry. You may want to do this to:

  1. Run your nightly tests or recurring tests, or
  2. To share work in progress images with colleagues.

Let’s clone our previous GitHub action and add back in our previous logic for all pushes. This will mean we have two workflow files, our previous one and our new one we will now work on.

To authenticate against the GitHub Container Registry, use the `GITHUB_TOKEN` for the best security and experience.

Now let’s change the Docker Hub login with the GitHub Container Registry one:

Remember to change how the image is tagged. The following example keeps ‘latest’ as the only tag. However, you can add any logic to this if you prefer:

Update tagged images

Now, we will have two different flows: one for our changes to master, and one for our pushed tags. Next, we need to modify what we had before to ensure we are pushing our PRs to the GitHub registry rather than to Docker Hub.

Conclusion

In this guide, you have learnt how to set up a GitHub Actions workflow for an existing Docker project, optimize your workflow to improve build times and reduce the number of pulls, and finally, how to push only specific versions to Docker Hub.

Next steps

You can now consider setting up nightly tests against the latest tag, test each PR, or do something more elegant with the tags we are using and make use of the Git tag for the same tag in our image.

To look at how you can do one of these, or to get a full example on how to set up what we have accomplished today, check out Chad’s repo which runs you through this and more details on our latest GitHub action.

CI/CD, GitHub Actions
Source: https://docs.docker.com/ci-cd/github-actions/

Build images on GitHub Actions with Docker layer caching

Save yourself hours of googling and learn how to build images on GitHub Actions with proper Docker layer caching. With Docker's BuildKit capabilities that are now easy to set up on GitHub's CI runners, you can use the native caching mechanism of Actions to keep your image layers neatly tucked in between the builds. See examples for a run-of-the-mill Rails application with single and multi-stage production Dockerfiles and an example of a matrix build for a single image destined for many platforms. Be warned: a lot of YAML is coming your way.

I know all about caching and CI, take me directly to sweet, sweet YAMLs

Whether or not you choose to use Docker in local development (and we won’t pick sides between fervent believers of Docker and its sworn enemies), when it comes to deployments in the modern age of the cloud, containers are essential for running scalable applications in production.

That means there is always some building and pushing happening at some point, whether you are doing it by hand or relying on highly automated DevOps pipelines for deploying your containers to Kubernetes clusters or one of the numerous PaaS engines that abstract out orchestration for you. AWS Fargate, Google Cloud Run, and even Heroku (that now supports Docker deploys) will be happy to run your containers. And with so much competition in the cloud hosting market, it makes no sense to stick to the old way of deploying applications to a virtual machine.

Docker caching

Docker images, as they teach us now in every elementary school, are like layered cakes. Every `RUN`, `COPY`, or `ADD` instruction in your Dockerfile creates a read-only layer (essentially, just a bunch of files) that a Docker host stacks on top of each other to use less space and then adds a thin writable container layer on top once you want to run your image. Files from different layers are combined to form a filesystem for your container. The main reason for this Union filesystem magic is to make sure that parts of a final system that change the least can be safely cached, re-used for subsequent builds, or even shared between different final containers. It also saves bandwidth, as you don't have to pull all the layers over the network once a new version of the image you rely on comes out—only those that did change.

Thus, a well-written custom Dockerfile should respect this layering to make the most out of caching. Whatever is on top of the Dockerfile should change the least, and on the bottom—the most.

System dependencies usually go first, and application code is the last to be copied in a final container before deployment as this bit is the most malleable.

If you are a DevOps specialist, you have probably heard of BuildKit: it was announced back in 2017 and matured enough to ship with every Docker Engine distribution since version 18.06. It also has a standalone CLI plugin for Docker, called buildx, included in Docker CE since version 19.03 (albeit in experimental mode only), and it can be set as the default builder for `docker build`.

If you are a casual Docker user, you can set the `DOCKER_BUILDKIT=1` environment variable before running a `docker build` command to see a cool new blueish CLI that employs some TTY tricks to fit the whole output of a build in one terminal screen. That's BuildKit/buildx in action. It also makes it very easy to spot where Docker had used the cache and where it had to build a layer from scratch.

The output of docker build command with BuildKit enabled

Read “AnyCable Four years of real-time web with Ruby and Go” to see why we built an Action Cable extension and see what the future holds for the project.

On the screenshot above, you can see that while building an image for a demo application for AnyCable, Docker was able to resolve every layer as CACHED, except for the one that compiles assets, as Rails is primed to re-run asset compilation every time the source code of the application is changed. However, compilation took only 30 seconds; otherwise, we would have to wait for the whole build, which takes around 3 minutes on my local machine but can take much longer or much shorter depending on the builder system or the size of the application in a container.

Without caching, the life of an average Docker user will be much harder (and believe us, we know, as many of us are running Docker on Macs).

The coolest thing about BuildKit, however, is that it allows storing Docker layers as artifacts in an arbitrary folder, thus making it super easy to export and import it: either from local storage or from a remote registry.

Of course, normally, a developer who works on a containerized application will not run a production Docker build on their local machine. Continuous integration providers make our life easier by taking the grunt work of building images out of the comfort of our homes and into efficient data centers that power the Cloud.

However, you lose all the advantages of caching once you start building your images on a virtual machine that gets destroyed after every run.

The CI providers will be happy to run your workloads for as long as possible, as most of them bill by the minute.

You, however, might be annoyed by the fact that your builds take longer, get queued up, cost you extra, and increase the incident response time when you need to quickly roll out a fix.

Of course, CI providers are not evil and don’t want to annoy clients, so most of them implement at least some sort of caching, especially if their primary business is building containers (hello, Quay). If you use a dedicated container build service, you might not even notice a problem, but a key to sanity is using fewer services, not more. Enter GitHub Actions.

Drinking the GitHub-Actions-Aide

GitHub Actions are a relatively new addition to the CI family, but it has immediately become an elephant—or should we say, a thousand-pound gorilla—in the room. GitHub Actions' main purpose is to eat up your whole operations pipeline: from collaborating on code to automatic code checking to deployments. End-to-end developer happiness. GitLab had the "pipelines" feature for years (and, arguably, better implemented), but only when the biggest player on the market entered the ring did the standalone providers for CI/CD start feeling the heat (we'll miss you, Travis CI).

All GitHub’s Ubuntu and Windows runners (weirdly, not macOS) come with Docker preinstalled, so it’s a no-brainer to set up Docker builds on some meaningful events in your repository: like merging a PR into the main branch or rolling out a new release. All you need is the well-known sequence of steps somewhere in your workflow file.

Unfortunately, all your builds will run from scratch every time you run this on GitHub Actions.

GitHub has introduced caching for workflow dependencies a while ago but did not provide any official way to leverage it for Docker layers, which is surprising, as you can create your custom actions as Docker containers.

Several third-party solutions are built either around the `docker save` and `docker load` commands or around pulling the previous version of an image before building a new one (so Docker can re-use layers). There is a nice tutorial explaining all the approaches, but they all seem quite contrived, especially when the task is to implement the default Docker functionality.

Frankly, it made more sense to create your own runner on a Digital Ocean droplet and make sure it keeps the cache between runs. That approach, while absolutely viable, requires some effort, as now you have to maintain your personal runner outside of GitHub’s ecosystem.

Finally, getting some action

The killer feature of GitHub Actions (even though it was not so apparent at the time of the rollout) are… actions themselves: reusable, self-contained sequences of steps that can be publicly shared and used inside arbitrary workflows. Docker as a company maintains several actions of its own, and one of them is the self-explanatorily named build-push-action.

GitHub even mentions it in their own documentation:

What the documentation, sadly, does not mention is that there exists a very simple way to set up buildx/BuildKit runner in the context of the VM that will make our Docker cache exportable and thus properly cacheable!

Here’s the official example from the docker/build-push-action repository:

Wow! The answer was staring at us for so long, yet no one really noticed. Should we check if this approach works for deploying real-world applications?

The cache dance-off

To test our caching approach in action, we will use a demo Rails application (source) by our colleague Vladimir Dementyev, a principal backend engineer at Evil Martians. It is not yet another “Hello Rails” example, but a functioning demo of a modern real-time application that uses a super-fast and memory-efficient AnyCable as a drop-in extension for Rails’ native Action Cable.

It is decently sized and, as anything that Vladimir does, is incredibly well-tested. Here are the `rails stats`:

rails stats for our app

We are going to fork it and add some “fake” production Dockerfiles:

  • Dockerfile.prod: a standard “production” Dockerfile that adds all common build dependencies, sets up PostgreSQL client and Node.js, installs Yarn and Bundler, builds all the Ruby and JS dependencies, copies the source code to the image, and, finally compiles static assets (scripts and styles).

  • Dockerfile.multi: a more evolved Dockerfile that uses a multi-stage build to reduce the size of the final image by more than half (only code, assets, and gems are included in the final stage).

Those Dockerfiles are “fake” only in the sense that the image built with them won’t run on its own; we will need some orchestration in place to make all the parts of the app click in a real production environment (say, a Kubernetes cluster). However, setting up orchestration is outside of the scope of this tutorial. We are mostly interested in approximating and benchmarking real-world build times, so this should be good enough.

Building a single-stage Dockerfile

Let’s create a fake_deploy_singlestage.yml workflow that simulates the deploy: it checks out the application code and builds an image.

In the real world, you would want to push the resulting image in some registry and then update your Kubernetes manifests or trigger the new Helm deploy. We will put two jobs inside the workflow. One uses caching, and another does not. Jobs inside a GitHub Actions workflow run in parallel by default, so this will create a healthy spirit of competition between two jobs: whichever completes the first wins our dance-off!

Now commit and push. Here are the results for the cold cache.

Building with cold cache

Expectedly, the cache action takes more time, as there are more steps involved. Now let’s push an empty commit to see if the cache worked.

Drumroll, please…

Building with warm cache

That’s more like it! Of course, empty commits are quite rare in real-world project repositories. The only use case we can think about is triggering re-deploy with the same code. However, this happens too, and the Docker cache will save you about two and a half minutes, which is nothing to complain about.

Let’s check for something more real: change some code (add a comment to any class or something) and push again:

Whenever we change our application code in any way, the assets compilation step from the Dockerfile will run anew, as we might have changed the code that requires new assets.

If your cache works properly, only a single layer should be re-built in our cached workflow. The most time-hungry dependency-installation steps (installing gems and JS packages) should hit the cache.

Let’s see for ourselves!

Only the assets layer changed during a Docker build

As we can see in the screenshots above, the uncached workflow took 3 minutes 38 seconds to build the Docker image, while the one using a cache built the image in 1 minute 29 seconds.

That is still a reduction of almost a half!

Building with multi-stage Dockerfile

If you are worried (and you should be!) about your image sizes, you will have to wrap your head around Docker's multi-stage builds. This technique allows us to only have deployable code inside the final image and strip away all the noise like common build dependencies or (in the case of Rails) the entire Node.js runtime. The resulting image of our demo app is less than half the size when using a multi-stage build. A considerable economy of space and bandwidth (as the images fly over the web between CI and production servers)!

Take another look at the sample Dockerfile.multi to see how we do it. The idea is to target only the production stage in our production build. The workflow for caching multi-stage builds is almost identical to a single-stage one, with a few notable differences. Omitting them can drive you mad when things don't go as planned, so please be careful.

If you squint hard, you can see that pesky `mode=max` option in the `cache-to` key of the build step: it tells BuildKit to cache the layers of intermediate stages too, not just the final result.
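The amended build step could look like this (the stage name, paths, and tags are placeholders):

```yaml
- name: Build and push
  uses: docker/build-push-action@v2
  with:
    context: .
    file: Dockerfile.multi
    target: production
    push: true
    tags: myorg/myapp:latest
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache,mode=max
```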

The benchmark results are roughly the same, compare for yourself:

  • Deploy with cold cache.
  • Deploy with warm cache and an empty commit.
  • Deploy with assets compilation.

All in all, our little caching experiment proved to be a success!

We could cut our deployment times by half in the most common scenario when only the application code changes on commit but none of the dependencies.

Of course, if the dependency manifests (the Gemfile.lock or yarn.lock, say) have changed between deploys, the corresponding install cache layers will be invalidated, and the build process will take more time. However, this rarely happens in a mature application, except when you need to update your dependencies.

Bonus: Matrix builds for vendor images (or libraries!)

Read our past article about migrating a production application on Kubernetes to Fullstaq Ruby to see the gains.

At Evil Martians, we love Fullstaq Ruby—an optimized Ruby distribution with jemalloc and malloc_trim patches that allows your applications to use 30% to 50% less memory.

While folks at Fullstaq Ruby are still working on the official container edition of the platform, our backend engineers and external contributors are maintaining a set of Docker images for several recent Ruby versions running on Debian 9 (stretch and stretch-slim) and Debian 10 (buster and buster-slim).

We have recently set up a GitHub Action that uses a build matrix strategy to build 24 (slightly) different images simultaneously.

Take a look at the build-push workflow in the repo to see how our approach plays with matrix builds.

With the layer caching enabled, builds only take from 16 to 60 seconds for each image.

We’ve reached the end of our caching journey. Thank you for reading!

Grab the complete workflows from this article in our repo, and feel free to use them for your gain.

This article would not be possible without the folks at Docker, who wrote a super clear blog post describing the caching mechanism and set up a repository with examples.

Further reading


If your organization needs any help with “terraforming” Rails applications, setting up robust deployment pipelines for the cloud, or building internal engineering culture around containers, feel free to give us a shout. Our engineers and DevOps specialists will be happy to help your digital product achieve maximum efficiency.

Source: https://evilmartians.com/chronicles/build-images-on-github-actions-with-docker-layer-caching
Use GitHub Actions to build and push a container image

How to build and push Docker image with GitHub actions?

In the previous post, I explained that with a few simple tricks, you can make your Docker image less cluttered and build faster. I explained practical patterns on how to do that.

This time I’ll take a step forward and explain how to publish the image to the Docker registry. It’s a place in which you can store your Docker images. You can use them to share images with your team or deploy them to your hosting environment (e.g. Kubernetes, or another container hosting). We’ll use GitHub Actions as an example. It has a few benefits:

  • it’s popular and easy to set up,
  • it’s free for Open Source projects,
  • by design, it integrates easily with GitHub tools like GitHub Container Registry.

It’s a quickly evolving, decent tool. Of course, it’s not perfect. Documentation is not great, but that’s also the reason why I’m writing this post.

We’ll use two popular Docker registries:

  • Docker Hub: the default one, provided by Docker. It's commonly used for publicly available images. If you run docker pull, it'll try to load the image from it by default. However, from November 2020, it has significant limits for free accounts.
  • GitHub Container Registry (GHCR): GitHub introduced its container registry as a Packages service spin-off (you can use it to host artefacts like NPM, NuGet packages, etc.). It allows both public and private hosting (which is crucial for commercial projects).

Before we push images, we need to do a basic setup for the container registry:

Docker Hub publishing setup

  1. Create an account and sign in to Docker Hub.
  2. Go to Account Settings => Security: link and click New Access Token.
  3. Provide the name of your access token, save it and copy the value (you won’t be able to see it again, you’ll need to regenerate it).
  4. Go to your GitHub secrets settings (Settings => Secrets, url https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions).
  5. Create two secrets (they won’t be visible for other users and will be used in the non-forked builds)
  • DOCKERHUB_USERNAME - with the name of your Docker Hub account (do not mistake it with GitHub account)
  • DOCKERHUB_TOKEN - with the pasted value of a token generated in point 3.

Github Container Registry publishing setup

  1. Enable GitHub Container Registry. Profile => Feature Preview => Improved Container Support => Enable.
  2. Create GitHub Personal Access Token in your profile developer settings page. Copy the value (you won’t be able to see it again, you’ll need to regenerate it). Select at least following scopes:
  • repo
  • read:packages
  • write:packages
  3. Go to your GitHub secrets settings (Settings => Secrets, url https://github.com/{your_username}/{your_repository_name}/settings/secrets/actions).
  4. Create the secret (it won't be visible to other users and will be used in the non-forked builds).
  • GHCR_PAT - with the pasted value of a token generated in point 2.

    In theory, default GITHUB_TOKEN secret could be used. This would take the token of the user that triggered the workflow. Unfortunately, it doesn’t seem to be working.

Once we have the Docker registries set up, we can create a workflow file. It should be located in the .github/workflows directory in our repository. Let's name it build-and-publish.yml.

We’ll run this pipeline when Pull Request is created and on the main branch. We’ll be pushing the Docker image only on the main branch because we don’t want to spam the registry with intermediate images. If you want to, e.g. run manual tests for the pull request branch - you may also consider publishing also prerelease packages.

The process will look as follows:

  1. Use working directory where Dockerfile is located (e.g. src)
  2. Checkout code.
  3. Log in to DockerHub and GHCR using credentials set up in the previous step.
  4. Build Docker image.
  5. Publish Docker image if the pipeline is running on the main branch.

The pipeline has to run on a Linux machine, as the Windows and macOS hosted runners lack the required Docker configuration.

The resulting file will look as follows:
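The original file isn't reproduced here, but a minimal sketch following the five steps above could look like this (the image name my-app, the src working directory, and the action versions are assumptions to adapt to your project):

```yaml
name: Build and Publish

on:
  # run on pull requests and on pushes to the main branch
  pull_request:
  push:
    branches: [main]

jobs:
  build-and-publish:
    # has to be Linux - see the note above
    runs-on: ubuntu-latest
    defaults:
      run:
        # 1. use the working directory where the Dockerfile is located
        working-directory: src
    steps:
      # 2. checkout code
      - uses: actions/checkout@v2

      # 3. log in to Docker Hub and GHCR using the secrets set up earlier
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GHCR_PAT }}

      # 4. build the Docker image
      - name: Build Docker image
        run: docker build -t ${{ secrets.DOCKERHUB_USERNAME }}/my-app:latest .

      # 5. publish only when running on the main branch
      - name: Publish Docker image
        if: github.ref == 'refs/heads/main'
        run: |
          docker push ${{ secrets.DOCKERHUB_USERNAME }}/my-app:latest
          docker tag ${{ secrets.DOCKERHUB_USERNAME }}/my-app:latest ghcr.io/${{ github.repository_owner }}/my-app:latest
          docker push ghcr.io/${{ github.repository_owner }}/my-app:latest
```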

As you can see, the pipeline is technology agnostic. You can reuse it with whatever technology you choose. This opens up a lot of possibilities for simplifying pipelines and processing.

Cheers!

Oskar

p.s. if GitHub actions look interesting to you, read also other articles on this topic:


Source: https://event-driven.io/en/how_to_buid_and_push_docker_image_with_github_actions/

Building a GitHub Action with Docker

While I was investigating Kyverno, I wanted to check my Kubernetes deployments for compliance with Kyverno policies. The Kyverno CLI can be used to do that with the following command:

kyverno apply ./policies --resource=./deploy/deployment.yaml

To do this easily from a GitHub workflow, I created an action called gbaeke/kyverno-cli. The action uses a Docker container. It can be used in a workflow as follows:

```yaml
# run kyverno cli and use v1 instead of an exact release version
- name: Validate policies
  uses: gbaeke/kyverno-action@v1
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml
```

You can find the full workflow here. In the next section, we will take a look at how you build such an action.

If you want a video instead, here it is:

GitHub Actions

A GitHub Action is used inside a GitHub workflow. An action can be built with JavaScript or with Docker. To use an action in a workflow, you use uses: followed by a reference to the action, which is just a GitHub repository. In the above action, we used uses: gbaeke/kyverno-action@v1. The repository is gbaeke/kyverno-action and the version is v1. The version can refer to a release but also a branch. In this case v1 refers to a branch. In a later section, we will take a look at versioning with releases and branches.

Create a repository

An action consists of several files that live in a git repository. Go ahead and create such a repository on GitHub. I presume you know how to do that. We will add several files to it:

  • Dockerfile and all the files that are needed to build the Docker image
  • action.yml: to set the name of our action, its description, inputs and outputs and how it should run

Docker image

Remember that we want a Docker image that can run the Kyverno CLI. That means we have to include the CLI in the image that we build. In this case, we will build the CLI with Go as instructed on https://kyverno.io. Here is the Dockerfile (should be in the root of your git repo):

```dockerfile
FROM golang
COPY src/ /
RUN git clone https://github.com/kyverno/kyverno.git
WORKDIR kyverno
RUN make cli
RUN mv ./cmd/cli/kubectl-kyverno/kyverno /usr/bin/kyverno
ENTRYPOINT ["/entrypoint.sh"]
```

We start from a golang image because we need the go tools to build the executable. The result of the build is the kyverno executable in /usr/bin. The Docker image uses a shell script as its entrypoint, entrypoint.sh. We copy that shell script from the src folder in our repository.

So go ahead and create the src folder and add a file called entrypoint.sh. Here is the script:

```bash
#!/usr/bin/env bash
set -e
set -o pipefail

echo ">>> Running command"
echo ""

bash -c "set -e; set -o pipefail; $1"
```

This is just a bash script. We use the set commands in the main script to ensure that, when an error occurs, the script exits with the exit code from the command or pipeline that failed. Because we want to run a command like kyverno apply, we need a way to execute it. That's why we run bash again at the end with the same options and use $1 to represent the argument we will pass to our container. Our GitHub Action will need a way to require an input and pass that input as the argument to the Docker container.

Note: make sure the script is executable; use chmod +x entrypoint.sh

The action.yml

Action.yml defines our action and should be in the root of the git repo. Here is the action.yml for our Docker action:

```yaml
name: 'kyverno-action'
description: 'Runs kyverno cli'
branding:
  icon: 'command'
  color: 'red'
inputs:
  command:
    description: 'kyverno command to run'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.command }}
```

Above, we give the action a name and description. We also set an icon and color, which are used on the GitHub Marketplace.

As stated earlier, we need to pass arguments to the container when it starts. To achieve that, we define a required input to the action. The input is called command but you can use any name.

In the runs: section, we specify that this action uses Docker. When you use image: Dockerfile, the workflow will build the Docker image for you with a random name and then run it for you. When it runs the container, it passes the command input as an argument via args:. Multiple arguments can be passed, but we only pass one.

Note: the use of a Dockerfile makes running the action quite slow because the image needs to be built every time the action runs. In a moment, we will see how to fix that.

Verify that the image works

On your machine that has Docker installed, build and run the container to verify that you can run the CLI. Run the commands below from the folder containing the Dockerfile:

```sh
docker build -t DOCKER_HUB_USER/kyverno-action:<version> .
docker run DOCKER_HUB_USER/kyverno-action:<version> "kyverno version"
```

Above, I presume you have an account on Docker Hub so that you can later push the image to it. Substitute DOCKER_HUB_USER with your Docker Hub username. You can of course use any registry you want.

The result of docker run should be similar to the result below:

```
>>> Running command

Version: v...
Time: ...
Git commit ID: main/3abbbdea71dfbeb7ba5fff
```

Note: if you want to build a specific version of the Kyverno CLI, you will need to modify the Dockerfile; the instructions I used build the latest version, which includes release candidates.

If docker run was successful, push the image to Docker Hub (or your registry):

```sh
docker push DOCKER_HUB_USER/kyverno-action:<version>
```

Note: later, it will become clear why we push this container to a public registry

Publish to the marketplace

You are now ready to publish your action to the marketplace. One thing to be sure of is that the name of your action should be unique. Above, we used kyverno-action. When you run through the publishing steps, GitHub will check if the name is unique.

To see how to publish the action, check the following video:

Note that publishing to the marketplace is optional. Our action can still be used without it being published. Publishing just makes our action easier to discover.

Using the action

At this point, you can already use the action when you specify the exact release version. In the video, we created a release and optionally published it. The snippet below illustrates its use:

```yaml
- name: Validate policies
  uses: gbaeke/kyverno-action@<release-tag>
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml
```

Running this action results in a docker build, followed by a docker run in the workflow.

The build step takes quite some time, which is somewhat annoying. Let's fix that! In addition, we will let users use v1 instead of having to pin an exact release version.

Creating a v1 branch

By creating a branch called v1 and modifying action.yml to use a Docker image from a registry, we can make the action quicker and easier to use. Just create a branch in GitHub and call it v1. We'll use the UI.

Make the v1 branch active and modify action.yml:

In action.yml, instead of image: 'Dockerfile', use the following:

```yaml
image: 'docker://DOCKER_HUB_USER/kyverno-action:<version>'
```

When you use the above statement, the image will be pulled instead of built from scratch. You can now use the action with @v1 at the end:

```yaml
# run kyverno cli and use v1 instead of an exact release version
- name: Validate policies
  uses: gbaeke/kyverno-action@v1
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml
```

In the workflow logs, you will see that the image is pulled rather than built.

Conclusion

We can conclude that building GitHub Actions with Docker is quick and fun. You can build your action any way you want, using the tools you like. Want to create a tool with Go, or Python, or just Bash? Just do it! If you do want to build a GitHub Action with JavaScript, then be sure to check out this article on devblogs.microsoft.com.


Source: https://blog.baeke.info//04/09/building-a-github-action-with-docker/
Build & Push Docker Image Using GitHub Actions

This post will explain the process I followed to achieve an automated deployment of a Docker Hub image with version numbers.

You will need to have a few things in order before we can get started.


Creating Website Files#

Create your project directory wherever you prefer; for this tutorial I'll be working from a dedicated project folder.

In your terminal, navigate to this project folder and create a content folder.

Using your editor of choice, create an index.html file inside the content folder with the following content:
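The original file isn't reproduced here; a minimal stand-in that matches the "Hello World!" page shown later would be:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>
```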

Great, now we have our simple website that we're ready to use to generate our versioned Docker image.


Running Container Locally#

Now that we have the website data we would like to use, we can add it to an existing Docker image like nginx. We will start by testing the website files in an nginx container to ensure functionality prior to building the image.

If you’re confident, you can skip this step and go to the next section Building a Docker Image.
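The exact command isn't shown above; assuming podman (which this tutorial's host uses - substitute docker if you prefer) and the stock nginx web root, the test container could be started like this (the container name web-test is an arbitrary choice):

```sh
# Serve the content folder from an nginx container without building an image
podman run -d --name web-test -p 80:80 \
  -v ./content:/usr/share/nginx/html:ro nginx
```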

At this point you should be able to navigate to the website. In my case the url is http://pod1.davidspencer.xyz

Yours would most likely be http://localhost or replace localhost with the hostname/ip of your podman host.

After you have successfully accessed your Hello World! website, you can stop and remove your container.

Hello World!


Building a Docker Image#

We can now build our custom Image.

There are a few benefits to this; one worth noting is that we no longer have to mount our local files into the container every time.

Add the following text to a new file
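The file contents aren't shown above; a minimal Dockerfile consistent with the steps here (base image and web root assumed from the stock nginx image) would be:

```dockerfile
# Start from the official nginx image and bake the site content in
FROM nginx
COPY content /usr/share/nginx/html
```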

With this file created, you can now run the following command to build an image locally.
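Assuming the image name container-demo used later in this post, the build command would be:

```sh
# Build the image from the Dockerfile in the current directory
podman build -t container-demo .
```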

You should see container-demo in your images now

We can now create a container using this image to see our Hello World! Website
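A run command consistent with the earlier test (the container name and port mapping are assumptions) would be:

```sh
# Run the freshly built image; the site content is now baked in
podman run -d --name demo -p 80:80 container-demo
```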

Test the site using the same URL from earlier

After a successful test, stop and remove the new container.

You can delete the image after you have removed the container.


Initialize GitHub Repo#

Ensure you are logged into GitHub

Click New

New Repo

Enter a name for the repo

Name Repo

Click “Create Repository” at the bottom

Copy either the HTTPS or SSH URL Depending on how you use git

Repo URL

Back to your terminal, we’ll push our existing content to GitHub
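The exact commands aren't reproduced; a typical sequence, using the remote URL you copied in the previous step, is:

```sh
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin <the URL you copied>
git push -u origin main
```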

Now we should have our repo in GitHub with our project. We are now ready to build out the GitHub Actions and really tie this whole thing together.


Build GitHub Actions#

Add the following text to a new file,
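The workflow file itself (which lives under .github/workflows/) isn't reproduced above; a sketch matching the described behavior - push to Docker Hub with both a version tag and latest - might look like this (the docker/* action versions and the run-number-based version scheme are assumptions):

```yaml
name: Build and Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      # Authenticate using the secrets configured below
      - name: Log in to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}

      # Build once, tag with both a version and latest
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKER_HUB_USERNAME }}/container-demo:latest
            ${{ secrets.DOCKER_HUB_USERNAME }}/container-demo:v${{ github.run_number }}
```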

Let’s push our changes to the repo now.

Now we can generate our Docker Hub personal token We will also setup our GitHub Secrets.

| name | detail |
| --- | --- |
| DOCKER_HUB_USERNAME | Username for Docker Hub |
| DOCKER_HUB_ACCESS_TOKEN | Docker Hub Token |

Follow Docker Hub's directions for generating your access token.

Once you have your token go to your Repo on GitHub.com

GitHub Repo Settings

Click on Secrets.

Click on New Repository Secret.

Create DOCKER_HUB_USERNAME with your Docker Hub username.

Repeat the same process for DOCKER_HUB_ACCESS_TOKEN


Push image to Docker Hub#

Open content/index.html in your editor and make a change

You can track the action status from the Action tab on your repo

GitHub Actions Status

You should eventually see the image in your Docker Hub

Docker Hub Image Created

Lastly, you can even see the tags on the image: the version tag and latest.

The latest tag will always point to the most recent build, and you'll have a history of versions associated with the image as you make changes.

Docker Hub Image Tags


Conclusion#

I hope you have found this tutorial informative and helpful. If you have any suggestions on how this can be improved, please use the “Suggest Edits” button at the top or send me an email.

Source: https://davidspencer.xyz/posts/building-custom-docker-image-with-github-actions/


Creating a Docker container action

Introduction

In this guide, you'll learn about the basic components needed to create and use a packaged Docker container action. To focus this guide on the components needed to package the action, the functionality of the action's code is minimal. The action prints "Hello World" in the logs or "Hello [who-to-greet]" if you provide a custom name.

Once you complete this project, you should understand how to build your own Docker container action and test it in a workflow.

Self-hosted runners must use a Linux operating system and have Docker installed to run Docker container actions. For more information about the requirements of self-hosted runners, see "About self-hosted runners."

Warning: When creating workflows and actions, you should always consider whether your code might execute untrusted input from possible attackers. Certain contexts should be treated as untrusted input, as an attacker could insert their own malicious content. For more information, see "Understanding the risk of script injections."

Prerequisites

You may find it helpful to have a basic understanding of GitHub Actions environment variables and the Docker container filesystem:

Before you begin, you'll need to create a GitHub repository.

  1. Create a new repository on GitHub.com. You can choose any repository name or use "hello-world-docker-action" like this example. For more information, see "Create a new repository."

  2. Clone your repository to your computer. For more information, see "Cloning a repository."

  3. From your terminal, change directories into your new repository.

Creating a Dockerfile

In your new directory, create a new Dockerfile. For more information, see "Dockerfile support for GitHub Actions."

Dockerfile
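The file contents are omitted above; the version of this guide from around this time used a small Alpine-based image along the following lines:

```dockerfile
# Container image that runs your code
FROM alpine:3.10

# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh

# Code file to execute when the docker container starts up (`entrypoint.sh`)
ENTRYPOINT ["/entrypoint.sh"]
```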

Creating an action metadata file

Create a new action.yml file in the directory you created above. For more information, see "Metadata syntax for GitHub Actions."

action.yml
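The metadata file is omitted above; based on the description that follows (one input, one output, Docker runtime), it looks like this:

```yaml
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: 'World'
outputs:
  time: # id of output
    description: 'The time we greeted you'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.who-to-greet }}
```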

This metadata defines one input and one output parameter. To pass inputs to the Docker container, you must declare the input using inputs and pass the input in the args keyword.

GitHub will build an image from your Dockerfile, and run commands in a new container using this image.

Writing the action code

You can choose any base Docker image and, therefore, any language for your action. The following shell script example uses the who-to-greet input variable to print "Hello [who-to-greet]" in the log file.

Next, the script gets the current time and sets it as an output variable that actions running later in a job can use. In order for GitHub to recognize output variables, you must use a workflow command in a specific syntax: echo "::set-output name=<output name>::<value>". For more information, see "Workflow commands for GitHub Actions."

  1. Create a new entrypoint.sh file in the same directory.

  2. Add the following code to your entrypoint.sh file.

    entrypoint.sh

    If entrypoint.sh executes without any errors, the action's status is set to success. You can also explicitly set exit codes in your action's code to provide an action's status. For more information, see "Setting exit codes for actions."

  3. Make your entrypoint.sh file executable by running chmod +x entrypoint.sh on your system.
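The script itself is omitted above; reconstructed from the description (a greeting plus a time output via the then-current ::set-output workflow command), it is roughly:

```shell
#!/bin/sh -l

# Print the greeting using the first argument (the who-to-greet input)
echo "Hello $1"

# Capture the current time and expose it as the `time` output
# (::set-output was current when this guide was written; newer
# runners use: echo "time=$time" >> "$GITHUB_OUTPUT")
time=$(date)
echo "::set-output name=time::$time"
```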

Creating a README

To let people know how to use your action, you can create a README file. A README is most helpful when you plan to share your action publicly, but is also a great way to remind you or your team how to use the action.

In your directory, create a README.md file that specifies the following information:

  • A detailed description of what the action does.
  • Required input and output arguments.
  • Optional input and output arguments.
  • Secrets the action uses.
  • Environment variables the action uses.
  • An example of how to use your action in a workflow.

README.md

Commit, tag, and push your action to GitHub

From your terminal, commit your action.yml, entrypoint.sh, Dockerfile, and README.md files.

It's best practice to also add a version tag for releases of your action. For more information on versioning your action, see "About actions."

Testing out your action in a workflow

Now you're ready to test your action out in a workflow. When an action is in a private repository, the action can only be used in workflows in the same repository. Public actions can be used by workflows in any repository.

Example using a public action

The following workflow code uses the completed hello world action in its public repository. Copy the following workflow example code into a .github/workflows/main.yml file, but replace the uses: reference with your repository and action name. You can also replace the who-to-greet input with your name. Public actions can be used even if they're not published to GitHub Marketplace. For more information, see "Publishing an action."

.github/workflows/main.yml
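The workflow file is omitted above; mirroring the hello-world example this guide describes (the repository reference is a placeholder to replace with your own), it looks like:

```yaml
on: [push]

jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: A job to say hello
    steps:
      - name: Hello world action step
        id: hello
        uses: octocat/hello-world-docker-action@v1 # replace with your repo and tag
        with:
          who-to-greet: 'Mona the Octocat'
      # Use the output from the `hello` step
      - name: Get the output time
        run: echo "The time was ${{ steps.hello.outputs.time }}"
```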

Example using a private action

Copy the following example workflow code into a .github/workflows/main.yml file in your action's repository. You can also replace the who-to-greet input with your name. This private action can't be published to GitHub Marketplace, and can only be used in this repository.

.github/workflows/main.yml
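For the private case, the workflow checks out the repository and references the action by a local path (./), along these lines:

```yaml
on: [push]

jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: A job to say hello
    steps:
      # Check out the repository so the workflow can access the action's files
      - name: Checkout
        uses: actions/checkout@v2
      - name: Hello world action step
        id: hello
        uses: ./ # Uses the action in the root directory
        with:
          who-to-greet: 'Mona the Octocat'
      - name: Get the output time
        run: echo "The time was ${{ steps.hello.outputs.time }}"
```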

From your repository, click the Actions tab, and select the latest workflow run. Under Jobs or in the visualization graph, click A job to say hello. You should see "Hello Mona the Octocat" or the name you used for the input and the timestamp printed in the log.

A screenshot of using your action in a workflow

Source: https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action

