GitLab

Intro

GitLab is an open-source GitHub alternative whose goal is to be an entire DevSecOps platform, consolidating the tools you would otherwise use at different stages of the pipeline into one integrated suite.

Here is a list of some of the features it covers, along with popular standalone services for each feature that it would replace.

Feature                   Standalone Services (Companies)
Planning                  Jira, Trello, Monday.com, Asana
Source Code Management    GitHub, Bitbucket, GitKraken, Gitea
Continuous Integration    Jenkins, CircleCI, Travis CI, GitHub Actions, Bamboo
Security                  Snyk, WhiteSource, SonarQube, Checkmarx, Veracode
Compliance                Drata, Vanta, Secureframe, A-LIGN
Artifact Registry         JFrog, AWS Artifact Repository, Nexus Repository, Google Artifact Registry
Continuous Delivery       Harness, Spinnaker, Argo CD, Octopus Deploy
Observability             Sentry, Datadog, New Relic, Prometheus, Grafana

CI/CD

Let's get familiar with using GitLab for CI/CD by building a CI/CD pipeline that does the following:

  • Run Tests
  • Build Docker image
  • Push to Docker Registry
  • Deploy to Server

I've already covered a similar process using AWS CodePipeline in various scenarios, which makes for a useful comparison.

GitLab appears to be a good medium between Netlify (which is VERY simple) and AWS CodePipeline (which is VERY complicated), providing a nice balance between power and ease of use.

Importing Existing Git Project

GitLab has an import feature, but as of this writing it's broken, so let's do it the old-fashioned way.

In GitLab create a new repo.

Inside the project that already has a Git repo, add the new remote:

git remote add <remote_name> <repo_url>

You'll probably want to use gitlab as the remote name, and the repo URL comes from the GitLab repository you just created.

If you get merge conflicts and GitLab won't let you force push, create a new branch, push that to GitLab, then open a merge request and approve it to overwrite the main branch.
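Putting those steps together, the import looks something like this. The remote name gitlab and the URL are placeholders; substitute the URL shown on your new GitLab repository's page. The snippet works in a throwaway directory so it's safe to run as-is:

```shell
# Work in a throwaway directory so this is safe to run as-is;
# in practice you'd run these commands inside your existing project.
cd "$(mktemp -d)"
git init -q .

# Add the new GitLab repo as a remote named "gitlab"
# (placeholder URL -- use the one from your new GitLab repository)
git remote add gitlab https://gitlab.com/your-user/your-repo.git

# Confirm the remote was registered
git remote -v

# Then, in your real project, push your existing history and track it:
#   git push -u gitlab main
```
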

Pipeline

Let's create a pipeline that takes a dockerized Vite application from the GitLab repo and then builds and deploys the application.

Our CI/CD pipeline is going to do the following for us:

  • Build a Docker image
  • Docker image builds the project files
  • Deploy the project files

Just like in AWS CodePipeline, we configure the pipeline using a configuration file in YAML format titled .gitlab-ci.yml. We place this file in the root of the project directory.

The fundamental building blocks of a GitLab CI configuration file are jobs; each job is assigned to a stage of the pipeline.

image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  DOCKER_DRIVER: overlay2

stages:
  - build
  - deploy

build:
  stage: build
  script:
    # Build the Docker image using Dockerfile.production
    - docker build -f Dockerfile.production -t my-vite-app:latest .
    # Create a container from the built image
    - docker create --name my-vite-app-container my-vite-app:latest
    # Copy the built files from the container to the CI/CD workspace
    - docker cp my-vite-app-container:/app/dist ./dist
    # Clean up the container
    - docker rm my-vite-app-container
  artifacts:
    paths:
      - dist/

pages:
  stage: deploy
  script:
    # Create public directory if it doesn't exist
    - mkdir -p public
    # Move contents of dist/ into public/
    - mv dist/* public/
  artifacts:
    paths:
      - public
  only:
    - main # or your default branch

If you run into issues, you can add logging to the scripts to help troubleshoot, as I have in this version:

image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  DOCKER_DRIVER: overlay2

stages:
  - build
  - deploy

before_script:
  - docker info

build:
  stage: build
  script:
    # Debugging: Display current directory and list files
    - echo "Current directory:"
    - pwd
    - echo "Listing files in the current directory:"
    - ls -la
    # Build the Docker image using Dockerfile.production
    - docker build -f Dockerfile.production -t my-vite-app:latest .
    # Create a container from the built image
    - docker create --name my-vite-app-container my-vite-app:latest
    # Copy the built files from the container to the CI/CD workspace
    - docker cp my-vite-app-container:/app/dist ./dist
    # Clean up the container
    - docker rm my-vite-app-container
  artifacts:
    paths:
      - dist/

pages:
  stage: deploy
  script:
    - echo "Listing current directory contents before moving:"
    - ls -la
    # Create public directory if it doesn't exist
    - mkdir -p public
    # Move contents of dist/ into public/
    - mv dist/* public/
    - echo "Contents of public/:"
    - ls -la public/
  artifacts:
    paths:
      - public
  only:
    - main # or your default branch
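A side note on the only: keyword used above: it still works, but newer GitLab versions favor the more flexible rules: syntax. A sketch of the equivalent restriction, using GitLab's predefined CI variables, would look something like this:

```yaml
pages:
  stage: deploy
  rules:
    # Run the job only on the default branch (e.g. main)
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```
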

DOCKER_DRIVER

The majority of this file is straightforward, but the DOCKER_DRIVER: overlay2 bit is interesting. Docker uses storage drivers to manage the underlying filesystem and how image layers are stored and accessed. The choice of storage driver can affect performance, stability, and compatibility with various filesystems or environments.

OverlayFS is a modern, memory-efficient union filesystem that has become the preferred storage driver in Docker. It works by creating layers and is particularly efficient thanks to its copy-on-write mechanism, which copies only changed files rather than duplicating entire files. The overlay2 driver is the second version of the OverlayFS storage driver. It improves on the original by allowing more file descriptors to be opened, supporting larger numbers of layers for a given image, and generally providing better performance and stability.

Pages

GitLab Pages expects the index file for your application to be in the public folder. If index.html isn't at the root of public/, you'll get a 404 error.
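You can sanity-check that layout locally by simulating what the deploy job does. Here dist/index.html is a stand-in for the real Vite build output:

```shell
# Work in a throwaway directory so this is safe to run as-is
cd "$(mktemp -d)"

# Stand-in for the Vite build output that the Docker build step produces
mkdir -p dist
echo '<h1>hello</h1>' > dist/index.html

# Mirror the deploy job: move the build output into public/
mkdir -p public
mv dist/* public/

# GitLab Pages serves public/; a missing public/index.html means a 404
test -f public/index.html && echo "index is at the public root"
```
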
