My third ever blog post was on git hooks and how I could use them to get something similar to GitHub's CI/CD pipelines, but in a plain bash script rather than YAML.
The script we ended up with at the end of that blog post was the following:
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == *main ]]; then
mkdir -p ~/Documents/edvid.net.temp
git clone ~/edvid.net.git ~/Documents/edvid.net.temp
mkdir -p ~/Documents/edvid.net.temp/node_modules
mv ~/Documents/edvid.net.node-modules/* ~/Documents/edvid.net.temp/node_modules
cd ~/Documents/edvid.net.temp
npm install
npm run build
cd ~
sudo rm -rf /var/www/html/*
sudo mv ~/Documents/edvid.net.temp/build/* /var/www/html/
mv ~/Documents/edvid.net.temp/node_modules/* ~/Documents/edvid.net.node-modules
rm -rf ~/Documents/edvid.net.temp
break
fi
done
A more detailed description of each command and what it does can be found in that blog post, so I'm not going into detail with comments here.
Since that blog post, the script has seen a few changes. I have since taught myself Docker and Docker-compose, and chose to use those tools for my website too, despite it only consisting of one "layer" for now. I opted for these tools to prepare for the site maybe gaining more layers in the future, without the process of launching the application becoming any more complicated from the perspective of the git hook. The git hook script is so much smaller after modifying it for this need.
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == *main ]]; then
cd /home/space/edvid.dev.deployable/ # No more creating temporary repo
unset GIT_DIR # clones just to build and move static
unset GIT_WORK_TREE # files into /var/www/html/ .
git checkout main # Just host from node directly (making
git pull # Apache2 listen to it first of
docker compose up -d --build # course) and do a quick checkout,
# pull, and docker compose up.
break
fi
done
A lot simpler, eh? But in terms of the checks that exist, it is still essentially the same as where we left off right after the last blog post. This solution was incredibly useful to incorporate into my workflow, but still naïve. What do I mean by that? The solution above puts some restrictions on me that I'm not a huge fan of having to abide by.
I must never push a branch whose name ends in "main" (other than main itself), and I must only ever push commits to main that actually build.
Or else everything might break. You might imagine only the second one is causing real problems. Failing to abide by the first rule mostly makes my host server spend time rebuilding the site when it didn't need to. Not abiding by the second rule will cause the host server to serve a site that didn't build correctly, leaving any visiting clients with a "site temporarily down" until I manually go into /home/space/edvid.dev.deployable/
and checkout a commit that builds correctly and run docker compose up -d --build
again.
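For reference, that manual fix looks roughly like this (a sketch; which commit to check out depends on what last built correctly):
cd /home/space/edvid.dev.deployable/
git checkout HEAD~             # or the hash of the last commit known to build
docker compose up -d --build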
In this blog post, we will be solving these issues by having the script itself able to fix broken deployments, and only listen to the branch called "main".
The branch-naming restriction turned out to be the easiest to solve, despite having the least impact on our workflow. The branch names reach our git hook as part of their $refname, which is why the initial solution looked for strings ending in "main" rather than matching "main" directly. But we can do better than that.
$ if [[ "refs/remotes/origin/main" == *main ]]; then echo "hello!"; fi
hello!
$ if [[ "refs/remotes/origin/mainer" == *main ]]; then echo "hello!"; fi
$ if [[ "refs/remotes/origin/notmain" == *main ]]; then echo "hello!"; fi
hello!
$ if [[ "refs/remotes/origin/main" == */main ]]; then echo "hello!"; fi
hello!
$ if [[ "refs/remotes/origin/mainer" == */main ]]; then echo "hello!"; fi
$ if [[ "refs/remotes/origin/notmain" == */main ]]; then echo "hello!"; fi
$
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == */main ]]; then # That was a small change
cd /home/space/edvid.dev.deployable/
unset GIT_DIR
unset GIT_WORK_TREE
git checkout main
git pull
docker compose up -d --build
break
fi
done
The second issue is a bit more involved. Here's the approach we will take to solve the problem:
If "main" is one of the branches pushed to the remote repo, pull it, run docker compose to build and launch the application, and check whether any container ends up stuck restarting. If either of those steps fails, check out the previous commit and try again; if both succeed, we are done.
Let's try to put that into the context of where it needs to sit in the script:
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == */main ]]; then
cd /home/space/edvid.dev.deployable/
unset GIT_DIR
unset GIT_WORK_TREE
git checkout main
git pull
# A loop where we run docker compose.
# Run a thing that checks if a container is stuck restarting.
# If either of those fail, checkout previous commit and
# restart at top of the loop.
# If neither fails, break out of the loop. We have a working application
# running now.
break
fi
done
We definitely want a loop that can only be broken out of. We also have 2 commands to run: if either fails, we run a recovery command and the loop starts over; if both succeed, we break out. The second command also shouldn't run if the first fails. That's gonna look something like this:
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == */main ]]; then
cd /home/space/edvid.dev.deployable/
unset GIT_DIR
unset GIT_WORK_TREE
git checkout main
git pull
while true; do
echo "command one" && \
echo "command two" && \
break || git checkout HEAD~
done
break
fi
done
I think we all know how the first command will look. docker compose up -d --build will itself exit with an error if the build fails, so it's rather straightforward. It's the second command we have to engineer ourselves. Let's get familiar with the output of docker ps.
docker ps
docker ps will show you all the running containers on your system in a neat little table with the columns "container id", "image", "command", "created", "status", "ports", and "names". Docker compose has its own ps command that narrows the shown containers down to just the ones the docker-compose.yaml (in the current or a parent directory) states it's responsible for.
As an example of docker compose ps, let's take a project of mine that actually is multilayered right now: my pizzedalieni.com project.
$ docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
alien-pizza-api-1 alien-pizza-api "docker-entrypoint.s…" api 7 seconds ago Up 6 seconds 0.0.0.0:3021->3001/tcp, :::3021->3001/tcp
alien-pizza-cronjob-1 alien-pizza-cronjob "/bin/sh -c 'cron &&…" cronjob 7 seconds ago Up 6 seconds
alien-pizza-db-1 alien-pizza-db "docker-entrypoint.s…" db 7 seconds ago Up 7 seconds 5432/tcp
alien-pizza-website-1 alien-pizza-website "docker-entrypoint.s…" website 7 seconds ago Up 6 seconds 0.0.0.0:3020->3000/tcp, :::3020->3000/tcp
Let me real quick change a random line in my Express server to something that makes no sense, which should make the container fail to start. This project is set up to have its containers restart on crash, so this will leave it stuck restarting.
$ docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
alien-pizza-api-1 alien-pizza-api "docker-entrypoint.s…" api 8 seconds ago Restarting (1) Less than a second ago
alien-pizza-cronjob-1 alien-pizza-cronjob "/bin/sh -c 'cron &&…" cronjob 8 seconds ago Up 7 seconds
alien-pizza-db-1 alien-pizza-db "docker-entrypoint.s…" db 8 seconds ago Up 7 seconds 5432/tcp
alien-pizza-website-1 alien-pizza-website "docker-entrypoint.s…" website 8 seconds ago Up 6 seconds 0.0.0.0:3020->3000/tcp, :::3020->3000/tcp
That's a whole lot of text, and we want our engineered second command to throw an error if the word Restarting is found in the STATUS column. The output of this command is not tab-separated, as one might be led to believe; those are actually spaces. This makes figuring out which words belong under which columns rather difficult, especially as STATUS doesn't start at the same character column every time. Luckily we don't have to entertain this difficult task, as Docker offers a better way to work programmatically with the docker compose ps output: docker compose ps provides a format option, and one of the formats is json! This is great because we can make use of jq. This is a tool that has to be installed separately, but it's incredibly powerful.
$ docker compose ps --format json | jq '{Status}'
{
"Status": "Up 3 seconds"
}
{
"Status": "Up 3 seconds"
}
{
"Status": "Up 4 seconds"
}
{
"Status": "Up 3 seconds"
}
And if I destroy my Express API layer again:
$ docker compose ps --format json | jq '{Status}'
{
"Status": "Restarting (1) 1 second ago"
}
{
"Status": "Up 6 seconds"
}
{
"Status": "Up 6 seconds"
}
{
"Status": "Up 5 seconds"
}
We now have a command whose output will only contain the word "Restarting" if a container truly is stuck restarting. We now need to throw an error if that word is present.
grep will be our friend here. But grep succeeds if the pattern was found, and fails if it wasn't. We want the opposite effect. Let's work towards that.
The special parameter $? denotes the exit code of the last executed command. That means our pipeline of docker compose ps --format json | jq '{Status}' | grep "Restarting" has to end here, and the command that follows the pipeline will reverse its exit code. Here's how we capture the exit code:
$ echo "Something Containing The Word Restarting" | grep -q "Restarting"
$ echo $?
0
$ echo "Something not doing that" | grep -q "Restarting"
$ echo $?
1
Or as one liners:
$ echo "Something Containing The Word Restarting" | grep -q "Restarting"; echo $?
0
$ echo "Something not doing that" | grep -q "Restarting"; echo $?
1
Now for how to reverse that. Bash provides a very convenient "compound command", namely the [[ expression ]] command. Any expression that can be evaluated as either true or false can be put here, and the command will exit with a status code of 1 (failure) if the expression evaluates as false, and 0 (success) if true. This is exactly what we need: we will consider the previous command a success if it has a non-zero exit code (it did not find a container stuck Restarting).
$ echo "Something Containing The Word Restarting" | grep -q "Restarting"; [[ $? != 0 ]]; echo $?
1
$ echo "Something not doing that" | grep -q "Restarting"; [[ $? != 0 ]]; echo $?
0
It's flipped! Now for putting all this together. Our long pipeline with docker compose, jq, and grep, together with the [[ $? != 0 ]] that follows it, is our solution to "command two", and that slot is itself part of a larger chain. This means we can't type it out as is: the moment we end the pipeline to introduce the [[ ]] command, we would also end the chain one level above.
Luckily, bash has us covered once again. We are able to create a "subshell", that is, a contained area in which a series of commands can be executed in succession, and which will be interpreted as just one command on the level above. We do this by encapsulating it in parentheses. That then finally leaves us with:
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == */main ]]; then
cd /home/space/edvid.dev.deployable/
unset GIT_DIR
unset GIT_WORK_TREE
git checkout main
git pull
while true; do
docker compose up -d --build && \
( docker compose ps --format "json" | jq '{Status}' | grep "Restarting"; [[ $? != 0 ]]) && \
break || git checkout HEAD~
done
break
fi
done
Or a fair bit more readable and completely equivalent
#!/bin/bash
while read sha1 sha2 refname; do
if [[ $refname == */main ]]; then
cd /home/space/edvid.dev.deployable/
unset GIT_DIR
unset GIT_WORK_TREE
git checkout main
git pull
while true; do
docker compose up -d --build && \
(
docker compose ps --format "json" | \
jq '{Status}' | \
grep "Restarting"
[[ $? != 0 ]] \
) && \
break || git checkout HEAD~
done
break
fi
done
This is the new git hook script for my personal projects. This is already a huge upgrade and introduces a whole bunch of new bash concepts. The future might bring an even more involved git hook as I still see room for improvements. Namely
But for now, this is it. I hope you learned something too, and this was certainly a nice break from greater projects I've worked on.
No, I'm not talking to the team behind
A project I'm currently working on for the purpose of getting comfortable with databases has given me an opportunity to take a second look at Docker. Before this project, all the solo projects I had worked on were single-layer applications, such as static web pages that don't communicate with other layers. Even my most successful project to date ("The New World's Dedicated Stat Program", which was used by real people) was still just a website which loaded data from a giant JSON file that was version controlled with the rest of the application. Yikes.
This new project of mine, for the first time in my software development journey, consists of several layers, and I am the sole developer of every layer. The project consists of a front-end, a database, and, for the sake of keeping the front-end and database isolated, an API layer as well. All of a sudden I had not one but three processes I needed to make sure were up and running before I could have a working application. Moreover, the front-end could easily crash my API layer if it requested something that didn't make sense to it. A crash shouldn't be a problem for a static API that is fast to relaunch, but I hadn't set it up to automatically restart when crashing. I didn't know how, outside of the node module "forever", and it seemed like overkill to use "forever" for a tiny 1-file, 120-line-long ExpressJS API with no other modules. Besides, "forever" would only have solved a third of my nuisances with my new workflow. I was holding out for a different hero. That hero turned out to be Docker with its trusty side-kick, Docker-compose.
Docker-compose turned out to be everything I had been needing in this layered workflow, which was still so new and foreign to me. But I wasn't gonna jump into using a tool that builds on top of another tool without first getting comfortable with the underlying one. I had to get comfortable with Docker itself and understand how I should conceive of images, containers, networks, ports, and Dockerfile syntax before I could truly appreciate and make correct use of the conveniences Docker-compose brought to the table.
And so I did. I made sure to only allow myself to do the equivalent in Docker-compose once I knew how to do it with vanilla Docker commands. The only exception being networking between containers, which I felt was a task so well suited for Docker-compose that I was comfortable starting there. This way of working turned out to feel great. Much like striking through an item on your checklist, I was able to consider a task done once I wrote a line in my docker-compose.yaml.
Docker-compose is truly a marvelous tool: being able to launch several applications at once and shut those very same ones down with a single command, being sure all the applications are launched with the correct networking settings, and being able to enforce a certain launch order by informing Docker-compose of which applications depend on which other applications. A lovely bit of icing on top is being able to tell a container that it should restart on crash. Moreover, if Docker-compose has launched a set of containers and the entire system reboots, the containers will relaunch once the system is up and running again (given that docker.service is made to launch on startup).
Along my Docker and Docker-compose learning journey, I stumbled upon a few roadblocks that I couldn't intuit my way out of. Things I was stuck on for a while, and when I found the solution, I didn't feel particularly silly for not knowing it sooner. Some true pitfalls I want to warn about, so any aspiring Docker gurus who come across this blogpost don't have to repeat the mistakes I made.
Mounts - or more specifically Docker's preferred and bespoke tool, Volumes. Volumes, unlike mounts, work independently of the directory structure of the OS and automatically associate ownership of the bound directory with the appropriate users, as well as probably a few other smart features I'm less familiar with.
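If you haven't touched Volumes yet, a minimal vanilla-Docker sketch looks something like this (the volume and container names are just examples):
# Create a named volume, mount it into a container, and inspect it
docker volume create pgdata
docker run -d --name my-db -v pgdata:/var/lib/postgresql/data postgres
docker volume inspect pgdata   # shows where Docker stores the data on the host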
Volumes (and Mounts too) are a great way to make sure containers can keep persistent data, so that if a container needs to be rebuilt or is otherwise deleted, the data it uses can survive. This is ideal for database containers, where it would be horrible to lose the database data just because there was a need for an update or a reinstall for whatever reason. Volumes are generally quite intuitive, be it in vanilla Docker or Docker-compose; at least in their basic usage, that is, with a source and target directory specified and all other options left at their defaults. For this project I defined a Volume for my database container in a docker-compose.yaml file. I used the official postgres image for my database container, and defined the volume the following way:
services:
  ...
  db:
    image: postgres
    volumes:
      - ./database:/var/lib/postgresql
Entries under volumes take the form - <source>:<target>. The official postgres image, however, declares a volume of its own at /var/lib/postgresql/data,
and Volumes get wacky when one Volume has a target that is a parent directory of another Volume's target. Don't do that. I was stuck on this for a long time, and the fix was literally changing my Volume target so my segment looked more like:
services:
  ...
  db:
    volumes:
      - ./database:/var/lib/postgresql/data
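If you want to double-check what a container actually ended up mounting (and spot any stray anonymous volume the image declares on its own), something like this helps; the container name here is just an example:
# List the mounts of a running container as JSON
docker inspect --format '{{ json .Mounts }}' my-db-container | jq .
# Anonymous volumes show up in this list with long hash-like names
docker volume ls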
CMD and RUN differences (a minor pitfall) - in Dockerfiles there is a difference between two keywords that took me a while to understand: the difference between the CMD and RUN commands. I wanted my container running the website to use a built version of my NextJS application rather than just run the dev version internally.
Unlike vanilla React, NextJS has 3 commands associated with building and running the application; they are start, build, and dev. start works differently from how you might expect coming from React land: start as we know it from vanilla React is what dev is in NextJS land. build does not build a static website for you to host statically (unless you make it do so with options); build instead builds an application deployable with node directly via npm run start. And start actually doesn't work at all in NextJS unless a .next directory has been created via the build command.
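Spelled out as the commands you actually run (assuming the default scripts NextJS generates in package.json):
npm run dev     # development server; what "start" means in vanilla react land
npm run build   # produces the .next directory with the production build
npm run start   # serves that production build with node; fails if .next doesn't exist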
Anyways, back to the CMD and RUN confusion.
I was changing
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
# except db connection somehow
COPY . .
EXPOSE 3000
CMD npm run dev
for
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
# except db connection somehow
COPY . .
EXPOSE 3000
CMD npm run build
CMD npm run start
aaaand everything broke... This is the second pitfall I want to warn against, and a truly unintuitive thing with Docker, I must admit. CMD sounds like a command you can run in some shell as you are building your way towards an image for a container to launch from. The parameters it takes are even in the form of a shell command.
It is NOT just a command in the build process of the image. In fact, only one CMD command in a Dockerfile ever takes effect; if you write several, only the last one counts. Why? Because the role of the CMD command is actually to give a container using some image a pre-defined default executable to launch when the container itself is launched. So how do we get the behaviour we were looking for? With the completely normal build-step command, of which we can have many because we are slowly building towards a Docker image that has everything we wish for: the RUN command. So we replace CMD npm run build
with RUN npm run build
and we get
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
# except db connection somehow
COPY . .
EXPOSE 3000
RUN npm run build
CMD npm run start
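One way to see the difference in practice: RUN executes while the image is being built, while CMD only sets the default process for containers started from the image, and it can be overridden at run time. A quick sketch, assuming the image is tagged website:
docker build -t website .              # the RUN lines (npm install, npm run build) execute here
docker run -d -p 3000:3000 website     # starts the default CMD: npm run start
docker run --rm website node --version # anything after the image name overrides the CMD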
Dockerfile and docker-compose.yaml have different file formats (yet another minor pitfall) - it might be an obvious statement to make, but it was still a pitfall that confused me for some time.
The docker-compose.yaml will look for an .env file and include those environment variables for you to use in the file. For example, you could save the password to a database in an .env file (which is then not version controlled), and you could make sure your database is created with that password and that the API layer uses that same password to connect to the database, all without ever exposing the real password running on the host machine. At least not to anyone who only has access to clone the repo, and not access to the host machine.
Let's run with that example.
An .env file looking like this:
DB_PASSWD=super-secret-password
would allow us to use that environment variable in yaml files like this:
services:
  api:
    environment:
      DB_PASSWORD: ${DB_PASSWD}
The above just allowed us to use an environment variable called DB_PASSWORD in the Dockerfile that builds our api image. My first intuition was to use the same syntax as the yaml file to use an environment variable in the Dockerfile too, i.e. ${DB_PASSWORD}. This was the wrong intuition. It was not a problem I was stuck on for particularly long, but I still feel inclined to stress that these two filetypes use two different syntaxes. The Dockerfile is much more shell-like, and as such, the value of an environment variable is used in a similar way. The project's API Dockerfile, in contrast to the syntax of the yaml file from before, currently looks like this:
FROM node:18 as api
COPY . .
EXPOSE 3001
CMD PASSWORD=$DB_PASSWORD node server.js
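Since the shell-form CMD expands $DB_PASSWORD when the container starts, a quick sanity check that the variable actually made it into the container could look like this (assuming the service is called api):
docker compose run --rm api sh -c 'test -n "$DB_PASSWORD" && echo "password is set" || echo "password is missing"'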
Port pitfall - This point might make a few people who have already worked with networks in Docker or Docker-compose roll their eyes. Still, I want to bring attention to it, because it sure had me stuck for a while until I understood how I should conceive of networks when working with Docker. My initial strategy was to make a container for each layer of the application, have an appropriate port published for each layer, have them all join the same network, and hope it would all work the same as when all layers of the application were running in the same environment, aka my desktop PC.
That was the dream. The dream that was broken, the bubble that was burst. Networking with Docker is not quite that simple, but it is still simple. I had the appropriate ports open in every layer of the application, and every layer was trying to interact with the other layers with localhost:<some_port>, aka with localhost as the host. This worked great when every layer was running in the same environment, but now everything was containerized.
I will explain with diagrams.
I had every container publish the port that it can interact over, and every container was expecting to find whoever it wished to communicate with on the localhost host. Essentially leaving me with this setup.
What I actually needed
Your first intuition might have been to do something more like this
A plausible first intuition.
It certainly was also my first intuition. The reason this method won't work has to do with a little quirk of modern front-end frameworks. React does its best to have as many components of the front-end as possible rendered on, and served from, the servers hosting the application. But a lot of the time you'll find yourself needing a feature that makes no sense to have pre-rendered on the server, and needs to be managed by the browser that is loading the web page. These features tend to be things like state, where the client's actions can influence the rendering of a certain component. The components that have to be managed by the browser they are loaded in are known as client components. This little quirk puts the front-end in kind of two positions. When a client visits a webpage with their browser, that browser is not on the same network as the 3 layers of the application I'm building. If the way in which the webpage layer communicates with the API layer depends on those layers being on the same network, no client component could ever request anything from the API. The API does not exist as far as the component running in the client's browser is concerned.
On the flipside, if we build our application such that the way the website connects and communicates with our API works independently of whichever browser on whichever network loads our site, we can once again have our API requests take place, even in components managed by the client.
Alright, bad intuition rant over. How do we actually make the db available to the api? Simple! (I promised it was still simple.) It already is available by default when they are on the same network. In Docker-compose you essentially have two options for how to communicate with other containers on the same network.
Say we have our DB container and an API that wishes to connect to our DB with a connection string. We can specify an IP for our DB to use on the network that is shared between the DB and the API layer. That IP can then be used in place of localhost in the connection string back in our API layer. Something like this,
where the DB is created in the docker-compose.yaml file:
db:
  image: postgres
  ...
  networks:
    pizzanet:
      ipv4_address: 10.56.1.22
const pgp = require('pg-promise')();
const databaseConfig = {
  host: '10.56.1.22',
  port: 5432,
  database: 'pizze_dalieni',
  user: 'postgres',
  password: process.env.PASSWORD, // You best believe I keep my password in an environment variable
};
const db = pgp(databaseConfig);
The second option is even easier. We can forget about specifying an IP at all, and let Docker-compose handle it all for us. In place of the IP address in the connection string, we are allowed to just use the name we have given our service. How cool is that!
Where the DB is created in the docker-compose.yaml file:
db:
  image: postgres
  ...
  networks:
    - pizzanet
const pgp = require('pg-promise')();
const databaseConfig = {
  host: 'db',
  port: 5432,
  database: 'pizze_dalieni',
  user: 'postgres',
  password: process.env.PASSWORD, // You best believe I keep my password in an environment variable
};
const db = pgp(databaseConfig);
Networking in Docker really isn't that bad once you know this. But you'd be scratching your head a whole lot before figuring it out.
I also feel inclined to point out that, while writing this blogpost, I have since changed my 3 layers so they are not all on one shared pizzanet network. Containers are allowed to be part of more than one network, and as such I could make a frontend network that the API and webpage containers share, and a backend network that the DB and API share. This way, not only doesn't the webpage try to communicate with the DB directly, it isn't even on the same network.
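The same separation can be sketched with vanilla Docker commands, if you want to sanity-check the mental model (the image names here are placeholders):
docker network create frontend
docker network create backend
docker run -d --name db --network backend postgres
docker run -d --name api --network backend alien-pizza-api
docker network connect frontend api      # the api now sits on both networks
docker run -d --name website --network frontend alien-pizza-website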
Env pitfall - A final quirk of Docker-compose I want to bring attention to is that if you're not messing with environment variables, there's a good chance you won't run into any issues running docker-compose up in subdirectories of your project. That was certainly the case for me, which is why I found it so surprising when I learned it all of a sudden mattered once I was messing around with environment variables. One of the reasons I was stuck as long as I was on the whole environment-variable syntax of yaml versus Dockerfile was that I still wasn't seeing it work even when I was using the correct syntax in each corresponding language. And so I went on to try something different.
Little did I realise it was because I was running docker-compose up from within one of my subdirectories, in which every part of the composition process went just fine except for a little warning about each environment variable that wasn't set correctly. I did not understand why these variables refused to be set or what I was doing wrong, so I essentially gave up and blamed it on some obscure Docker bug that might get fixed after a restart. Lo and behold, the restart seemed to fix everything. My application was running smoothly and the warnings about the environment variables that refused to be set were gone. I was content and continued working on the application. Little did I know that the reason it all worked this time around was that I had started work in the root directory of my project; I soon cd'ed my way down into different directories as I was working and BAM, it was all just as broken as before my restart. I feel lucky that the first place my mind went was to check which directories I had been able to run the command successfully from before, and in which directories I hadn't. I feel very lucky that this debug session didn't last longer, cuz it sure feels like one that could have lasted a whole lot longer.
I hope everyone reading about this pitfall remembers this one thing: Docker-compose looks for the .env file in the directory you run docker-compose up from, not in the directory where the docker-compose.yaml lives.
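In other words (the paths here are made up purely for illustration):
cd ~/projects/pizzedalieni && docker-compose up -d        # the .env sitting next to you is found; variables resolve
cd ~/projects/pizzedalieni/api && docker-compose up -d    # compose still finds the yaml in the parent directory, but not your .env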
docker-compose down (much like docker stop <container-name>) has a default behaviour of sending SIGTERM to the containers docker-compose is responsible for. After a default grace period of 10 seconds, the command will force those containers to shut down with SIGKILL. For reasons unbeknownst to me as of writing this, my own containers refuse to respond to SIGTERM. I refuse to wait 20 seconds for my containers to shut down, especially during development and testing, and especially for containers where I know they can handle not having a graceful shutdown. I want a command that shuts down all my containers fast so I can boot them up again with the new, updated content from the current development session. This is where the -t flag comes in. This flag allows me to specify that grace period. Hurray! I no longer have to wait 10 seconds for each container! I can wait 1! Or perhaps... just 0? Yes, just 0!
The command docker-compose down -t 0
truly makes me feel like a Saturday-night cartoon villain who defies every viewer's expectation by setting the timer of whatever evil device they have created to exactly 0, giving the hero absolutely no chance whatsoever at stopping the destructive power of the device and saving the day. No tension, no suspense, no story, no good ending.
"The docker containers were already destroyed before you even knew that was my plan, Mr. Bond" - me whenever I finish running docker-compose down -t 0
Speaking of getting those containers up and running again with updated content: the default behaviour of docker-compose up is to reuse the containers that we just shut down. This is not going to include the new content we just made! If you google this issue, you are likely to stumble upon a forum post recommending the --force-recreate flag. I would actually not recommend that flag at all. Sometimes you should just read the manual instead of looking at forum posts, even if they ask the exact same question you are asking. On the Docker website, we can find a description of the two options/flags in question.
It isn't even the images that are being recreated with the --force-recreate flag; the contents of the images are unchanged. --build is the way to go.
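In short, the distinction looks like this:
docker-compose up -d --build            # rebuild images whose source changed, then recreate the affected containers
docker-compose up -d --force-recreate   # recreate containers from the images you already have; nothing gets rebuilt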
These were the pitfalls I want to warn others against. They are very annoying to be stuck in, and if I can provide the right guidance for people learning Docker, such that they understand the concepts without getting stuck in these pitfalls themselves, I'd be delighted.
I had heard good things about tmux.
Ironically, it was right around the time I had already phased out much of my use for the good ol' standalone terminal, by integrating more of neovim's netrw and fugitive into my workflow, that I decided to give tmux another look.
And my jaw dropped. I had completely overlooked a life saver this entire time. What had I been waiting for?!? Opening a tmux pane is absolutely more convenient than :term in neovim. And yes, of course there's a mode that allows you to move your cursor around with vim-like movements and copy to your heart's content. I had been dealing with compromises this entire time when I was opening a terminal in neovim; compromises I thought were necessary to have the luxury of a terminal I can move my cursor around in and copy text from at will (this is all assuming mouseless).
There were still some things that differed in tmux from how I wish to have my terminal instances behave. I didn't want my switch to tmux to feel like apples and oranges. I wanted it to be a true upgrade to my software dev experience. I want to share with you some settings I set in order for me to feel truly at home in my new tmux, and make it a true upgrade.
# vim bind move
set -g mode-keys vi
bind-key h select-pane -L
bind-key j select-pane -D
bind-key k select-pane -U
bind-key l select-pane -R
How it feels to use vim-tmux-navigator.
The default keybinds for splitting a window into panes in tmux are very different from vim's. The two programs even disagree about what counts as a vertical or horizontal split; they have opposite terminology. Vim says a split is vertical if the line between the two windows is vertical, leaving the two windows next to each other horizontally. Tmux says a vertical split is when two panes are placed on top of one another vertically, leaving the line separating them horizontal. Confusing, I know. The keybind for splitting vertically in the tmux sense, then, is [prefix] ", and for splitting horizontally it is [prefix] %. Weird. And inconvenient, I'd argue.
I don't want to think about these differences. So redoing keybinds it is, again! Let's just use [prefix] s and [prefix] v for horizontal and vertical splits in tmux, and that's vim's definition of horizontal and vertical. That's why the rebound key "v" has the -h flag in the command it's bound to: it's tmux's idea of a horizontal split we want to bind to v.
# custom keybinds
bind-key s split-window
bind-key v split-window -h
set -g base-index 1
set -g pane-base-index 1
Now that I can safely have vim open on my desktop and simultaneously have vim open over a connection to my raspberrypi, I could find myself tempted to work on several things on several devices at once. I also heard a horror story of a junior developer accidentally running a SQL command on production instead of in their own test environment. What a scare! So I decided that if I was gonna install neovim on all my devices (desktop, raspberrypi, my phone with termux), I needed a quick way to differentiate them. I have a .dotfiles repo on github in which I keep track of all my configs for several applications, including neovim, and I intend all my devices to share the same configs, so I need a different way of differentiating these environments. I ended up with the following idea:
This is a snippet of lua code in my neovim config
local _solidBgColor = nil
local solidBgColor = function ()
  if _solidBgColor == nil then
    local ok, mod = pcall(require, 'vimbgcol')
    if not ok then _solidBgColor = grabColor('draculabgdarker', 'bg')
    else _solidBgColor = mod end
  end
  return _solidBgColor
end
There's only really one line we need to concern ourselves with. All the other lines are just implementing a fallback and ensuring there's only ever one file read, no matter how many times the value is requested. The line I want to draw attention to is:
local ok, mod = pcall(require, 'vimbgcol')
in fact, if we didn't have to concern ourselves with errors and a fallback, the line would look like this:
local _solidBgColor = require('vimbgcol')
This line looks for a file at the path ~/.config/nvim/lua/vimbgcol.lua (at least on Linux). This file is the one I won't have in my .dotfiles repo on github, and which I will actually create on each device I intend to work on. The file literally just contains a single lua line, and it returns a hex colour code:
return "#162016"
This way I can very quickly know which device I'm on in a vim session. Minimizing scary mistakes like the aforementioned horror story. Yikes.
Me with two neovims open. Sharing the exact same config file but clearly looking different on my desktop and my raspberrypi.
I feel a lot safer with a neovim background that can differ per device, but it still leaves me vulnerable when a pane is just a terminal, or if my neovim has its background removed (I have a keybind for that; transparent editors have their uses when you don't have an epic gamer setup with 3 monitors). I want a solution for each of my panes: a way to signify to myself that a pane I'm in is one in which I should tread carefully, such as being ssh'ed into an important server, or messing with production in some way. Or, perhaps more foolproof, to tread carefully everywhere unless I have marked a pane as safe for me to fool around in. Whichever the case, my solution so far is to set the background colour of a pane.
Tmux also allows you to run commands on it, just like vim has a command mode. A command in tmux can be written by typing [prefix] : . The background of the current pane can then be set with
:select-pane -P "bg=#304030"
Or any other color that your heart desires.
Pane coloured with the command above
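The same thing also works non-interactively from a shell prompt inside tmux, in case you'd rather script it or bind it to a key later:
tmux select-pane -P 'bg=#304030'   # colours the pane this shell lives in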
It's worth noting that I also have the following line in my tmux configuration file, to allow me the full range of RGB colours rather than the default 256 XTerm colours.
set -ag terminal-overrides ",$TERM:RGB"
These things have significantly improved my tmux experience and made me feel safe while working in it. I can definitely recommend tmux to anyone else who finds themselves doing a lot in the terminal. Tmux is a luxury I had missed out on because I thought vim's :term was the only and best luxury I could ask for. Tmux was a lovely addition to my workflow. I won't be going back anytime soon.
Alternative title: A really silly AI and a quickfix list, name a more iconic duo, I'll wait
When I'm not working on my web dev journey and expanding my skillset for what I intend to be used for my professional life, I dabble in game development. I have a rather big project idea that's on the back burner for now as I have been more determined to work on landing a job full time in web development. But I intend to game dev a fair bit more in my free time once I am paid full time to be the best web dev I can be.
I have most of my experience in the Unity game engine, and that's also where the current project I have been working on lives. Unity had a bit of a scandal in the autumn of 2023, and I have since been set on getting into the Godot game engine instead. But switching working environments takes a lot of practise. So this week, as I've been sick and too overwhelmed to work on the web development portfolio project in the works, I have instead been rediscovering my love for game making, playing around in Godot and getting into the nitty gritty of learning their gdscript scripting language. A task I dreaded getting started with, but what better time to learn an entire programming language than when sick?
You're not forced to use gdscript in Godot, however. C# is the other option, a programming language I'm much more familiar with, but I wanted to get a proper perspective and know my choices with the engine. Maybe there's a good reason why the Godot team chose to develop such a bespoke language for their engine. Maybe the development feel fits the engine as a whole better than C# does.
So I've been practising Godot and gdscript for the past week or so, and so far I'm loving it; the engine a fair bit more than the scripting language. I have my qualms with it, the same way I have with python. I have the same take on python as I do on javascript, lua, and now gdscript: I miss strictly typed and scoped variables, methods, and classes. Too many things are enforced only culturally, with "good practices", but at the same time they're super good first programming languages, and they tend to be what I recommend to people starting out anyway, despite my qualms, mostly because it's so fast to get something up and running. Prototyping languages, if you will.
Hot take out of the way: Neovim has been great to work in for Godot. This is not an editor I plan on leaving so easily. It's not that Godot's own script editor is lacking; I think it does a great job at being accessible to new programmers, but I'm very happy with an editor I can open up in the terminal thus also
I used Mistral AI to ask for ideas on how I might approach storing a style guide for an entire team in Godot. Godot has its own style guide that they recommend you follow when writing gdscript, heavily based on python's style guides. I had two tiny modifications to that style guide that I wanted to express somewhere for everyone on a team to see. Here's what I asked, and what I got.
Mistral AI available all from within Neovim!
Look at its little gdscript snippet!
Yay! gdscript code right there for me to copy! Aaaand, whoops... seems it doesn't compile. Did the AI produce bad code, or did I mess something up somewhere else, and this should actually be completely fine syntax? After some googling to make sure I hadn't missed some gdscript syntax, I determined this code snippet needed a little modification from what I was initially given.
In fact, it needed the same modification on every line that had an error... An idea struck me. Yes, yes, it's only 12 lines, and they all appear in the same file. But hey, practise automation where you can! I knew of the quickfix list but hadn't really found a good opportunity to use it. I rarely find myself with code that breaks the exact same way in many, many places at once; something I expect to happen a lot more if I integrate AI usage into my workflow (or perhaps more realistically, if I'm tasked with changing an API in a large code base someday and a convenient code action isn't available to me).
Alas, it was quickfixin time.
I could arguably have used _ instead of 0 in the cdo command. And no, I didn't have to start by filtering those errors; I already had them all there, and all of the same type. I could have just put them all immediately into my quickfix list. I just wanted to show that you can filter.
Isn't that just elegant?
I wish I knew of an equivalent tool in vscode, so I had something to recommend the vast majority of developers. But this remains a vim exclusive in my head for now.
Oh, it all went too fast? Ok, let's break it down.
The first window you see pop up in front of my open file is a Neovim telescope window querying diagnostics, meaning errors, warnings, and even style guide suggestions if, for example, you're working in js/ts and have a very opinionated eslinter.
With <C-q> I can put all matches in a telescope window into a quickfix list. The quickfix list is the second window that pops up below my file, with that strong green highlight on its top line. This list allows you to perform commands on all items in the list. You do that with
:cdo {command}
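Concretely, the command that ran over the whole list here was the following (note the trailing space after var):
:cdo norm! 0ivar 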
That leaves us with "norm! 0ivar " to explain. Yes, the trailing space is on purpose. norm is a command we can use to execute a vim action in normal mode. An action is literally any series of keyboard inputs we perform in vim. Since vim is so keyboard centric, this includes anything we can do in the editor. And so we have
norm! {type type type as if you were editing all the matches lines at once}
The ! added there is mostly muscle memory. It makes sure to interpret my keyboard input without the set of mappings I have in my neovim setup. A bit of a happy accident, as I had no real reason to use ! here, but since I'm writing a blogpost for the wider world, I might as well restrict myself to input that will make the command execute the same regardless of people's differing vim setups.
Ok, now onto the last part, "0ivar ". This is the part that made "var " appear at the beginning of every line. If you're familiar with vim movements in any capacity, I think you can figure out what this line does here. If not, and this is actually the blogpost that made you consider moving to vim at all: cool! I've never had that impact on anyone's life! I won't leave you in the dark! Vimmers, you can ignore the following paragraph.
The "var " portion of "0ivar " is quite trivial; that's what lets me insert the text at all. But vim is modal, and I am not in "insert" mode; I'm in "normal" mode (as per the norm command). Normal mode is the mode vim launches in, and is where all the magic of vim can happen: moving the cursor, executing commands, etc. Pressing 0 while in normal mode will move the cursor to the beginning of the line. Pressing i in normal mode gets me into insert mode. So for every line in my quickfix list, I am moving to the beginning of it and inserting the text "var ". At this point in your journey, vim movements will feel very esoteric to you, and they mostly are. It takes a lot of practise to be comfortable in vim, although I truly believe it's worth it, not just for party tricks like these but also for everyday editing. But this little cool trick I just showed off might be a while away from you still. I'd be honoured if this blogpost is what made you consider neovim as your editor, but I can't recommend integrating these two powerful tools into your workflow until you have become comfortable with vim movements, cuz everything else will give you a headache. Spend a week or two learning vim movements, and come back to this blogpost once you're ready to experience the power of the quickfix list.
And to the vimmers who are considering integrating this iconic duo into their workflow: it might take some time before it all clicks for you, but once the cdo command comes to you faster than fixing the same bug in 10 places would have, it becomes a powerful tool that truly makes you feel like a grand wizard.
I had known about GitHub's CI actions for a while, but never really got into them. I hadn't found a good use for them on any of my projects, and so never really went hands-on with them. There is, however, one project that is neither hosted on GitHub nor has its remote repository on GitHub. I wanted to figure out how to deploy the newest version of my website whenever I commit something new to the main branch of my repository.
At the beginning of my journey, the webserver for this website, as well as the remote git repository for it, were hosted on the hosting service One.com, which I also had ssh access to.
I wanted to see if there was a way to do what Github CI actions could do, but just with vanilla git. There is! They're called git hooks, and they're quite intuitive to use. Just write a shell script that you want to execute whenever some given event occurs, and save it to <path to remote repo>/hooks/<given event name>. In my case, that's edvid.net.git/hooks/post-receive.
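One practical detail that's easy to miss: git only runs a hook if the file is marked executable, so after saving the script (wherever your bare repo lives) you'll want something like:
chmod +x ~/edvid.net.git/hooks/post-receive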
I was ecstatic! And I thought I was close to the end of my journey. Little did I know this was only the beginning.
I ran the commands I usually do one last time for building my
At this point I spent a few hours going down a rabbit hole in the git documentation (which is already a heavy read; I beg for someone to rewrite the git documentation), hoping to find a solution there. I was convinced that I had either misunderstood what the git hook file should contain, or that the post-receive hook doesn't trigger in all the cases I thought it did. Turns out neither was the issue, and I had understood it correctly the first time around. The culprit was actually One.com itself! One.com doesn't allow its users to execute arbitrary scripts on their servers. I don't exactly fault the peeps at One.com for this decision, I don't know if I would have trusted users with the same power if I ran a similar service, but alas, my heart was broken. Our lifestyles were incompatible. I had to move on.
And so the real adventure began. Moving the remote repo onto my Raspberry Pi rather than on a One.com server was trivial. Making my Raspberry Pi host my website, as well as moving my edvid.net domain over to my home IP was not so trivial. Once again, because I had only just looked into
(If you have struggled with finicky domain name providers, you might relate to the stuff below. Otherwise feel free to skip my rant entirely)
This was arguably an even more frustrating bit. Nothing needed to be changed with my Apache, no new ports needed to be forwarded on my home router, no part of this server setup had anything to do with the files I had full control over. I was fully in the hands of goDaddy, and their service can be... finicky. I had bought an SSL certificate + firewall pack, but when I went to the page to add https to my domain, I only had the option to "buy". I had spent 150€ on this, I will not just buy it again! Maybe I should just go enable the firewall and see how far that will take me. I click to enable the firewall and I'm met with two choices: either I "enable firewall with https" or "enable firewall without https". Oh okay, easy choice, I will just pick with... oh... not available with your plan. That's not the way then... After another round of googling and giving up on finding a solution, I accepted that I might just have to try without https and see where that got me. Lo and behold, that was the right thing to do, because now the page where I add https no longer only said "buy"; it allowed me to just enable! Once I enabled that, I was met with a screen that told me it had failed, and that I needed to contact customer support. I felt like I had failed. This was it. I can't do this 100% on my own, I need help from the goDaddy support team. I tried one last thing before calling in, and that happened to work! On the page to enable the firewall I had bought, I could choose for it to use http or https. I thought this meant the choice between visitors to my site needing to go to http://edvid.net or https://edvid.net. That's not what it meant. The firewall settings were only interested in whether communication with my Raspberry Pi was via http or https. So I changed that, and that was somehow it! https://edvid.net was now loading! All without calling support, despite what they themselves said.
This kind of flimsy behaviour from the goDaddy website; long loading times, errors telling me to go get support, debugging tools that give me different information about which things are correctly enabled on my domain every time I reload the debugging tool's page, and DNS servers being updated slowly; all added up to an experience of setting up my own server that I really wasn't fond of. But I've learned some valuable lessons along the way. I know how to set up a domain with https on goDaddy despite how well they communicate, and I also now know to test whether Apache and my forwarded ports work correctly by accessing my site via its direct IP. That way I can rule out my own setup as the issue, as opposed to a slow and flimsy service that provides me the actual domain name.
With that rant concluded, I can perhaps continue again on the main point of this blog post. We weren't here to complain about setting up your own web-server and the hardships there. We were actually talking about the much more exciting prospect of automatically deploying your web-server (or any other service) using git hooks!
After having figured out all the server setup stuff, I was ready to tackle updating the web server whenever something is pushed to the main branch.
So I made a small edit to my website on the main branch and pushed the changes to my remote repo on my Raspberry Pi, and the git hook immediately got busy! I was stoked! Now, this script only had instructions for how to build from a non-bare repository, so after modifying it to temporarily clone the remote repository to another location on the same device and make a build there, I had a fully working script that deployed the newest version of the website whenever something new was pushed to the remote repository.
#!/bin/bash
# We create a new appropriately named temp folder/directory.
mkdir -p ~/Documents/edvid.net.temp
# We clone the repository into the directory so it has all files in our codebase.
git clone ~/edvid.net.git ~/Documents/edvid.net.temp
# We make a new directory in this codebase for all the modules our react app uses.
# There's a lot so they're not part of the git repository, but are essential for building the react app for production.
mkdir -p ~/Documents/edvid.net.temp/node_modules
# Node modules take a long time to install, so to save time, all modules are saved to another location before
# this temp directory is destroyed. That save location is the directory we are taking from now.
mv ~/Documents/edvid.net.node-modules/* ~/Documents/edvid.net.temp/node_modules
# Go into our temp folder to...
cd ~/Documents/edvid.net.temp
# ... install any modules that are new and therefore weren't in the saved node-modules directory ...
npm install
# ... and to actually build the react app for production.
npm run build
# Go back out of this temp directory.
cd ~
# Clear the directory that holds what the web-server serves.
sudo rm -rf /var/www/html/*
# Move the ready-built react app into the directory that the webserver serves.
sudo mv ~/Documents/edvid.net.temp/build/* /var/www/html/
# Save all currently used node modules to a save location before destruction of the very first temp directory we made.
mv ~/Documents/edvid.net.temp/node_modules/* ~/Documents/edvid.net.node-modules
# Destroy temp directory.
rm -rf ~/Documents/edvid.net.temp
Now, of course I also had to make sure it only triggers this for main branch, and not all the other branches I might make and push to the remote repo. To achieve this I did the following:
#!/bin/bash
# For every updated ref in a push, a line is passed to the post-/pre-receive hooks in the form of
# <oldrefsha> <newrefsha> <refname>. So we iterate over every ref in the push to make a check
while read sha1 sha2 refname; do
# This is our check. The refname contains, among other things, the name of the branch we're updating.
# That's what the asterisk means in "*main"; I'm matching any refname that ends in main. We're mainly looking for
# the refname "remotes/origin/main" but I want this script to work regardless of what you might have called your remote origin.
# If this is not what _you_ want, you can absolutely write "*/origin/main" instead.
if [[ $refname == *main ]]; then
mkdir -p ~/Documents/edvid.net.temp
git clone ~/edvid.net.git ~/Documents/edvid.net.temp
mkdir -p ~/Documents/edvid.net.temp/node_modules
mv ~/Documents/edvid.net.node-modules/* ~/Documents/edvid.net.temp/node_modules
cd ~/Documents/edvid.net.temp
npm install
npm run build
cd ~
sudo rm -rf /var/www/html/*
sudo mv ~/Documents/edvid.net.temp/build/* /var/www/html/
mv ~/Documents/edvid.net.temp/node_modules/* ~/Documents/edvid.net.node-modules
rm -rf ~/Documents/edvid.net.temp
# If we have already done one build update this push, we can escape the loop. We don't need to build
# several times.
break
fi
done
And that was it. I finally had automatic deployment of my server. The above script could of course be modified to not delete and recreate the temp git repo and the separate node_modules directory, to save a little time in the hook's execution, but I actually prefer to clean up after myself and not occupy space outside of when I'm executing the script.
The last month, my focus has been elsewhere than expanding my repertoire of skills or familiarising myself with more technologies. My Github commit history also looks quite silent. Where have I been? I have been on a grand quest towards minimalism, ridding myself of distractions and taking full control of my computer. It's a journey I had been putting off for some time; I mostly had some questions I needed answers to before I dared tackle the switch. This blogpost will come in three parts: my reasons for being unhappy with the current system, what I'm switching to instead, and which questions I needed answered before I made the switch.
A code editor with a lot of visual elements that you either can't access without a mouse, or whose shortcuts I at least don't know. If those elements/actions have shortcuts, there's no convenient way of looking them up. Loads of visual elements present at once that, although most can be closed, are open by default and at launch. There's an extension available called 'VSCodeVim' that allows for 90% of the movements possible in vim, which would be good enough if not for my final qualm: the editor itself is limited in its customizability.
Moving away from VSCode and switching to Neovim!
I didn't know if Neovim had everything I needed and wanted. For starters, I didn't know how to view a tree of the commit history that was as pleasant to look at as the one offered by the VSCode extension 'Git Graph'. I would be fine with something non-GUI, just pure coloured terminal output, but the default 'git log --graph' is really ugly, and I'd really miss something as pretty as the 'Git Graph' extension for VSCode. I ended up finding out how to configure the output of 'git log --graph' to my own liking, and I've grown really fond of the fact that it's text output, which means I can jump around in the output of the git command with vim movements and searches all I want, or I could even grep for stuff if I felt like it. The output is in my hands.
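I won't claim this is my exact format string, but an alias along these lines gives the idea: a coloured, decorated graph that is just text you can search, scroll, and pipe like anything else:
git config --global alias.graph "log --graph --oneline --decorate --all"
git graph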
Another thing that held me back from switching sooner was just how comfortable I was with vim movements and commands. I had been editing a few files with Vim before, mostly on my Raspberry Pi that I access via SSH, but I had a long way to go before I felt comfortable enough using pure vim for my everyday text editing. I knew the solution from the beginning; practise makes perfect. So I installed Neovim weeks before I committed to the actual switch, and I forced myself to use it where I could, and only go back to VSCode if things were going uncomfortably slow (still with VSCodeVim installed).
A final hesitation of mine was how to get something similar to VSCode's search across files (ctrl+shift+f) in Neovim. I fell in love with the fuzzy search of telescope that comes with
A corner of the desktop that, if hovered, brings up sponsored news taking up a fifth of the screen. A start menu where a tenth of the icons are not applications you put there yourself, but advertisements. An operating system keeping things secret from you because it's proprietary. Processes that you have no way of shutting down as they're essential to the OS, but with no way to figure out their exact role or source code no matter how much digging you do. An OS without an easy way to tell it "I know what I'm doing" and delete a file even if a process is using it; always a wrestle with your own OS.
Using a free and open-source OS:
I knew I needed Linux but I had yet to commit to a specific distribution of Linux (
When all that was figured out, I was ready for my switch. The switch went relatively smoothly. There were some shortcuts I needed to configure to be able to use my computer comfortably the way I used to, and there still is a weird empty rectangle to the left of all screenshots I take of the entire screen. But overall, the switch has been amazing for me. I want to mention two bonuses I got out of my switch too:
Git is a Version Control System (VCS), and possibly the most widely used one at that. VCSs in general are incredibly powerful tools that allow you to keep several save states of a project. That is not only useful for managing a team of developers, as they can all safely work on their own state of the project and sync later; it's also extremely powerful for solo projects, which I feel is heavily understated. In fact, it is SO useful for solo projects (which is all I did for most of my teen years) that if I had a time machine and was only able to give my young teen self one piece of advice, it would be this: Learn Git.
What's so great about Git? Let me answer that with a story. As a young teen I had just moved away from
The worry about the above two day-ruining, devastating pitfalls completely disappeared once I dared learn the beast that was git. I did not learn git the moment I heard of it; I was unaware of its potential outside of collaborative projects, and I deeply, deeply regret not looking into it earlier. Of all my mistakes as a young teen aspiring to become a programmer, this is the one that has definitely stifled my growth the most: I was cautious about doing the very thing that makes you learn the best, mistakes. I don't want anyone else to make that mistake. Look into git. It takes an hour to be good enough at it, and a day to be smart at it.