<![CDATA[ Linux Handbook ]]> https://linuxhandbook.com https://linuxhandbook.com/favicon.png Fri, 06 Sep 2024 17:38:16 +0530 <![CDATA[ LHB Linux Digest #24.15: New Courses Portal Launch, Sudo Tweaks, Docker Tips and More ]]> https://linuxhandbook.com/newsletter/24-15/ 66d6c8a67cebe04d0496c5e9 Tue, 03 Sep 2024 20:55:17 +0530 LHB Pro membership gives you access to 5 eBooks and 4 courses. And all this is for just $50 per year. This works for 253 Pro members but not for everyone.

Some people find the pricing too steep for the learning material they are interested in.

That's the reason why our eBooks are available to purchase individually on Gumroad.

And now, you can do the same with the courses. Behold LHB Courses portal!

Courses by Linux Handbook
Making learning accessible for everyone with lifetime access to courses at affordable prices. Courses are available on Linux, Docker, Ansible and more.

If you don't want the yearly Pro membership, you can purchase the course of your choice from the portal. Unlike the membership, the courses purchased this way will remain with you forever. No yearly payment is needed.

I have been working hard to put everything in place, but there is still much to improve.

To help me figure out whether everything works properly and what I should improve about onboarding, notifications, or anything else that makes the learning experience better, I am offering a limited-time 50% discount on all the courses.

You can obtain that by using the coupon code BETA50 at checkout time. This way, you can get the extensive Linux for DevOps course for just $14.50 instead of $29.


The coupon will be valid for a week or two maximum. So, hurry up!

πŸ“‹
Note that if you are an existing Pro member, you don't need this new portal. All of this is available on the main Linux Handbook website for you. You just have to be logged in.

Enjoy learning on your own terms with the new portal πŸ§‘β€πŸŽ“πŸ§

Here are the other highlights of this edition of LHB Linux Digest:

  • sudo tips and tweaks
  • Interesting Docker tips
  • Relevant Linux news
  • Tools and memes for Linux lovers
]]>
<![CDATA[ Difference Between Pods and Containers in Kubernetes ]]> https://linuxhandbook.com/kubernetes-pods-containers/ 66c42b79fb0b96c2a2b7e51a Wed, 28 Aug 2024 11:08:52 +0530 While working with Kubernetes, you will often come across two fundamental concepts: Pods and Containers.

While these terms are sometimes used interchangeably, they represent different entities with distinct roles in the Kubernetes ecosystem.

Understanding the difference between Pods and Containers is crucial for effectively managing and deploying applications in Kubernetes.

Kubernetes Pod vs Containers

Let's explore the differences between Pods and Containers, with detailed explanations and practical examples to illustrate their relationship and functionality.

What are pods in Kubernetes?

Pods are the smallest deployable units in Kubernetes. A Pod represents a single instance of a running process in your cluster and can contain one or more tightly coupled containers that share the same network namespace and storage volumes.

Pods are ephemeral in nature, meaning they are designed to be created, destroyed, and recreated as needed. Pods serve as a wrapper around one or more containers, providing an additional layer of abstraction that allows Kubernetes to manage and orchestrate the containers efficiently.

Key Characteristics of Pods:

  • Multiple Containers: A Pod can contain multiple containers that work together as a single cohesive unit. These containers share the same network IP address and can communicate with each other using localhost.
  • Shared Storage: Containers in a Pod can share storage volumes, enabling data to be shared between containers.
  • Lifecycle Management: Kubernetes manages the lifecycle of Pods, including creation, deletion, and scaling.

Understanding the Relationship Between Pods and Containers

A Pod is essentially a higher-level abstraction that encapsulates one or more containers. While a single-container Pod is the most common use case, there are scenarios where multiple containers are deployed within a single Pod.

In such cases, these containers are tightly coupled and share resources such as storage volumes and network interfaces.

Example 1: Single-Container Pod

Most applications deployed in Kubernetes consist of single-container Pods. In this case, the Pod acts as a wrapper around the container, providing Kubernetes with the ability to manage the container's lifecycle.

apiVersion: v1
kind: Pod
metadata:
  name: single-container-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80

In this example, the Pod single-container-pod contains a single container running the nginx web server.

Example 2: Multi-Container Pod

In some cases, you might want to deploy multiple containers that need to work closely together within a single Pod. For example, one container could serve as a web server, while another handles logging.

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: logging-container
    image: busybox:latest
    command: ['sh', '-c', 'tail -f /var/log/nginx/access.log']
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/nginx
  volumes:
  - name: log-volume
    emptyDir: {}

In this example, the multi-container-pod contains two containers: one running nginx and another running a busybox container for logging. Both containers share a volume, log-volume, where the nginx logs are stored.

Use Cases

Understanding when to use single-container Pods versus multi-container Pods is essential for efficient Kubernetes deployments.

Use Case 1: Single-Container Pod for Stateless Applications

For stateless applications, such as web servers or API endpoints, a single-container Pod is usually sufficient. This allows Kubernetes to manage the scaling and lifecycle of each instance of the application independently.

Use Case 2: Multi-Container Pod for Sidecar Patterns

A common use case for multi-container Pods is the Sidecar pattern. This involves deploying an additional container alongside the main application container to enhance or augment its functionality. Examples include logging agents, monitoring tools, or configuration managers.

Conclusion

In summary, while Containers are the fundamental units of application deployment, Pods provide the necessary abstraction for Kubernetes to manage these containers effectively.

A Pod can contain a single container or multiple containers that work together as a unit, sharing resources and coordinating their operations.

✍️
Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
]]>
<![CDATA[ Check Prime Number ]]> https://linuxhandbook.com/practice/bash/check-prime-number/ 66c6e438fb0b96c2a2b83f01 Thu, 22 Aug 2024 18:09:47 +0530 Time to practice your Bash scripting.

Exercise

In this Bash practice exercise, write a shell script that accepts an integer and then checks whether the provided number is a prime number or not.

πŸ’‘Hint: A prime number is only divisible by 1 and itself.

Some sample prime numbers are:

2 3 5 7 11 13 17 19 23 ...

Remember that zero and one are not prime numbers.

Test data

If you input 827, it should show:

827 is a prime number

If you input 21, you should get:

21 is not a prime number
βœ‹
A programming challenge can be solved in more than one way. The solution presented here is for reference purposes only. You may have written a different program and it could be correct as well.

Solution 1: Interactive bash script to check prime number

Here's a sample bash script for checking if a given number is prime or not.

Note that I used $num/2 as the loop limit. Bash performs integer division (21/2 results in 10, not 10.5), but that's fine here because we only need to test divisors, not compute an exact quotient.

#!/bin/bash

# Zero and one are not prime numbers
read -p "Enter a number greater than 1: " num

if [ "$num" -lt 2 ]; then
    echo "Number must be greater than 1"
    exit 1
fi

# Set a flag
flag=0

# Loop to find if the number is divisible by 2 or higher
# Looping until num/2 is enough, as no number greater than
# num/2 (except num itself) can divide num
# For example, 20 is divisible by 10 but cannot be divisible by 11

for ((i = 2; i <= num / 2; ++i)); do
    if ((num % i == 0)); then
        flag=1
        break
    fi
done

if [ "$flag" -eq 0 ]; then
    echo "$num is a prime number"
else
    echo "$num is not a prime number"
fi
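As a quick aside, you can verify bash's truncating integer division, mentioned above, directly in the shell:

```shell
# Bash arithmetic expansion works on integers only; the
# fractional part of a division is simply truncated.
echo $((21 / 2))   # prints 10, not 10.5
echo $((21 % 2))   # prints 1 -- the remainder is available via %
```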

Solution 2: Non-interactive bash script to check prime number

The same bash script can be modified to use an argument supplied to the script (prime.sh N) instead of the read command that makes it interactive.

#!/bin/bash

num=$1

# Make sure an argument was actually supplied
if [ -z "$num" ]; then
    echo "Usage: $0 <number>"
    exit 1
fi

if [ "$num" -lt 2 ]; then
    echo "Number must be greater than 1"
    exit 1
fi

# Set a flag
flag=0

# Loop to find if the number is divisible by 2 or higher
# Looping until num/2 is enough, as no number greater than
# num/2 (except num itself) can divide num
# For example, 20 is divisible by 10 but cannot be divisible by 11

for ((i = 2; i <= num / 2; ++i)); do
    if ((num % i == 0)); then
        flag=1
        break
    fi
done

if [ "$flag" -eq 0 ]; then
    echo "$num is a prime number"
else
    echo "$num is not a prime number"
fi
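As a further variation (not part of the reference solutions above), trial division only needs to test divisors up to the square root of the number: if num = a * b, then one of the two factors is at most sqrt(num). Here is a sketch of that approach, using a hypothetical is_prime function:

```shell
#!/bin/bash

# Return 0 (success) if the argument is prime, 1 otherwise.
is_prime() {
    local num=$1
    if [ "$num" -lt 2 ]; then
        return 1
    fi
    # i * i <= num is equivalent to i <= sqrt(num),
    # with no floating point arithmetic needed.
    local i
    for ((i = 2; i * i <= num; ++i)); do
        if ((num % i == 0)); then
            return 1
        fi
    done
    return 0
}

for n in 2 21 827; do
    if is_prime "$n"; then
        echo "$n is a prime number"
    else
        echo "$n is not a prime number"
    fi
done
```

For large inputs this does far fewer iterations than looping to num/2, while producing the same results.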

πŸ“– Concepts to revise

The solutions discussed here use certain terms, commands, and concepts; if you are not familiar with them, you should learn more about them.

πŸ“š Further reading

If you are new to bash scripting, we have a streamlined tutorial series on Bash that you can use to learn it from scratch or to brush up on the basics of bash shell scripting.

Bash Scripting Tutorial Series for Beginners [Free]
Get started with Bash Shell script learning with practical examples. Also test your learning with practice exercises.
]]>
<![CDATA[ LHB Linux Digest #24.14: Process Substitution, Docker WebUI, K8 Editing and More ]]> https://linuxhandbook.com/newsletter/24-14/ 66c412a7fb0b96c2a2b7e4d0 Tue, 20 Aug 2024 16:24:40 +0530 Here are the highlights of this edition of LHB Linux Digest:

  • Docker image modification
  • DweebUI for managing Docker containers
  • Known hosts file explained
  • Tools and memes for Linux lovers

✨ PikaPods: Self Hosting Without Headaches

Deploy an open source application of your choice in minutes. Try it out with a free $5 credit.

PikaPods - Instant Open Source App Hosting
Run the finest Open Source web apps from $1/month, fully managed, no tracking, no ads, full privacy. Self-hosting was never this convenient.
]]>
<![CDATA[ How to Edit a Kubernetes Deployment ]]> https://linuxhandbook.com/edit-kubernetes-deployment/ 66c427adfb0b96c2a2b7e4ec Tue, 20 Aug 2024 11:13:17 +0530 Kubernetes deployments are essential components that manage the state of your application by ensuring that a specified number of pod replicas are running at any given time.

Editing these deployments is a common task, especially when you need to adjust configurations such as environment variables, image versions, or resource limits.

While you can modify the YAML file directly, Kubernetes provides a more dynamic approach to editing deployments without the need to modify the file manually.

In this guide, we’ll explore how to accomplish this using kubectl, the Kubernetes command-line tool.

Prerequisites

Before we begin, ensure that you have the following:

  • A Kubernetes cluster running with kubectl configured to interact with it.
  • A deployment already running in your cluster. If you don't have one, you can create a simple NGINX deployment:
kubectl create deployment nginx-deployment --image=nginx

Method 1: Editing a deployment with kubectl edit

The kubectl edit command allows you to open the deployment's configuration in a text editor, make changes, and apply them directly to the running deployment.

This method is quick and convenient when you need to make small adjustments.

Run the kubectl edit command.

kubectl edit deployment nginx-deployment

This command opens the deployment configuration in your default text editor (usually vim or nano). You can modify any aspect of the deployment, such as changing the Nginx image version.

spec:
  containers:
  - image: nginx:1.21.6

Once you save and exit the editor, Kubernetes automatically applies the changes to the deployment. The new configuration will take effect, and Kubernetes will update the pods accordingly.

Method 2: Using kubectl patch

For more controlled and scriptable edits, kubectl patch allows you to make partial changes to a deployment without editing the entire YAML file. This is particularly useful for automation or when integrating with CI/CD pipelines.

You can update the image of the deployment by using a strategic merge patch:

kubectl patch deployment nginx-deployment --patch '{"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.21.6"}]}}}}'

You can also use the kubectl patch command to change the number of replicas:

kubectl patch deployment nginx-deployment --patch '{"spec": {"replicas": 5}}'

This command updates the nginx-deployment in Kubernetes to scale its replicas to 5.

Method 3: Using kubectl set commands

Kubernetes provides the kubectl set family of commands, which are designed for specific tasks, such as updating image versions or modifying environment variables.

To update the container image, run the kubectl set image command and specify your desired Nginx container image.

kubectl set image deployment/nginx-deployment nginx=nginx:1.21.6

To update environment variables in a container, use the kubectl set env command:

kubectl set env deployment/nginx-deployment MY_ENV=production

To modify the resource requests and limits of a container, use the kubectl set resources command:

kubectl set resources deployment/nginx-deployment --limits=cpu=500m,memory=512Mi

This command sets the resource limits for the nginx-deployment in Kubernetes to 500m CPU and 512Mi memory.

Method 4: Using kubectl scale to adjust replicas

The kubectl scale command is specifically used to adjust the number of replicas in a deployment:

kubectl scale deployment nginx-deployment --replicas=3

This command adjusts the deployment to the desired number of replicas without modifying the YAML file directly.

Viewing the deployment status

After making changes, it’s essential to verify that the deployment is running as expected. Use the following command to view the status of your deployment:

kubectl get deployment nginx-deployment

The output shows that the deployment is up and running with the desired number of replicas.

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           5m

Conclusion

Editing a Kubernetes deployment without manually modifying the YAML file is efficient and flexible. This approach is especially useful when managing multiple deployments or automating tasks.

You can use kubectl edit for quick changes, kubectl patch for controlled modifications, or kubectl set for specific updates. These tools help you maintain your deployments effectively.

Always verify the status of your deployment after making changes to ensure everything functions as expected.

✍️
Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
]]>
<![CDATA[ How to Keep a Container Running on Kubernetes ]]> https://linuxhandbook.com/kubernetes-keep-container-running/ 66b31635fb0b96c2a2b7d11f Thu, 08 Aug 2024 14:40:26 +0530 Kubernetes containers are designed to be ephemeral and usually run a specific process that is eventually completed.

However, you might want to keep a container running to inspect its state and logs while troubleshooting an issue. Monitoring agents and web servers are also designed to run continuously as background services.

Let me show you various ways to keep a container running in Kubernetes.

Method 1: Running a Background Process

This is one of the most common and simplest ways to keep a Kubernetes container running. It involves running a process in the background which doesn't exit.

Here's an example Kubernetes Pod that uses a background process to keep the container running:

apiVersion: v1
kind: Pod
metadata:
  name: keepalive-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    command: ["tail", "-f", "/dev/null"]

In this example, the tail command with the -f option follows /dev/null, a file that discards everything written to it. Since nothing is ever written to it, tail waits forever, which keeps the container running.

You can also use something like while true; do :; done or any script that runs a never-ending loop.
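As a sketch, the shell-loop variant of the same idea could look like this (the pod name and busybox image are illustrative choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: keepalive-loop-pod
spec:
  containers:
  - name: busybox
    image: busybox:latest
    # Sleeping inside the loop avoids burning CPU on a busy spin.
    command: ["sh", "-c", "while true; do sleep 3600; done"]
```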

Method 2: Using the sleep infinity command within a Kubernetes Pod

Using the sleep infinity command within a Kubernetes Pod is another common way to keep a container running. The command instructs the container to pause indefinitely; it performs no other task and essentially just waits forever.

Kubernetes is designed to keep Pods running: if a Pod's main process terminates, Kubernetes normally restarts it. Since the sleep infinity command never ends, the Pod continues to run indefinitely.

To use the sleep infinity command, add it to the container definition in the Pod's YAML configuration:

apiVersion: v1
kind: Pod
metadata:
  name: sleep-infinity-pod
spec:
  containers:
  - name: alpine
    image: alpine:latest 
    command: ["sleep", "infinity"] 

Next, apply the changes and attach a shell (note that the alpine image ships with sh, not bash):

kubectl apply -f <your_pod_yaml> 

kubectl exec -it <pod_name> -c alpine -- sh

Though sleep infinity is lightweight and does not consume many resources, in a complex environment it is considered better practice to monitor such debugging containers to ensure they don't have an impact on overall performance.

Method 3: Using a Process Supervisor to keep Kubernetes container running

A process supervisor is a lightweight program that runs inside the container and oversees the main process of the application. It is used to monitor the health of this process and take corrective action such as restarting the process and ensuring that the container remains running even if the main process crashes or becomes unresponsive.

Kubernetes itself has built-in mechanisms to restart containers when they fail. However, if your container runs multiple processes, Kubernetes might not be aware of the individual process failures. A process supervisor monitors each process independently and restarts them as needed.

Also, in some cases, processes can become zombies when they terminate but are not properly cleaned up. Process supervisors can prevent this by ensuring that child processes are correctly managed.

To use a process supervisor like Tini, you have to add it to your container image during the image build process and run it as PID 1 in your container.

apiVersion: v1
kind: Pod
metadata:
  name: tini-pod
spec:
  containers:
  - name: node-app
    image: node:18-alpine
    command: ["/tini", "--"]
    args: ["node", "server.js"]

In this configuration, Tini is specified as the entry point command for the container. It launches the Node.js application as its child, forwards signals to it, and reaps any zombie processes. If the Node.js process exits, Tini exits with it, and Kubernetes restarts the container according to the Pod's restart policy, keeping it running.
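The build-time side of this, adding Tini to the image, might look like the following Dockerfile. This is a sketch under stated assumptions: the base image, file layout, and server.js are illustrative, and installing Tini from Alpine's package repository places it at /sbin/tini (if you instead copy the Tini binary to /tini during the build, the Pod spec above matches as-is):

```dockerfile
FROM node:18-alpine

# Tini is available as a package on Alpine; it installs to /sbin/tini.
RUN apk add --no-cache tini

WORKDIR /app
COPY . .

# Run Tini as PID 1 so it can reap zombies and forward signals.
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
```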

Conclusion

Here, I showed you three ways to keep a container running in Kubernetes. When the situation demands, you can use the method that suits you best. I hope you find this Kubernetes tip helpful.

]]>
<![CDATA[ LHB Linux Digest #24.13: Flatpak, System Calls, Docker Compose Tips and More ]]> https://linuxhandbook.com/newsletter/24-13/ 66b197e28bbf5e05c5a6a361 Tue, 06 Aug 2024 18:14:30 +0530 Although Flatpaks are typically used on desktop Linux and we focus more on the server side of Linux, we have created a section on using the Flatpak commands. It's good to know about the newer packaging formats, after all.

Introduction to Flatpak
The universal packaging system is popular among developers and desktop Linux users.

We are working on a Kubernetes course next. It will probably be out by the end of September or mid-October.

πŸ’­ What else do you get in this edition of LHB Linux Digest:

  • Syscalls
  • Docker Compose
  • Docker Container Monitoring
  • Tools and memes for Linux lovers

✨ Get Trained with a Mentor

In the spirit of the Olympics, the Linux Foundation is offering a discount on its instructor-led training. Get trained with a mentor.

]]>
<![CDATA[ Updating Flatpak Packages ]]> https://linuxhandbook.com/updating-flatpak-packages/ 66a68cbe8bbf5e05c5a69948 Tue, 06 Aug 2024 15:46:57 +0530 While users tend to perform system updates now and then, they often forget to update packages handled by a different package manager (I'm one of them).

You have multiple choices on how you want to update Flatpak packages and I will walk you through the following:

  • Update all the packages at once
  • Update a specific package
  • Update a package to a specific version

Update all the Flatpak packages at once

To update all the Flatpak packages at once, all you have to do is execute the flatpak command with the update flag as shown here:

flatpak update

To proceed further, press Y and hit the Enter key. That's it!

Update a specific Flatpak package

To update a specific package, you would need the application ID of that specific package and to do so, you can list the installed packages using the following command:

flatpak list

Once you know the application ID of the target package, enter the application ID in the following command:

flatpak update package_name

For example, if I want to update the Spotube package, then, I will use the following:

flatpak update com.github.KRTirtho.Spotube

Update a Flatpak package to a specific version

Each Flatpak package update is associated with a unique commit hash, and by using the commit hash, you can switch to that specific version.

To list available versions and their commit hash, you'd need to enter the application ID of the target package in the following command:

flatpak remote-info --log flathub <application-ID>

For example, if I want to list multiple versions of Spotube, including their commit hash, then I will use the following:

flatpak remote-info --log flathub com.github.KRTirtho.Spotube

Find the version from the given list and enter the commit hash and application ID of the package as shown here:

flatpak update --commit=<commit-hash> <application-ID>

Here's what the end command would look like:

flatpak update --commit=4fd4307d4a81093c5fa2ddba3ffbad3f7d09fa0cc66e9499e77c92b696774484 com.github.KRTirtho.Spotube

Pretty cool, right?

Wrapping Up...

In this quick tutorial, I went through how you can update Flatpak packages, including updating everything at once, updating a specific package, and updating a package to a specific version.

I hope you will find this helpful.

]]>
<![CDATA[ Removing Flatpak Packages ]]> https://linuxhandbook.com/flatpak-remove-package/ 66a9351f8bbf5e05c5a69caf Tue, 06 Aug 2024 15:41:30 +0530 The easiest way to remove a Flatpak package is to use the uninstall flag along with the application ID of the target package as shown here:

flatpak uninstall <application-ID>

Want more details? Here you have it.

Uninstall a Flatpak package

As mentioned earlier, to uninstall a Flatpak package, you'd need an application ID of the package that you want to uninstall.

To know the application ID of the installed packages, you can list installed Flatpak packages using the following command:

flatpak list

Once you know the application ID of the target package, use the following command syntax to uninstall a specific package:

flatpak uninstall <application-ID>

For example, if I want to uninstall Spotify, then I will use the following:

flatpak uninstall com.spotify.Client

Remove unused dependencies

Once you remove a specific package, it is also a good idea to remove runtimes and extensions from your system to free up some storage.

Think of it as similar to apt autoremove, which removes packages that were once installed as dependencies and are now no longer needed.

To remove unused runtimes and extensions from your system, use the following command:

flatpak uninstall --unused

Wrapping Up...

That was my take on how you can remove Flatpak packages from your system, including a clean-up trick. I hope you will find this guide helpful.

Have doubts? Leave us a comment.

]]>
<![CDATA[ Installing Flatpak Packages ]]> https://linuxhandbook.com/flatpak-install-package/ 66a627998bbf5e05c5a69855 Tue, 06 Aug 2024 15:38:54 +0530 Once you install and configure Flatpak on Linux, it is quite easy to install packages using Flatpak. The easiest way is to grab the installation command from the Flatpak package page and execute it in the terminal.

Looking for detailed steps? Here you go.

Step 1: Search for the package

You can either use the Flatpak webpage or terminal to search for packages. To search packages through the terminal, use the flatpak command with the search flag as shown:

flatpak search <package_name>

For example, if I want to search for the Spotify package, I'll use the following command:

flatpak search spotify

From the entire output, the only thing that matters for installation is the application ID, as you will be using it to install the package in the next step.

So copy the application ID of the target package.

Step 2: Install Flatpak package

Once you know the application ID, enter it in the given command:

flatpak install flathub <Application ID>

As I wanted to install Spotify, my Application ID is com.spotify.Client and I will use the following command to install it:

flatpak install flathub com.spotify.Client

Installing a package through flatpakref

When searching for a package on Flathub, if you press the Install button, it downloads a .flatpakref file for the application. Think of it as similar to a torrent file: it includes all the necessary information about the package.

To install a package through a .flatpakref file, you will have to specify the path to the .flatpakref file with the flatpak command as shown here:

flatpak install <path-to-flatpakref file>

In my case, a .flatpakref file for Spotify is located inside my Downloads directory, so I'll use the following:

flatpak install ~/Downloads/com.spotify.Client.flatpakref

Yep, it is that easy!

Step 3: Run a Flatpak package

The easiest way to run the installed Flatpak package is to run it from the system app menu. But if you prefer a terminal for everything, then first, you need to know the application ID of the package.

For that purpose, list the installed packages to find the application ID, using the following command:

flatpak list

Once you know the application ID of the target package, enter the application ID in the following command:

flatpak run <Application-ID>

For example, if I wanted to run Spotify, then I would use the following:

flatpak run com.spotify.Client

That's it!

Wrapping Up...

In this quick tutorial, I went through multiple methods to install packages through Flatpak and how to run them.

I hope you will find this helpful and if you have any queries, leave us a comment.

]]>
<![CDATA[ Installing Flatpak Packaging Support ]]> https://linuxhandbook.com/install-flatpak/ 669d44a98bbf5e05c5a63e3f Tue, 06 Aug 2024 15:36:30 +0530 Many Linux distributions like Pop!_OS, Linux Mint, Fedora, etc. come pre-configured with Flatpak so before you follow the given instructions, use the following command to check if Flatpak is already configured or not:

flatpak --version

If the above command gives you the installed version of Flatpak then you can skip the installation process. But for many users, it would give them an error saying "Command 'flatpak' not found":


So in this quick tutorial, I will walk you through the following:

  • Install Flatpak on Linux (covering every popular distro)
  • Setting up a Flatpak repository
πŸ“‹
After installing Flatpak, make sure to follow the instructions to add the Flatpak repository on Linux. Steps are written at the end of this tutorial.

Install Flatpak on Ubuntu

You can install Flatpak on Ubuntu using the default package manager apt in the following manner:

sudo apt install flatpak

Install Flatpak on Fedora and other RHEL-based distros

Yes, I mentioned at the beginning of this tutorial that Flatpak is pre-installed on Fedora and other RHEL-based distros.

But there's a catch.

It is only applicable to the newer releases, and you still have to manually install Flatpak on older versions. To install Flatpak on Fedora and other RHEL-based distros, use the following command (on older releases, substitute yum for dnf):

sudo dnf install flatpak

Install Flatpak on openSUSE

To install Flatpak on openSUSE tumbleweed or leap, you can use the zypper package manager as shown here:

sudo zypper install flatpak

Install Flatpak on Arch Linux

This is my favorite one and there's a reason. Unlike with other Linux distributions where you have to set up a repository after installing the Flatpak package, Arch does not require setting up a repository.

So all you have to do is install the Flatpak package with pacman and you're good to go:

sudo pacman -S flatpak

Install Flatpak on Gentoo

To install Flatpak on Gentoo, you first need to enable the ~amd64 keyword for the necessary packages. For that, execute the given commands one by one:

echo -e 'sys-apps/flatpak ~amd64
acct-user/flatpak ~amd64
acct-group/flatpak ~amd64
dev-util/ostree ~amd64' >> /etc/portage/package.accept_keywords/flatpak

Now you can install Flatpak using emerge:

emerge sys-apps/flatpak

Install Flatpak on Void

To install Flatpak on Void Linux, use the xbps-install command in the following manner:

sudo xbps-install -S flatpak

Install Flatpak on NixOS

To install Flatpak on NixOS, open the /etc/nixos/configuration.nix file using the following command:

sudo nano /etc/nixos/configuration.nix

Now, go to the end of the file by pressing Alt + / and paste the given line to install Flatpak on NixOS:

services.flatpak.enable = true;

Next, save the changes and exit the nano editor. For the changes to take effect, rebuild NixOS:

sudo nixos-rebuild switch

Setting up Flatpak Repository

πŸ“‹
Skip this part if you're an Arch Linux user.

Once you are done installing the Flatpak package on your computer, you need to set up a Flatpak repository.

To set up a Flatpak repository, all you have to do is execute the following command:

flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo

Now, log out and log back in for the changes to take effect.

Wrapping Up...

In this quick tutorial, I went through how you can install and set up Flatpak on various Linux distributions.

If you face any issues during the installation process, leave us a comment and we will come up with a solution ASAP.

]]>
<![CDATA[ How to Make Docker-Compose to Always Use the Latest Image ]]> https://linuxhandbook.com/docker-compose-latest-image/ 66a739008bbf5e05c5a69a0c Mon, 29 Jul 2024 12:33:32 +0530 By default, docker compose does not always pull the latest version of an image from a registry.

This can lead to using outdated images and potentially missing bug fixes or new features.

This tutorial will discuss various methods to make Docker Compose always use the latest image.

Method 1. The --pull Flag

The simplest way to ensure Docker Compose pulls the latest image is to use the --pull flag when running docker-compose up:

docker-compose up --pull always
Using pull always

This flag forces Docker Compose to attempt to pull the latest versions of all images specified in your docker-compose.yml file before starting or restarting services.

Method 2. The image: tag Strategy

In your docker-compose.yml file, you can specify the image you want to use along with the latest tag:

services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
      - "16379:16379"
    volumes:
      - redis-data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf

volumes:
  redis-data: 

This seems very easy, but it's important to note that using latest doesn't always guarantee that you will get the absolute latest image.

For example, if you had previously pulled an image tagged as latest, Docker might use the cached version unless you explicitly tell it to pull again (using the --pull flag).

The best way to handle this is to first stop and remove the existing containers and images, then pull the latest images from the registry, and finally start the containers in detached mode (-d), rebuilding them if the Dockerfile has changed (--build):

docker-compose down --rmi all
docker-compose pull
docker-compose up -d --build
Use image tags to get the latest image with docker compose

Method 3. Build Images Locally

Another common way to make docker-compose use the latest image is to build images locally from a Dockerfile. This way, you ensure you are running the latest code by rebuilding the image before running docker-compose up:

docker-compose build --no-cache
docker-compose up

The --no-cache flag tells Docker to rebuild the image from scratch, incorporating any changes you have made.
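For this to work, the service in docker-compose.yml needs a build section pointing at your Dockerfile instead of (or in addition to) a registry image. A minimal sketch, assuming the Dockerfile lives next to the compose file and the service is called web:

```yaml
services:
  web:
    build:
      context: .             # build from the current directory
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
```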

Method 4. Use Watchtower

Watchtower is a utility that runs as a Docker container itself. Its primary function is to monitor other Docker containers on the same host and automatically update them to newer image versions when they become available.

Watchtower is easy to set up. You can either run it as a standalone container:

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower

Or integrate it into your docker-compose.yml file:

services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
      - "16379:16379"
    volumes:
      - redis-data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf

  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --schedule "0 4 * * *" --cleanup --stop-timeout 300s

volumes:
  redis-data: 
Use Watchtower to get latest image with docker compose

Using the latest image is not always the best practice

While using the latest image might seem ideal, it's important to understand that it is not always the best approach, as the latest tag can be unpredictable.

The image it refers to might change unexpectedly, introducing unforeseen issues into your environment, and new image versions might contain breaking changes.

That's why it's important to follow the best practices:

  • Specific Tags: Instead of relying on the latest tag, read the release notes and use specific image tags. For example, redis:7.2.5. This gives you more control and predictability.
  • Regular Updates: Establish a schedule for updating your images to benefit from bug fixes and new features while minimizing the risk of unexpected issues.
  • Testing Environments: Always test new image versions in a staging environment before deploying them to production, especially when you use a tool like Watchtower or any other tool that automatically updates images.
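For instance, pinning the Redis service from the earlier compose file to an exact version looks like this (7.2.5 is only an example version; check the image's release notes for the current one):

```yaml
services:
  redis:
    # Exact version tag instead of the mutable "latest"
    image: redis:7.2.5
```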

Wrapping Up

It's quite easy to make Docker Compose use the latest image. However, it's important to follow the best practices to avoid breaking things, and to strike a balance between staying up to date and maintaining stability.

]]>
<![CDATA[ List All Pods and Nodes in Kubernetes ]]> https://linuxhandbook.com/kubernetes-list-pods-nodes/ 66a3798b8bbf5e05c5a6956d Fri, 26 Jul 2024 16:04:56 +0530 In Kubernetes, managing and monitoring your cluster often involves interacting with the resources it manages, particularly Pods and Nodes. Pods are the smallest deployable units in Kubernetes, consisting of one or more containers that share storage and network resources. Nodes, on the other hand, are the worker machines (physical or virtual) where Pods are deployed.

This guide will walk you through the commands to list all Pods and Nodes in a Kubernetes cluster and briefly cover how to get detailed information about a specific Pod.

Listing All Pods

Pods are a critical resource in Kubernetes, representing the deployment of your application containers.

To list all Pods in your cluster, you can use the kubectl get pods command.

1. List Pods in the Default Namespace

By default, Kubernetes organizes resources into namespaces. If you don't specify a namespace, kubectl assumes the default namespace.

kubectl get pods

This command lists all Pods in the default namespace. The output will include the Pod's name, ready status, status, restarts, and age.

NAME                             READY   STATUS    RESTARTS   AGE
nginx-deployment-54f57cf6b7-x9t4x   1/1     Running   0          5m
nginx-deployment-54f57cf6b7-z8v4j   1/1     Running   0          5m

2. List Pods in All Namespaces

To get a comprehensive view of all Pods across all namespaces, use the --all-namespaces or -A flag:

kubectl get pods --all-namespaces

This command is particularly useful for administrators who need to monitor the entire cluster's state.

NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
default       nginx-deployment-54f57cf6b7-x9t4x   1/1     Running   0          5m
default       nginx-deployment-54f57cf6b7-z8v4j   1/1     Running   0          5m
kube-system   coredns-558bd4d5db-h7v5n            1/1     Running   0          15d
kube-system   etcd-minikube                       1/1     Running   0          15d
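You can also target a single namespace other than the default one with the -n (or --namespace) flag; kube-system here is just an example:

```shell
# List pods in one explicitly named namespace
kubectl get pods -n kube-system
```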

3. List Pods with Detailed Information

To get more detailed information about each Pod, including the node it is running on and the IP addresses, use the -o wide option:

kubectl get pods -o wide

Output:

NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-54f57cf6b7-x9t4x   1/1     Running   0          5m    172.17.0.4   minikube   <none>           <none>
nginx-deployment-54f57cf6b7-z8v4j   1/1     Running   0          5m    172.17.0.5   minikube   <none>           <none>

4. Filtering Pods by Labels

Labels are key-value pairs attached to Kubernetes objects. You can filter Pods based on these labels using the -l flag:

kubectl get pods -l app=my-app

This command lists all Pods with the label app=my-app.

NAME                      READY   STATUS    RESTARTS   AGE
my-app-7f94987d5c-n8qlm   1/1     Running   0          10m
my-app-7f94987d5c-x9c8f   1/1     Running   0          10m
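Selectors can also combine several labels, with a comma acting as a logical AND. The label keys and values below are hypothetical:

```shell
# Only pods carrying BOTH labels are listed
kubectl get pods -l app=my-app,tier=frontend
```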

5. Getting Details of a Specific Pod

Sometimes, you need to delve deeper into a specific Pod's status and configuration. The kubectl describe pod command provides detailed information about a particular Pod, including events, conditions, and container statuses.

kubectl describe pod nginx-pod

This gives you detailed information about nginx-pod. This command is useful for troubleshooting and understanding the Pod's lifecycle.

Name:         nginx-pod
Namespace:    default
Priority:     0
Node:         worker-node/192.168.1.11
Start Time:   Thu, 22 Jul 2024 08:30:00 +0000
Labels:       app=nginx
Annotations:  <none>
Status:       Running
IP:           10.244.1.5
IPs:
  IP:  10.244.1.5
Containers:
  nginx:
    Container ID:   docker://1234567890abcdef
    Image:          nginx:1.21
    Image ID:       docker-pullable://nginx@sha256:abcdef1234567890abcdef1234567890abcdef1234567890abcdef1234567890
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 22 Jul 2024 08:30:10 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     250m
      memory:  64Mi
    Liveness:  http-get http://:80/ delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness: http-get http://:80/ delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4k2j (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-f4k2j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute for 300s
                             node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  1m    default-scheduler  Successfully assigned default/nginx-pod to worker-node
  Normal  Pulling    1m    kubelet            Pulling image "nginx:1.21"
  Normal  Pulled     1m    kubelet            Successfully pulled image "nginx:1.21" in 20s
  Normal  Created    1m    kubelet            Created container nginx
  Normal  Started    1m    kubelet            Started container nginx

Listing All Nodes

Nodes are the machines in your Kubernetes cluster that run your applications. Each Node contains the services necessary to run Pods, such as the container runtime and kubelet.

1. List Nodes

To list all Nodes in your cluster, use the kubectl get nodes command:

kubectl get nodes

This command provides information about each Node, including its name, status, roles, version, and more.

NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   15d   v1.20.2

2. Detailed Node Information

For a more detailed view of each Node, including internal IP addresses and OS details, use the -o wide option:

kubectl get nodes -o wide

This includes additional details such as the internal IP address and the operating system of each Node.

NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube   Ready    master   15d   v1.20.2   192.168.49.2   <none>        Ubuntu 20.04.1 LTS   5.4.0-66-generic   docker://20.10.2

3. Node Descriptions

To get a complete description of a specific Node, including all its metadata, status, and allocated resources, use the following command:

kubectl describe node app-node

This provides detailed information about the node named app-node in the Kubernetes cluster.

Name:               app-node
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=app-node
                    node-role.kubernetes.io/app=app
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"aa:bb:cc:dd:ee:ff"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/public-ip: 192.168.1.10
CreationTimestamp:  Wed, 21 Jul 2024 15:30:00 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  app-node
  AcquireTime:     <unset>
  RenewTime:       Thu, 22 Jul 2024 10:00:00 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                        Message
  ----             ------  -----------------                 ------------------                ------                        -------
  MemoryPressure   False   Thu, 22 Jul 2024 10:00:00 +0000   Wed, 21 Jul 2024 15:30:00 +0000   KubeletHasSufficientMemory    kubelet has sufficient memory available
  DiskPressure     False   Thu, 22 Jul 2024 10:00:00 +0000   Wed, 21 Jul 2024 15:30:00 +0000   KubeletHasNoDiskPressure      kubelet has no disk pressure
  PIDPressure      False   Thu, 22 Jul 2024 10:00:00 +0000   Wed, 21 Jul 2024 15:30:00 +0000   KubeletHasSufficientPID       kubelet has sufficient PID available
  Ready            True    Thu, 22 Jul 2024 10:00:00 +0000   Wed, 21 Jul 2024 15:30:00 +0000   KubeletReady                  kubelet is posting ready status
Addresses:
  InternalIP:  192.168.1.10
  Hostname:    app-node
Capacity:
  cpu:                4
  ephemeral-storage:  50Gi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16Gi
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  45Gi
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15Gi
  pods:               110
System Info:
  Machine ID:                 123456789abcdef
  System UUID:                12345678-1234-1234-1234-123456789abc
  Boot ID:                    1234abcd-1234-abcd-1234-abcd1234abcd
  Kernel Version:             5.10.0-1049-azure
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.7
  Kubelet Version:            v1.21.1
  Kube-Proxy Version:         v1.21.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-558bd4d5db-6g6gr                  100m (2%)     200m (5%)   70Mi (0%)        170Mi (1%)     18d
  kube-system                 coredns-558bd4d5db-pxgrt                  100m (2%)     200m (5%)   70Mi (0%)        170Mi (1%)     18d
  kube-system                 etcd-app-node                             100m (2%)     0 (0%)      100Mi (0%)       0 (0%)         18d
  kube-system                 kube-apiserver-app-node                   250m (6%)     0 (0%)      512Mi (3%)       0 (0%)         18d
  kube-system  

Conclusion

Listing all Pods and Nodes in a Kubernetes cluster is a fundamental skill for managing and monitoring your applications. By using kubectl, you can efficiently get an overview of your resources, filter them based on labels, and retrieve detailed information for specific resources. These commands are invaluable for debugging, monitoring, and ensuring the health of your Kubernetes environment.

✍️
Author: Hitesh Jethwa has more than 15 years of experience with Linux system administration and DevOps. He likes to explain complicated topics in an easy-to-understand way.
]]>
<![CDATA[ LHB Linux Digest #24.12: Certifications, Bash Scripting Course, Ansible Tips and More ]]> https://linuxhandbook.com/newsletter/24-12/ 66a10adf8bbf5e05c5a6417b Wed, 24 Jul 2024 22:20:37 +0530 The Docker for Beginner course is complete. It is accessible online for the Pro members. We'll also make it available in eBook format later.

Learn Docker: Complete Beginner’s Course
Learn Docker, an important skill to have for any DevOps and modern sysadmin. Learn all the essentials of Docker in this series.

πŸ’­ What else do you get in this edition of LHB Linux Digest:

  • Sar command guide
  • Crontab logs
  • A new code editor
  • Tools and memes for Linux lovers

✨ Get Certified at a Discount

To celebrate SysAdmin Appreciation Day (the last Friday of July), the Linux Foundation is running offers on its training and certifications. Certifications can be helpful for DevOps career aspirants, so take advantage of the offer and get certification exams at reduced pricing.

Linux Foundation
]]>
<![CDATA[ Restart a Single Container with Docker Compose ]]> https://linuxhandbook.com/docker-compose-restart-container/ 66a0789b8bbf5e05c5a640af Wed, 24 Jul 2024 19:37:41 +0530 The docker-compose restart command is designed to restart one or more containers defined in your docker-compose.yml file.

By default, if you run docker-compose restart without specifying any container names, it will restart all the containers in your application. You may not always want that.

During development, you may want to quickly restart a container to test code changes without waiting for the entire application stack to restart.

Similarly, there are situations when a specific container is causing performance issues and restarting it without restarting others can help you isolate the source of the problem.

The good thing is that docker compose allows you to restart specific containers and I am going to discuss this scenario in this tutorial.

Restarting a specific container with Docker Compose

To restart a single container, you simply append the name of the container to the end of the command:

docker-compose restart <container_name>

Replace <container_name> with the actual name of the container you want to restart. This name is usually the same as the service name you defined in your docker-compose.yml file.
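If you are not sure what the service names are, Docker Compose can list the services defined in the current compose file:

```shell
# Print one service name per line, as defined in docker-compose.yml
docker-compose config --services
```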

Let's say you have a web application with the following docker-compose.yml file:

services:
  web:
    image: nginx:latest 
    ports:
      - "8000:8000"
  db:
    image: postgres:latest
  cache:
    image: redis:latest

In this setup, you have three containers: web, db, and cache (Redis). If you want to restart only the web container, you will run:

docker-compose restart web
Restart specific container with docker compose
Docker Compose Restart vs specific container restart

Restarting individual containers and dependency management

Imagine you have a web application that relies on a database. If you restart the web container using docker-compose restart web, the db container won't be automatically restarted, even though the web container depends on it.

This happens because Docker Compose focuses on managing the lifecycle of individual containers.

It doesn't inherently understand the runtime dependencies between them. So, if you need to restart a container and its dependencies, you can:

Restart them individually:

 docker-compose restart web && docker-compose restart db

Or, restart them together:

  docker-compose restart web db
Restart individual containers with dependencies

πŸ’‘ You may also use the docker-compose up command, which recreates all containers, ensuring dependencies are also restarted. But if you recreate a container, you'll lose any data that is not stored in Docker volumes.

In your docker-compose.yml file, you should define a volume (or bind mount) and mount it to the appropriate directory within the container. For the db service from the earlier example:

services:
  db:
    image: postgres:latest
    volumes:
      - ./postgres-data:/var/lib/postgresql/data

Restart specific container without restarting dependencies

🚧
The docker-compose up and restart commands are different in the sense that the up command destroys the container and creates a new one. Make sure you are using volumes for your data.

In some cases, you might want to restart/recreate a container without affecting any of its dependencies.

You can achieve this using the docker-compose up command with the --no-deps flag:

docker-compose up --no-deps -d <container_name>

The -d flag here is used to run the command in detached mode (in the background).

Consider the previous example with the web, db, and cache containers. If you only want to restart the web container without affecting the database or cache, you can run:

docker-compose up --no-deps -d web
Docker compose restart without dependencies

This will recreate the web container without restarting the db or cache containers, even though the web container might depend on them.

βœ‹
If the container you are restarting has any runtime dependencies on other services, those dependencies won't be restarted and might cause errors.

Conclusion

If you make any modifications to the docker-compose.yml file, like changing port mappings or the Dockerfile used to build an image, then a restart isn't enough, as docker-compose restart only restarts the existing container. It doesn't rebuild the image or re-read the configuration from the docker-compose.yml file.

In this case, you will have to use docker-compose up, which will rebuild the image if the Dockerfile has changed and recreate the container with the updated configuration.

I hope you find this tutorial helpful in your Docker endeavours.

]]>