Deep Dive Articles

The Best Way to Learn a New Programming Language from Scratch - How I Approach and Learn Any Programming Language Efficiently

Preface: The following article is based on my experiences and opinions on learning programming languages. I have been learning about computers in depth and find it useful to learn languages with specific purposes in mind. In my case, Python was for getting started, C was for memory management, and Rust was for learning to write memory-safe code. This time, I started learning Golang, which is known for its simple syntax and strong performance, and is widely known for its excellent concurrency.

Why More is Not Always Good in Terms of Software - Words on Cross Platform Utilities, Bash-ism, and POSIX Compliance.

Preface: This article is about my views on software compliance and cross-platform support, and reflects my opinions and experience with the subject. Your experience and opinions may vary, which I respect. What Am I Specifically Talking About? I am going to talk about the issues caused by non-compliant software and why more features are not always a good thing, especially when tools are upgraded for a single platform.

Why I Shifted From Arch Linux to Debian Linux

Preface: The following article is based on my personal experience with Arch Linux and Debian Linux. While I appreciate both distributions for their unique strengths and different use cases, the information provided reflects my own opinions and experiences. Your experiences may vary. What Was I Going for Initially? A Bit of My Story as a Beginner Linux User: When I started using Linux, I was exploring computer security back in high school.

Tmux is the Ultimate Choice for Power Users - An Awesome Terminal Multiplexer for Managing Persistent Sessions

What is Tmux? Tmux is a terminal multiplexer for Linux and macOS for managing terminal sessions and windows. It should be mentioned that Tmux is not a terminal emulator; instead, it's a terminal application, a binary that allows you to stay productive in your terminal. It doesn't matter which terminal emulator you are using (although I recommend the Suckless Terminal). The functionality of managing windows and sessions doesn't happen on the desktop GUI side but within the terminal session you are working with.

The Concept of RSS Feeds - A Reliable Model for Publishers and Subscribers

Preface: I have been looking for simple solutions in nearly everything related to computing. A lot of software these days is designed not around your requirements but around companies' profit. Hence, most of these solutions end up sucking your time and energy and need to be replaced by more optimised solutions that boost your productivity in the right way. RSS is something I found after searching for a way to subscribe to websites or creators that doesn't suck up resources and whose design stays reliable for the user.

Breaking RSA Encryption on Hardware Devices with Side Channel Power Analysis — Leaking the Private Key by Exploiting Square-Multiply Algorithm

Preface: This article is about leaking the private key from hardware devices that implement RSA encryption, which is part of hardware hacking. The author is not responsible for any damage caused by the given information. It is recommended to be careful while performing these attacks as they can damage the hardware or even destroy it. All the information provided here is for educational purposes. There are no strict prerequisites for understanding the theory, although knowledge of modular arithmetic, the basics of encryption mathematics, and basic electronics helps.

Serious Reconnaissance with Unmanned Aerial Vehicles — Mapping Out Devices in an Area with Drones

Introduction: The following content is for educational purposes and for hackers living in basements who know enough ethics. The author is not responsible for any damage caused by the knowledge provided here and does not support any such activity. It's essential to check the rules in your area regarding the reconnaissance tactics and materials described here (unless and until there is an apocalypse and solid recon is required).

The Fundamentals of Hardware Hacking — Breaking and Reverse Engineering Smart IoT Devices

Disclaimer — This is an introductory article about hardware hacking and the security of IoT devices. None of the mentioned information or techniques is intended for any illegal purpose and the author is not responsible for any damage. It's advisable to experiment on devices that you own or have explicit permission to work on. Above all, hardware hacking is fun! The Beauty of Electronic Devices: In the ever-growing world of smart devices and the connectivity of things to the internet, life has become more convenient than ever.

Setting Up a Remote Git Server — A Simple and Concise Step-by-Step Guide to Host a Private Git Server

Preface: This is a concise and simple guide to hosting a remote git server. I have been researching this topic for a while and came up with the idea of writing an article with a step-by-step guide for hosting a private git server. Covering all the aspects of git is not possible in a single article, so it’s assumed that the reader has prior knowledge of git and version control.

The Nature of Linux Kernel Development — Difference Between Rules of Kernel Level and User-Space Application Level

Preface: This article is intended to draw a clear distinction between the core principles of Linux kernel development and user-space application development. The provided information is based on my research on kernel development through various sources and I have tried to make it as accurate as possible. Efforts have been made to explain it as simply and concisely as possible. Introduction to the Nature of the Linux Kernel: The Linux kernel is the abstraction layer between the hardware and the software running on the system.

Linux Process Scheduling — The Reason your Linux System Processes so Efficiently (Kernel Perspective)

Preface: I was going through the book “Linux Kernel Development” by Robert Love, one of the best books I have referred to for low-level topics and understanding the workings of Linux. I study this book intently, simplify the concepts and write them down here so that readers get a straightforward description of all they need to know about the topic. Covering the whole of Linux process scheduling is not possible and is not the goal of this article.

Linux Processes — A Kernel’s Perspective Explained with Clarity and Simplicity

Preface: I have been going through the book “Linux Kernel Development” by Robert Love, which I highly recommend for understanding the Linux kernel in depth. I decided to write this article to explain “Linux Processes” simply and concisely. The topic itself is broad and is not explored to its fullest depth here, but it is essential for Linux administrators, developers and even everyday Linux users to appreciate the beauty of the kernel they make use of every day.

NGINX for Deploying Next.js Application on AWS EC2 with AWS ELB — Control and Stability of Deployments

I was looking for an article like this a few days ago, which I didn't find at that time, so I did the deployment on my own and came up with this article to save other developers those efforts so they can focus on development. I am not explaining every single step and have provided links for references. I prefer manual deployment of applications over automated (and even serverless) methods. Although they are convenient and require less effort from developers, they are bound to the providers and offer less control over the underlying system.

Configuring and Building the Linux Kernel — Absolute Guide to Compiling Your Kernel

The Linux kernel is open-source software and the user is free to modify and customise it as per their requirements. Modifying the kernel requires a deep understanding of how it works, although patches are available to optimise the kernel for specific hardware. The Linux kernel source code has various options to configure drivers, modules, preferences for hardware options, etc. This part can be studied by the user and is fairly easy to work with.

The Elegance of the Linux Kernel — A Concise History of Unix and the Creation of the Linux Kernel

Introduction and Context: I was going through the book “Linux Kernel Development” by Robert Love, an absolute guide to getting started with Linux kernel development and a highly recommended book for understanding the core of the Linux kernel. The Linux kernel is one of the most important pieces of software ever written and is even considered one of the biggest projects ever undertaken by a single person. The idea of the Linux kernel was initiated by Linus Torvalds, a student at the University of Helsinki, and it is still maintained by him to date (at the time of writing this article).

Linux Shell Scripting — A Suckless and Concise Guide to the Command-line of Linux

Prior Statements: This is a concise guide to Linux shell scripting, consolidating the facts about the Linux shell for a developer's quick reference while using Linux. I am referencing Bash (the Bourne-Again Shell), which is the default shell on most Linux-based systems. I will also be providing references and external links for diving deeper, rather than filling the article with long explanations of single topics that not all readers require.

Suckless Utilities for Arch Linux — The Most Minimal Way to Run a Computer

Suckless utilities have been my favourites for some time now and are kind of essential to my use of the computer. I have been using Arch Linux for a fair amount of time now, and I started using it with XFCE for a few weeks. I appreciate the XFCE desktop environment for its smoothness and lightweight nature, which works really well when newly shifting to Arch Linux. But then I learnt about the suckless ecosystem and eventually shifted to it as my full-time environment.

Installing Pacman in Arch Linux — When You Blow it Up

Let me keep it suckless and divide the article into two parts: my story of how I blew up the Pacman package manager, and how to reinstall the Pacman package manager. If you only care about the second part, skip the first one. The Scenario — Blow it Up: I was trying to install the pacman game from the Internet to get it running in my Arch Linux terminal (I use the Suckless Terminal, BTW). When I got it installed and played it, it was super awesome.

Boot Process of Computers — A Learner’s Perspective Of Exploring the Depth of Computers

Prior Clarifications: Here, I will be providing a philosophical explanation of bootloaders, understanding them in as simple and minimal a way as possible. This is not supposed to be a manual for a bootloader or advice for experimenting on your live system. It's my journey to understand computers (one of the most complex creations of mankind) and I will be stating my thoughts. Take it with a pinch of salt.

Networking Fundamentals for Linux Administrators — A Suckless and Concise Explanation

Statistics are clear on the fact that 96.3% of servers (at the time of writing this article) use Linux as their operating system, which is no doubt what every other Linux user on this Earth expects. I believe that the Linux administrator has to take charge of configuring networking on a Linux-based server. Some of the underlying concepts remain the same on other systems, but this is mainly intended for Linux.

Operating Systems and Low-Level Access to the Hardware — Why should you learn it?

Today, I finished reading the book “Linux Kernel in a Nutshell” by Greg Kroah-Hartman and I highly recommend that you go through it if you want to understand how to build your own custom configuration of the Linux kernel and all the nuts and bolts you need to know. It's always great to have such handbooks around the desk. This blog is about why it's so awesome to look into the operating system you are using with your hardware and why you should have a grasp of the low-level aspects of a computer.

Arch Linux Custom Builds — Freedom of the Operating System

While writing this blog, I was reading the book “Linux Kernel in a Nutshell” by Greg Kroah-Hartman and, as far as the pages of the book are concerned, it seems to be a two-day read (it is a handbook, so reading it once and keeping it around the desk is super useful). By the way, the author has left the book open for download at http://www.kroah.com/lkn/ so check that out if you want to follow along.

Bypassing the Linux Login to access the files (with Physical Access), even the root!

Imagine being away from the computer for a couple of minutes and getting to know that the system has been compromised and a backdoor has been installed. “But the system was locked?” It doesn't matter: without BIOS security measures in place (which most probably are not implemented), all the files can be accessed without ever logging in at the login screen. This goes back to the story of me troubleshooting my Wi-Fi in Arch Linux, where I was trying to upgrade the kernel of my system to get the Wi-Fi working properly (as mentioned in the previous blog).

NGINX for Deploying Next.js Application on AWS EC2 with AWS ELB — Control and Stability of Deployments

Planted January 29, 2024

I was looking for an article like this a few days ago, which I didn't find at that time, so I did the deployment on my own and came up with this article to save other developers those efforts so they can focus on development. I am not explaining every single step and have provided links for references.

I prefer manual deployment of applications over automated (and even serverless) methods. Although they are convenient and require less effort from developers, they are bound to the providers and offer less control over the underlying system. It's an absolute necessity that systems work exactly as the developer (or team in the organisation) wants them to, and no more than that. Often, “more” in this situation means more pricing, which over a long period can add up to a huge amount, and even then the extra functionality may not be good enough or even required.

A simple deployment that ensures the control of the development team and offers no more than the requirements is an efficient way to deploy applications. On top of these advantages, simplicity in design opens more doors to innovation and to developing unique solutions to the problems that come up. I believe that innovation requires freedom and control over the thing that is meant to be improved. Automated tools, which act like an invisible layer of internal deployment mechanisms, often don't let the deployment team work on their own terms and end up imposing a steep learning curve and leaving less room for self-built features.

Next.js is widely used for developing web applications due to its vast ecosystem and convenience. Although there are a lot of ways to deploy these applications, having manually operated servers provides a degree of freedom and access to the resources. AWS (Amazon Web Services) provides many deployment options (such as AWS Amplify, which is automated, or virtual machines for manual control). AWS EC2 (Elastic Compute Cloud) provides manual control over the servers with a whole degree of freedom to manage and extend their usage. Coupled with EC2, AWS ELB (Elastic Load Balancer) provides a convenient way to balance multiple EC2 instances and prevent failures caused by large volumes of requests, by distributing the load over many machines. NGINX provides the reverse proxying needed to route traffic to the right port on the server.

diagram

The combination of Next.js, NGINX, AWS EC2 and AWS ELB gives the organisation a system with high throughput, control and upgradability, ensuring efficient pricing and usage. Configuring them can be tricky in some cases, but here is a suckless guide to walk through the process.

Get an AWS EC2 Instance Running

On the AWS console, create an EC2 instance and launch it. For Next.js, my personal preference is Linux, especially Debian (although Ubuntu servers are good too and are based on Debian). Choose an EC2 instance with enough capacity and connect to it via SSH.

Refer to AWS EC2 Docs: https://docs.aws.amazon.com/ec2/
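
For reference, connecting to a freshly launched instance over SSH looks roughly like the sketch below; the key path and public IP are placeholders, and the default user depends on the AMI (official Debian images typically use admin, Ubuntu images use ubuntu):

ssh -i /path/to/your-key.pem admin@<ec2-public-ip>   # replace with your own key, user and instance address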

Build the Next.js Application

Once the application is ready for deployment, start building the project. It's not a big deal for deployment engineers whether you use npm or yarn; use whatever works best for you. As this is a production environment, you need to build the project and then run it on the EC2 instance.

npm install                                             # install the project dependencies
npm run build                                           # create an optimised production build
npm install -g pm2                                      # install the pm2 process manager globally
pm2 start npm --name "your-application-name" -- start   # run "npm start" under pm2
pm2 save                                                # save the current process list
pm2 startup                                             # generate a boot-time startup script (run the command it prints)

This fires up pm2 (a process manager for Node.js), which keeps the application running on the server in the background. The startup command makes sure the Next.js application comes back up when the server reboots. This improves uptime across reboots (the load balancer will handle failures, but it's good if the server recovers by itself).
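
To confirm that the process is actually up, pm2's built-in status and log commands are handy:

pm2 status                        # list the processes pm2 is managing and their state
pm2 logs your-application-name    # follow the application's logs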

Now the application should be running on port 3000 (the default port). To serve it on port 80 (the HTTP port), you need to proxy requests using NGINX, which forwards traffic arriving on one port to another.
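
Before touching NGINX, a quick local check (assuming curl is installed on the instance) confirms that the app is answering on port 3000:

curl -I http://127.0.0.1:3000    # a 200 (or redirect) response means the Next.js server is up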

Setting up NGINX for Proxy

NGINX is easy to configure and requires a single configuration file to get the job done. First, install NGINX on the server. For Debian-based servers:

sudo apt update
sudo apt install nginx

Create a config file your-application-name.conf in the /etc/nginx/conf.d/ directory. Here is a basic and minimal config template that you can use, which works just fine:

server {
    listen 80; # Listen on port 80

    server_name <domain-name or IP address>;

    location / {
        proxy_pass http://127.0.0.1:3000; # Proxy traffic to port 3000
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Yes, you can use an IP address in the server_name field in case you don't have a domain name yet. This is the part where I spent most of my time debugging: server_name must match whatever the user types into the address bar when accessing the application.

Finally, enable and start NGINX with the following commands (for Debian-based servers):

sudo systemctl enable nginx
sudo systemctl start nginx

This starts NGINX and also configures it to start automatically when the system reboots.
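
Whenever you change the configuration later, it's worth validating it and reloading instead of restarting; a minimal sketch:

sudo nginx -t                    # test the configuration files for syntax errors
sudo systemctl reload nginx      # apply the new configuration without dropping connections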

Set Up Elastic Load Balancer

Configuring AWS ELB can be done from the AWS console, and the docs can be referred to for the details. You just need to route traffic from ELB port 80 to EC2 port 80. The options in the ELB are straightforward and can be figured out with a few Google searches along the way. ELB also has options to add more instances in case you have multiple instances running the application.

A point to note here is that you can skip NGINX entirely and set up the ELB to forward traffic straight to EC2 port 3000 (the default Next.js port).

Refer to AWS ELB docs: https://docs.aws.amazon.com/elasticloadbalancing/
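
If you prefer the command line over the console, the same wiring can be sketched with the AWS CLI's elbv2 commands; all the IDs and ARNs below are placeholders for your own VPC, subnets, security group and instance:

aws elbv2 create-target-group --name next-app-tg --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 --target-type instance              # target group forwarding to port 80

aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=i-0123456789abcdef0                                   # register the EC2 instance(s)

aws elbv2 create-load-balancer --name next-app-elb \
    --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0

aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>   # listener on port 80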

Conclusion

This was a simple guide to setting up a deployment that is scalable, manual and provides a greater degree of control than automated deployment tools. It can be integrated with external tools like GitHub Actions to create a CI/CD pipeline.
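
For instance, a pipeline could SSH into the instance and run a small deploy script along these lines; the repository path and application name are hypothetical placeholders:

#!/bin/sh
# Hypothetical deploy script run on the EC2 instance by a CI/CD job
cd /home/admin/your-application || exit 1    # path to the checked-out repository (placeholder)
git pull                                     # fetch the latest code
npm install                                  # update dependencies
npm run build                                # rebuild the production bundle
pm2 restart "your-application-name"          # restart the pm2-managed process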

This guide was something that I needed a few days ago and now it exists.