Category Archives: linux

Adding a function to your .bashrc for resource switching

Shout out to my friend Drew for this one – I had something similar (but nowhere near as cool) previously!

In my work environment, we have several different Kubernetes clusters that my team manages. It’s relatively common to have to switch between them several times a day because of various work items that need to be completed. Aside from that, there are different namespaces within each environment, which also need to be specified. This usually comes in the form of one or two commands:

kubectl config use-context dev-env
kubectl config set-context --current --namespace mynamespace

(You can condense these down into one command, but I’m leaving it as two for simplicity.)
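For reference, a condensed one-liner might look something like this, chaining the two commands so the namespace is only set if the context switch succeeds:

kubectl config use-context dev-env && kubectl config set-context --current --namespace=mynamespace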

In any event, these commands need to be executed every time you switch from test to dev, from dev to prod, or whatever your environments are, and the same goes for the namespaces. Each cluster requires a yaml file, downloaded from the cluster, that contains all of the information kubectl needs to know which cluster to connect to, along with your credentials. This .bashrc function is a simple, elegant way to switch environments and/or namespaces with a single command:

clus() {
    if [ $# != 2 ]; then
        echo "usage: clus <environment> <namespace>" 1>&2
        return 1
    fi
    environment=$1
    namespace=$2
    if ! [[ "${environment}" =~ ^(dev(1|2)|test(1|2)|prod(1|2))$ ]]; then
        echo "error: invalid environment \"${environment}\"" 1>&2
        return 1
    fi
    if ! [[ "${namespace}" =~ ^(name1|name2|name3) ]]; then
        echo "error: invalid namespace \"${namespace}\"" 1>&2
        return 1
    fi
    export KUBECONFIG=${HOME}/workspace/kubeconfigs/${environment}.yaml
    kubectl config use-context "${environment}"-fqdn
    kubectl config set-context --current --namespace "${namespace}"
}
export -f clus

Needless to say, I’ve obscured some of the company-specific details about our namespace and cluster names, but you get the idea. Now any time I’ve got an active terminal, all I have to do is type:

clus dev2 name3

And I’m configured for the dev2 environment and the name3 namespace. Messages are displayed on the screen to indicate success.

Just remember! You need to have downloaded your cluster yaml files into a directory (here mine is /home/username/workspace/kubeconfigs) for this to work!
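For example, based on the environment names the function accepts, the directory would look something like this (the filenames must match the <environment>.yaml pattern the function builds for KUBECONFIG):

$ ls ~/workspace/kubeconfigs/
dev1.yaml  dev2.yaml  prod1.yaml  prod2.yaml  test1.yaml  test2.yaml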

Resolving a kubectl error in WSL2

For work, I often have to connect to a Kubernetes cluster to manage resources, and anyone who’s done that via CLI before knows about the kubectl command. To use it locally, you must first download a yaml configuration file to identify the cluster, namespace, etc., then the commands should work. Notice I said “should” work.

So enter the following error message when attempting to run kubectl get pods:

Unable to connect to the server: dial tcp 127.0.0.1:8080: connectex: No connection could be made because the target machine actively refused it.

Obviously I didn’t want to connect to 127.0.0.1 (aka localhost); I was trying to connect to an enterprise Kubernetes cluster. Then later on, after re-downloading the target cluster yaml file, I received this error while running kubectl commands:

Unable to connect to the server: EOF

Searching for this error online led me down a multitude of rabbit holes, each as unhelpful as the last, until I found a reference to Docker Desktop. I know that we (the company I work for) used to use it, but we don’t anymore. (At least I don’t in my current role.)

I raised my eyebrow at that one — I had a relatively new laptop, but one of the corporate-loaded tools on it for someone in my role was Docker Desktop. I checked the running services to see if it was running, and it was not, which is expected. I don’t need to run it.

I forgot to mention that I am using WSL2 (Fedora remix) as my VS Code terminal shell, and so far I’m nothing but happy with it. Sensing something off with my local installation of kubectl, I ran which kubectl, which gives me the location of the binary currently in my path. (For the record, if it appears more than once in your path, which only returns the first one it comes across, in the order of your PATH entries.)
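If you suspect there is more than one copy in your PATH, you can list every match instead of just the first; type -a is a bash builtin, and most which implementations also accept -a. In a situation like mine, the output would have looked something like this:

$ type -a kubectl
kubectl is /usr/local/bin/kubectl
kubectl is /usr/bin/kubectl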

Sure enough, it pointed to /usr/local/bin/kubectl, which was unexpected. I wouldn’t think that an enterprise tool would be installed to /usr/local, and I was right. Performing a long listing of that directory showed me the following:

lrwxrwxrwx  1 root root   55 Jul 21 09:43 kubectl -> /mnt/wsl/docker-desktop/cli-tools/usr/local/bin/kubectl

So I had in fact been running the Docker Desktop version of kubectl, and not the one I had officially installed with yum (which existed in /usr/bin, but came after /usr/local/bin in my PATH).

So I removed the link; which kubectl immediately started showing the correct binary, installed via the package manager, and kubectl started working again, connecting to the correct cluster and everything. While this may have been a simple fix for some, not being fully aware of what may be pre-installed on a work laptop did give me some surprises.
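For anyone hitting the same thing, the fix amounted to something like the following (the paths are from my setup; verify with a long listing before deleting anything):

sudo rm /usr/local/bin/kubectl    # remove the dangling Docker Desktop symlink
hash -r                           # clear bash's cached command lookups
which kubectl                     # should now show /usr/bin/kubectl
kubectl config current-context    # should now report your cluster, not localhost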

Tiling Window Manager – AwesomeWM

For some time now I’ve been trying to reduce the need to use the mouse when I’m on my workstation at work, or my Linux desktop at home. For some applications the mouse is necessary, but the majority of my work at my job is done through terminal shells. A co-worker opened my eyes to AwesomeWM, and I’ve never looked back. With a few configuration tweaks, it’s easy to arrange your open windows (shells and browsers alike) in a particular pattern. It keeps the notion of major and minor windows, so in an arrangement where some windows are bigger than others, the bigger ones are the major windows.

The configuration file is relatively straightforward for some options, and the project’s wiki, along with other websites, has more than enough information to get started. For example, I am not a fan of xterm windows (the default terminal in AwesomeWM), so I used rxvt-unicode (urxvt). It took only seconds to update the configuration file to use a different terminal. I could have just as easily used gnome-terminal, or any other terminal you have installed.
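If you want to make the same change, the usual approach (assuming the default packaging paths on most distros) is to copy the system rc.lua into your home directory and edit the terminal setting there:

mkdir -p ~/.config/awesome
cp /etc/xdg/awesome/rc.lua ~/.config/awesome/rc.lua
# in ~/.config/awesome/rc.lua, change:  terminal = "xterm"  to  terminal = "urxvt"
# then reload AwesomeWM (Mod4+Ctrl+r by default) to pick up the change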

If you’re used to Linux already, you’re familiar with the notion of workspaces (some people call them desktops), and while Awesome has these, the notion is somewhat different. Instead they are called tags. By default tag 1 is active, so you’re only seeing windows opened within tag 1. With a shift-click or another easy keyboard sequence, you can switch tags, or display multiple tags at once (e.g., if you have shells open on tag 1 and a browser on tag 2, you can overlay 1 and 2 together).

It’s very lightweight, doesn’t have a lot of fluff, and allows you to maximize your screen real estate. Even the window borders are minimal — seems like a pixel wide to me, and you can only see them when that window is active.

Having used a variety of other desktop and window managers, AwesomeWM is still at the top of my list.

If you’re so inclined, I’d recommend starting with looking at some example screenshots on Google Image Search.


CrashPlan vs Carbonite for a Home Backup Solution

After getting semi-serious in the photography arena, and having some paid-for shoots, I made the decision that it was time to bite the bullet and get an off-site backup solution. My “basement fileserver” has RAID1 (mirroring), so if one disk failed, the other one would still work. This doesn’t protect me from physical disasters (such as a leaking water heater pipe that sprayed dozens of gallons of water onto the side of the desktop case) or other things like theft, fire, someone knocking it over, etc.

After looking at several solutions, I settled on a bake-off between Carbonite and CrashPlan. Both offered free trials, and both were similarly priced for a single-computer unlimited backup. I tried CrashPlan first, and was pleased. I can control the hours that the backups take place (or let them run around the clock), throttle them based on bandwidth, and set the validation frequency (how often it checks for new files), CPU usage, encryption, and many other options. One other thing I really liked was the fact that you can use it for free to back up to another computer. For example, a friend of mine and I want to back up each other’s files, so we can download the tool and use it completely free of charge rather than writing our own rsync/scp/etc. scripts. It had a Linux client and a Windows client (since that’s all I currently have, I didn’t look for anything else).

Next was Carbonite. I went to the site and downloaded the installation package to try it out on my desktop (running Windows 7). It seemed to work okay and had many of the same features as CrashPlan, so I decided to try it out on my fileserver (running Linux), but alas, found that there was no Linux client — it is Windows and Mac OSX only. That cinched it for me… no way was I going to convert my fileserver over to Windows, so CrashPlan was the winner.

I later looked into Amazon AWS Glacier storage, since the storage fee was a penny per GB per month, with free uploads. The catch is that they assume this is “cold storage” (hence the name), so you get severely penalized for downloading content. You get 5% of your total storage free per month, but it’s prorated over four-hour chunks throughout the month. The forums tell stories of one user getting charged $127 for downloading a 638MB archive in one day… it all has to do with how much total storage you have versus how quickly you download the archive, and quite honestly, I wasn’t willing to worry about such a thing, so I ended up sticking with CrashPlan for now.

The one thing I don’t like in CrashPlan is the option to “keep deleted files”…. I uploaded several really old directories of photographs, ones that I likely will not look at for a long time, and deleted the local copy. I have the option checked to keep those deleted files on CrashPlan’s servers, but if for some reason that box gets unchecked, I’ll lose it all. I know the better solution to that is to get more local storage, but I’d rather have the space for other things.

All in all, for $5.99/month (on a month-to-month basis, it’s cheaper if you buy longer time periods at once), I’m satisfied. I just have to be careful, and this is one computer that nobody else in the house logs into for any reason.

Installing and Configuring Nagios on Fedora 20

In spite of a large corporate installation of Sitescope and Service Center, I decided to take a look into Nagios for the first time. Instead of having to install via source as I expected, I was pleased to see that both Nagios and the Nagios plugins were available in the fc20 repo.

The first thing I noticed was that the instructions were written for Fedora 6, back when there was no option to install via yum or rpm and you had to build from source, so many of the steps could be ignored or had to be altered. For example, to set up login security, here was the sourceforge.net command:

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

With nagios installed via yum, the default configuration directory is /etc/nagios, so you have to modify the commands slightly.
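With the yum packages, the equivalent command is something like the following (on Fedora the CGI auth file is typically /etc/nagios/passwd; check the AuthUserFile line in /etc/httpd/conf.d/nagios.conf to confirm the path on your system, and note that htpasswd itself comes from the httpd-tools package):

htpasswd -c /etc/nagios/passwd nagiosadmin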

I finally got the console to work, only to find that nothing was being reported, apparently because of missing plugins (error code 127: out of bounds). In the end, it turned out that the plugins were owned by root and needed to be owned by the nagios ID. On FC20, the plugins directory (assuming 64-bit) is /usr/lib64/nagios/plugins. These files need to be owned by nagios (and, for consistency’s sake, have the group set to nagios as well). Once I did that, after about 90 seconds all of my monitors cleared and my system was in the green.
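The fix itself was a one-liner (adjust the path on a 32-bit install, where the plugins usually land under /usr/lib/nagios/plugins):

chown -R nagios:nagios /usr/lib64/nagios/plugins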

Get started at http://www.nagios.org/.


How to Disable User List at Login for Gnome 3

Installing Fedora 18 (fc18) brought up once more the new features inherent in Gnome 3. One of them is that the dconf-editor is no longer used, because Gnome 3 has its own version. Here is how you can disable the user list on the login screen.

As root:

touch /etc/dconf/db/gdm.d/01-custom-gdm-settings

Then add the following to that file:

[org/gnome/login-screen]
disable-user-list=true

You must then update dconf and restart gnome:

dconf update
systemctl restart gdm

You should now be required to enter a valid username when logging in to your system.
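If the change doesn’t take effect, check that a dconf profile for gdm exists. On Fedora the gdm package normally ships one, but the settings above are only read if /etc/dconf/profile/gdm contains something like:

user-db:user
system-db:gdm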

CentOS 6.4 Installation failed due to ACPI errors

I got the latest CentOS (6.4) iso to install on my Toshiba Satellite laptop. It’s a few years old, but still works great, and I found myself hardly ever using the Windows 7 install. Unfortunately I ran into a bunch of AE_NOT_FOUND and ACPI errors, always causing a kernel panic. I tried a few different versions, including 6.2 and 6.1, which I already had, with the same results. Oddly enough, I downloaded and successfully installed Ubuntu 13.04 64-bit, but not wanting that to be my distro, I tried again, this time hitting [tab] at the install menu and adding “acpi=off” to the boot line, and voila, that did the trick. For some reason the acpi driver is causing a system panic. Once the OS is installed and configured I’ll see if I can get it to turn back on. It can be very useful on a laptop.

Update: acpi=off worked for the CentOS installation, but once it was installed I could not boot. I was able to do this to turn acpi back on (otherwise it can cause your fan to stop working, causing a hang due to overheating):

acpi=enable pci=assign-busses acpi=ht

This successfully allowed my system to boot. Once it is completely booted, I will need to update my grub conf to ensure the settings are persistent.
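On CentOS 6 that means appending the parameters to the kernel line in /boot/grub/grub.conf (grub legacy, not grub2). The kernel version and root device below are just placeholders from a stock install; the important part is the options tacked onto the end of the kernel line:

title CentOS (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet acpi=enable pci=assign-busses acpi=ht
        initrd /initramfs-2.6.32-358.el6.x86_64.img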

Clearing /tmp and /var/tmp with find in cron.daily

Moving from Solaris to Linux has been enriching, but not without subtle differences. One of many was the need for older SLES servers to clear out /tmp and /var/tmp. I started with the standard Solaris cron jobs:

/usr/bin/find /tmp/* -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/* -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1

In spite of how active some of these servers are, if /tmp or /var/tmp is empty, the above commands return a value of 1 instead of 0, resulting in a nice email that cron failed. Removing the asterisk was an easy way to fix this; however, that posed the problem of deleting /tmp or /var/tmp itself if there was no activity on the server for 30 days. Unlikely, but possible. Rather than touching a file before the find commands (thus moving mtime to current), I opted to exclude directories and only remove files (adding the -type f option). Even though directories can still exist, they take up very little space and will likely get cleaned up on a reboot at some point. This was a happy medium. My script now looks like this:

/usr/bin/find /tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
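To hook these into cron.daily, the two lines just go into an executable script; the filename here is arbitrary:

cat > /etc/cron.daily/clean-tmp <<'EOF'
#!/bin/sh
/usr/bin/find /tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
EOF
chmod 755 /etc/cron.daily/clean-tmp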

Trying out new Linux distros

One of the last things I usually want to do is take something that works and put it at risk. That’s always how I feel when I’m upgrading any of my Linux systems at home, putting the whole thing at risk. The result: an installation of Ubuntu 10.10 that is no longer supported. Even the old-releases repos don’t have the updates that I want anymore. Nevertheless, I felt it was time for a change.

In the past, this would have meant either removing the drive and replacing it with another, or rolling the dice and just installing over top of my old OS, hoping that I had copied all of the files I wanted.

Some time ago I started using VirtualBox, a free download from Oracle. For anyone who is not familiar with VirtualBox and the like, it’s a virtual platform for installing other operating systems. You take a chunk of your host hard drive (the physical disk in your computer) and create a “virtual disk drive” that you can use as the boot disk for your virtual installation.

Aside from having a variety of operating systems at my fingertips with just one physical system, it also lets me “try before I buy” new distributions, and see how things look and feel before I commit to upgrading my existing Linux system. So far I couldn’t be more pleased; some of the distros I tried were discarded, and that’s as easy as right-click/Remove, and voila, it’s gone. You have the option of keeping the VDI (Virtual Disk Image) around, but if I’m trashing a virtual machine, I doubt I’d want the disk image again, so I always opt to save the space.

I just downloaded the newest Ubuntu, 13.04, so I can test it out before I commit…. I’m not a huge fan of Gnome 3, but I’m willing to try it for a while, since I mostly use CLI with screen.