Driving etiquette desperately needed

I’ve noticed lately that not only do newer vehicles (especially large pick-up trucks and SUVs) have blinding headlights, but there are also plenty of drivers who drive with their brights/high beams on even with other drivers in front of them.

Sometimes you can easily tell that the high beams are on because a second set of lights is lit, which in some cars is separate, rather than the existing low beams simply shifting upward.

While I understand there isn’t much an individual driver can do if their vehicle has higher-than-usual lights, driving with your high beams on is just downright disrespectful and lazy. Take the time to turn them off when you see the headlights of another vehicle approaching. If your vision is so bad that you need them on constantly, maybe you should consider whether you should be driving at night in the first place.

The rest of us end up seeing spots and getting headaches, potentially causing an accident, because you’re too lazy or don’t care enough to turn off your brights. We as a society can do better.

Helping a Total Stranger

Yesterday, while I was out running errands with my wife, I heard a loud CLICK CLICK CLICK sound as I got back into the car. Anyone who has had a dead battery knows the sound. I looked around and saw that a car diagonally in front of me had its lights flicker as I heard the sound again. I popped into the car real quick and told my wife I was going to offer help. She smiled, fully expecting me to say that.

I asked the guy if he needed a jump and he responded, “Oh yeah, do you have cables?” I immediately thought to myself, “Who in the world doesn’t keep jumper cables in their car?” Well, apparently not a lot of people do, especially the younger generations. (Yes, I’m a Gen-X’er.) I said that I did, and went to get them from my spare tire compartment. The man opened his hood again and I went to hook up the cables. He was nice, but had no idea how to properly use them, so he asked me, “How do you normally hook them up?” I told him that I connect the black (negative) clips to both black terminals, then the red (positive) of the car that starts, then the red of the car that won’t. I then started my car, and he tried his. Although it didn’t start, it did try to turn over, and we didn’t hear the starter “click”, so that was a good sign. I said we just needed to wait a few minutes and let my car charge up his battery enough to start.

While we were waiting, he mentioned that the car was only a few years old, and he didn’t expect to have issues. I told him that any number of things can cause a battery to die, but the most common (especially when you have young children, which he did) is leaving a light on, or something like that. I told him, “Hopefully that’s what it was, because if so, you won’t need to replace anything.” He then asked how he could tell whether it was the battery or the alternator, so I explained that to him.

“When the battery is charging, it should register around 14 volts on a volt meter. When it’s not charging (or when it’s discharging, i.e. the engine is off and the alternator isn’t running), it should read around 12 volts.” So I told him that when he got to where he was going, he should check the battery while the car was still running: if it reads 12 volts or less, he likely has a bad alternator, but if it reads around 14 volts, the alternator is fine, and the battery either just needed a charge or is going bad. If this happens again later, it’s likely a bad battery.
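Since this rule of thumb comes up every time I help with a jump, here it is boiled down into a tiny shell function. It’s purely illustrative (the function name and the exact thresholds are my own rough encoding; a real diagnosis needs a voltmeter reading taken while the engine is running):

```shell
# Rough encoding of the battery-vs-alternator rule of thumb above.
# Pass the voltage measured across the battery with the engine RUNNING.
diagnose_charging() {
    local volts=$1
    if awk -v v="$volts" 'BEGIN { exit !(v >= 13.5) }'; then
        echo "charging normally (alternator OK)"
    elif awk -v v="$volts" 'BEGIN { exit !(v >= 12.0) }'; then
        echo "not charging (suspect the alternator)"
    else
        echo "low voltage (battery may be bad)"
    fi
}

diagnose_charging 14.2   # charging normally (alternator OK)
diagnose_charging 11.8   # low voltage (battery may be bad)
```

(awk handles the floating-point comparison, since plain shell arithmetic is integer-only.)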

Through all this, the couple were very nice, extremely appreciative, and very glad I had offered. I was happy to help them out and wished them luck, telling them not to turn off the engine until they got where they needed to be, in case it wouldn’t start again.

The whole experience made me feel good, because I’ve been on the other end of that before and I know how it feels, but it also occurred to me that apparently it’s not that common for folks to carry jumper cables in their car. Do yourself a favor: buy a set and keep them in your car, trunk, or (like I do) in the spare tire compartment so they’re out of the way. You never know when someone else might need them, or even more so, when you might.

Adding a function to your .bashrc for resource switching

Shout out to my friend Drew for this one – I had something similar (but nowhere near as cool) previously!

In my work environment, we have several different Kubernetes clusters that my team manages. It’s relatively common to have to switch between them several times a day for various work items that need to be completed. On top of that, there are different namespaces within each environment, which also need to be specified. This usually comes in the form of one or two commands:

kubectl config use-context dev-env
kubectl config set-context --current --namespace mynamespace

(You can condense these down into one command, but I’m leaving it as two for simplicity.)
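(For what it’s worth, “one command” here just means chaining the two on a single line; as far as I know, kubectl config has no single subcommand that both switches the context and sets the namespace. This requires the dev-env context to already exist in your kubeconfig:)

```shell
# Same two steps on one line; set-context only runs if use-context succeeds
kubectl config use-context dev-env && \
    kubectl config set-context --current --namespace mynamespace
```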

In any event, these commands need to be executed every time you switch from test to dev, from dev to prod, or whatever your environments are, along with the namespaces. Each cluster requires a yaml file, downloaded from the cluster, that contains all of the information kubectl needs to know which cluster to connect to, along with your credentials. This .bashrc function is a simple, elegant way to switch environments and/or namespaces with a single command:

clus() {
    if [ $# != 2 ]; then
        echo "usage: clus <environment> <namespace>" 1>&2
        return 1
    fi
    local environment=$1
    local namespace=$2
    if ! [[ "${environment}" =~ ^(dev(1|2)|test(1|2)|prod(1|2))$ ]]; then
        echo "error: invalid environment \"${environment}\"" 1>&2
        return 1
    fi
    if ! [[ "${namespace}" =~ ^(name1|name2|name3)$ ]]; then
        echo "error: invalid namespace \"${namespace}\"" 1>&2
        return 1
    fi
    export KUBECONFIG=${HOME}/workspace/kubeconfigs/${environment}.yaml
    kubectl config use-context "${environment}"-fqdn
    kubectl config set-context --current --namespace "${namespace}"
}
export -f clus

Needless to say, I’ve obscured some of the company-specific details about our namespace and cluster names, but you get the idea. Now, any time I’ve got an active terminal, all I have to do is type:

clus dev2 name3

And I’m configured for the dev2 environment and the name3 namespace. Messages are displayed on the screen to indicate success.
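One small addition I’ve been toying with (my own sketch, not part of the original): a bash completion function that reuses the same hard-coded lists, so Tab can fill in the environment and namespace names for you:

```shell
# Tab-completion for clus: argument 1 completes environments,
# argument 2 completes namespaces (same lists as the function itself)
_clus() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    if [ "$COMP_CWORD" -eq 1 ]; then
        COMPREPLY=($(compgen -W "dev1 dev2 test1 test2 prod1 prod2" -- "$cur"))
    elif [ "$COMP_CWORD" -eq 2 ]; then
        COMPREPLY=($(compgen -W "name1 name2 name3" -- "$cur"))
    fi
}
complete -F _clus clus
```

With that in your .bashrc as well, typing clus d and hitting Tab offers dev1 and dev2.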

Just remember! You need to have downloaded your cluster yaml files into a directory (here mine is /home/username/workspace/kubeconfigs) for this to work!

Resolving a kubectl error in WSL2

For work, I often have to connect to a Kubernetes cluster to manage resources, and anyone who’s done that via CLI before knows about the kubectl command. To use it locally, you must first download a yaml configuration file to identify the cluster, namespace, etc., then the commands should work. Notice I said “should” work.

So enter the following error message when attempting to run kubectl get pods:

Unable to connect to the server: dial tcp 127.0.0.1:8080: connectex: No connection could be made because the target machine actively refused it.

Obviously I wasn’t trying to connect to 127.0.0.1 (aka localhost); I was trying to connect to an enterprise Kubernetes cluster. Later on, after re-downloading the target cluster yaml file, I received this error while running kubectl commands:

Unable to connect to the server: EOF

Searching for this error online led me down a multitude of rabbit holes, each as unhelpful as the last, until I found a reference to Docker Desktop. I know that we (the company I work for) used to use it, but we don’t anymore. (At least I don’t in my current role.)

I raised my eyebrow at that one. I had a relatively new laptop, but one of the corporate-loaded tools on it for someone in my role was Docker Desktop. I checked the running services to see if it was running, and it was not, which was expected; I don’t need to run it.

I forgot to mention that I’m using WSL2 (Fedora remix) as my VS Code terminal shell, and so far I’ve been nothing but happy with it. Sensing something off with my local installation of kubectl, I ran which kubectl, which gives the location of the binary currently in my path. (For the record, if a binary appears more than once in your path, which only returns the first one it comes across, in the order of your PATH entries.)
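As an aside, bash’s built-in type -a kubectl would have shown every match at once. You can also walk the PATH by hand with a little function like this (my own throwaway helper, not anything installed):

```shell
# Print every executable copy of a command on the PATH, in search order.
# The first line printed is the one that actually runs.
find_in_path() {
    local cmd=$1 d dirs
    IFS=: read -ra dirs <<< "$PATH"
    for d in "${dirs[@]}"; do
        [ -x "$d/$cmd" ] && echo "$d/$cmd"
    done
    return 0
}

find_in_path kubectl
```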

Sure enough, it pointed to /usr/local/bin/kubectl, which was unexpected. I wouldn’t think that an enterprise tool would be installed to /usr/local, and I was right. Performing a long listing of that directory showed me the following:

lrwxrwxrwx  1 root root   55 Jul 21 09:43 kubectl -> /mnt/wsl/docker-desktop/cli-tools/usr/local/bin/kubectl

So, sure enough, I had been running the Docker Desktop version of kubectl and not the one I had officially installed with yum (which existed in /usr/bin, but /usr/local/bin came before it in my PATH).

So I removed the link, and which kubectl immediately started showing the correct one, installed via the package manager. Everything started working: connecting to the correct cluster and all. While this may have been a simple fix for some, not being fully aware of what comes pre-installed on a work laptop gave me some surprises.
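For anyone hitting the same thing, the actual fix was just a couple of commands (paths as they were on my machine; check what which reports on yours before deleting anything):

```shell
# Remove the Docker Desktop symlink that was shadowing the real binary
sudo rm /usr/local/bin/kubectl

# bash caches command locations, so clear the cache before re-checking
hash -r
which kubectl    # should now resolve to the yum-installed /usr/bin/kubectl
```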

Going Down the VS Code Rabbithole

A while back I was introduced to VS Code (Visual Studio Code), a free code editor by Microsoft. It seemed pretty cool, but at the time I was determined to be a die-hard Emacs user. Unfortunately (or perhaps fortunately depending on your perspective) I started migrating away from Emacs/org-mode as a note taking medium, and using Microsoft OneNote instead. I was able to keep my notes in a central location, have easy access no matter where I was because of the mobile app, and I loved the ability to copy and paste images, screen captures, and files into a OneNote page. As my usage of Emacs started to wane, I heard more and more ravings from my peers about VS Code, so I decided to dive in for a while and try it out.

Man, what a ride!

It’s easy to be overwhelmed at first. The built-in integrated SCM (git by default) was enough to raise an eyebrow, but the extensive (and I mean EXTENSIVE) list of extensions available for everything under the Sun was just amazing. I started installing everything I thought I’d ever need… (mistake #1) Then, after I got a jazillion pop-up messages about this or that not being available, or needing another package, or whatever, I went through and pared things down quite a bit. I ended up with some basic linters, an OAS specification problem-identification extension, Kubernetes cluster info, and a few others. That, coupled with WSL on my Windows 10 installation, made for a nice terminal emulator as well. Once I had all the packages I needed installed, I was able to give up Cmder and ConEmu too. (Cmder sits on top of ConEmu, so I admit having both was a bit redundant.) The git integration also released me from using GitHub Desktop.

Suffice it to say, using VS Code has allowed me to let go of several other applications on my system, giving me an even tighter realm of control and ease of use for my work.

Note: if you’re going to install extensions, be aware of the inherent risk; you could be installing something malicious. So if you’re in doubt, try to stick to the “blue star” or approved/official extensions for whatever you’re working with.

Oh, and one last thing — even though it’s not actually VS Code, if you type a period (.) while browsing a GitHub repository, a VS Code-like editor will load in your browser that mimics what the application does, allowing you to interact with that repository right there on the web. Now that’s cool!

Happy coding!

DIY Upgrades to a Desktop CNC Router

A couple of years ago I purchased a 3018 desktop CNC router from an online retailer. My goal was to learn more about CNC applications and usage, eventually graduating to a larger version to manufacture various things. I’m still far away from a larger model, but for now it’s quite a bit of fun to be able to make your own parts for things. Between this and my 3D printer, there aren’t many designs I can’t make, given the time or design capability.

I’ve found Teaching Tech on YouTube, and this video in particular has given me some great ideas for upgrades, for example replacing the stock spindle with a Dremel tool. Since I happen to have an extra one lying around my garage, I just might do this!

I’m also becoming much more familiar with sites like Ali Express and dx.com. They offer relatively inexpensive electronics (among many other things) and are a great resource for things like LCD screens, stepper motor controllers, etc. I’ll definitely spend some time browsing them. Always looking for a good deal!

How to Beat the Windows 10 Automatic Reboots

In the interest of keeping installations current and preventing old versions of Windows from propagating bugs and vulnerabilities, Microsoft has started performing automatic reboots of the system after patching. While you can pick a schedule for these reboots to take place, one thing you cannot easily do (even as an administrator) is disable them. I like to have more control over my systems, especially if I’m working on something that requires time: downloading a large file, rendering, etc.

Enter Reboot Blocker from Major Geeks! It effectively shuffles your reboot window around so that the window never arrives. Be advised that it runs as a service, but I couldn’t be happier with it. It has allowed me to regain total control over my patching schedule. (Which I still follow frequently, just on my own timeline!)

If you try this out, leave a reply below and let me know what you think!

Finally back online!

It took way too long, but I finally got around to getting this blog back up and running. I had fortunately kept a SQL file of all of my old posts (although several images are probably broken now), and was able to execute it to get my old content back. Hopefully it’ll stay!

Tiling Window Manager – AwesomeWM

For some time now I’ve been trying to reduce my need for the mouse when I’m on my workstation at work or my Linux desktop at home. For some applications the mouse is necessary, but the majority of my work at my job is done through terminal shells. A co-worker opened my eyes to AwesomeWM, and I’ve never looked back. With a few configuration tweaks, it’s easy to arrange your open windows (shells and browsers alike) in a particular pattern. It keeps the notion of major and minor windows, so when an arrangement makes some windows bigger than others, the bigger ones are the major windows.

The configuration file is relatively straightforward for some options, and the project’s wiki, along with other websites, has more than enough information to get started. For example, I am not a fan of xterm (the default terminal in AwesomeWM), so I used rxvt-unicode (urxvt); it took only seconds to update the configuration file to use a different terminal. I could just as easily have used gnome-terminal, or any other terminal you have installed.

If you’re used to Linux already, you’re familiar with the notion of workspaces (some people call them desktops). While Awesome has these, the notion is somewhat different: they’re called tags. By default tag 1 is active, so you only see windows opened within tag 1. With a shift-click or another easy keyboard sequence, you can switch tags or display multiple tags at once (i.e. if you have shells open in tag 1 and a browser in tag 2, you can overlay 1 and 2 together).

It’s very lightweight, doesn’t have a lot of fluff, and lets you maximize your screen real estate. Even the window borders are minimal (they look about a pixel wide to me), and you can only see them on the active window.

Having used a variety of other desktop and window managers, AwesomeWM is still at the top of my list.

If you’re so inclined, I’d recommend starting with looking at some example screenshots on Google Image Search.


CrashPlan vs Carbonite for a Home Backup Solution

After getting semi-serious in the photography arena, and having some paid shoots, I decided it was time to bite the bullet and get an off-site backup solution. My “basement fileserver” has RAID1 (mirroring), so if one disk failed, the other would still work. That doesn’t protect me from other physical disasters (such as the water heater pipe that sprayed dozens of gallons of water onto the side of the desktop case) or from things like theft, fire, someone knocking it over, etc.

After looking at several solutions, I settled on a bake-off between Carbonite and CrashPlan. Both offered free trials, and both were similarly priced for single-computer unlimited backup. I tried CrashPlan first and was pleased. I can control the hours that backups take place (or let them run continuously), throttle them based on bandwidth, and set the validation frequency (how often it checks for new files), CPU usage, encryption, and many other options. One other thing I really liked is that you can use it for free to back up to another computer. For example, a friend of mine and I want to back up each other’s files, so we can each download the tool and use it completely free of charge rather than writing our own rsync/scp/etc. scripts. It had a Linux client and a Windows client (since that’s all I currently have, I didn’t look for any others).

Next was Carbonite. I went to the site and downloaded the installation package to try it out on my desktop (running Windows 7). It seemed to work okay and had many of the same features as CrashPlan, so I decided to try it on my fileserver (running Linux), but alas, I found that there was no Linux client; it is Windows and Mac OS X only. That cinched it for me. There was no way I was going to convert my fileserver over to Windows, so CrashPlan was the winner.

I later looked into Amazon AWS Glacier storage, since the storage fee was a penny per GB per month, with free uploads. The catch is that it’s meant as “cold storage” (hence the name), so you get severely penalized for downloading content. You get 5% of your total storage free per month, but it’s prorated over 4-hour chunks throughout the month. The forums tell stories of how one user got charged $127 for downloading a 638MB archive in one day… it all has to do with how much total storage you have versus how quickly you download the archive, and quite honestly, I wasn’t willing to worry about such a thing, so I ended up sticking with CrashPlan for now.

The one thing I don’t like in CrashPlan is the “keep deleted files” option… I uploaded several really old directories of photographs, ones that I likely won’t look at for a long time, and deleted the local copies. I have the option checked to keep those deleted files on CrashPlan’s servers, but if for some reason that box ever gets unchecked, I’ll lose them all. I know the better solution is more local storage, but I’d rather have the space for other things.

All in all, for $5.99/month (on a month-to-month basis; it’s cheaper if you buy longer periods at once), I’m satisfied. I just have to be careful, and this is one computer that nobody else in the house logs into for any reason.