Windows 7 – User Account Control settings

After several years on a single installation, I was having some issues with my Windows 7 desktop computer at home, so I decided to offload all of my important files and perform a fresh reinstallation. After getting the base OS loaded, I started the tedious process of reinstalling my applications, beginning with an antivirus program, followed shortly by Synergy so I can access the Fedora Linux box that sits next to my Windows PC. Synergy installed fine, but even though Windows reported the service as started, I continued to get “Service not available” messages. When I tried to stop the service through Task Manager, I received an “Access denied” message. I don’t recall having this issue in the past, but it was years ago that I first installed…

After some quick Google-fu, I came across a similar issue and found that the “User Account Control” settings restrict what you can do, even if your ID is an administrator (mine was). After changing the setting to the lowest level (basically disabling it), I was again able to start and stop any service on the computer. Way to go, Microsoft: nothing like making an administrator bend over backwards just to manage the system.

Installing and Configuring Nagios on Fedora 20

Despite having a large corporate installation of SiteScope and Service Center at work, I decided to take a look at Nagios for the first time. Instead of having to install from source as I expected, I was pleased to see that both Nagios and the Nagios plugins were available in the fc20 repo.
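For anyone following along, getting the packages onto fc20 is a one-liner plus enabling the services. A minimal sketch, assuming the stock Fedora package and service names (nagios-plugins-all is a convenience meta-package; you can cherry-pick individual plugin packages instead):

yum install nagios nagios-plugins-all
systemctl enable httpd nagios
systemctl start httpd nagios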

The first thing I noticed was that the official instructions were written for Fedora 6, back when there was no option to install via yum or rpm and everything was built from source, so many of the steps could be ignored or had to be altered. For example, to set up login security, the sourceforge.net instructions give this command:

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Since Nagios was installed via yum, the default configuration directory is /etc/nagios, so you have to modify the commands slightly.
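For example, the equivalent of the command above on a yum-based install looks something like this; I believe the packaged Apache config expects /etc/nagios/passwd, but check the AuthUserFile line in /etc/httpd/conf.d/nagios.conf to be sure:

htpasswd -c /etc/nagios/passwd nagiosadmin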

I finally got the console to work, only to find that nothing was being reported, apparently because of missing plugins (“return code of 127 is out of bounds”). In the end, it turned out that the plugins were owned by root and needed to be owned by the nagios ID. On FC20, the plugins directory (assuming 64-bit) is /usr/lib64/nagios/plugins. These files need to be owned by nagios and, for consistency’s sake, have the group set to nagios as well. Once I did that, after about 90 seconds all of my monitors cleared and my system was in the green.
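If you hit the same thing, the fix is a one-liner (the path assumes the 64-bit fc20 layout mentioned above):

chown -R nagios:nagios /usr/lib64/nagios/plugins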

Get started at http://www.nagios.org/.

Making Gmail the Default Mailer for Mac

With a company-owned MacBook Pro, there are some limitations. Unfortunately, one of them is an archaic mail application: Lotus Notes. Most of the time when I click an email link it’s not work-related, and I was tired of waiting for Lotus Notes to load (I use the web client, so the thick client isn’t synced). A little searching turned up Google Notifier, which OS X will let you select as the default mailer, so there’s no more cringing at the thought of what might happen if I accidentally click an email link.

You can download Google Notifier here.

How to Disable User List at Login for Gnome 3

Installing Fedora fc18 brought up once more the new features inherent in Gnome 3. One of these is that the old gconf-editor is no longer used; Gnome 3 has its own configuration system, dconf. Here is how you can disable the user list on the login screen.

As root:

touch /etc/dconf/db/gdm.d/01-custom-gdm-settings

Then add the following to that file:

[org/gnome/login-screen]
disable-user-list=true

You must then rebuild the dconf database and restart GDM:

dconf update
systemctl restart gdm

You should now be required to enter a valid username when logging in to your system.
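If the setting doesn’t take effect, one thing worth checking is that a gdm dconf profile exists, since dconf update only compiles databases that some profile references. Fedora normally ships this with gdm, but if /etc/dconf/profile/gdm is missing it should contain:

user-db:user
system-db:gdm

Then re-run dconf update.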

CentOS 6.4 Installation failed due to ACPI errors

I grabbed the latest CentOS (6.4) ISO to install on my Toshiba Satellite laptop. It’s a few years old but still works great, and I found myself hardly ever using the Windows 7 install. Unfortunately I ran into a bunch of AE_NOT_FOUND and ACPI errors, always ending in a kernel panic. I tried a few different versions, including the 6.2 and 6.1 ISOs I already had, with the same results. Oddly enough, I downloaded and successfully installed Ubuntu 13.04 64-bit, but not wanting that to be my distro, I tried CentOS again, this time hitting [tab] at the install menu and adding “acpi=off” to the boot line, and voilà, that did the trick. For some reason the ACPI driver was causing a kernel panic. Once the OS is installed and configured I’ll see if I can turn it back on, since it can be very useful on a laptop.

Update: acpi=off worked for the CentOS installation, but once the OS was installed the system would not boot. I was able to turn ACPI partially back on with the following boot options (with ACPI completely off, the fan can stop working and the machine can hang from overheating):

acpi=enable pci=assign-busses acpi=ht

This successfully allowed my system to boot. Once it is completely booted, I will need to update my grub conf to ensure the settings are persistent.
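Making the change persistent on CentOS 6 (GRUB legacy) just means appending whichever of the options above got the system booting to the kernel line in /boot/grub/grub.conf. A rough sketch, where the kernel version, root device, and exact option set are placeholders rather than my real values:

title CentOS (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_centos-lv_root rhgb quiet acpi=ht pci=assign-busses
        initrd /initramfs-2.6.32-358.el6.x86_64.img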

Clearing /tmp and /var/tmp with find in cron.daily

Moving from Solaris to Linux has been enriching, but not without subtle differences. One of many was the need on older SLES servers to periodically clear out /tmp and /var/tmp. I started with the standard Solaris-style cron jobs:

/usr/bin/find /tmp/* -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/* -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1

As active as some of these servers are, if /tmp or /var/tmp happens to be empty, the shell glob /tmp/* doesn’t expand, find is handed a path that doesn’t exist, and the command exits with 1 instead of 0, resulting in a nice email from cron that the job failed. Removing the asterisk was an easy fix, but that raised the possibility of deleting /tmp or /var/tmp itself if there was no activity on the server for 30 days. Unlikely, but possible. Rather than touching a file before the find commands run (which would reset the directory’s mtime to the current time), I opted to exclude directories and only remove files by adding the -type f option. Directories can still accumulate, but they take up very little space and will likely get cleaned up on a reboot at some point. This was a happy medium. My script now looks like this:

/usr/bin/find /tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
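On SLES these two lines just live in a small executable script dropped into /etc/cron.daily; the filename below is arbitrary:

#!/bin/sh
# /etc/cron.daily/clean-tmp - remove files untouched for more than 30 days
/usr/bin/find /tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
/usr/bin/find /var/tmp/ -type f -mount -depth -mtime +30 -a -exec rm -rf {} \; > /dev/null 2>&1
exit 0

Don’t forget to chmod 755 the script so cron will actually run it.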

Fixing a Permalink problem in WordPress

I wanted to post this because I saw quite a few people having the same problem I was having. The usual advice was to disable this plugin or that plugin, none of which I had installed; in my case it came down to a single Apache setting.

The problem:
After changing the default permalink configuration from ?p=5 to /2013/05/my-cool-post, the links to both the posts themselves and the comment buttons suddenly started returning 404 errors, and a lot of searching didn’t turn up a clear answer.

Supporting Evidence:
Some people reported hosting multiple sites under the same WordPress or hosting account but only seeing the problem on one site. That told me the issue was specific to the site’s configuration, not the hosting provider or WordPress itself, and that’s what led me to the solution.

The solution:
While looking through the httpd.conf file on my system, I came across the following:

<Directory />
    Options Indexes FollowSymLinks Includes +ExecCGI
    AllowOverride None
</Directory>

Notice that “AllowOverride None” is set, which prevents Apache from reading .htaccess files and therefore keeps WordPress from applying its mod_rewrite rules (mod_rewrite being the Apache module WordPress needs to rewrite your URLs, i.e. use permalinks). I changed this to “All” and restarted httpd, but still had the same issue.

Finally I searched the file for “Directory” (there are quite a few of those stanzas in there) and found another one:

<Directory "/var/www/html">

It had quite a few comments inside it, which is why I didn’t notice it the first time around. Inside that Directory stanza was this:

AllowOverride None

This one was overriding the earlier “/” directive, so as soon as I changed it from “None” to “All” and restarted httpd, my links worked perfectly.
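To summarize, the working stanza ends up looking like this (trimmed of comments; keep whatever Options line is already there):

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride All
</Directory>

With AllowOverride set to All, Apache will honor the .htaccess file WordPress writes into its document root when you enable pretty permalinks, which is the standard block:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress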

I hope this helps you guys; it seems to be a chronic problem in WordPress.

Trying out new Linux distros

One of the last things I ever want to do is take something that works and put it at risk, and that’s exactly how upgrading any of my Linux systems at home feels. The result: an installation of Ubuntu 10.10 that is no longer supported. Even the old-releases repos no longer have the updates I want. Nevertheless, I felt it was time for a change.

In the past, this would have meant either pulling the drive and replacing it with another, or rolling the dice and installing over top of my old OS, hoping I had copied off all of the files I wanted.

Some time ago I started using VirtualBox, a free download from Oracle. For anyone who is not familiar with VirtualBox and the like, it’s a virtualization platform for installing other operating systems. You take a chunk of your host hard drive (the physical disk in your computer) and create a “virtual disk drive” that you can use as the boot disk for your virtual installation.
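The whole process can be driven from the GUI, but as a rough sketch of what happens under the hood, creating a throwaway VM and its virtual disk from the command line looks something like this (the VM name, disk size, memory, and ISO path are just example values):

VBoxManage createvm --name "distro-test" --ostype Ubuntu_64 --register
VBoxManage createhd --filename ~/"VirtualBox VMs/distro-test/distro-test.vdi" --size 20480
VBoxManage storagectl "distro-test" --name "SATA" --add sata
VBoxManage storageattach "distro-test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/"VirtualBox VMs/distro-test/distro-test.vdi"
VBoxManage storageattach "distro-test" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu-13.04-desktop-amd64.iso
VBoxManage modifyvm "distro-test" --memory 2048 --boot1 dvd
VBoxManage startvm "distro-test"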

Aside from having a variety of operating systems at my fingertips on just one physical system, it also lets me “try before I buy” new distributions and see how things look and feel before I commit to upgrading my existing Linux system. So far I couldn’t be more pleased. I’ve already discarded a few of them, and it’s as easy as right-click/Remove, and voilà, it’s gone. You have the option of keeping the VDI (Virtual Disk Image) around, but if I’m trashing a virtual machine I doubt I’ll want its disk image again, so I always opt to save the space.

I just downloaded the newest Ubuntu, 13.04, so I can test it out before I commit… I’m not a huge fan of Gnome 3, but I’m willing to try it for a while, since I mostly use the CLI with screen.