Linux – Unable to boot due to missing drive in fstab

Posted by & filed under Linux, Server Admin.

I had an old server I brought up, and it was unable to complete its boot due to a missing drive in fstab. Editing the fstab in recovery mode is not an option since the filesystem gets flagged as read-only.

In order to make the filesystem writable, and therefore be able to successfully edit the fstab, the following command will remount it in read/write mode:

mount -o remount,rw /
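Once the filesystem is writable, you can fix the offending entry. A sketch of the fstab edit — the device name and mount point below are placeholders; the `nofail` option is a way to keep a non-critical drive from blocking future boots:

```shell
# Example /etc/fstab entry for a non-critical data drive. The "nofail"
# option tells the boot process to carry on if the device is missing
# (device name and mount point are placeholders):
#
#   /dev/sdb1   /mnt/data   ext4   defaults,nofail   0   2
#
# Or simply comment the line out until the drive is reattached:
nano /etc/fstab
```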


Windows XP: Recovering the registry using Linux when Windows won’t boot

Posted by & filed under Server Admin.

I recently had a Windows XP laptop crash. Windows would not boot to safe mode or anything, and just displayed the following error message:

Windows XP could not start because the following file is missing or corrupt: \WINDOWS\SYSTEM32\CONFIG\SYSTEM

I could not afford to simply wipe the laptop and reinstall Windows, as it had some old software that was no longer available. I located the following article, which details a procedure to recover from this issue using the MS recovery console and System Restore:

As this laptop did not have an optical CD-ROM drive, making an XP bootable USB stick to complete this procedure was a difficult proposition, since I did not have the media handy. Additionally, it seemed like a pain to go through all the steps when it could be simplified quite a bit with a functioning OS like Linux. I decided to attempt the recovery using a Linux live CD:

  1. Create a bootable USB stick with Ubuntu on it using UNetbootin.
  2. Boot to the USB stick.
  3. Make backups of any critical files (just in case)
  4. Backup registry files at C:\windows\system32\config to usb stick:
  5. Access the System Volume Information folder, which should contain restore points for the system. See Part 2, Steps 7 through 10 in the above MS article for details, but in a nutshell you want to access C:\System Volume Information. There will be one or more folders inside with names similar to “_restore{D86480E3-73EF-47BC-A0EB-A81BE6EE3ED8}”. Inside these folders, look for RPx folders. There may be more than one, where x is a number. Look at the created dates of these folders to identify a fairly recent restore point. For example, I found one that was two weeks old in RP47.
  6. Access the snapshot folder to retrieve registry backups. Example:
    C:\System Volume Information\_restore{D86480E3-73EF-47BC-A0EB-A81BE6EE3ED8}\RP1\Snapshot
  7. Inside the snapshot directory, copy the registry files to a temp location, and make a backup of them:
  8. Copy the snapshots to C:\windows\system32\config.
  9. Delete the old crashed registry files:
  10. Rename the backup registry files to replace the ones you just deleted:
  11. Cross your fingers and reboot! If it does not work and you still receive the same error message, you may need to try an older registry snapshot. Simply follow the above steps with a different registry snapshot.
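Roughly, steps 4 through 10 look like this from the live session. The mount point, the _restore{…} GUID, and RP47 are examples from my machine — adjust them to what you actually find; the _REGISTRY_MACHINE_* names are what the restore-point snapshots use for the registry hives:

```shell
# Where the Windows partition is mounted in the live session (placeholder).
# Note: the windows/system32 path may differ in case on your install.
WIN=/media/windows
RP="$WIN/System Volume Information/_restore{D86480E3-73EF-47BC-A0EB-A81BE6EE3ED8}/RP47/Snapshot"

# Step 4: back up the current (corrupt) registry files, just in case
mkdir -p /tmp/registry-backup
cp "$WIN"/windows/system32/config/{system,software,sam,security,default} /tmp/registry-backup/

# Steps 7-10 collapsed: copy the snapshot hives into place under the
# names Windows expects
cp "$RP/_REGISTRY_MACHINE_SYSTEM"   "$WIN/windows/system32/config/system"
cp "$RP/_REGISTRY_MACHINE_SOFTWARE" "$WIN/windows/system32/config/software"
cp "$RP/_REGISTRY_MACHINE_SAM"      "$WIN/windows/system32/config/sam"
cp "$RP/_REGISTRY_MACHINE_SECURITY" "$WIN/windows/system32/config/security"
cp "$RP/_REGISTRY_USER_.DEFAULT"    "$WIN/windows/system32/config/default"
```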

Good luck!

Slow DNS Resolution on Ubuntu Linux Server 14.04 LTS

Posted by & filed under Linux, Server Admin.

This all started with WordPress timeouts. I was trying to activate some premium plugins, and the license activation was timing out. I started doing some digging and found they use the WordPress core library WP_Http, which in turn uses curl to make the request. I wrote my own code to use WP_Http, and it failed in the same way with a timeout. I added a timeout parameter to the wp_remote_get() call, and it was able to complete without timing out. I then used an IP address in place of the domain name, and it worked without needing the timeout parameter.

$response = wp_remote_get('');
echo wp_remote_retrieve_body( $response );

With that info in hand, I decided it must be on the server. I started doing some tests:

web@web:~$ time curl ""
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="">here</A>.

real    0m5.565s
user    0m0.007s
sys     0m0.000s

I then did the same test from another server that uses the same DNS servers in resolv.conf:

dev@web1 [~]# time curl
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="">here</A>.

real    0m0.121s
user    0m0.000s

After much googling, I found a number of suggested solutions:

  • Disable IPv6
  • Ensure /etc/nsswitch.conf is set correctly (hosts: files dns)

Neither of these worked for me. Finally, I added the following directive into my resolv.conf and it fixed the issue!

options single-request
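One note: on Ubuntu 14.04, /etc/resolv.conf is generated by resolvconf, so a hand edit can be overwritten on reboot. A sketch of making the option stick:

```shell
# Append the option to resolvconf's base file and regenerate
# /etc/resolv.conf so the setting survives reboots:
echo "options single-request" | sudo tee -a /etc/resolvconf/resolv.conf.d/base
sudo resolvconf -u
```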

Apparently, this is actually somewhat related to IPv6. From the resolv.conf manpage:

single-request (since glibc 2.10)
                     Sets RES_SNGLKUP in _res.options.  By default, glibc
                     performs IPv4 and IPv6 lookups in parallel since
                     version 2.9.  Some appliance DNS servers cannot handle
                     these queries properly and make the requests time out.
                     This option disables the behavior and makes glibc
                     perform the IPv6 and IPv4 requests sequentially (at the
                     cost of some slowdown of the resolving process).

Now, I get good response times when I curl:

web@web:~$ time curl
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="">here</A>.

real    0m0.170s
user    0m0.007s
sys     0m0.000s

It looks like the resolver sends the IPv4 and IPv6 queries in parallel, fails to see one of the responses, waits 5 seconds, and then retries the requests sequentially because it thinks the nameserver is broken. With options single-request, glibc makes the requests sequentially from the start and never hits the timeout.

I found some good info and hints on this issue here:

Lastly, to bring this whole thing full circle, the WordPress plugins are now able to get out and communicate successfully. Woohoo!

MassMine: Datamining Facebook, Twitter, Google, and Wikipedia

Posted by & filed under Uncategorized.

MassMine allows you to easily datamine Twitter, Google, Wikipedia, and soon Facebook for data. Pretty cool! From the official site:

MassMine is a social media mining and archiving application that simplifies the process of collecting and managing large amounts of data across multiple sources. It is designed with the researcher in mind, providing a flexible framework for tackling individualized research needs. MassMine is designed to run both on personal computers and dedicated servers/clusters. MassMine handles credential authorizations, rate limiting, data acquisition & archiving, as well as customized data export and analysis.

WordPress Malware hack cleanup

Posted by & filed under Security, Web Development.

A few handy commands to cut to the chase and find the crap spammers/skiddies have added to a WP install:

Find files containing text recursively:

 grep -ri "string to search" .

A good use of this is to search for the string below. It can return false positives, but it finds a function commonly used to obfuscate code:

grep -ri "base64_decode" .

Diff two installations. If you have a clean copy of WP, you can compare it to a compromised version to find the differences. Here I am excluding the error_log file, and sending the output to diff.txt for review:

diff --exclude "*error_log*" -r /path/to/wp /path/to/other/wp > diff.txt

Find PHP files (and other filetypes that should not be present) in the uploads directory. This is typically one of the first places things get dropped:

find ./wp-content/uploads -name "*.php" -type f

Grep the DB. Sometimes malware gets stashed in the database. Considering that a WordPress database is tiny in the grand scheme of things, a simple way to quickly review what is in it is to use mysqldump, phpMyAdmin, or whatever tool you like to export the entire database to SQL. Then you can review the contents easily. Be on the lookout for base64-encoded strings; they are a good giveaway.
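The export-and-grep approach can be sketched like this (the database name and user are placeholders — adjust for your install):

```shell
# Dump the whole WordPress database to a file, then grep it for the
# usual telltale strings (DB name and user are placeholders):
mysqldump -u wpuser -p wordpress_db > dump.sql
grep -n "base64_decode" dump.sql
grep -n "eval(" dump.sql
```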

Find recently modified PHP files:

find . -name \*.php -mtime -2



Scanning for web malware, back doors, spam scripts, etc on Linux based web servers

Posted by & filed under Security, Server Admin.

In the wake of the recent SoakSoak WordPress vulnerability, among others, I began searching for a better way to keep tabs on malicious code that may get uploaded to clients’ hosting accounts.

Enter maldet.

Maldet uses a constantly updated database of malware hashes to identify and quarantine (if required) malicious files. Maldet can be set to run automatically via cron, watch newly uploaded files, and more.
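Typical usage looks something like this (the account path is an example, and SCANID comes from the scan output):

```shell
# Update the malware signatures, then scan a hosting account's web root:
maldet -u
maldet -a /home/someuser/public_html

# Review a scan's report and quarantine its hits:
maldet --report SCANID
maldet -q SCANID
```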



Linux — Finding top n large files

Posted by & filed under Linux, Server Admin.

As a followup to my previous note, I am adding an additional one-liner that is extremely helpful.

du -a /path | sort -n -r | head -n 10

Obviously, you can adjust the -n param in the head command to return the top 20 for example.
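A variant with human-readable sizes, assuming GNU sort (for the -h flag):

```shell
# Same top-10 list, but with sizes like "1.2G" instead of raw block counts:
du -ah /path | sort -rh | head -n 10
```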

Ubuntu: Unable to install/update packages; Full /boot partition

Posted by & filed under Server Admin.

UPDATE 11/17/15: Another nice command to auto purge old kernels is: sudo apt-get autoremove


Also, removing old kernels is easy with sudo dpkg -r linux-image-3.2.0-83-generic



Recently I wanted to install a new package on an Ubuntu server. Typically this is as simple as issuing a

sudo apt-get install package-name

But this time around, I got an interesting error:

$ sudo apt-get install vsftpd
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 linux-image-server : Depends: linux-image-3.0.0-28-server but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

I started poking around and found that the /boot partition is full:

$ df -h
Filesystem            Size  Used Avail Use% Mounted on
                       36G  8.0G   26G  24% /
udev                  2.0G  8.0K  2.0G   1% /dev
tmpfs                 793M  236K  793M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  2.0G     0  2.0G   0% /run/shm
/dev/sda1             228M  228M   0M  100% /boot

Ok, that is starting to make a bit more sense now… so we need to purge old kernel packages to free up space on the /boot partition. The first step is to identify the kernel version we are currently running, so we do not delete that. Secondly, it was recommended to me to keep the oldest kernel, as it is the one the system was installed with. We can see the current kernel version with:

$ uname -a
Linux server 3.0.0-25-server #41-Ubuntu SMP Mon Aug 13 18:18:27 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

So we are on 3.0.0-25-server and need to make sure not to delete that. A handy command to get a list of all the kernels you are not using is:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'

I attempted to remove old kernels the “nice” way — by letting apt handle the removal:

$ sudo apt-get -y purge linux-headers-3.0.0-12-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
 linux-image-server : Depends: linux-image-3.0.0-28-server but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

But it failed. Following the above instructions to run sudo apt-get -f install also failed, saying that there was not enough disk space on /boot (duh!). So much for being nice.

Inside the /boot partition there are five “types” of files: abi-kernel-version, config-kernel-version, initrd.img-kernel-version, vmcoreinfo-kernel-version, and vmlinuz-kernel-version. There will be one of each for every kernel version you have installed. For example: vmlinuz-3.0.0-28-server. Leaving the earliest kernel version and the version I am running (as reported by uname -a), I moved the other kernel files off to another location where there was ample space. It looked something like this:

$ sudo mv abi-3.0.0-16-server config-3.0.0-16-server initrd.img-3.0.0-16-server vmcoreinfo-3.0.0-16-server vmlinuz-3.0.0-16-server /home/tnscweb/boot/

As you can see this is moving the files for kernel version 3.0.0-16 off the /boot partition.

If the /boot partition just needed a bit of space freed up, you can now likely use apt-get to purge the other kernels “cleanly”. What I mean by that is that apt-get also removes the kernel files from /lib/modules. You could do this by hand as well; I am not sure apt-get does anything beyond cleaning up /boot and /lib/modules, but I do not believe it does.
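Once apt is functional again, the kernel-listing one-liner from earlier can be piped straight into apt-get to do the cleanup in one shot. This is a sketch — review the list before piping it anywhere, since it purges packages, and note that unlike the manual method above it does not spare the oldest kernel:

```shell
# Purge every installed kernel package except the running one.
# Run the dpkg/sed pipeline alone first and eyeball its output!
dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
```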

Linux: Counting the number of lines inside multiple files

Posted by & filed under BASH, Programming.

Recently I needed to recursively count the number of lines of code in files of a specific type. In this instance I wanted to count the lines of code in my PHP files. The below command worked flawlessly. In addition to giving a per-file line count, it gives an overall total at the end as well.

find . -name '*.php' | xargs wc -l
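One caveat: the plain xargs form can miscount or fail on file names containing spaces. A null-delimited variant avoids that:

```shell
# Null-delimited version, safe for file names with spaces:
find . -name '*.php' -print0 | xargs -0 wc -l
```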

Linux: Find files greater than n size

Posted by & filed under Linux, Server Admin.

Recently I had an issue where I needed to clean up some disk utilization on a Linux server. To find a list of larger files, I used the following find command from the root directory I wanted to recurse through:

find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

As you can see, the -size switch is setting the minimum size to find at roughly 50 MB. Another issue that I ran into was deleting a large number of files at once using something like:

rm -rf /var/my/path/*

“Argument list too long” was the error. Apparently the list of files was too long for the shell to pass to rm. I found that there are a variety of methods to solve this, from using loops to split the files into smaller groups, to recompiling the kernel. One of the simplest is to have the find command delete the files it finds:

find /var/my/path/ -name "*" -delete

The list of files to get deleted can also be tuned so it does not delete all the files in the path:

find /var/my/path/ -name "filename*" -delete
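An equivalent approach, if your find lacks -delete, is to stream the matches to rm; the -print0/-0 pair keeps odd file names safe:

```shell
# Delete matching files in batches via xargs, null-delimited so file
# names with spaces or newlines are handled safely:
find /var/my/path/ -type f -name "filename*" -print0 | xargs -0 rm -f
```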

Writing Linux device drivers

Posted by & filed under Programming.

Nice article on writing drivers for the linux kernel.

User space. End-user programs, like the UNIX shell or other GUI-based applications (kpresenter for example), are part of the user space. Obviously, these applications need to interact with the system’s hardware. However, they don’t do so directly, but through the kernel supported functions.

Kernel space. Linux (which is a kernel) manages the machine’s hardware in a simple and efficient manner, offering the user a simple and uniform programming interface. In the same way, the kernel, and in particular its device drivers, form a bridge or interface between the end-user/programmer and the hardware. Any subroutines or functions forming part of the kernel (modules and device drivers, for example) are considered to be part of kernel space.…

Pipe Viewer

Posted by & filed under BASH, Programming, Server Admin.

pv – Pipe Viewer – is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.
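For example (file names are placeholders), pv can stand in for cat at the head of a pipe:

```shell
# Extract a large tarball with a progress bar -- pv reads the file and
# reports throughput and ETA while tar consumes the stream:
pv large-backup.tar.gz | tar xzf -

# Or watch a raw image copy to a device:
pv disk.img > /dev/sdb
```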

Installing git on a cPanel server

Posted by & filed under Server Admin.

I needed to install git on a cPanel server recently. After adding the appropriate EPEL5 or EPEL6 repo, you should be able to simply do a:

yum install git

But yum kept reporting an unmet dependency (a perl-Git package) even though I verified the missing package is actually present in the EPEL repo. After a bit of digging, I found that cPanel sets up yum to exclude any packages with perl in the name. Simple enough to fix, but aggravating:

vi /etc/yum.conf

Remove “Perl*” from the exclude line and save.

yum install git

Jump back into the yum.conf file and add the perl* exclusion back in so yum does not eat cPanel’s braiiiinnns….
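The same dance can be scripted. This is a sketch that assumes the exclusions sit on a single line like exclude=courier* perl* php* in /etc/yum.conf:

```shell
# Temporarily drop the perl exclusion (keeping a backup of yum.conf),
# install git, then restore the original excludes:
sudo sed -i.bak 's/[Pp]erl\* \?//' /etc/yum.conf
sudo yum -y install git
sudo mv /etc/yum.conf.bak /etc/yum.conf
```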


Sheeva Plugs

Posted by & filed under Uncategorized.

SheevaPlug development kit is a plug computing device that runs network-based software services that normally require a dedicated personal computer. Featuring a 1.2GHz Marvell Sheeva CPU with 512 MB of flash memory and 512 MB of DDR2, the SheevaPlug development kit provides ample performance and resources to develop or port almost any application. Multiple Linux distributions are available for the platform, and software is supported in an open source model. Network connectivity is via Gigabit Ethernet; peripheral devices can be connected using USB2.0.…

Ubuntu, Apache2 and relaying mail thru an external relay

Posted by & filed under Server Admin.

I have a fresh Ubuntu 11 server installation with the LAMP stack installed. When I sent e-mail through PHP, the message never left the server.

I believe there is a more kosher way to do this, but this is what worked for me.
=> Modify /etc/mail/
=> Locate the lines that say:

# “Smart” relay host (may be null)

=> Edit the DS line like so:

Restart the services… good to go.
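For reference, the edit above can be sketched like this. The relay hostname is a placeholder, and the more kosher route alluded to earlier would be setting SMART_HOST in sendmail.mc and regenerating sendmail.cf:

```shell
# The smart-host line in /etc/mail/sendmail.cf starts with "DS".
#   before:  DS
#   after:   DSsmtp.relay.example.com
sudo sed -i 's/^DS$/DSsmtp.relay.example.com/' /etc/mail/sendmail.cf
sudo service sendmail restart
```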