Tunnelling SSH/SCP through an intermediate host when two hosts can’t communicate directly

Filed under Linux.


We need to scp a file between two hosts. The problem is that the two hosts (A and C) cannot communicate directly. We can solve this with an SSH tunnel through an intermediate host (B) that can communicate with both. Note that order matters: the tunnel command on host B needs to run first, then the scp command on host A.


Host A (source)

This will scp to localhost on port 3000, which is actually our tunnel to host C — /destination_file is the path on host C:

scp -P 3000 /source/file username@localhost:/destination_file

Host B (intermediate)

ssh -R 3000:ip.of.host.c:22 ip.of.host.a

Host C (destination)

Nothing needs to be run on host C; it simply accepts the SSH connection forwarded through the tunnel.

Also, if a path contains spaces, either quote it or escape each space with \ e.g.

scp -P 3000 /source/file/some\ directory/ username@localhost:/destination_file

(Quoting and backslash-escaping are alternatives; combining them sends a literal backslash in the path.)
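If host A can itself open an SSH connection to host B (which the reverse-tunnel setup above does not require), the whole copy can be done in a single step with OpenSSH's ProxyJump option; the user and host names here are placeholders:

```shell
# One-step alternative when A can reach B directly: scp connects to
# host C by first jumping through host B (OpenSSH 7.3+)
scp -o ProxyJump=user@ip.of.host.b /source/file username@ip.of.host.c:/destination_file
```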


Linux – Unable to boot due to missing drive in fstab

Filed under Linux, Server Admin.

I had an old server I brought up, and it was unable to complete its boot due to a missing drive in fstab. Editing fstab in recovery mode is not normally an option, since the filesystem gets mounted read-only.

To make the filesystem writable, and therefore be able to successfully edit fstab, the following command will remount it in read/write mode:

mount -o remount,rw /
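Once the filesystem is writable, the stale entry can be commented out rather than deleted, so the mount is easy to restore later. A minimal sketch, run here against a scratch copy of fstab and assuming the missing drive was /dev/sdb1 (a hypothetical device name):

```shell
# Scratch fstab: a valid root entry plus a stale entry for a drive
# that no longer exists (/dev/sdb1 is a hypothetical example)
printf '%s\n' \
    'UUID=1111-2222 /     ext4 defaults 0 1' \
    '/dev/sdb1      /data ext4 defaults 0 2' > fstab.demo

# Comment out the offending line instead of deleting it
sed -i 's|^/dev/sdb1|#&|' fstab.demo

cat fstab.demo
```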


Slow DNS Resolution on Ubuntu Linux Server 14.04 LTS

Filed under Linux, Server Admin.

This all started with WordPress timeouts. I was trying to activate some premium plugins, and the license activation was timing out. I started doing some digging and found they use the WordPress core library WP_Http, which in turn uses curl to make the request. I wrote my own code using WP_Http and it failed in the same way, with a timeout. When I added a timeout parameter to the wp_remote_get() call, it was able to complete. I then used an IP address in place of the domain name and it worked without needing the timeout parameter at all.

$response = wp_remote_get('http://google.com');
echo wp_remote_retrieve_body( $response );

With that info in hand, I decided the problem must be on the server itself. I started doing some tests:

web@web:~$ time curl "http://google.com"
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.

real    0m5.565s
user    0m0.007s
sys     0m0.000s

I then did the same test from another server that uses the same DNS servers in resolv.conf:

dev@web1 [~]# time curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.

real    0m0.121s
user    0m0.000s

After much googling, I found a number of suggested solutions:

  • Disable IPv6
  • Ensure /etc/nsswitch.conf is set correctly (hosts: files dns)

Neither of these worked for me. Finally, I added the following directive to my resolv.conf, and it fixed the issue:

options single-request

Apparently, this is actually somewhat related to ipv6 — from the resolv.conf manpage:

single-request (since glibc 2.10)
                     Sets RES_SNGLKUP in _res.options.  By default, glibc
                     performs IPv4 and IPv6 lookups in parallel since
                     version 2.9.  Some appliance DNS servers cannot handle
                     these queries properly and make the requests time out.
                     This option disables the behavior and makes glibc
                     perform the IPv6 and IPv4 requests sequentially (at the
                     cost of some slowdown of the resolving process).

Now, I get good response times when I curl:

web@web:~$ time curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.

real    0m0.170s
user    0m0.007s
sys     0m0.000s

It looks like the resolver sends the IPv4 and IPv6 requests in parallel, fails to see the IPv6 response, waits 5 seconds, and then retries sequentially because it assumes the nameserver is broken. With options single-request, glibc makes the requests sequentially from the start and the timeout never occurs.
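One caveat: on Ubuntu 14.04, /etc/resolv.conf is normally generated by resolvconf, so a directive added there by hand can be lost the next time the file is rebuilt. To make it persistent, the option can go in resolvconf's base file (standard path on stock Ubuntu), followed by sudo resolvconf -u to regenerate:

```
# /etc/resolvconf/resolv.conf.d/base
options single-request
```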

I found some good info and hints on this issue here: https://bbs.archlinux.org/viewtopic.php?id=75770

Lastly, to bring this whole thing full circle, the WordPress plugins are now able to get out and communicate successfully. Woohoo!

OpenSSL – Extracting a crt and key from a pkcs12 file

Filed under Linux, Server Admin.

Quick and dirty way to pull out the key and crt from a pkcs12 file:

openssl pkcs12 -in filename.pfx -nocerts -out filename.key

openssl pkcs12 -in filename.pfx -clcerts -nokeys -out filename.crt 

If you are using this with Apache and need to strip the passphrase from the private key so Apache does not prompt for it each time it starts:

openssl rsa -in /path/to/originalkeywithpass.key -out /path/to/newkeywithnopass.key
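To sanity-check the extraction, the key and certificate can be compared: for an RSA pair, the modulus reported for each must be identical (hashing the output just makes the comparison easier to eyeball):

```shell
# Both commands should print the same MD5 hash if the key matches the cert
openssl x509 -noout -modulus -in filename.crt | openssl md5
openssl rsa  -noout -modulus -in filename.key | openssl md5
```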

Recursively finding strings in files

Filed under Linux, Server Admin.

For example, if you wanted to scan all files in the current directory and all subdirectories for any calls to base64_decode, you could do something like this:

find . -type f -exec grep -A 2 -B 2 -H -i -n "base64_decode" {} + > resultb64.txt

This finds all files, then executes grep on them, printing matching lines with two lines of context before and after (-B/-A), filenames (-H), and line numbers (-n), matching case-insensitively (-i); the output is finally written to resultb64.txt.

Another twist on this is to filter the file types a bit (the parentheses are needed so -exec applies to all three patterns, not just the last one):

find . \( -name "*.html" -or -name "*.php" -or -name "*.js" \) -exec grep -A 2 -B 2 -H -i -n "base64_decode" {} + > resultb64.txt
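Recent versions of GNU grep can also handle the recursion and the file-type filtering on their own, which sidesteps find's operator-precedence quirks entirely:

```shell
# -r recurses, --include filters by pattern; -H/-n/-i and -A/-B work as before
grep -r -H -i -n -A 2 -B 2 --include='*.php' 'base64_decode' . > resultb64.txt
```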

Lastly, if we wanted to find and replace (here, replace with nothing) a string:

find ./ \( -name "*.html" -or -name "*.php" \) -exec sed -i 's#STRING TO FIND##g' '{}' \;

BASH: Copy files recursively, excluding directories

Filed under Linux, Server Admin.


Folder /public_html contains a mix of files and subdirectories, including dev/ and dev2/.

I need to clone all the files and folders (with a couple of exceptions) in this directory into the /public_html/dev folder. We need to exclude the dev/ folder, as it is the destination, and we also want to exclude the dev2/ folder.

Rsync makes this easy:

rsync -av --exclude='path1/to/exclude' --exclude='path2/to/exclude' source destination

In my scenario, the following gets the job done (note the trailing slash on the source, so rsync copies the directory's contents rather than nesting public_html inside dev):

rsync -av --exclude='dev/' --exclude='dev2/' /public_html/ /public_html/dev
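Rsync's -n (--dry-run) flag is worth a mention here: combined with -v it lists what would be transferred without copying anything, which is a cheap way to confirm the excludes before the real run:

```shell
# Preview only: nothing is copied with -n
rsync -avn --exclude='dev/' --exclude='dev2/' /public_html/ /public_html/dev
```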


Finding failed multipath issues in FreeNAS

Filed under Linux, Server Admin.

I have two drives, da3 and da4, which are showing multipath errors. To map them back to the physical drives:

egrep 'da[0-9]' /var/run/dmesg.boot

Drive serials can also be obtained via:

smartctl -i /dev/whatever


Linux — Finding top n large files

Filed under Linux, Server Admin.

As a follow-up to my previous note, I am adding an additional one-liner that is extremely helpful.

du -a /path | sort -n -r | head -n 10

Obviously, you can adjust the -n param of the head command to return, for example, the top 20.
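A variant I also like: with GNU coreutils, du -h plus sort -h keeps the sizes human-readable while still sorting correctly:

```shell
# -h prints sizes like 1.5M; sort -h understands those suffixes
du -ah /path | sort -rh | head -n 10
```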

Enumerating a DNS zone by brute force

Filed under Linux, Server Admin.

Sometimes a situation pops up where I need to retrieve the contents of a DNS zone but do not have access to the DNS servers. This is typically a situation where a client’s IT person is no longer available and before moving the name servers, we need to recreate the DNS zone on our name servers. If the current host of the DNS zone cannot release the records, we have a few options.

1. Try a zone transfer (AXFR). I previously wrote about this. It rarely works, but if the DNS server is poorly configured it's a possibility, and when it does work it is the most accurate option.

dig -t AXFR @dns.server.domains.is.on.com domain.name.to.dump.com

2. Brute force the zone. It sounds bad, but the reality is that most sysadmins don't log or throttle DNS requests, so with a decent enough dictionary of words it is possible to enumerate a large majority of the DNS zone. I have mirrored the zip file containing bfdomain and the dictionaries here. (original source)

python bfdomain.py domain-to-test.com dictionaries/hostnames-lite.txt
[*]-Using dictionairy: dictionaries/hostnames-lite.txt (Loaded 1399 words)
 |-mail (line: 690) ==> ('mail.domain-to-test.com', [], [''])
 |-webmail (line: 1294) ==> ('webmail.domain-to-test.com', [], [''])
 |-welcome (line: 1311) ==> ('welcome.domain-to-test.com', [], [''])
 |-www (line: 1362) ==> ('domain-to-test.com', ['www.domain-to-test.com'], [''])
[*]-Total assets found: 4

UPDATE: One other thing I noticed later is that this seemingly only captures A records, so records like MX are never tried. The Python script could easily be modified to add this functionality. Also, the nmap version of this may already do it.
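For a rough idea of what bfdomain is doing internally, the same brute force can be sketched in plain shell; words.txt and example.com are placeholders, and getent goes through the system resolver just like any other lookup:

```shell
domain=example.com
# Try every word in the list as a hostname; print the ones that resolve
while read -r word; do
    getent hosts "$word.$domain" > /dev/null && echo "FOUND: $word.$domain"
done < words.txt
```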


Linux: Find files greater than n size

Filed under Linux, Server Admin.

Recently I had an issue where I needed to clean up some disk utilization on a Linux server. To find a list of larger files, I used the following find command, run from the root directory I wanted to recurse through:

find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

As you can see, the -size switch sets the minimum size to find at 50000k (about 50 MB). Another issue I ran into was deleting a large number of files at once using something like:

rm -rf /var/my/path/*

“Argument list too long” was the error. The shell expands the * into more arguments than the kernel allows to be passed to rm. I found there are a variety of methods to solve this, from using loops to split the files into smaller groups, to recompiling the kernel. One of the simplest is to let the find command delete the files it finds:

find /var/my/path/ -type f -delete

The list of files to get deleted can also be tuned so it does not delete all the files in the path:

find /var/my/path/ -name "filename*" -delete
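An equivalent approach pipes find into xargs, which batches the arguments itself and never hits the kernel limit; -print0/-0 keep filenames with spaces safe:

```shell
# Delete matching regular files in safe batches
find /var/my/path/ -name 'filename*' -type f -print0 | xargs -0 rm -f
```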

PlayTerm — Watch the linux guru at work

Filed under Linux.

PLAYTERM is intended to raise the skills of terminal CLI users, let them share their skills, and inspire others.
PLAYTERM wants to push forward a new way of education, because terminal sessions are language-independent, extremely educative & entertaining.

Basically, PlayTerm lets you ‘look over’ a guru’s shoulder, helping you learn the concepts shown in each video.

It may sound strange, but even though CLI work sounds like an isolated environment, it is an extremely social playground. Since the eighties, millions of users have helped each other improve their skills, to get things done faster.
However, there was never a playground for sharing this live; you could only peek over the shoulder of your neighbour (or join a *N*X screen -x session).

PLAYTERM wants to restore the actual ‘live’ feeling once established in the BBS scene, where a BBS's system operators could take over a user's session and show them the way around, or teach them a new programming language.


Linux: Find base64_decode in files

Filed under Linux.

Bad hax0rs! base64_decode is frequently used by hackers to obfuscate their malicious code when they hijack a site. This quick BASH one-liner finds files containing this evil function and lists them out:

find . -name '*.php' | while read -r FILE; do if grep -q 'eval(base64_decode' "$FILE"; then echo "$FILE" >> infectedfiles; else echo "$FILE" >> notinfected; fi; done

Linux: Find files modified between dates

Filed under BASH, Linux.

I found a handy technique to find files modified between two specific dates. In essence, we touch two temp files, setting their modified dates to the range we want to find:

touch temp -t 201104141130
touch ntemp -t 201104261630

Note the timestamp format is yyyymmddhhmm.

Then we run the find command:

find /path/to/folder/ -cnewer temp -and ! -cnewer ntemp
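With GNU find, the temp files can be skipped entirely: -newermt compares against a literal date string (same range as the touch commands above):

```shell
# Files modified after 2011-04-14 11:30 but not after 2011-04-26 16:30
find /path/to/folder/ -newermt '2011-04-14 11:30' ! -newermt '2011-04-26 16:30'
```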


BASH: Recursively finding files

Filed under Linux.

I recently wanted to locate a specific file recursively within my directory tree. A simple way to do this:

find . -name .htaccess -print

In this example we search for a .htaccess file, but this could be changed to any file name, or even use wildcards like '*.php' (quote the pattern so the shell doesn't expand it before find sees it).