Tuesday, July 15, 2014

Migrating from Apache + PhusionPassenger to nginx + Unicorn

I was recently presented with the challenge of building a new web application server. The previous server was outdated and so far behind on updates that replacing it was the best course of action. Since the previous server (we'll call it Boe, henceforth) was functional and not in need of immediate replacement, I had time to design and build a really awesome system. First, I needed to answer the following questions:
  1. Why replace PhusionPassenger?
  2. What to use instead of PhusionPassenger?
  3. Why replace Apache?
  4. What to use instead of Apache?
Why replace PhusionPassenger? Regardless of whether it sat behind Apache or nginx, it made the upgrade path too dirty and hairy to maintain, and it wasn't flexible or scalable enough. With nginx, you would have to re-compile the nginx binary and re-build the RPM every time PhusionPassenger was updated. With Apache, you only need to re-compile mod_passenger each time, and only so long as that version of Apache remains compatible. There also didn't appear to be any functionality in PhusionPassenger to support multiple versions of ruby. These are the primary reasons I wanted to replace PhusionPassenger.

What to use instead of PhusionPassenger? After a bit of research and reading, I felt that Unicorn was the best fit for the application server. It allowed me to keep the applications locked down, with nginx talking to Unicorn over a unix socket. Unicorn is very configurable; I like how it utilizes a master/worker architecture, and it integrated easily with the web applications.

Why replace Apache? For application servers, I prefer to use nginx because of its small memory footprint, caching and performance. Apache is great, in my opinion, for hosting websites or simple applications like WordPress, Cacti or Nagios. This was my justification for replacing Apache.

What to use instead of Apache? Well, nginx immediately came to mind. Since the web applications are heavy and need a fast front end, nginx was the best fit. I also like how nginx buffers the response from the back end, so slow clients won't drag down the application's response time and degrade performance.

After additional research and preliminary testing, I built the new system and tried out my new configurations. I ran into a couple of snags, though, with migrating away from PhusionPassenger. The first "gotcha": just because a web application lives at "/myapp" on your site doesn't mean that's how it sees requests coming across. PhusionPassenger actually re-writes the URI, so your web application thinks it is living at "/" instead of "/myapp". So, if you have an application route for "/myapp/programs/login", PhusionPassenger would re-write it as "/programs/login" and then pass the request to the application. Part of the problem was resolved by setting the following option in the web application's "config/environments/production.rb" file:

config.action_controller.relative_url_root = '/myapp'

Since there were a handful of web applications coming over from Boe, each previously served via PhusionPassenger, the nginx configuration got a bit tricky: PhusionPassenger had been serving static content in addition to the web application. There were also some issues with the web applications generating links or redirecting to "/myapp/myapp/route/path". I resolved this with a nested location match, a case-insensitive regular expression and a URI re-write. This also improved performance by having nginx serve static content while everything else was passed to Unicorn. The location match and nested match I set up for "myapp" are as follows:

        location ^~ /myapp {
            # Serve static assets directly from the app's public/ directory.
            location ~* /(images|javascripts|stylesheets|documents) {
                root /opt/myapp/public;
                # Strip the /myapp prefix so asset paths match on disk.
                rewrite ^/myapp/(.*)$ /$1? break;
            }
            root /opt/myapp/public;
            proxy_cache myapp;
            # Pass client and protocol details along to the back-end.
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 1000;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            # Everything that isn't static content goes to Unicorn.
            proxy_pass http://myapp_app_server;
        }
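
For completeness, "myapp_app_server" in the proxy_pass line is an upstream pointing at the Unicorn socket. A minimal sketch of that definition (it lives in the http block) follows; the socket path is a hypothetical stand-in and must match the "listen" directive in the app's Unicorn configuration, and the "myapp" proxy_cache zone likewise assumes a corresponding proxy_cache_path definition elsewhere:

    upstream myapp_app_server {
        # Unicorn listens on this unix socket (set via "listen" in unicorn.rb).
        server unix:/opt/myapp/shared/unicorn.sock fail_timeout=0;
    }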


Voila! The web application was fully functional and nginx served up page styling. After these changes, I was able to successfully migrate the web applications from the aging Boe to its replacement. There were a few more hiccups but they were mostly issues with application routes that the developers had to resolve. I will eventually be able to remove the URI re-write once the developers have finished updating the web applications.

Hopefully these notes will help someone else with a similar challenge. For posterity and your reference, here are links to the respective resources I utilized:

ruby: https://www.ruby-lang.org/en/
Unicorn: http://unicorn.bogomips.org/
PhusionPassenger: https://www.phusionpassenger.com/
PhusionPassenger (setup guides): Apache guide, nginx guide
nginx: http://nginx.org/
Apache: https://httpd.apache.org/

Until next time ...

Thursday, January 23, 2014

How-to: Connect to an SMB share via a NAT'ed IPsec tunnel

Yes, I know, you'd think this would be simple: configure libreswan, establish the tunnel and either mount the SMB share or "get" your file using smbclient. Unfortunately, it isn't that simple if you're using a NAT'ed IPsec tunnel, as required by your remote endpoint. Since my peer address is on eth0 and my (assigned) NAT'ed address is on tunl0, that creates a slight problem when using smbclient. I attempted to use the interfaces="tunl0=eth0" option in ipsec.conf, but it did not behave as expected. Maybe the "interfaces" option isn't supposed to work the way I thought it would; nevertheless, with the tunnel running, my problem was that I couldn't use smbclient.

Why can't you use smbclient, you ask? Well, I needed to connect to the remote party's server and SMB share. However, you cannot tell smbclient which interface to use the way you can with ping. I was able to confirm connectivity to the remote server via the tunnel by issuing the following in my shell:

$ ping -I tunl0 123.45.67.89

After a few hours of crawling through Google results using different search queries, I decided to develop my own solution. I knew about socat (netcat++) but had never used it; this seemed to be the perfect solution. After some reading and tinkering, I came up with the following command:

$ socat -d -d \
TCP4-LISTEN:139,bind=localhost,reuseaddr \
TCP4:123.45.67.89:139,bind=172.16.100.1

Let's break this down a bit ...

socat -d -d

This tells socat to run and print messages that are fatal, error, warning, or notice.

TCP4-LISTEN:139,bind=localhost,reuseaddr

This portion of the command tells socat to start listening for TCP/IPv4 connections on localhost at port 139. The "reuseaddr" option sets SO_REUSEADDR on the listening socket, which lets socat re-bind to the port right away (e.g., when a previous session is still lingering in TIME_WAIT).

TCP4:123.45.67.89:139,bind=172.16.100.1

The last portion of the command tells socat to proxy TCP/IPv4 connections to port 139 on the remote system. The "bind" option tells socat to use the NAT'ed address that we've assigned to the device tunl0.

With the first half of the problem resolved, I was able to begin using smbclient to connect to the remote Windows server. (One note: listening on port 139 requires root privileges, since it's a privileged port, and it will conflict with a local Samba server if one is already bound there.) Since I was ultimately going to script the whole process, I developed the following command using smbclient:

$ smbclient //PDC01/private \
-I 127.0.0.1 \
-U PDC\\MyUser%pass123 \
-c "get thisfile.txt /tmp/thisfile.txt"

Again, let's break this down a bit ...

smbclient //PDC01/private

This starts smbclient and tells it to connect to the smb share "private" on the Windows server named "PDC01".

-I 127.0.0.1

This tells smbclient to connect to 127.0.0.1 (localhost) where we're running our socat proxy.

-U PDC\\MyUser%pass123

This identifies our credentials; since Windows accounts are in the format "NT-Domain\UserName", we have to escape the backslash, hence the "\\". To include a password for scripting purposes, we separate the account and password with a percent sign.

-c "get thisfile.txt /tmp/thisfile.txt"

This passes the above command to smbclient. Simply, we tell it what remote file to download and the location of where we want to save it.
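
One caveat for scripting: putting the password on the command line leaves it visible in "ps" output and your shell history. If that's a concern, smbclient can read credentials from a file via the -A flag instead; a quick sketch (the file name and values here are placeholders):

$ cat > /root/.smbcreds <<'EOF'
username = MyUser
password = pass123
domain = PDC
EOF
$ chmod 600 /root/.smbcreds
$ smbclient //PDC01/private -I 127.0.0.1 -A /root/.smbcreds \
-c "get thisfile.txt /tmp/thisfile.txt"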

After the file transfer completes, socat detects that the TCP session has ended and shuts down nicely. With that, we have our solution; I hope this helps anyone who encounters a similar situation.
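
Tying it all together, the scripted version ends up looking something like this (a minimal sketch using the same placeholder addresses, share and credentials as above):

#!/bin/bash
# Start the proxy in the background: listen on localhost:139 and forward
# to the remote server's port 139 via the NAT'ed tunnel address.
socat TCP4-LISTEN:139,bind=localhost,reuseaddr \
TCP4:123.45.67.89:139,bind=172.16.100.1 &
SOCAT_PID=$!
sleep 1  # give socat a moment to bind

# Fetch the file through the local proxy.
smbclient //PDC01/private -I 127.0.0.1 \
-U 'PDC\MyUser%pass123' \
-c "get thisfile.txt /tmp/thisfile.txt"

# socat exits on its own once the TCP session ends; this is just a safety net.
kill "$SOCAT_PID" 2>/dev/null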

If you would like to read more about socat, libreswan or samba, here are their respective websites:

socat: http://www.dest-unreach.org/socat/
libreswan: https://www.libreswan.org/
samba: http://www.samba.org/

Until next time ...

Friday, December 20, 2013

How-to: Create a gzipped tarball via SSH

I recently needed to remotely back up a server and store the data on another system. The complication was that there wasn't enough space on the server to create the backup locally and then transfer it to the remote system. I thought about using scp, but copying the tree file-by-file adds a lot of protocol overhead. The crux was that I needed to keep the data secure but complete the transfer within a short amount of time. I thought about using netcat, but that would mean the data would go across the wire unencrypted, even if I compressed it using gzip.

I ultimately decided that I needed SSH, tar and gzip. I pulled up the manual page for tar and found that it can send its output to stdout (standard out). I immediately knew this was the best solution for the problem; I tinkered around a bit and developed a command that combines SSH, tar and gzip. The way it works: you run the tar command through SSH, tell tar to write the archive to stdout, and redirect SSH's output on the local side into a gzipped tarball. The only downside is that I had to temporarily enable root access via SSH; this is required if you're going to archive the whole file system. You can see how I accomplished the remote backup using the command below:

$ ssh root@remote.host.com 'tar -czvf - / --exclude=/dev --exclude=/proc --exclude=/var/run --exclude=/tmp --exclude=/sys --exclude=/usr/lib' > my-server-backup.tar.gz

I'll break down the command for you:

$ ssh root@remote.host.com 'command'

With SSH you can run commands remotely and the output will be displayed in your local terminal. The tar command:

 tar -czvf - / --exclude=/dev --exclude=/proc --exclude=/var/run --exclude=/tmp --exclude=/sys --exclude=/usr/lib

We invoke tar on the remote machine, telling it to create a gzipped tarball and list the files as it compresses them (czvf). Where we would normally specify the archive's file name, we place a hyphen (-), which tells tar to write the archive to stdout (standard out). We want to back up the whole file system, so we tell tar to start at the root of the directory tree (/), but we don't need libraries, device descriptors, process or temp files; thus we exclude specific directories with the exclude flag (--exclude=/dir/to/leave/behind). The last part of the command is important:

 > my-server-backup.tar.gz

We redirect the output to a file on the system receiving the archive; this is what the greater-than symbol does. To store the output into a file we specify a file name (my-server-backup.tar.gz); the ".tar.gz" extension identifies the file as a gzipped-tarball. To look at it without the confusion of the tar command:

 $ ssh root@remote.host.com 'command' > output.file

You can essentially perform the same function with any other command, so long as it supports writing its data to stdout. At any rate, if you were to run the tar command, all you (the user) would see is the file listing (tar's -v listing goes to stderr, which is why it still appears on your terminal even though stdout is redirected), like so:

$ ssh sysadmin@my.server.com 'tar -czvf - Downloads/' > sysadmin-downloads.tar.gz
Downloads/
Downloads/file-a.txt
Downloads/file-b.txt
Downloads/file-c.txt

In the above example, I made a backup of the Downloads directory that resides in the sysadmin user's home directory. You can even check to make sure the file is of the appropriate format using the file command:

$ file sysadmin-downloads.tar.gz
sysadmin-downloads.tar.gz: gzip compressed data, from Unix, last modified: Fri Dec 20 17:53:28 2013
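
For the return trip, the same trick works in reverse: stream the local archive over SSH and unpack it on the remote end. A sketch, extracting into a hypothetical /restore staging directory rather than over a live file system:

$ cat my-server-backup.tar.gz | ssh root@remote.host.com 'mkdir -p /restore && tar -xzvf - -C /restore'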

There you have it! A remote, compressed backup streamed to your server or workstation over SSH. Until next time, my friends ...

Thursday, December 12, 2013

How-to: Check the bit-length of an SSH RSA keypair

As a SysAdmin, some of my responsibilities include maintaining system security, managing access and auditing accounts. From time to time I've come across the need to check a public key to make sure it conforms with my requirements. I've performed this operation enough times that I thought I would write a short post to help others in this situation. Albeit a simple solution, it's not very obvious.

Let's begin by creating a public/private key pair:

$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/derp/.ssh/id_rsa): derp_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in derp_rsa.
Your public key has been saved in derp_rsa.pub.
The key fingerprint is:
fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp@herpderp
The key's randomart image is:
+--[ RSA 4096]----+
|          B=.    |
|         +.=.    |
|        E =.o    |
|       . o.* .   |
|        S.o +    |
|       . o..     |
|        ... .    |
|         o.o     |
|         .+      |
+-----------------+


Since the public key is what is provided by the user, that is all the SysAdmin will ever see. To check the public key, we test it like so:

$ ssh-keygen -lf derp_rsa.pub
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa.pub (RSA)


As you can see, the bit-length is printed at the beginning of the line (4096), followed by the key's fingerprint, the file name and the key type (RSA). Looking at a raw pubkey doesn't allow one to easily identify the bit-length.
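
If you're auditing a pile of submitted keys, this is easy to script. A minimal sketch; the 2048-bit floor and the directory of keys are assumptions of mine:

#!/bin/bash
# Flag any submitted public key below the minimum bit-length.
MIN_BITS=2048
for pub in /tmp/submitted-keys/*.pub; do
    bits=$(ssh-keygen -lf "$pub" | awk '{print $1}')
    if [ "$bits" -lt "$MIN_BITS" ]; then
        echo "REJECT: $pub is only $bits bits"
    fi
done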

If you are the one providing the public key, you can easily check the bit-length of your private key with a simple command and pipe it to grep:

$ openssl rsa -text -noout -in derp_rsa | grep -i "private\-key"
Private-Key: (4096 bit)


Simple enough? Yup! To take it one step further, if you have multiple key pairs to manage, here is a simple way to test them against each other. You simply compare the fingerprint of the private key with that of the public key (when pointed at a private key, ssh-keygen reads the matching .pub file alongside it, which is why both lines below reference derp_rsa.pub):

$ ssh-keygen -lf derp_rsa && ssh-keygen -lf derp_rsa.pub
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa.pub (RSA)
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa.pub (RSA)


There you have it! Until next time ...

Monday, December 9, 2013

Random Password Generator

I recently needed a facility for easily generating random passwords for new accounts on a server. I looked at different packages but didn't find anything satisfactory. I then stumbled across a blog post that talked about using openssl to generate random passwords. I started toying with openssl and was able to generate a 12-byte base64 string:

$ openssl rand -base64 12
Elg6gD/+jGAl88/S

However, I needed the passwords to be simple enough to give to users, which meant removing the special characters. I know, removing characters shrinks the search space and weakens the password. However, these are SFTP accounts that are chrooted with no shell ... so I'm not too concerned. Moving forward, I decided to use sed to remove the characters:

$ echo "Elg6gD/+jGAl88/S" | sed 's/\///g' | sed 's/\+//g' | sed 's/\=//g'
Elg6gDjGAl88S

Goodie. Now I needed to keep it out of memory and store it in a randomly named temporary text file; mktemp to the rescue! With that in mind, I now had everything I needed to build a function:
makepass (){
    # Create a randomly named temp file to hold the password.
    local TMPFILE
    TMPFILE=$(mktemp)
    # Generate 12 random bytes, base64-encoded.
    openssl rand -base64 12 > "$TMPFILE"
    # Strip the base64 special characters: /, + and =.
    sed -i 's/\///g' "$TMPFILE"
    sed -i 's/\+//g' "$TMPFILE"
    sed -i 's/\=//g' "$TMPFILE"
    # Print the password and remove the temp file.
    cat "$TMPFILE"; rm -f "$TMPFILE"
}
Now, implementing it was easy as one, two, three:

PASS=$(makepass)
echo "$PASS" | passwd --stdin NewUser

Huzzah! Once the account setup is done, all I do is echo "$PASS" so that the SysAdmin (me) can provide it to the end user.
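
As an aside, the whole generate-and-strip pipeline can be collapsed into a single line with tr, no temp file required (same security caveats apply; this is just an alternative sketch):

$ openssl rand -base64 12 | tr -d '/+='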

Until next time and as Richard Stallman says, "Happy hacking, folks."

Wednesday, August 22, 2012

xfce, xterm & Terminal

This is just a quick post because I felt the need to share this information. I have always been a slave to GUI eye candy even though my true love is the CLI. Recently, KDE (and not just Gnome) has shoved its head so far up its own rear end that it has decided to compromise ease-of-use and speed for a poorly written, resource-hogging desktop environment. So, when I decided to install a GUI on my Debian laptop, I went with xfce because it reminded me of the days of pleasant eye candy + functionality + ease-of-use + resource savvy. Yes, that's quite a mouthful, but that's honestly how I feel.

At any rate, I needed a way to set a default size for my Terminal window but it wasn't as easy as clicking Edit > Preferences. However, it was as easy as this:

matthew@intrepid$ vi ~/.config/Terminal/terminalrc

I changed the following line:

MiscDefaultGeometry=80x24

To:

MiscDefaultGeometry=108x30

I also changed a few others, but it was as simple as updating the aforementioned line with the geometry I wanted. If you're unsure of how to read the geometry:

Number of characters per line = 108
Number of lines = 30

For those that are picky about the location of the window when the application starts, you can change your configuration line to look like this:

MiscDefaultGeometry=108x30+0+0

+0+0 places the window in the upper left hand corner of the screen. Here is how to read the window placement:

+xoffset+yoffset

With regards to screen geometry, 0,0 is the top left corner of the screen. The +xoffset is how many pixels to the right you want the window shifted. The +yoffset is how many pixels down from the top of the screen you want the window shifted.

Now, given all this information, if I want my window to have a size of 108x30 and open near the center of my screen (the exact offsets depend on your display's resolution), I would set the configuration option in terminalrc to:

MiscDefaultGeometry=108x30+350+300

Now, on my system, xterm is the default terminal emulator. Before making changes to your system, verify that this is the correct syntax to use with your terminal application; otherwise you could experience unexpected behavior.
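
For what it's worth, xterm understands the same X geometry string on the command line, so you can sanity-check a geometry before committing it to a config file:

$ xterm -geometry 108x30+0+0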

Until next time ...

Saturday, July 23, 2011

vmware, qemu & winxp

so linux now has great support for virtual machines. i use virtualbox on my macbook but for years have used vmware on my debian box. now that i don't have to rely on vmware's resource-intensive hypervisor, i started to research alternative solutions. i came across qemu; apparently xen open source and kvm both use qemu under their hoods. qemu can also be used on its own, so i decided not to bother with xen or kvm. my first problem was that my vmware disk image was split across multiple vmdk files instead of being one monolithic file.

matthew@hufflepuff:~$ ll ~/vmware/
total 11G
drwxr-xr-x 2 matthew matthew 4.0K Jun 8 14:18 .
drwxrwxrwt 3 root root 4.0K Jun 8 20:19 ..
-rw------- 1 matthew matthew 8.5K May 18 19:37 nvram
-rw-r--r-- 1 matthew matthew 43K Mar 19 2010 vmware-0.log
-rw-r--r-- 1 matthew matthew 49K Mar 18 2010 vmware-1.log
-rw-r--r-- 1 matthew matthew 49K Mar 17 2010 vmware-2.log
-rw-r--r-- 1 matthew matthew 50K May 18 19:37 vmware.log
-rw------- 1 matthew matthew 2.0G May 18 19:37 Windows XP Professional-f001.vmdk
-rw------- 1 matthew matthew 2.0G May 18 19:37 Windows XP Professional-f002.vmdk
-rw------- 1 matthew matthew 2.0G May 18 19:37 Windows XP Professional-f003.vmdk
-rw------- 1 matthew matthew 2.0G May 18 19:37 Windows XP Professional-f004.vmdk
-rw------- 1 matthew matthew 2.0G May 18 19:37 Windows XP Professional-f005.vmdk
-rw------- 1 matthew matthew 1.3M Dec 5 2007 Windows XP Professional-f006.vmdk
-rw------- 1 matthew matthew 732 May 18 19:25 Windows XP Professional.vmdk
-rw------- 1 matthew matthew 0 Dec 5 2007 Windows XP Professional.vmsd
-rwxr-xr-x 1 matthew matthew 1.4K May 18 19:37 Windows XP Professional.vmx
-rw-r--r-- 1 matthew matthew 1.7K Mar 16 2010 Windows XP Professional.vmxf


i needed to merge these files without damaging the windows ntfs inside. i came across a utility called "vmware-vdiskmanager". this merged all my files into one vmdk:

matthew@hufflepuff:~/vmware$ vmware-vdiskmanager -r Windows\ XP\ Professional.vmdk -t 0 WinXP.vmdk

i then needed to convert it to a qemu raw image format:

matthew@hufflepuff:~/vmware$ qemu-img convert -f vmdk WinXP.vmdk -O raw WinXP.img

then to verify the conversion:

matthew@hufflepuff:~/vmware$ qemu-img info -f raw WinXP.img
image: WinXP.img
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 7.4G

and double check (because i'm paranoid):

matthew@hufflepuff:~/vmware$ file WinXP.img
WinXP.img: x86 boot sector, Microsoft Windows XP MBR, Serial 0x2e682e67; partition 1: ID=0x7, active, starthead 1, startsector 63, 20948697 sectors, code offset 0xc0


i then proceeded to launch my vm:

matthew@hufflepuff:~/vmware$ qemu -hda WinXP.img -m 365 -vga std

but it blue screened ... the only reliable function in windows. so i researched the issue; it turned out i hadn't removed the vmware drivers before performing all this work on my image file.

http://support.microsoft.com/kb/324764

the agp440.sys service had to be disabled so that winxp would load only the basic vga drivers.
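
if memory serves, the fix boils down to booting the install cd, entering the recovery console and disabling the offending service ... roughly like this (quoting from memory, so treat it as a sketch rather than gospel):

C:\WINDOWS> listsvc
C:\WINDOWS> disable agp440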

i made an image of my WinXP install cd via dd:

matthew@hufflepuff:~/vmware$ dd if=/dev/sr0 of=./XPInstall.img bs=1024

then i ran qemu, let the cdrom image boot and followed microsoft's instructions:

matthew@hufflepuff:~/vmware$ qemu -hda WinXP.img -cdrom XPInstall.img -m 365 -vga std

and after making the changes, i booted my vm as normal:

matthew@hufflepuff:~/vmware$ qemu -hda WinXP.img -m 365 -vga std

tada! we have a hypervisor running a vm and using fewer resources than vmware.

houston, mission accomplished!