Sunday, April 9, 2017

Thoughts from the desk of a SysAdmin: fail2ban, sshd, and named

Today's topic: fail2ban, sshd, and named

I recently rebuilt one of my cloud VMs because it was aging and I wasn't happy with it anymore. After rebuilding it, I decided to use it to also run my own caching recursive DNS service that includes domain blacklisting (blocking known malware, phishing, and ad domains). Since this would have to be public-facing, I knew (at some point) someone would get curious and try to attack it. I have previous experience with just sshd being public, watching as countless bots scan and potential attackers probe for weaknesses.

I've always run SELinux and fail2ban, but I recently decided to take it a step further with hardened configuration files and custom fail2ban filters. fail2ban has some great builtin filters and they'll help protect a system against basic attacks with minimal configuration. However, what makes fail2ban really awesome are custom filters and actions. These enhance the functionality of fail2ban to give you the most protection possible. Nowadays, cyber criminals are after everyone, not just businesses.

Disclaimer: The file locations in this post are focused towards RHEL-based systems (i.e. CentOS, Fedora, etc.). I also have no experience (yet!) with firewalld; my current expertise is limited to iptables. However, the concepts and configurations should easily translate to any modern Linux system.

What's fail2ban?

fail2ban is an awesome IDS/IPS (Intrusion Detection System/Intrusion Prevention System) that uses regular expressions to scan log files and then perform actions in response. Namely, it interfaces very well with iptables. Out of the box, it has support for a wide range of services and builtin regular expressions to help defend against the most common types of attacks. It also has support for custom filters and actions, making it highly extensible. fail2ban, leveraged with sound security practices, can provide assistance with protecting your system and services.

sshd (or, "Secure Shell Daemon")

sshd is the best service to use for performing remote system management. In some cases, it's the only way you can get into a box. Thus, taking advantage of the protection that fail2ban can bring to your system should be paramount. There are a few steps I would recommend taking in addition to activating the sshd IDS/IPS protection in fail2ban.

First, set your default iptables INPUT policy to DROP and save the rules. The default policy provided by the vendor is ACCEPT, which is the equivalent of "accept all, deny none" unless you have a rule specifically blocking a port and/or protocol. Setting your default INPUT policy to DROP basically says, "deny all, accept none". This allows you to set up your iptables rules to only allow specific traffic. This is a basic rule that everyone should employ - least privilege methodology. Even if you think nothing is listening, a service you stand up later and forget to bind only to localhost immediately becomes an attack vector. This method also allows you to set up and configure a new service before activating the rule that allows traffic to reach it.

Before setting your default policy to DROP, make sure you also have a rule that allows RELATED,ESTABLISHED connections (the stateful connection-tracking rule). This rule is typically included by default on RHEL-based systems. If you're not running on a RHEL-based system, make sure you have this rule at the top of your INPUT chain.

Remember: Firewall rules are processed based on their order in the table.

Next, make sure you have a rule allowing SSH traffic on port 22. Once you have the aforementioned rules added, you can then feel safe setting your default INPUT policy to DROP. Don't lock yourself out of your own box! If you already have fail2ban installed, its rules should come first while the service is running. Simply stop the fail2ban service, add your rules (don't forget to save!), then start it again.
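Put together, the ordering looks like this. This is a sketch for a RHEL 6-era system (the service names and port assumptions are mine); review it and run as root, and keep a console session open in case you slip up:

```shell
# Sketch of the ordering described above (RHEL 6-style iptables assumed)
service fail2ban stop                       # fail2ban's chains get re-added last

# Allow tracked connections first -- rule order matters
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Allow loopback and SSH before flipping the default policy
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

iptables -P INPUT DROP                      # "deny all, accept none"
service iptables save                       # persists to /etc/sysconfig/iptables

service fail2ban start                      # re-inserts its rules at the top
```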

Second, secure your sshd service by hardening your sshd_config file. This file is typically found at /etc/ssh/sshd_config. Start by denying root login via SSH. Set PermitRootLogin to "no":

PermitRootLogin no

You should never remotely login to your box as root. Only allow logins by an underprivileged account that has permission (if necessary) to escalate to root or access the sudo command. Next, set some strong Key Exchange Algorithms, data encryption Ciphers, and Message Authentication Codes (MACs). This will protect you from attackers and the Big Bad (cough, cough - NSA). It's actually surprising how many attackers don't have a modern SSH client that supports newer ciphers, likely because their targets are systems that aren't using modern cryptography. As clients are updated with modern algorithms, deprecated ones are removed. Nonetheless, any modern macOS or Linux operating system should support these ciphers. I haven't tested with Windows and PuTTY, though. If you'd like to ensure your session is encrypted using strong stuff, add the following text below the "Protocol 2" line:

# Specify Algos and Ciphers (so we're not vulnerable to default and weak cryptos)
KexAlgorithms diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
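To see which of these algorithms your own client supports, newer OpenSSH releases provide a query flag (-Q landed in OpenSSH 6.3, so the 5.3 client listed at the end of this post predates it):

```shell
# List the algorithms the local OpenSSH client supports (OpenSSH 6.3+)
ssh -Q cipher | grep ctr     # the aes*-ctr ciphers should appear here
ssh -Q mac | grep sha2       # hmac-sha2-512 and hmac-sha2-256 should appear
ssh -Q kex | grep sha256     # diffie-hellman-group-exchange-sha256 should appear
```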

Increase sshd's logging level. Set the LogLevel to VERBOSE:

LogLevel VERBOSE
This allows you to more closely audit the activity of sshd and key authentication. This is also required for a custom fail2ban filter that I will discuss later.

Disable password authentication and only utilize key-based authentication. It's simple:

PasswordAuthentication no

Make sure you generate a key pair (I recommend 2048-bit RSA or greater); store the public key in ~/.ssh/authorized_keys and chmod it to 0400. Only leave password authentication enabled if you're using a form of two-factor (key + account password).
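Generating and installing such a key might look like this. The file names are illustrative, and the demo writes into a scratch directory rather than your real ~/.ssh so you can try it safely:

```shell
# Generate a 4096-bit RSA key pair and stage an authorized_keys file with
# tight permissions (paths are illustrative -- adjust for your account)
mkdir -p demo_ssh && chmod 0700 demo_ssh
ssh-keygen -t rsa -b 4096 -N '' -f demo_ssh/id_rsa -q

cat demo_ssh/id_rsa.pub >> demo_ssh/authorized_keys
chmod 0400 demo_ssh/authorized_keys

ls -l demo_ssh/authorized_keys    # should show -r-------- (0400)
```

The private key (demo_ssh/id_rsa here) stays on your workstation; only the .pub half ever lands on the server.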

Whether using key-based or password-based authentication, a valid user shouldn't have any reason to take longer than a minute to login. Set the LoginGraceTime to a conservative value; I have mine at one minute:

LoginGraceTime 1m

Let's also protect against resource consumption DoS attacks and reduce the number of chances an attacker has per connection. This is as simple as reducing MaxAuthTries:

MaxAuthTries 3

The default is set to six. As long as you have fail2ban blocking an IP after three authentication failures, you should be safe.

Finally, with regards to sshd_config, I set the following to reduce the attack surface on the server and client:

GSSAPIAuthentication no
X11Forwarding no

I don't use GSS and I don't want it available to a potential attacker. I don't use X11 on my remote server, so I close that open channel (which can be an attack vector on the client).

Third, add a custom sshd filter to fail2ban. Even though disabling password authentication and setting strong algorithms might seem like enough, it's not. While attackers may never reach a preauth or auth stage, they still consume resources by establishing a connection. To further prevent a resource consumption DoS, I setup the following file in /etc/fail2ban/filter.d/sshd-badmac.conf:

# This rule augments the default fail2ban sshd rules by adding an additional filter to block potential attackers.
# There really aren't any legitimate reasons why an authorized person would connect using unsupported macs, kexs, or algos.


[INCLUDES]

# Read common prefixes. If any customizations available -- read them from
# common.local
before = common.conf

[Definition]

_daemon = sshd

# these are the log entries we're looking for
#Mar 28 14:30:21 myserver sshd[30255]: Connection from port 98765
#Mar 28 14:30:22 myserver sshd[30256]: fatal: no matching mac found: client hmac-md5,hmac-sha1 server hmac-sha2-512,hmac-sha2-256

failregex = ^(?P<__prefix>%(__prefix_line)s)Connection from <HOST> port \d+(?: on \S+ port \d+)?%(__prefix_line)sfatal: no matching mac found: .+$

ignoreregex =

maxlines = 2

journalmatch = _SYSTEMD_UNIT=sshd.service + _COMM=sshd
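Before wiring the filter into a jail, you can sanity-check the heart of the pattern against a sample log line with grep. fail2ban expands the %(...)s substitutions and the <HOST> tag itself, so the grep pattern below is a simplified stand-in, and the log line is fabricated:

```shell
# Rough check of the "no matching mac" half of the filter against a sample
# sshd log line (fabricated example; fail2ban handles <HOST> and prefixes)
line='Mar 28 14:30:22 myserver sshd[30256]: fatal: no matching mac found: client hmac-md5,hmac-sha1 server hmac-sha2-512,hmac-sha2-256'
echo "$line" | grep -Eq 'sshd\[[0-9]+\]: fatal: no matching mac found: .+' \
    && echo "filter would match"
```

Once the file is in place, fail2ban's bundled tester can exercise the real thing: fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/sshd-badmac.conf.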

And activate it in /etc/fail2ban/jail.d/jail.local:

[sshd-badmac]
enabled = true
filter = sshd-badmac
port = ssh
logpath = /var/log/secure
bantime = 86400
findtime = 3600
maxretry = 1
action = iptables[name=SSHBADMAC, port=ssh, protocol=tcp]

Ba'da bing, ba'da boom. Hoohaw! Three steps to secure your sshd service and protect your system from attackers.

bind (or, "named")

In my opinion, bind is the best DNS server available. It's highly configurable, fast, and performs well under heavy load. However, there are some security considerations to take into account if you're going to host it on a public system. To make this work, you're going to have to allow queries from subnets outside of your host's public subnet. By default, bind allows recursive queries only when the client is within the same subnet as the host. If you wish to take it a step further, you can block non-domestic IPs based on the ranges available from ARIN (the American Registry for Internet Numbers). Even if you employ this technique, domestic attackers may still attempt to DoS your DNS server, compromise it, or even rack up your bandwidth charges because they're bored. There are a few steps you can take to protect your public-facing DNS service whether you're running an authoritative or a recursive name server.

First, we need to harden our bind configuration to rate-limit queries. Having averaged the usage of the few users I share my DNS service with, a reasonable user shouldn't be making more than 10 queries per second. Start by adding this to your bind config (in the options section), typically found at /etc/named.conf:

rate-limit {
        responses-per-second 10;
        log-only yes;
};

Make sure you also enable DNSSEC:

dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;

Also be sure to deny transfers:

allow-transfer { none; };

Make sure you aren't replying authoritatively for domains that you do not resolve for:

auth-nxdomain no;

Finally, hide your version number so you aren't handing attackers an easy win. Masking the version prevents an attacker from simply looking up existing vulnerabilities for your exact release and attacking your service. This makes their job harder and they will likely pass you up. To do this, set the following option:

version "unknown";

Second, setup your bind logging to log everything you'd ever need to know about bind's activity. This will also come in handy with setting the rate limiting I mentioned earlier. The fail2ban custom filter utilizes the query log to block potential malicious activity. Here's what my logging stanzas look like:

logging {
        channel default {
                file "/var/log/named-auth.log";
                severity info;
                print-time yes;
                print-category yes;
                print-severity yes;
        };
        channel default_debug {
                file "data/";
                severity dynamic;
        };
        channel "querylog" {
                file "/var/log/named-query.log";
                print-time yes;
        };
        category lame-servers { null; };
        category default { default; };
        category queries { querylog; };
};

Third, setup a custom fail2ban filter to block users that are rate limited. I setup the following filter in /etc/fail2ban/filter.d/named-throttling.conf:

# fail2ban filter for hosts attempting to cause a DoS
# this works in conjunction with the bind query throttling. messages are printed when
# hosts start abusing the system.

# example of what we're looking for to block
# 28-Mar-2017 22:53:22.624 client query: IN ANY +E (
# 28-Mar-2017 22:53:22.624 client would slip response to for IN ANY  (00111ec3)
# 28-Mar-2017 22:53:22.624 client query: IN ANY +E (
# 28-Mar-2017 22:53:22.624 client would drop response to for IN ANY  (00111ec3)


[Definition]

# Daemon name
_daemon = named

# Shortcuts for easier comprehension of the failregex
__pid_re = (?:\[\d+\])
__daemon_re = \(?%(_daemon)s(?:\(\S+\))?\)?:?
__daemon_combs_re = (?:%(__pid_re)s?:\s+%(__daemon_re)s|%(__daemon_re)s%(__pid_re)s?:?)

#       hostname       daemon_id         spaces
# this can be optional (for instance if we match named native log files)
__line_prefix=(?:\s\S+ %(__daemon_combs_re)s\s+)?

failregex = ^%(__line_prefix)s?\s*client <HOST>#\S+( \([\S.]+\))?: would (slip|drop) response .+$

ignoreregex =

maxlines = 1

I then activated the filter in my /etc/fail2ban/jail.d/jail.local file with the following:

[named-throttling]
enabled = true
filter = named-throttling
port = domain
logpath = /var/log/named-query.log
bantime = 120
findtime = 60
maxretry = 1
action = iptables[name=NAMEDTHROTTLE, port=domain, protocol=udp]

This ensures that anyone who triggers bind's rate limiting is blocked for two minutes. As I stated earlier, any reasonable user shouldn't have more than 10 queries per second. So, if you're not abusing the service, you shouldn't have to worry. Keep in mind, I set this up for my friends and family to use, as well as myself. So, based on your usage, you may need to tweak the bind rate limiting.
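To gauge whether the threshold suits your users before letting fail2ban ban anyone, you can tally rate-limit hits straight from the query log. The sample lines below are fabricated (documentation-range IPs) in the shape of the "would slip/drop" messages shown earlier:

```shell
# Count how often bind's rate limiter fired, using fabricated sample lines
# shaped like the query-log messages shown above
cat > /tmp/named-query.sample <<'EOF'
28-Mar-2017 22:53:22.624 client 192.0.2.10#53501: query: example.com IN ANY +E (192.0.2.1)
28-Mar-2017 22:53:22.624 client 192.0.2.10#53501: would slip response to 192.0.2.10 for example.com IN ANY (00111ec3)
28-Mar-2017 22:53:22.700 client 192.0.2.10#53501: would drop response to 192.0.2.10 for example.com IN ANY (00111ec3)
EOF

grep -cE 'would (slip|drop) response' /tmp/named-query.sample   # prints 2
```

Run the same grep against your real /var/log/named-query.log over a day or two and adjust responses-per-second accordingly.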

There you have it, folks. My thoughts on how to secure your system with hardened configurations and custom filters. I have some advanced tips I'll likely cover in future articles on iptables, sshd, bind, and fail2ban; they just didn't fit here.


Software versions used in this post:

CentOS: release 6.8
bind/named: bind-9.8.2-0.47.rc1.el6_8.4.x86_64
sshd: openssh-5.3p1-118.1.el6_8.x86_64
fail2ban: fail2ban-0.9.6-1.el6.1.noarch


You can grab fail2ban from the EPEL repository.

Until next time ...

Tuesday, July 15, 2014

Thoughts from the desk of a SysAdmin: Migrating from Apache + PhusionPassenger to nginx + Unicorn

Today's topic: Migrating from Apache + PhusionPassenger to nginx + Unicorn

I recently was presented with the challenge of building a new web application server. The previous server was outdated and so far behind on updates that replacing it was the best course of action. Since the previous server (we'll call it Boe, henceforth) was functional and not in need of immediate replacement, I had time to design and build a really awesome system. First, I needed to decide how I was going to resolve the following items:
  1. Why replace PhusionPassenger?
  2. What to use instead of PhusionPassenger?
  3. Why replace Apache?
  4. What to use instead of Apache?
Why replace PhusionPassenger? Regardless of whether it ran under Apache or nginx, it made the upgrade path too dirty and hairy to maintain. PhusionPassenger also wasn't flexible or scalable enough. With nginx, you would have to re-compile the nginx binary and re-build the RPM for every PhusionPassenger update. With Apache, you only need to re-compile mod_passenger each time, so long as that version of Apache is compatible. There also didn't appear to be any functionality in PhusionPassenger to support multiple versions of Ruby. These are the primary reasons why I wanted to replace PhusionPassenger.

What to use instead of PhusionPassenger? After a bit of research and reading, I felt that Unicorn was the best solution for the application server. It let me keep the applications locked down, with nginx talking to them over a Unix socket. Unicorn is very configurable; I like how it utilizes a master/worker architecture, and it integrated easily with the web application.

Why replace Apache? For application servers, I prefer to use nginx because of its small memory footprint, caching and performance. Apache is great, in my opinion, for hosting websites or simple applications like wordpress, cacti or nagios. This was my justification for replacing Apache.

What to use instead of Apache? Well, nginx immediately came to mind. Since the web applications are heavy and need a fast front-end, nginx was the best solution. I also like how nginx buffers the response from the back-end so that slow clients won't kill the response time of the application and result in degraded performance.

After additional research and preliminary testing, I built the new system and tested out my new configurations. I ran into a couple of snags, though, with migrating away from PhusionPassenger. The first "gotcha" was that just because a web application lives at "/myapp" on your site doesn't mean that's how it'll see requests coming across. The URI is actually re-written by PhusionPassenger so your web application thinks it is living at "/" instead of "/myapp". So, if you have an application route for "/myapp/programs/login", PhusionPassenger would actually re-write it as "/programs/login" and then pass the request to the application. Part of the problem was resolved by setting the following option in the web application's "config/environments/production.rb" file:

config.action_controller.relative_url_root = '/myapp'

Since there were a handful of web applications coming from Boe, each of which was previously served via PhusionPassenger, the nginx configuration was a bit tricky. This is because PhusionPassenger had been serving up static content in addition to the web application. There were also some issues with the web application generating links or redirecting to "/myapp/myapp/route/path". I resolved this with a nested location match, a case-insensitive regular expression and a URI re-write. This also increased performance by having nginx serve static content while everything else was passed to Unicorn. The location match and nested match I setup for "myapp" is as follows:

        location ^~ /myapp {
            location ~* /(images|javascripts|stylesheets|documents) {
                root /opt/myapp/public;
                rewrite ^/myapp/(.*)$ /$1? break;
            }
            root /opt/myapp/public;
            proxy_cache myapp;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 1000;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://myapp_app_server;
        }
Voila! The web application was fully functional and nginx served up page styling. After these changes, I was able to successfully migrate the web applications from the aging Boe to its replacement. There were a few more hiccups but they were mostly issues with application routes that the developers had to resolve. I will eventually be able to remove the URI re-write once the developers have finished updating the web applications.

Hopefully these notes will help someone else with a similar challenge. For posterity and your reference, here are links to the respective resources I utilized:

PhusionPassenger (setup guides): Apache guide, nginx guide

Until next time ...

Thursday, January 23, 2014

How-to: Connect to an SMB share via a NAT'ed IPsec tunnel

Yes, I know, you'd think this would be simple. Configure libreswan, establish the tunnel and either mount the smb share or "get" your file using smbclient. Unfortunately, it isn't that simple if you're using a NAT'ed ipsec tunnel as required by your remote endpoint. Since my peer address is on eth0 and my (assigned) NAT'ed address is on tunl0, that creates a slight problem when using smbclient. I attempted to use the interfaces="tunl0=eth0" option in ipsec.conf, but it did not behave as expected. Maybe the "interfaces" option isn't supposed to function like I thought it would; nevertheless, with the tunnel running, my problem was that I couldn't use smbclient.

Why can't you use smbclient, you ask? Well, I needed to connect to the remote party's server and smb share. However, you cannot tell smbclient which interface to use like you can with ping. I was able to confirm connectivity to the remote server via the tunnel by issuing the following in my shell:

$ ping -I tunl0

After a few hours of crawling through Google results using different search queries, I decided to develop my own solution. I knew about socat (netcat++) but had never used it; this seemed to be the perfect solution. After some reading and tinkering, I came up with the following command:

$ socat -d -d \
TCP4-LISTEN:139,bind=localhost,reuseaddr \
TCP4:<remote-ip>:139,bind=<tunnel-ip>
Let's break this down a bit ...

socat -d -d

This tells socat to run and print messages that are fatal, error, warning, or notice.


TCP4-LISTEN:139,bind=localhost,reuseaddr

This portion of the command tells socat to start listening for TCP/IPv4 connections on localhost at port 139. The "reuseaddr" option tells socat to let other sockets bind to the address even if only part of it is being used (i.e., the port we're listening on).


TCP4:<remote-ip>:139,bind=<tunnel-ip>

The last portion of the command tells socat to proxy TCP/IPv4 connections to port 139 on the remote system. The "bind" option tells socat to use the NAT'ed address that we've assigned to the device tunl0.

With the first half of the problem resolved, I was able to begin using smbclient to connect to the remote Windows server. Since I was going to ultimately script the whole process, I developed the following command using smbclient:

$ smbclient //PDC01/private \
-I 127.0.0.1 \
-U PDC\\MyUser%pass123 \
-c "get thisfile.txt /tmp/thisfile.txt"

Again, let's break this down a bit ...

smbclient //PDC01/private

This starts smbclient and tells it to connect to the smb share "private" on the Windows server named "PDC01".


-I 127.0.0.1

This tells smbclient to connect to 127.0.0.1 (localhost) where we're running our socat proxy.

-U PDC\\MyUser%pass123

This identifies our credentials; since Windows accounts are in the format "NT-Domain\UserName", we have to escape the backslash, hence the "\\". To include a password for scripting purposes, we separate the account and password with a percent sign.

-c "get thisfile.txt /tmp/thisfile.txt"

This passes the above command to smbclient. Simply, we tell it what remote file to download and the location of where we want to save it.

After the file transfer completes, socat will detect that the TCP session has ended and thus shutdown nicely. With that, we have our solution; I hope this helps anyone that may encounter a similar situation.
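Tying the two halves together, here's a hypothetical wrapper. The share, account and password placeholders are carried over from the examples above, and the remote and tunnel IPs are arguments you'd supply for your own setup:

```shell
#!/bin/bash
# Hypothetical wrapper around the socat + smbclient dance described above.
# Server name, share, and credentials are the placeholders from this post.
fetch_via_tunnel() {
    local remote_ip="$1" tunnel_ip="$2" file="$3"

    # Start the proxy in the background: localhost:139 -> remote:139,
    # sourcing from the NAT'ed address on tunl0
    socat TCP4-LISTEN:139,bind=localhost,reuseaddr \
          TCP4:"${remote_ip}":139,bind="${tunnel_ip}" &
    local proxy_pid=$!
    sleep 1    # give socat a moment to bind

    smbclient //PDC01/private -I 127.0.0.1 \
        -U 'PDC\MyUser%pass123' \
        -c "get ${file} /tmp/${file}"

    kill "$proxy_pid" 2>/dev/null    # socat normally exits on its own anyway
}
```

Call it as, say, fetch_via_tunnel 198.51.100.20 10.0.0.5 thisfile.txt (made-up addresses) once the tunnel is up.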

If you would like to read more about socat, libreswan or samba, check out their respective project websites.


Until next time ...

Friday, December 20, 2013

How-to: Create a gzipped tarball via SSH

I recently needed to remotely back up a server and store the data on another system. However, my complication was that there wasn't enough space on the server to create the backup and then transfer it to the remote system. I thought about using scp, but that creates a lot of overhead as each file transfer is essentially a new SSH session. The crux was that I needed to keep the data secure but transfer it within a short amount of time. I thought about using netcat, but that would mean the data would be going across the wire unencrypted even if I compressed it using gzip.

I ultimately decided that I needed SSH, tar and gzip. I popped up the manual page for tar and found that I can send the output to stdout (standard out). I immediately knew this was the best solution for the problem; I tinkered around a bit and developed a command that would allow me to use SSH, tar and gzip. The way it works is you send the tar command through SSH, tell tar to output via stdout and redirect the output from SSH to a gzipped tarball. The only downside is I had to temporarily enable root access via SSH; this is required if you're going to archive the whole file system. You can see how I accomplished the remote backup using the command below:

$ ssh 'tar -czvf - / --exclude=/dev --exclude=/proc --exclude=/var/run --exclude=/tmp --exclude=/sys --exclude=/usr/lib' > my-server-backup.tar.gz

I'll break down the command for you:

$ ssh 'command'

With SSH you can run commands remotely and the output will be displayed in your local terminal. The tar command:

 tar -czvf - / --exclude=/dev --exclude=/proc --exclude=/var/run --exclude=/tmp --exclude=/sys --exclude=/usr/lib

We invoke tar on the remote machine, telling it to create a gzipped tarball and list the files as it compresses them (czvf). Normally where we would specify the archive's file name we place a hyphen (-) which tells tar to output the archive to stdout (standard out, our screen). We want to backup the whole file system so we tell tar to start at the root of the directory tree (/) but we don't need libraries, device descriptors, process or temp files; thus we exclude specific directories with the exclude flag (--exclude=/dir/to/leave/behind). The last part of the command is important:

 > my-server-backup.tar.gz

We redirect the output to a file on the system receiving the archive; this is what the greater-than symbol does. To store the output into a file we specify a file name (my-server-backup.tar.gz); the ".tar.gz" extension identifies the file as a gzipped-tarball. To look at it without the confusion of the tar command:

 $ ssh 'command' > output.file

You can essentially perform the same function with any other command, so long as it supports outputting the data via stdout. At any rate, if you were to run the tar command all you (the user) would see is the file listing, like such:

$ ssh 'tar -czvf - Downloads/' > sysadmin-downloads.tar.gz

In the above example, I made a backup of the Downloads directory that resides in the sysadmin user's home directory. You can even check to make sure the file is of the appropriate format using the file command:

$ file sysadmin-downloads.tar.gz
sysadmin-downloads.tar.gz: gzip compressed data, from Unix, last modified: Fri Dec 20 17:53:28 2013
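If you want to convince yourself of the stdout trick without involving SSH at all, the same redirection works locally (directory and file names here are just for the demo):

```shell
# Local demonstration of the same stdout redirection, no SSH involved
mkdir -p demo_dir && echo "hello" > demo_dir/file.txt

tar -czf - demo_dir > demo-backup.tar.gz   # "-" sends the archive to stdout

tar -tzf demo-backup.tar.gz                # lists demo_dir/ and demo_dir/file.txt
```

Swap the local tar invocation for the quoted remote command and the redirection behaves identically.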

There you have it! A remote, compressed backup streamed to your server or workstation over SSH. Until next time, my friends ...

Thursday, December 12, 2013

How-to: Check the bit-length of an SSH RSA keypair

As a SysAdmin, some of my responsibilities include maintaining system security, managing access and auditing accounts. However, from time to time I've come across the need to check a public key to make sure it conforms with my requirements. I've performed this operation enough times that I thought I would make a short post to help others in this situation. Albeit a simple solution, it's not very obvious.

Let's begin by creating a public/private key pair:

$ ssh-keygen -t rsa -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/derp/.ssh/id_rsa): derp_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in derp_rsa.
Your public key has been saved in derp_rsa.pub.
The key fingerprint is:
fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp@herpderp
The key's randomart image is:
+--[ RSA 4096]----+
|          B=.    |
|         +.=.    |
|        E =.o    |
|       . o.* .   |
|        S.o +    |
|       . o..     |
|        ... .    |
|         o.o     |
|         .+      |
+-----------------+

Since the public key is what is provided by the user, that is all the SysAdmin will ever see. To check the public key, we test it like so:

$ ssh-keygen -lf derp_rsa.pub
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa.pub (RSA)

As you can see, the bit-length is printed at the beginning of the string (4096), followed by the key's fingerprint, the file name and the encryption algorithm (RSA). Looking at a raw pubkey doesn't allow one to easily identify the bit-length.

If you are the one providing the public key, you can easily check the bit-length of your private key with a simple command and pipe it to grep:

$ openssl rsa -text -noout -in derp_rsa | grep -i "private\-key"
Private-Key: (4096 bit)

Simple enough? Yup! To take it one step further, if you have multiple key pairs to manage, here is a simple way to test them against each other. You simply compare the fingerprint of the private key with the public key:

$ ssh-keygen -lf derp_rsa && ssh-keygen -lf derp_rsa.pub
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa (RSA)
4096 fe:5c:60:be:46:54:10:7a:25:19:63:cb:67:ce:09:62 derp_rsa.pub (RSA)
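If you manage several key pairs, the comparison scripts easily. This sketch generates a throwaway 2048-bit pair (smaller than I'd recommend for real use, but quick) and checks that the two fingerprints agree; the file name is arbitrary:

```shell
# Generate a throwaway pair and verify the private and public halves
# carry the same fingerprint (field 2 of ssh-keygen -l output)
ssh-keygen -t rsa -b 2048 -N '' -f check_rsa -q

priv=$(ssh-keygen -lf check_rsa | awk '{print $2}')
pub=$(ssh-keygen -lf check_rsa.pub | awk '{print $2}')

[ "$priv" = "$pub" ] && echo "key pair matches"
```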

There you have it! Until next time ...

Monday, December 9, 2013

Random Password Generator

I recently needed a facility for easily generating random passwords for new accounts on a server. I looked at different packages but I didn't find anything that was satisfactory. I then stumbled across a blog post that talked about using openssl to generate random passwords. I started toying with openssl and was able to generate a 12-byte base64 string:
$ openssl rand -base64 12
However, I needed the passwords to be simple enough to give to users, which meant removing the special characters. I know, removing special characters shrinks the keyspace and weakens the password. However, these are SFTP accounts that are chrooted with no shell ... so I'm not too concerned. Moving forward, I decided to use sed to remove the characters:
$ echo "Elg6gD/+jGAl88/S" | sed 's/\///g' | sed 's/\+//g' | sed 's/\=//g'
Goodie. Now, I need to keep it out of memory and store it in a randomly named temporary text file; mktemp to the rescue! With that in mind, I now had everything I needed to build a function:
makepass (){
    local TMPFILE=$(mktemp)
    openssl rand -base64 12 > $TMPFILE
    sed -i 's/\///g' $TMPFILE
    sed -i 's/\+//g' $TMPFILE
    sed -i 's/\=//g' $TMPFILE
    cat $TMPFILE; rm -f $TMPFILE
}
Now, implementing it was easy as one, two, three:
PASS=$(makepass)
echo $PASS | passwd --stdin NewUser
Huzzah! Once the account setup is done, all I do is echo $PASS so that the SysAdmin (me) can provide it to the end user.
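As an aside, the same result can be had without the temporary file by piping through tr; a possible slimmed-down variant of the function:

```shell
# Equivalent generator with no temporary file: one pass through tr strips
# the base64 specials (and the trailing newline) in a single step
makepass() {
    openssl rand -base64 12 | tr -d '/+=\n'
}

PASS=$(makepass)
echo "$PASS"    # a short alphanumeric string, roughly 16 characters
```

Since nothing touches disk, there's also no file to clean up (or to leak the password if the script dies mid-run).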

Until next time and as Richard Stallman says, "Happy hacking, folks."

Wednesday, August 22, 2012

xfce, xterm & Terminal

This is just a quick post because I felt the need to share this information. I have always been a slave to GUI eye candy even though my true love is the CLI. Recently, KDE (and not just GNOME) has shoved its head so far up its own rear end that it has decided to compromise ease-of-use and speed for a poorly written, resource hog of a desktop environment. So, when I decided to install a GUI on my Debian laptop, I went with xfce because it reminded me of the days of pleasant eye candy + functionality + ease-of-use + resource savvy. Yes, that's quite a mouthful, but that's honestly how I feel.

At any rate, I needed a way to set a default size for my Terminal window but it wasn't as easy as clicking Edit > Preferences. However, it was as easy as this:

matthew@intrepid$ vi ~/.config/Terminal/terminalrc

I changed the following line:

MiscDefaultGeometry=108x30
I also changed a few others, but it was as simple as updating the aforementioned line with the geometry I wanted. If you're unsure of how to read the geometry:

Number of characters per line = 108
Number of lines = 30

For those that are picky about the location of the window when the application starts, you can change your configuration line to look like this:

MiscDefaultGeometry=108x30+0+0
+0+0 places the window in the upper left hand corner of the screen. Here is how to read the window placement:


MiscDefaultGeometry=<columns>x<lines>+<xoffset>+<yoffset>

With regards to screen geometry, 0,0 is the top left corner of the screen. The +xoffset is how many pixels from the left edge of the screen you want the window shifted. The +yoffset is how many pixels down from the top of the screen you want the window shifted.

Now, given all this information, if I wanted my window to have a size of 108x30 and open at the center of my screen, I would need to work out the pixel offsets from my screen resolution and set the geometry line in terminalrc accordingly.
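Centering takes a little arithmetic, since the offsets are in pixels while the window size is in characters. The screen resolution (1920x1080) and character cell size (8x16 pixels) below are assumptions; measure your own font before trusting the result:

```shell
# Work out centered offsets for a 108x30 window; the screen and character
# cell dimensions are assumptions -- substitute your own
screen_w=1920; screen_h=1080    # assumed display resolution
cell_w=8; cell_h=16             # assumed pixel size of one character cell
cols=108; rows=30

xoff=$(( (screen_w - cols * cell_w) / 2 ))
yoff=$(( (screen_h - rows * cell_h) / 2 ))
echo "MiscDefaultGeometry=${cols}x${rows}+${xoff}+${yoff}"
# prints MiscDefaultGeometry=108x30+528+300
```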
Now, on my system, xterm is the default terminal emulator. Before making changes to your system, verify that this is the correct syntax to use with your terminal application; otherwise you could experience unexpected behavior.

Until next time ...