My Little Corner of the Net

Using Mailman with Vesta on CentOS 6

About a year and a half ago I moved several of my websites to a server running the VestaCP control panel. At the time I posted an extensive review of the software, including a footnote that I found a way to get Mailman working with it. Apparently this is something a lot of people want to do because, since then, several people have contacted me looking for instructions. Life’s been hectic, and this wasn’t on the top of my mind, but I promised that I’d write a post. So, better late than never, here it is…

This tutorial will help you get GNU Mailman 2.1.x running on a Linux server. It’s geared toward CentOS 6.x, but will probably work with other Linux distros, although some file paths may change. It also assumes a standard VestaCP installation, with both Apache and Nginx running on the server.

Before we get too far into this, I should also point out that this tutorial only gets Mailman running on a Vesta server; it does not integrate it with the Vesta web interface. This means that you, as root, will need to set up new lists on the command line—you won’t be able to let your users create their own lists. Once a list is set up, however, it can be completely administered through the Mailman web interface, so hopefully this won’t be too big a deal for most situations.

Mailman is available as a CentOS package, but at the time that I did the install, several of the big email providers had recently made changes to their DMARC policies that broke older versions of Mailman, so I chose to build from the then-latest source release, which included new workarounds to address the issue.

When I did my installation, Mailman 3 was still in beta, and since it wasn’t yet stable, and since I was moving existing Mailman 2.1 lists, I chose to stick with 2.1. Mailman 3 is now generally available and has some interesting new features, but it’s a significantly different beast and this tutorial probably won’t be helpful if you want to jump to 3.0. Fortunately, Mailman 2.1 is still supported and still receives regular updates, with 2.1.20 being the most recent version as of this writing.

This tutorial assumes that System V init is used to manage services on the server. Many newer Linux distros, including CentOS 7, have switched to systemd, which uses a different configuration format. If you’re using a systemd-based distribution, you’re on your own to figure that part out, as I haven’t yet tried it myself (with Mailman, anyway).

There are five main parts to getting Mailman running on Vesta:

  • Installing prerequisites
  • Building and installing Mailman
  • Configuring Exim
  • Configuring Apache and Nginx
  • Creating your lists

While Mailman 2.1 supports multiple domain names, it does not allow the same list name to be used multiple times on different domains. In other words, if you host cats.com and dogs.com on the same server, and you create a customers@cats.com list, you can’t also create a customers@dogs.com list. While this wasn’t really a deal breaker for me, I decided to come up with a workaround anyway, taking my lead from cPanel. cPanel appends the domain name onto the list name (i.e. customers_cats.com), but somehow strips it out in the email messages so that users only see customers@cats.com. I couldn’t figure out exactly how cPanel does this, but I came up with a pretty good facsimile by using Exim’s address rewriting features.

Installing Prerequisites

Mailman requires that a C compiler and the Python development libraries be installed on the server; neither is installed by default. In addition, the pip command is required to install the dnspython Python module:

yum install -y gcc gcc-c++ python-devel python-pip

Now run the following command to install dnspython:

pip install dnspython
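
If you’d like to verify that the module installed correctly, a quick sanity check is to ask Python to print the dnspython version:

python -c 'import dns.version; print dns.version.version'

This should print the version number of the installed dnspython package.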

Building Mailman

We’ll start by downloading the latest release of the 2.1 branch of Mailman (2.1.20 as of this writing). Check http://launchpad.net/mailman to be sure you’re using the latest version.

wget https://launchpad.net/mailman/2.1/2.1.20/+download/mailman-2.1.20.tgz

Extract the package:

tar xvzf mailman-2.1.20.tgz

Switch to the directory that was created when the package was extracted:

cd ./mailman-2.1.20

Create the account and group that Mailman will run under:

useradd -r mailman

Mailman expects that the directories it will be installed into exist when you start the installation. Create those directories and set the ownership and permissions that Mailman requires:

mkdir -p /usr/local/mailman /var/mailman
chown mailman.mailman /usr/local/mailman /var/mailman
chmod 02775 /usr/local/mailman /var/mailman

Run the configure script to ensure all necessary libraries are available and to get Mailman ready to build:

./configure --prefix=/usr/local/mailman --with-var-prefix=/var/mailman --with-mail-gid=mailman --with-cgi-gid=apache

Build the package:

make

And install it:

make install

Run the check_perms script to ensure that permissions are as Mailman expects them to be:

/usr/local/mailman/bin/check_perms

If the above script returns any errors (and it probably will), run it again with the -f flag to have it try to fix the errors. In some cases, you may need to do this a few times before everything works.

/usr/local/mailman/bin/check_perms -f

Copy the /usr/local/mailman/scripts/mailman file to /etc/init.d. This is the script that will be used to start and stop the mailman service:

cp /usr/local/mailman/scripts/mailman /etc/init.d

Copy the mailman crontab file to /etc/cron.d so that Mailman’s periodic tasks, such as sending out email reminders of posts awaiting moderation and managing the list archive, are run regularly.

cp /usr/local/mailman/cron/crontab.in /etc/cron.d/mailman

Now set a default password for Mailman. This password can be used in place of any list’s administrator password, so be sure to select a strong password.

/usr/local/mailman/bin/mmsitepass

Mailman requires that a default list, aptly named “mailman,” be created. You’ll be prompted for an administrator email address and a list password when you run this command. You can ignore the list of aliases that it displays.

/usr/local/mailman/bin/newlist mailman
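
To confirm that the list was created, you can ask Mailman to enumerate the lists it knows about; the “mailman” site list should appear in the output:

/usr/local/mailman/bin/list_lists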

Configure the system so that Mailman is automatically started when the server boots up:

chkconfig mailman on

And start the mailman service:

service mailman start

Mailman writes its log files to /var/mailman/logs. To be more consistent with other services, thereby making the logs easier to find, symlink Mailman’s logs directory in /var/log:

ln -s /var/mailman/logs /var/log/mailman

You’ll also want to rotate these logs on a regular basis so that they don’t get too big. To do this, create a new logrotate configuration file. (Note: I prefer vi and use it in any instructions that require editing files in this tutorial, but feel free to substitute nano or your favorite editor if you don’t know vi or prefer something else.)

vi /etc/logrotate.d/mailman

Add the following contents to that file:

/var/log/mailman/bounce /var/log/mailman/digest /var/log/mailman/error /var/log/mailman/post /var/log/mailman/smtp /var/log/mailman/smtp-failure /var/log/mailman/qrunner /var/log/mailman/locks /var/log/mailman/fromusenet /var/log/mailman/subscribe /var/log/mailman/vette {
  missingok
  sharedscripts
  postrotate
  /usr/local/mailman/bin/mailmanctl reopen >/dev/null 2>&1 || true
  endscript
}
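
Logrotate has a debug mode that parses a configuration file and reports what it would do without actually rotating anything, which is a handy way to make sure the new file is free of syntax errors:

logrotate -d /etc/logrotate.d/mailman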

Configuring Exim

The next step in the configuration process is to integrate Mailman with Exim, Vesta’s mail transfer agent (MTA). This allows Vesta to properly route incoming messages sent to a Mailman list to the Mailman software for processing. Exim is actually a preferred MTA to use with Mailman because its ability to route messages based on directory listings means that no per-list Exim configuration is necessary. By contrast, most other MTAs require you to set up several email aliases for each list you create.

The next several steps require editing the /etc/exim/exim.conf file. To make what’s going on more understandable, I’m going to start at the bottom of the file and work my way toward the top.

First, create a backup of the conf file, just to be safe:

cp /etc/exim/exim.conf /etc/exim/exim.bak

Then open it to edit.

vi /etc/exim/exim.conf

As mentioned above, Mailman does not support lists on multiple domains that share the same name. To work around this, I decided to follow cPanel’s lead and append the domain name to the list name in the form listname__domain.tld__. (Note that the trailing double underscore is necessary to properly parse addresses that contain a command suffix, such as “-unsubscribe.” I couldn’t get the rewrite to properly parse these addresses without the underscores there.) This creates list email addresses that look like listname__domain.tld__@domain.tld, which is undesirable. When mail is sent, however, Exim rewrites the email addresses it finds in the message into the preferred form, listname@domain.tld, so end users see the cleaner form of the address.

Jump to the bottom of the file and find a line that starts with “begin rewrites” and add the following after that line:

#messages generated by Mailman will have the format of list__domain__@domain
#this rule will rewite them to list@domain before they are delivered
^([a-z0-9-\.]+)__[a-z0-9-\.]+__(-[a-z0-9]+)?@(.*) $1$2@$3 SEh
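
Once you’ve saved your changes (we’ll be making a few more below), you can check the rule with Exim’s rewrite test mode, which shows how a given address would be rewritten in each header. For example, using a hypothetical “customers” list on cats.com:

exim -brw customers__cats.com__@cats.com
exim -brw customers__cats.com__-unsubscribe@cats.com

The rewritten headers should show customers@cats.com and customers-unsubscribe@cats.com, respectively.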

The above rewrite rule will strip the extraneous domain name from the list name when messages are sent, but only when the address appears in specific email headers. While this ensures that the To, From, and Cc headers, for example, are rewritten, it does not rewrite some of the more obscure headers, such as those that instruct mail clients how to handle unsubscribe requests. For this reason, we include two transports and two routers, the next features we’ll configure, that will properly route inbound messages that use either address format.

Transports tell Exim how to handle a given incoming message. In this case, the transports instruct Exim, when it receives a message associated with a list, to open a pipe to Mailman through which it passes the contents of the message, allowing Mailman to take over the processing.

Find the line “begin transports,” and add the following lines after this line but before the “begin rewrites” line.  There are several other transports already defined.  Where you put these doesn’t matter, as long as they’re in the “transports” section of the file.

mailman_transport:
  driver = pipe
  command = /usr/local/mailman/mail/mailman \
    '${if def:local_part_suffix \
    {${sg{$local_part_suffix}{-(\\w+)(\\+.*)?}{\$1}}} \
    {post}}' \
    ${lc:$local_part}__${lc:$domain}__
  current_directory = /usr/local/mailman
  home_directory = /usr/local/mailman
  user = mailman
  group = mailman
mailman_transport_norewrite:
  driver = pipe
  command = /usr/local/mailman/mail/mailman \
    '${if def:local_part_suffix \
    {${sg{$local_part_suffix}{-(\\w+)(\\+.*)?}{\$1}}} \
    {post}}' \
    ${lc:$local_part}
  current_directory = /usr/local/mailman
  home_directory = /usr/local/mailman
  user = mailman
  group = mailman

Finally, we create two routers. These tell Exim where to look to determine whether an incoming message has a valid destination on the server and, when a match is found, which transport it should be routed to for processing.

Add the following lines to the file between “begin routers” and “begin transports.”  As with the transports, positioning doesn’t matter, as long as both definitions appear before the “begin transports” line of the file.

mailman_router:
  driver = accept
  require_files = /usr/local/mailman/mail/mailman : \
    /var/mailman/lists/${lc::$local_part}__${lc::$domain}__/config.pck
  local_part_suffix_optional
  local_part_suffix = -admin : \
    -bounces : -bounces+* : \
    -confirm : -confirm+* : \
    -join : \
    -leave : \
    -owner : \
    -request : \
    -subscribe : \
    -unsubscribe
  transport = mailman_transport
mailman_router_norewrite:
  driver = accept
  require_files = /usr/local/mailman/mail/mailman : \
    /var/mailman/lists/${lc::$local_part}/config.pck
  local_part_suffix_optional
  local_part_suffix = -admin : \
    -bounces : -bounces+* : \
    -confirm : -confirm+* : \
    -join : \
    -leave : \
    -owner : \
    -request : \
    -subscribe : \
    -unsubscribe
  transport = mailman_transport_norewrite

Save the changes to the file and restart exim to enable them:

service exim restart
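
Exim’s address test mode (-bt) is a convenient way to confirm that the new routers are being applied. Once you’ve created a list (covered below), both address forms should be accepted; using a hypothetical “customers” list on cats.com:

exim -bt customers@cats.com
exim -bt customers__cats.com__@cats.com

The first address should be picked up by mailman_router and the second by mailman_router_norewrite. Until the list’s config.pck file exists, both routers will decline the address, so don’t be alarmed if these fail before any lists have been created.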

Configuring Apache

Configuring Apache was a bit of a challenge because Mailman is very specific about which user account it runs under. Vesta, on the other hand, uses suexec to run all scripts under the UID of the site owner, which breaks Mailman. The solution is to run Mailman on a different port, where it is not bound by suexec’s rules. Later, we’ll set up a proxy in Nginx so that users won’t need to remember complicated URLs to manage their lists.

Mailman’s web-based admin tool has several small images at the bottom, which it expects to find in Apache’s /var/www/icons directory. Create symlinks in the icons directory that point at these images.

cp -s /usr/local/mailman/icons/* /var/www/icons

Create a new configuration file for Mailman’s Apache configuration.

vi /etc/httpd/conf.d/mailman.conf

Add a listen directive at the very top of this file. This tells Apache to bind to port 8090 when it starts and to listen for HTTP connections on this port.

Listen 8090

Next, add a VirtualHost block to handle requests coming in to this port.

<VirtualHost *:8090>
  ScriptAlias /mailman/ /usr/local/mailman/cgi-bin/
  <Directory /usr/local/mailman/cgi-bin/>
    AllowOverride None
    Options ExecCGI
    Order allow,deny
    Allow from all
  </Directory>

  Alias /pipermail/ /usr/local/mailman/archives/public/
  <Directory /usr/local/mailman/archives/public/>
    Options Indexes MultiViews FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

Save the file and restart Apache:

service httpd restart

Now, depending on your firewall settings, you may be able to access the Mailman web interface at http://domain.tld:8090/mailman/listinfo. If you can’t, don’t worry about it as we’ll set that up next.
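
Even if the port is blocked from the outside, you can check that the listener and the Mailman CGIs are working from the server itself with curl; a 200 response means everything is wired up correctly:

curl -I http://127.0.0.1:8090/mailman/listinfo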

Configuring Nginx

Next, Nginx needs to be configured on each site to proxy requests to Mailman’s Apache listener on port 8090. Fortunately, we only need to apply this change to a few template files. Vesta offers a tool to reapply the template to a site’s configuration, which is a big help if you already have a lot of sites configured on the server.

Vesta stores its Nginx configuration templates in /usr/local/vesta/data/templates/web/nginx. This directory contains templates for each of the “proxy templates” you can choose when setting up a hosting package. Files with the extension .tpl are for HTTP configurations and .stpl files are for HTTPS configurations. You’ll need to make the following edits to each of the templates in the directory (or at least each of the templates that you use on your server).

We’ll start with HTTPS. Mailman uses relative URLs for most of its interface, so it runs fine on both HTTP and HTTPS. The user administration pages, however, use absolute URLs, so when managing users you can be dropped to HTTP unexpectedly. While it’s not a standard Nginx module, the CentOS builds include ngx_http_sub_module, which can do string substitutions on page output. We can use this to rewrite the HTTP URLs in Mailman’s output to HTTPS to avoid problems.

Open the hosting.stpl file in the directory noted above:

vi /usr/local/vesta/data/templates/web/nginx/hosting.stpl

Between the end of the block that begins “location /error/” and the beginning of the block that starts “location @fallback”, add the following:

location ~ ^/((mailman|pipermail)/?.*)$ {
  proxy_pass http://127.0.0.1:8090/$1$is_args$args;
  sub_filter http://%domain_idn% https://%domain_idn%;
  sub_filter_once off;
}
location /icons/ {
  alias /var/www/icons/;
}

Save the file and do the same for the other .stpl files in the directory.

There are two options for the HTTP configuration. If you know that all of your sites will have SSL certificates, as mine do, you can use the following configuration to direct all HTTP requests to their HTTPS counterparts. I recommend this approach if you can support it.

Open the hosting.tpl file:

vi /usr/local/vesta/data/templates/web/nginx/hosting.tpl

Again, between the “location /error/” and “location @fallback” blocks, add the following:

location ~ ^/((mailman|pipermail)/?.*)$ {
  rewrite ^(.*)$ https://$host$1;
}
location /icons/ {
  alias /var/www/icons/;
}

And, like before, repeat the change on the other .tpl files as well.

If you can’t rely on all of your sites having SSL available, you can instead use a variation on the HTTPS configuration that doesn’t include the HTTP-to-HTTPS conversion. Follow the same instructions for the above HTTP changes, but add the following instead:

location ~ ^/((mailman|pipermail)/?.*)$ {
  proxy_pass http://127.0.0.1:8090/$1$is_args$args;
}
location /icons/ {
  alias /var/www/icons/;
}

Now that the templates are updated, we need to apply them to each existing site. To do this, run the following command for each hosting user account (not domain) on the server:

/usr/local/vesta/bin/v-rebuild-web-domains username
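
If you host more than a handful of accounts, you can loop over them instead of running the command by hand. This is a rough sketch that assumes v-list-users’ plain output has the username in its first column:

for user in $(/usr/local/vesta/bin/v-list-users plain | cut -f1)
do
    /usr/local/vesta/bin/v-rebuild-web-domains $user
done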

Once all accounts have been updated, restart Nginx to make the changes take effect:

service nginx restart

Creating Lists

Mailman is now up and running. All that’s left to do is to start creating lists.

When setting up a new list, remember that it is necessary to use the full address syntax, listname__domain.tld__@domain.tld for the list address. You should also specify both the emailhost and urlhost options to ensure that the list is configured correctly.

/usr/local/mailman/bin/newlist --emailhost=domain.tld --urlhost=www.domain.tld listname__domain.tld__@domain.tld

You’ll be prompted for an email address of the list administrator and for a list password. When the list is set up, you can access it at http://www.domain.tld/mailman/admin/listname__domain.tld__.

Since, through the Exim rewrites, we’re running the list from a different address than the one we configured Mailman to use, one small change to the list’s settings is necessary. Open the list’s administrative page in a browser and log in with the password you provided in the previous step.

Once logged in, click on “Privacy Options” in the “Configuration Categories” menu. Then click on “Recipient Filters.”

On the Recipient Filters page, find the field labelled “Alias names (regexps) which qualify as explicit to or cc destination names for this list” and add the preferred list address (listname@domain.tld). Without this, Mailman will not recognize the preferred address as being a valid list address and will hold any messages that are sent using it for moderation.

Click the “Submit Your Changes” button to save the change. Then make any additional settings changes you require and add recipients to the list under “Membership Management” in the “Configuration Categories” menu. Your list is now ready to use.

Remote Access on a Raspberry Pi

OK, so you have a Raspberry Pi running headlessly (no keyboard or screen) on your network and you want to do something with it. What do you do? Well, there’s SSH, of course, but what if you want to play with any of the Pi’s graphical tools?

The Raspbian OS (as well as most of the other general-use OS options available on the Raspberry Pi site) runs an X Window (X11) server by default. This provides the GUI when the Pi is plugged in to a screen, but it can also be accessed remotely. This post looks at several ways to do that.

TL;DR version: most of these examples are either too difficult to set up or too impractical to use reliably. For a no-nonsense tutorial on a tool that works pretty well, jump straight to the last section, XRDP.

X11 Forwarding

On Raspbian, the Pi’s SSH server has X11 forwarding turned on by default. This means that you can run GUI programs on your Pi, but display the interface on your local desktop, provided your local desktop has an X server itself. If you’re on Linux or some other form of graphical Unix, you’re good to go. Mac OS X users will need to install an X server, such as XQuartz. Then ssh into the Pi as you normally would, but add a -Y flag to enable the local machine to receive the X11 data (replace the IP in the example with that of your Pi, of course):

ssh -Y pi@192.168.1.123

Once you log in, you’ll have a prompt that looks like any normal SSH session, but try running an XWindow program, like xeyes:

xeyes &

You should see a window open with two eyes in it that follow your mouse around the screen. Note the ampersand at the end of the command. This tells the Linux shell to move the xeyes process to the background, allowing the shell to return a prompt for the next thing you want to run. If you don’t include it, you’ll need to close xeyes before you can run something else.

Windows users don’t need to feel left out, either, as there are a number of X server implementations for Windows, such as XMing, Cygwin/X, and XWin32 (commercial).

The advantage of X11 forwarding is that it’s already built in to the Raspbian OS and doesn’t require a lot of work to set up. The downsides are that you need to know the names of the programs you want to run, since you don’t have the GUI menu bar to select from, and that it can be a little tricky to get working on the client desktop, especially if that machine runs Windows.

On the first point, however, you can launch the LXDE (Raspbian’s graphical environment) menu system by running:

lxsession &

This opens the Raspberry Pi menu bar on your local screen so you can easily launch programs. It won’t create a windowed version of the Pi desktop as you might expect, though; instead you get a weird mix of your local desktop and the remote desktop that’s confusing and difficult to use. Some X servers have an option to switch to a windowed mode, but if you want the windowed interface without a lot of fuss, you may want to consider another option, such as one of the ones below.

VNC

VNC, which stands for Virtual Network Computing, is a graphical desktop sharing protocol developed at the Olivetti & Oracle Research Lab (later acquired by AT&T) in the late 1990s. Since the code for the protocol was open sourced, many different clients and servers have been developed for nearly every platform you might encounter.

There are plenty of tutorials for getting VNC running on a Raspberry Pi, so I won’t spend time on that here. If you want to try it, this tutorial on the Raspberry Pi site will get you going.

You’ll also probably need to install a VNC client on your desktop—TightVNC seems to be one of the more popular choices, as are RealVNC (from the original developers of the protocol), UltraVNC (Windows only), and Chicken (formerly Chicken of the VNC, Mac only). Mac users take note: there’s already a VNC client built in to OS X—it’s called “Screen Sharing.app.” It’s buried pretty deep in the system, so you won’t find it in your Applications folder, but it should come up in a Spotlight search.

The problem with VNC is that its underlying Remote Framebuffer (RFB) protocol sends large bitmap copies of the remote screen to the client, even when only a small portion of the screen has changed, which means it can feel extremely sluggish, even when doing simple tasks, like editing a document.

Chrome Remote Desktop

Chrome Remote Desktop is a remote access solution created by Google and available for the Chrome browser via the Chrome Web Store. Rather than connecting directly, machines running the Chrome Remote Desktop service register themselves with Google’s servers when they start up, and Google serves as a proxy between the remote machine and the client accessing it, in a way similar to how instant messaging services work. This allows connecting over the Internet to remote computers that are sitting behind NAT firewalls, which is not possible with any of the other services listed here.

I use Chrome Remote Desktop regularly to access the Mac Pro in my office when I need to work remotely. The service uses SSL encryption to ensure privacy and Google’s VP8 video format to send the screen image, and it’s very responsive.

I have not tried Chrome Remote Desktop on a Raspberry Pi, but others have reported good luck with it on the older Raspbian Wheezy. Unfortunately, Chromium, Chrome’s open source cousin, is not available in the new Raspbian Jessie repositories (yet?), so short of building from source, this isn’t an option for me at the moment. Also, since it requires using Chromium to set it up, the Pi needs to be connected to something with a screen, at least initially (or you could configure it with X11 Forwarding).

NX

NX is a protocol developed by NoMachine, with client and server implementations for Linux, Mac, and Windows. Prior to version 4, NX was open source software that was tunneled to the client over SSH, so it was extremely easy to get running on a Linux box with very little fuss. Unlike VNC, however, NX uses compression to reduce the transferred data so that connections are responsive, even over slower networks.

NX has been my go-to tool for Linux remote access for years, but unfortunately no precompiled versions of it, or of any of its open source forks, are readily available for the Raspberry Pi, and compiling it from source is tricky given the Pi’s limited resources.

XRDP

Fortunately I discovered XRDP some time ago. If you’re a Windows user, you might be familiar with the Windows Remote Desktop Protocol (RDP), which has been included as part of most Windows distributions since Windows XP. RDP achieves very fast speeds by sending only the portions of the screen that have changed to the client. Because of this, many programs appear almost as responsive remotely as they do when logged in to the machine directly.

XRDP is implemented as a hybrid between VNC and RDP. The actual remote control of the machine is done with VNC, but data is sent back to the client through RDP, where it can benefit from the efficiencies of that protocol. This helps make XRDP faster than VNC, since much of the VNC overhead is never sent over the wire. RDP is also a widely supported protocol, with clients built in to most Windows computers.

To install XRDP, simply run the following commands on the Pi:

sudo apt-get update
sudo apt-get install xrdp

Once XRDP is installed, you’ll want to make one small change to the configuration. By default, XRDP is configured with limited encryption, so someone could conceivably eavesdrop on your session. To fix this, open the xrdp.ini file on your Pi in your favorite editor (mine is vi (vim, actually), but feel free to substitute nano or something else if you’re not comfortable with vi):

sudo vi /etc/xrdp/xrdp.ini

In the general section of the file, find the line that starts with crypt_level and set it to high:

crypt_level=high

The crypt levels are defined as follows:

– low — Data you send to the server is encrypted with 40-bit RC4 encryption, but the data you receive is sent in the clear.
– medium — Data is sent in both directions using 40-bit RC4 encryption.
– high — Data is sent in both directions using 128-bit RC4 encryption.

Once you’ve set the crypt_level, save the file and restart the service:

sudo service xrdp restart
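
If you’d like to confirm that the service came back up before you try to connect, check that something is listening on RDP’s standard port, 3389:

sudo netstat -tlnp | grep 3389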

Then open your Remote Desktop Connection app (it’s in the Accessories folder of nearly every Windows machine, the official Microsoft version is a free download in the Mac App Store, and there are lots of third-party versions for Linux, iOS, and Android) and type in your Pi’s IP address:

Remote Desktop Setup

When you connect, you’ll be prompted for your credentials on the Pi. Leave the “module” set to “sesman-Xvnc,” enter your username and password, and click OK.

XRDP Login Window

In a few seconds, you should have full access to your Raspberry Pi desktop.

XRDP Desktop

Dead Simple Dynamic DNS Updater

I run a VPN on my home network which lets me access my systems and files remotely and gives me a secure route to the Internet when I have to use questionable networks. Since my Internet provider does not give me a static IP address, I rely on dynamic DNS services to keep my IP mapped to a hostname I can always use to “phone home.”

Since the DNS servers for the service I’ve been using seemed to vanish a couple weeks ago, I started “shopping” for a new provider and came across dtDNS. dtDNS allows you to set up five dynamic DNS hostnames for free, or you can pay a $5.00 one-time fee to get unlimited (“within reason,” according to the site) hosts.

Once I had my new hostname set up, it was time to set up a client app to keep my IP in sync. I had some trouble getting ddclient, which I’ve been using for a while now, to work with dtDNS, and the Linux options on dtDNS’s update clients page were either no longer available, required Java, or expected the machine to have a public IP address, which mine does not. So with a bit of research, I wrote my own.

My updater is a simple shell script with less than 10 lines of code. It uses icanhazip.com to find the external IP address, so it will work on systems that don’t have public IPs, and it only pushes a change request when it sees that the IP has changed.

#!/bin/bash
# dtDNS Dynamic IP update Script
# Author: Jason R. Pitoniak 
# 
# Copyright (c) 2015 Jason R. Pitoniak

# Set your dtDNS hostname and password below
HOSTNAME='MYNAME.dtdns.net'
PASSWORD='PASSWORD'

# We need to find your external IP address as your system may have a non-public address
# on your local network. icanhazip.com (or any number of other sites) will do this for us
EXTIP=`curl -s http://icanhazip.com/`

# Now we check which IP dtDNS currently has recorded by checking their DNS server
LASTIP=`nslookup $HOSTNAME ns1.darktech.org | tail -2 | awk '{ print $2 }'`

# If the current external IP is different from the one with dtDNS, update dtDNS
if [ "$EXTIP" != "$LASTIP" ]
then
    curl "https://www.dtdns.com/api/autodns.cfm?id=$HOSTNAME&pw=$PASSWORD&ip=$EXTIP"
fi

It should run on any Unix-like system including Mac OS X. It will probably even work on Windows with cygwin, but I haven’t tried. Just copy it to a file named dtdns-update somewhere on your system, update the HOSTNAME and PASSWORD variables to reflect your account, and chmod the file so that it is accessible only to the user that will run it:

chmod 700 dtdns-update

To test the script, call it from the command line:

/path/to/dtdns-update

The script will return whatever response it receives from the dtDNS update API, whether it is an error or success message. If nothing is returned it means that dtDNS already has the correct IP, so no action was taken.

Now we’ll set up a cron job to run the script periodically. To do this, enter the following on the command line:

crontab -e

A text editor will open. Add the following to the end of the file:

*/5 * * * * /path/to/dtdns-update >/dev/null 2>&1

This will run the script once every five minutes. You can adjust the interval as you feel is appropriate. Once you save the file, the new cron job will be installed and will begin running within a few minutes. Now you can rest assured that your IP address will always be up to date with dtDNS.

Protip: If your dtDNS hostname is too difficult to easily remember and you own a domain name, you can set up a hostname on your own domain that points to your dtDNS name. If you maintain your own DNS, create a CNAME record for whatever host name you want with your dtDNS hostname as the target. If you don’t maintain your DNS yourself, ask your host if they can configure this for you.
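
For example, in a BIND zone file the record might look something like this (using a hypothetical “home” hostname and the placeholder dtDNS name from the script above):

home    IN    CNAME    MYNAME.dtdns.net.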

Domain Aliases with Exim 4

As I’ve noted before, I recently moved several of my sites to a server running VestaCP. One of the convenient features of Vesta is the ability to specify any number of domain aliases when setting up a new site. I’ve now learned, however, that Vesta only aliases these domains for web access, not for email.

One of the sites I host has the .com, .net, and .org variants of its domain. Being a not-for-profit organization, we use the .org as our primary domain and do 301 redirects to it on the web, but for historic reasons much of our email comes through several .net addresses, so it was important to me to keep email flowing through this variant.

We currently have around 100 email addresses hosted with this site, most of which are aliases. Since Vesta configures Exim, its standard SMTP server, to check per-domain text files for aliases, it would have been trivial for me to set up additional files for the additional domains. However, since the aliases change on a somewhat-regular basis, I wanted to avoid having to duplicate files. And since the domain name is part of the alias in these files, simple symlinks aren’t an option.

The Internet wasn’t much help here, either. I found several references on how to set up individual redirects on each of the domains, and references on how to create wildcard redirects that send all mail on a domain to a single address on another domain, neither of which works for what I want to do. I simply want to redirect any mail that comes in on one of the secondary domains to the same “local part” on the primary.

Not being an Exim expert, it took some work to get what I wanted. After some trial and error, I finally found a working solution. I’m not sure that this is the best way to handle this, but since the Internet seems to be devoid of this specific solution, I figured I’d share it here. I should also point out that I’m running this on CentOS 6; YMMV with other platforms and configurations.

To start, I created a file, /etc/exim/domain_aliases, that simply maps each secondary domain to the primary domain to which it should forward:

sudo vi /etc/exim/domain_aliases

The contents of this file are as follows:

ourdomain.com: ourdomain.org
ourdomain.net: ourdomain.org

Next, I added a new router to /etc/exim/exim.conf. This can go anywhere after the “begin routers” section of the config as long as it falls before the “begin transports” section.

sudo vi /etc/exim/exim.conf

The configuration to add looks like this:

domain_aliases:
     driver = redirect
     data = ${extract{1}{:}{${lookup{$domain}lsearch{/etc/exim/domain_aliases}{"$local_part@$value"}}}}
     require_files = /etc/exim/domain_aliases

The most important line here is the one that starts with “data.” Since it’s rather complex, I’ll break it down.

The first part of this expansion is the “extract.” This takes a string, splits it at a delimiter, and returns the field found at a given position, optionally substituting success and failure strings depending on whether anything was found. The general syntax of this directive is as follows:

${extract{search_parameter}{delimiter}{search_string}{return_string_success}{return_string_fail}}

* search_parameter is an integer that specifies which field position will be extracted. In this example, we want the first (leftmost) field.
* delimiter is the character used as the delimiter between columns. I’m using a colon.
* search_string is the full string being searched. This can be a simple string or, as in this case, a more complex expression. I’ll explain what I’m doing in more detail below.
* return_string_success is returned when a successful match is made. If this parameter is not specified, the extracted value itself is returned instead.
* return_string_fail is returned when no match is made. It defaults to an empty string when not specified.

In this implementation a second directive, a lookup, is inserted in place of the search_string. Lookups match a value to a key and take this form:

${lookup{key}search_type{path}{return_string_success}{return_string_fail}}

* key is a string value containing the key we want to find. In this case we pass in $domain which holds the domain name to which the incoming message was sent.
* search_type is the type of search we want to do. I’m using lsearch which allows searching in the individual lines of a file.
* path is the absolute path to the text file to be used for the lookup. This is the domain_aliases file created above.
* return_string_success and return_string_fail work the same way they do in the extract. In this case, if a match on the original domain is found we return a new email address, built from the $local_part (everything to the left of the “@“) of the original email address and the $value returned by the lookup, a variable containing the string result of the match. Again, if no match is made, an empty string is returned.

Together, these two directives provide Exim with a new destination address for messages coming in via the secondary domains. Once the match is made, Exim restarts the routing process, now looking for a handler for the message using the newly returned address.
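
To make that concrete, here’s roughly how the data line expands for a hypothetical message addressed to info@ourdomain.net, given the domain_aliases file above:

$domain     = ourdomain.net
$local_part = info
lookup{ourdomain.net}lsearch{/etc/exim/domain_aliases}   matches, so $value = ourdomain.org
success string "$local_part@$value"                      expands to info@ourdomain.org
extract{1}{:}{info@ourdomain.org}                        returns info@ourdomain.org (no colon, so the whole string)

The router therefore redirects the message to info@ourdomain.org.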

There is still one more step, however, because Exim needs to know that it is able to accept mail for the secondary domains. To do this, the secondary domains need to be added to Exim’s local_domains and relay_to_domains lists. While there’s a number of ways to do this, I found it easiest to add another reference to my domain_aliases file.

First, find the lines in /etc/exim/exim.conf that start “domainlist local_domains” and “domainlist relay_to_domains.” Under Vesta, they’ll look something like this:

domainlist local_domains = dsearch;/etc/exim/domains/
domainlist relay_to_domains = dsearch;/etc/exim/domains/

Vesta stores each mail domain’s configuration in a directory named with the domain name, so it does a directory search (dsearch) and, if it finds a matching directory, it knows it can handle mail for that domain. We’ll add a second option: another lsearch of the domain_aliases file, like so:

domainlist local_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases
domainlist relay_to_domains = dsearch;/etc/exim/domains/:lsearch;/etc/exim/domain_aliases

Finally reload Exim’s configuration and your secondary domains should start handling mail:

service exim reload
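
You can verify the redirection without sending any mail by using Exim’s address test mode. An address on one of the secondary domains (a hypothetical info alias here) should be redirected to the corresponding address on the primary domain and then routed normally:

exim -bt info@ourdomain.net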

Again, as I mentioned above, I am far from being an expert at configuring Exim, so I could be missing something completely obvious. Still, this seems like it would be a pretty normal thing to do and unless my Google-fu has failed me, it seems not many others are doing it. Until I learn of a better way, this is how I’m configuring my domains. If you know of a better way to handle multiple domains, or you find this helpful, please comment. Exim, like most popular daemons, is a Swiss Army Knife of possibilities. Getting the most from it takes time, patience, and the generosity of those willing to share their struggles.

Vesta CP

I’ve been running most of my personal sites from a VPS running the Interworx control panel on CentOS for the past several years. After a long, stable run, my operating system reached end of life and it was time to upgrade.

I’m pretty comfortable with the Linux command line, I have a ton of experience configuring Apache, and I’m pretty good at keeping MySQL up and running, so I could probably get away without a control panel. On the other hand, I have very little experience with mail servers and I like the convenience of a point-and-click interface to handle most of my administrative needs. Happy with Interworx, I considered buying a new Interworx license for the new server, but I also wanted to shop the competition a bit as well. That’s when I discovered that there are quite a few open source control panels available.

I started downloading some of the open source panels and installed them on VMWare virtual machines on my laptop to try them out. Most, I found, either had questionable histories in terms of security, didn’t seem to be in active development, or had horrible user interfaces. Others, like ISPConfig, took perceived security a bit too far, forcing PHP into such a small sandbox that much of my code, which follows industry best practices in terms of structure and security, would not work without extensive modification.

I finally tried and settled on Vesta, a relatively new PHP-based control panel launched by a Russian development team in early 2013. Vesta offers the essential features for hosting, like web, database, email, and DNS, without a lot of unnecessary cruft. Installation was easy and while I do think the UI and UX could use a little work, Vesta’s web front-end is cleanly designed and easy to use.

Installation

Vesta is designed to be installed on a “bare metal” server with just an operating system installed. While I chose RedHat-clone CentOS for my server, Vesta will also run on Debian-based systems like Ubuntu. Setup is as easy as downloading a script and running it on the command line; the script then uses the OS’s package manager to download and install all of its components, making it easy to keep things up to date as the OS releases updated packages.

Accounts

Vesta accounts are standard Linux accounts which are created in /home. Each account can host multiple websites and, I believe, accounts can be configured as resellers, so that they can create new accounts as well. Several quotas are enforced, including disk space, number of sites, and number of email accounts per site. You can also manage the account’s shell from the web interface, including a “nologin” option if you want to disable command line access to the account (and allow FTP access only).

Vesta stores all of the config files used by an account in the account’s home directory, symlinking them to the locations where their respective applications expect to see them. This makes backing up a site a snap—in fact, Vesta backs up every site automatically each night, keeping the last three backups available as a downloadable tar file.

When you add a site, a DNS zone is created automatically. You can also specify any number of alias domains and these will be set up automatically as well.

I did not see a way to “jail” (chroot) an account, though this isn’t important to me as I am the only user with direct access to the server.

One thing that bugged me a bit was that Vesta’s quota settings did not seem to allow for “unlimited” options. In Interworx I created an “unlimited” package to which I subscribe all of my sites, effectively disabling individual site quotas. In Vesta I simply set all of my limits extremely high, to the point that I should never exceed any of them.

Web

Vesta installs both Nginx and Apache web servers, and is the only control panel I’ve seen that does this. Nginx, known for being extremely fast, sits at the front end and handles most static content while proxying anything it can’t handle to Apache. This helps speed up response times for things like images and CSS files while still allowing most off-the-shelf web software, like WordPress and Drupal, to run without modification, since most of these packages are preconfigured to run on Apache.

There are several options available for Nginx, including the ability to use it as a reverse proxy cache. Caching requires a lot of memory, however, so I wouldn’t recommend trying this on a VPS.

Early on I was having an issue where Apache would keep consuming more and more memory until the server ran out of RAM, at which point MySQL would be shut down. Since MySQL is a requirement of most of my sites, this wasn’t acceptable, so I started trying to tune Apache to avoid it. After several failed attempts (I never did find a definite cause), I switched Apache to use Worker MPM (i.e. threaded processes) and the memory footprint of the server immediately dropped to almost zero. While Worker mode is not compatible with several Apache modules, experience has taught me that it can be a huge help in improving server performance. In fact, it probably negates the benefits of running Nginx now, but I don’t feel like trying to configure Nginx out of the picture.
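
For reference, on CentOS 6 the switch is a small change, assuming the stock httpd package (which ships a separate worker binary): point the init script at the worker binary in /etc/sysconfig/httpd and restart Apache.

# /etc/sysconfig/httpd
HTTPD=/usr/sbin/httpd.worker

service httpd restart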

Database

Vesta installs MySQL by default. It looks like a patch for adding PostgreSQL support is also available, but I haven’t tried it.

PHP

Vesta installs the OS’s PHP packages. By default, it runs PHP under mod_ruid which, by my understanding, is basically a variation of mod_php that runs scripts under the owner’s UID. This can be changed, on a per-site basis, to several other options including straight CGI and PHP-FPM.

I’ve decided to use mod_fcgid because this is what I have the most experience using and because it works well in the enterprise hosting environment I oversee at work. I did have to tweak the default settings a bit to get the best performance for my server’s resources, but I kind of expect that every server is going to need some degree of customization to balance available resources to desired performance level. With the tweaks in place, my PHP sites load quickly with minimal memory overhead.

CentOS 6 ships with PHP 5.3. After the installation I decided to upgrade to PHP 5.5 using the Webtatic repo. I basically did a “yum erase” on each of the installed “php” packages and installed the equivalent “php55w” package in its place (note that they are not a 1:1 match). So far I have not seen a single issue with this, though YMMV.
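
As a rough sketch of the swap for a few common packages (the exact package names and your installed list will differ, so check rpm -qa 'php*' first):

yum erase php php-cli php-common php-mysql
yum install php55w php55w-cli php55w-common php55w-mysql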

DNS

Vesta installs BIND 9 as its DNS server. The Vesta web interface makes it easy to configure manual DNS zones for domains not hosted on the server or for adding additional records to hosted domains.

With my old host, I had three IP addresses, one for the server and two to use as “separate” DNS servers. Without getting into the reasons why this is a bad idea, this is how I ran my DNS for the past several years, though it did burn me a couple times. My new host only allows one IP per server, so for a backup DNS I installed PowerDNS on another VPS I have with another provider in another datacenter. With PowerDNS’s MySQL storage engine and the concept of “supermasters” plus a tiny config change to Vesta’s main BIND configuration, the secondary server is updated automatically every time I add or change a domain in Vesta, making my NS2 server truly “set and forget.”

Mail

Vesta uses Exim as its MTA (SMTP) and Dovecot as its MDA (POP3 and IMAP).

Email accounts are configured in the Vesta web interface and can be set up with any number of aliases. Incoming messages can be forwarded elsewhere, with or without a copy being kept, and you can specify an auto-reply message to send when mail is received. RoundCube is installed for webmail.

Vesta will install SpamAssassin and ClamAV (clamd) automatically if a server has more than 3 GB of RAM. Mine does not, so I had to install them manually. SpamAssassin was not a problem, but on my first attempt at building the server, with 512 MB of RAM, I was not able to start clamd. After reconfiguring the VPS to have 1 GB of RAM, I was able to start clamd, but it consumed most of my available memory. At that point, I decided that I didn’t really need to virus scan my email on the server, so I disabled it. I enabled it again after seeing how little memory Apache was using after switching to Worker, and I’m now consistently using a bit less than 50% of my available memory when the server is at normal load. I could probably switch back to 512 MB, but I don’t plan to. Of the email that I’ve received on the new server, only two or three spam messages have made it to my inbox.

What Vesta is missing is an easy way to create email forwarders that aren’t attached to an email account. One of the sites that I’m hosting makes extensive use of these. Fortunately I was able to locate where Exim stores aliases for the domains it manages and I added them all manually. I also tested to ensure that my manual edits would be safe when I make email changes in Vesta and so far they seem to be, but I’m being careful to keep a backup of that file, just in case.

Vesta also doesn’t include a mailing list manager. One of the sites I host relies heavily on mailing lists, currently with Mailman. I tried to get Mailman working with Vesta and thought I had a solution in place, but I ran into some complications when I started moving the lists over. To prevent delays in my migration, I created a subdomain on a cPanel server and used its built-in Mailman installation to manage the lists for now. I still plan to continue working on getting Mailman running and, who knows, I might submit my method to the Vesta team for future implementation if I’m successful.

Conclusion

While I’m not sure that I’d use Vesta as a control panel for hosting paying clients, as it still has some rough edges, I think it will meet my personal needs quite well. The product still has some bugs, but in the month or so since I’ve installed it, I’ve already seen several of them fixed. The development team seems to be focused on making a lightweight control panel that works well on small servers and VPSes, which is nice to see.

Vesta documentation is still somewhat lacking, consisting of mostly just an FAQ page right now. There is a user forum and a bug/feature request tracker, but being a Russian project, many of the posts are in Russian. Still, Vesta seems to be catching on, so it is hopefully only a matter of time before documentation improvements start to be made. Truth be told, I haven’t had much of a need for more documentation, but I’d suspect someone with less Linux administration experience might. The developers do offer paid support plans, but I have not purchased one.

My biggest gripe with Vesta is how it formats lists: lists of users, sites, domains, etc. are extremely verbose with all of the details of the list item presented, making the page difficult to scan or to find the links to administrative actions for a list item. Clicking to do simple, everyday actions, like modifying an email address, often takes several more clicks than seem necessary. I’d much rather see terser lists with clear calls to action, including the option to see more info about a list item when I need to. That said, this is a wart I can live with.

So far, after some tweaking, Vesta seems like a pretty good panel. It will be interesting to watch as it progresses over the next few years; I think it has a lot of potential.
