From November 2009 to August 2010 I developed and launched an email delivery system.
The goal of the project was to build a system that sends large volumes of email from many servers simultaneously. It is designed to be used by multiple users through a web interface while achieving a high delivery rate.
For example, it can be used by web site owners who want to send newsletters to a large list of subscribers without investing in their own infrastructure.
For the user, sending emails consists of registering servers in the system, adding domains, creating campaigns and monitoring their progress. The main features of the system:
- Distributing a campaign across multiple working servers, each with multiple IPs.
- Automatic server setup and deployment.
- Independence from server location; minimal hardware and connection requirements.
- Near-linear scalability of the system.
- Precise tuning of the delivery process, such as limiting the delivery rate per recipient domain, which helps avoid being blocked by Yahoo/Gmail/etc. for flooding.
- Support for large recipient lists (millions of records); multiple white and black lists can be used in a campaign.
- An HTTP API for adding recipients to lists, for integration with other applications.
- RAR- and ZIP-compressed lists are accepted for convenience.
- Multiple domains per campaign.
- Link masking with multiple user domains.
- Automatic domain management: sub-domain and MX record creation per IP.
- Click tracking.
- Bounce and unsubscribe tracking: special black lists are built and taken into account in future campaigns.
- A web interface for users with aggregate stats and a delivery-speed graph per campaign, plus an admin area for user and system management.
- Tags in templates, like "Hello [UserName]".
- Plain-text, HTML or multipart (HTML + plain text) letters.
- Encoding selection: 7bit, quoted-printable or base64.
- Selection of IPs and domains for a campaign.
- Detection of offline servers.
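The per-domain rate limiting mentioned above can be sketched as a simple scheduler that hands each recipient's domain its next allowed send slot. This is a minimal illustration in Python (the actual system is PHP); the class and method names are hypothetical, not from the project.

```python
import time
from collections import defaultdict

class DomainRateLimiter:
    """Hypothetical sketch: at most `per_domain_rate` letters per second
    to any one recipient domain (gmail.com, yahoo.com, ...)."""

    def __init__(self, per_domain_rate):
        self.interval = 1.0 / per_domain_rate
        self.next_slot = defaultdict(float)  # domain -> earliest next send time

    def delay_for(self, recipient, now=None):
        """Seconds to wait before a letter to this recipient may be sent."""
        if now is None:
            now = time.monotonic()
        domain = recipient.rsplit("@", 1)[-1].lower()
        slot = max(self.next_slot[domain], now)
        self.next_slot[domain] = slot + self.interval
        return slot - now
```

A sender loop would sleep for the returned delay before dispatching each letter, which spreads traffic per domain without limiting the overall throughput across domains.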
There are several interesting aspects in the project implementation.
Internally the system is organized as a set of web services. Each working server runs an MTA (mail transfer agent) to send the letters, and a lightweight web server hosting a web service. The central management server communicates with the workers over HTTP (rather than SMTP) to push letters, gather delivery and subscription status, and perform remote configuration management.
When letters are pushed to the working servers, only the template and its parameters (addresses, names, links, etc.) are transferred; the individual letters are generated on the worker servers themselves. The data sent to the working servers is encrypted.
This approach has many advantages:
- cuts traffic
- provides a high level of security
- makes it possible to use remote machines and VPSes as working servers
- unloads the central server
- relies on standard, widely used, trusted technologies (XML, HTTP, OpenSSL, cURL)
- allows traffic compression
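The worker-side expansion of one template plus a batch of per-recipient parameters into individual letters can be sketched roughly like this. It is an illustrative Python sketch (the real workers are PHP behind Nginx), with a hypothetical JSON payload shape; the project itself mentions XML.

```python
import json

def expand_batch(payload_json):
    """Turn one template + per-recipient parameters into individual letters.

    Sending only the template and parameters, instead of every finished
    letter, is what cuts traffic between the management server and workers.
    """
    job = json.loads(payload_json)
    letters = []
    for params in job["recipients"]:
        body = job["template"]
        for tag, value in params.items():
            body = body.replace("[%s]" % tag, value)  # [UserName]-style tags
        letters.append((params["Email"], body))
    return letters
```

Each `(address, body)` pair would then be queued locally and handed to the worker's MTA.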
The software stack is Nginx (as the web server) and PHP. There is no database server: the bundled SQLite is used to store the queue of letters to send and the logs of the results. It works well, since there are no concurrent requests. Best of all, neither the database nor the web server requires any human attention.
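A per-worker SQLite letter queue like the one described might look as follows. This is a minimal Python sketch with an assumed schema, not the project's actual tables.

```python
import sqlite3

# The real system keeps a file-backed DB on each worker; in-memory here for demo.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE queue (
    id        INTEGER PRIMARY KEY,
    recipient TEXT NOT NULL,
    body      TEXT NOT NULL,
    status    TEXT NOT NULL DEFAULT 'pending')""")
conn.executemany("INSERT INTO queue (recipient, body) VALUES (?, ?)",
                 [("a@example.com", "..."), ("b@example.com", "...")])

# Single writer, no concurrent requests: a plain select/update loop drains it.
row = conn.execute("SELECT id, recipient FROM queue "
                   "WHERE status = 'pending' ORDER BY id LIMIT 1").fetchone()
conn.execute("UPDATE queue SET status = 'sent' WHERE id = ?", (row[0],))
```

With exactly one daemon touching the file, SQLite's lack of a server process is a feature, not a limitation.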
The requirements for a working server are minimal:
- a clean CentOS 5 installation
- correctly configured networking
- a free port 25: no Plesk/Qmail/Postfix running
- 256 MB of RAM
- correct date/time/zone; NTP synchronization is recommended
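Checks like these can be automated before setup begins. Below is a small Python sketch of two such pre-flight checks (free port 25, enough RAM); the function names are hypothetical, and the real setup routine runs over SSH against the remote box.

```python
import socket

def port_free(port, host="127.0.0.1"):
    """True if nothing is listening on the port (e.g. port 25 for the MTA)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) != 0  # connect succeeding means occupied

def mem_total_mb(meminfo_text):
    """Parse MemTotal (reported in kB) out of /proc/meminfo-style text, in MB."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) // 1024
    return 0
```

On a candidate server one would read `/proc/meminfo`, call `port_free(25)`, and refuse to proceed (with a logged reason) if any check fails.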
The process of automating server setup and remote management turned out to be much harder than I initially expected. There are several dozen commands to execute and files to upload to prepare a system, and finding out the exact list, the exact order and the exact file permissions to set was not a trivial task.
Moreover, in the real world many unexpected problems appear: /etc/resolv.conf is not configured, the clock or time zone is wrong, port 25 is occupied by other software, autoconf is not installed, and so on.
All you can do is know where the server came from and check it if you are not sure about its configuration. After all, a detailed log is written during the setup process.
When adding a server, the user submits the root password, which is not saved anywhere; instead, a public key is uploaded, and key-based authorization is used to access the worker servers from then on. The server setup routine is implemented as a daemon running on the management server and invoked by a web script. Most configuration updates, such as rate-limit changes, domain management or clearing the queues of a campaign cancelled by the user, are done through a web service with encrypted messages.
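The shape of such a protected configuration message can be sketched as follows. The real system encrypts traffic with OpenSSL; here an HMAC signature stands in (Python standard library only) just to show a signed request/verify round trip, and the shared key and message format are assumptions, not the project's.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"per-server-secret"  # assumption: provisioned during server setup

def sign(payload):
    """Serialize a config-update message and attach an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify(message):
    """Reject tampered messages, then return the decoded payload."""
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["sig"]):
        raise ValueError("bad signature")
    return json.loads(message["body"])
```

The management server would POST the signed body to a worker's web service, which verifies it before applying, say, a new rate limit.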
The system is written mostly in PHP. Version 5.3 has good memory management, and the daemons are quite stable: I received no stability complaints during several months of operation and tens of millions of letters sent. At some point the DB became filled with data from old campaigns and required optimization.
The complexity of the project forced me to refactor the core modules several times, since, as usual, new requirements appeared during the work. Everything is written with an OOP design and MVC, even though much of it is not a web application at all.