June 19, 2012
(Original Japanese article translated on December 19, 2012)
In the first part of this article, we explained the roles expected of cache servers, and gave an overview of cache servers and cache engines. Here we go into further detail, and discuss tests carried out by IIJ comparing the Varnish Cache, Apache Traffic Server, and nginx cache engines.
First, here is a comparative chart listing the functions of Varnish Cache, Apache Traffic Server, and nginx.
| | Varnish Cache | Apache Traffic Server | nginx |
|---|---|---|---|
| Thread-based | Yes | Yes | No |
| Multi-process | Yes | No | Yes |
| Event-driven | Yes | Yes | Yes |
| Cache purging function | Yes | Yes | No |
| Internet Cache Protocol (ICP) | No | Yes | No |
| Edge Side Includes (ESI) | Yes | Yes | No |
| Request consolidation | Yes | Yes | Yes |
| Multiple origin servers specifiable | Yes | No | Yes |
Varnish Cache has the following characteristics, which stem from VCL, the configuration language mentioned in the first part of this article.
First, Varnish Cache translates the configuration file written in VCL into C code. That C code is then compiled into a shared library. Finally, Varnish Cache links against the generated shared library, and the settings take effect. This implementation makes it possible to write highly flexible configuration files for Varnish Cache, much like writing a program. Because the compiled result is linked in directly, it also has the advantage of improving overall speed.
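As an illustration of this program-like flexibility, here is a minimal VCL sketch (Varnish 3.x-era syntax; the backend host name and URL pattern are placeholders, not from the article) that strips cookies from static-asset requests so they become cacheable:

```vcl
# Hypothetical origin server definition
backend default {
    .host = "origin.example.com";
    .port = "80";
}

sub vcl_recv {
    # Requests carrying cookies normally bypass the cache, so
    # drop cookies for static assets to make them cacheable
    if (req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset req.http.Cookie;
    }
}
```

Logic like this is translated to C and compiled, rather than interpreted at request time.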
Furthermore, a variety of tools are available for Varnish Cache, enabling use of features such as cache hit ratio monitoring and a CLI console by simply compiling them from source code.
However, unlike the other cache server products discussed here, Varnish Cache does not persist its cache. In other words, when a Varnish Cache process is restarted, the data cached up to that point is lost. A persistent option for cache storage is available, but it is not very convenient to use.
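To make the trade-off concrete, here is a sketch of the relevant varnishd storage flags as they existed in the Varnish 3 era (paths and sizes are illustrative):

```
# Default behavior: in-memory or file-backed storage; either way the
# cache contents are discarded when the process restarts
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,1G

# The experimental "persistent" storage backend keeps the cache across
# restarts, at the cost of the inconveniences noted above
varnishd -a :80 -f /etc/varnish/default.vcl -s persistent,/var/lib/varnish/cache.bin,10G
```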
ATS has the following features in addition to those introduced in the first part of this article.
Split DNS enables you to configure separate DNS servers dedicated to ATS when specifying origin servers, rather than using the same DNS servers as the system. This increases name lookup overhead, but helps reduce operation load since it eliminates the need for ATS-side handling when changing origin servers. A function called Host DB has also been implemented to alleviate name lookup overhead.
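A minimal sketch of how Split DNS is configured in ATS, assuming a dedicated resolver at an example address (the domain and IP are placeholders):

```
# records.config: enable the Split DNS feature
CONFIG proxy.config.dns.splitDNS.enabled INT 1

# splitdns.config: send lookups for the origin's domain to a
# resolver dedicated to ATS instead of the system resolvers
dest_domain=origin.example.com named=192.0.2.53
```

With this in place, origin server changes can be handled entirely on the DNS side.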
Additionally, unformatted disk drives can be specified as the storage area for cached content by using them as RAW disks. Bypassing the filesystem in this way eliminates various overheads, sustaining processing speed and making the device's full capacity available as cache space. The effect is most apparent when high-speed devices such as SSDs are used as RAW disks.
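Cache storage is declared in the storage.config file; the entries below are a sketch with placeholder paths and device names:

```
# storage.config
# A cache file inside an existing filesystem (a size must be given)
/var/trafficserver 128G

# An unformatted device used whole as a RAW disk (no size needed);
# using an SSD here gives the largest benefit
/dev/sdb
```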
ATS provides a basic Web UI for monitoring settings and operational status, and this can be used to search for and purge cached objects. One problem with ATS is that the management of settings can become complicated, because there are a number of different types of configuration file. The "traffic_line" CLI command resolves this by enabling you to change settings and update configuration files without stopping ATS. However, some settings do not support this command, making it necessary to rewrite the configuration file directly.
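As a sketch of this workflow, the traffic_line command of that era could set a configuration variable and apply the change without a restart (the variable shown is illustrative):

```
# Change a single setting on the running instance
traffic_line -s proxy.config.http.cache.http -v 1

# Re-read the configuration files without stopping ATS
traffic_line -x
```

Settings that do not support this mechanism still require editing the configuration files directly, as noted above.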
ATS is also the only product among those introduced here that supports the Internet Cache Protocol (ICP). ICP is a system for maintaining caches efficiently by sharing cache status between nodes belonging to the same peer group. A clustering function that provides further cache efficiency is also implemented.
This concludes our overview of the results of IIJ's comparative tests regarding Varnish Cache and Apache Traffic Server. We cover nginx in the last part of this article.
Author Profile
Michikazu Watanabe
Content Delivery Engineering Section, Core Product Development Department, Product Division, IIJ
Mr. Watanabe joined IIJ in 2011. He is involved in operations and development for the IIJ Contents Delivery Service, and lives by the motto, "do a lot with a little."