NB: This is a guest article by Alexei Tumarkin, chief technology officer of CDNetworks.
Travel and tourism businesses face enormous pressure and competition to convert Internet shoppers into actual bookings.
Where brochure-like websites once satisfied consumers’ information needs, today’s consumers demand deeper insight into potential destinations through high levels of website interactivity, rich destination previews, and dynamic applications and content.
This compels travel companies to make their websites as engaging and dynamic as possible. And with consumers showing less and less patience for slow-performing sites, organizations are pressured to deliver all of this functionality without delay.
Fortunately, there are some simple and inexpensive ways for website technologists to improve the performance of their travel and tourism websites, including their dynamic content and applications. This article presents practical hints and tips in four areas:
1. Get your Web infrastructure right
2. Optimize your page design
3. Streamline the content of your pages
4. Measure your Web performance
1. Get your Web infrastructure right
A. Leverage a global content delivery network (CDN)
If you are like many travel & tourism companies, your site is hosted in one, perhaps two, locations on the same continent, but your end-users are all around the globe.
While Internet speeds are fast, distance does matter. To get closer to your users, you have two options: establish hosting within multiple datacenters around the world and sync your data and applications between all of them, or leverage a CDN.
Which you choose depends on the geographic clustering of your target market and your appetite for risk.
While it is rare, some organizations may have a target market that resides in just one or two key countries. If your company fits that profile and has the IT staff to support multiple data centers, then mirroring your website across several data centers is likely your best option.
Most travel & tourism websites, however, serve customers across multiple geographies. This makes data center build-out very expensive and draining on your limited IT resources.
For such organizations, the fastest and most economical means of serving a dispersed user base is to leverage the pay-as-you-go services of a CDN.
A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content and applications more efficiently to users.
Getting onto a CDN is a non-intrusive change that will significantly improve your Web performance. Usually, the only modification that is required is a simple DNS delegation.
When choosing a CDN, keep in mind that each CDN vendor has its own strengths and weaknesses, and matching your needs to their areas of focus will help you to make the right choice for your website. Check to make sure that a CDN provider matches well with your needs in the following ways:
- Does the CDN vendor have a global reach, particularly in the regions you are targeting? This may seem obvious, but many customers become enamored by a specific vendor’s technology or use case and end up needing to enlist a second CDN provider to cover global network deficiencies.
- Make sure the CDN can handle the acceleration of dynamic Web transactions. Most CDNs can cache static objects like GIFs, JPEGs and media files, but requests for the dynamic base page (e.g. PHP, ASP) must still travel across the Internet to the origin server. You need to be able to accelerate all of it.
B. Use a Cloud based DNS service
DNS infrastructure is an often overlooked performance element, yet it represents the virtual doorway to your website and applications. The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers.
When you type your website URL into your browser, a query goes out through the distributed domain name system to resolve the name to an actual IP address. The browser can't download anything from this hostname until the DNS lookup is completed.
Yet DNS lookups require traversing several levels of the DNS serving hierarchy, sometimes around the globe, to find an authoritative server.
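The cost of a lookup can be observed directly. Here is a rough sketch using only Python's standard library; it resolves a local name for illustration, since real-world timings vary by resolver and network:

```python
import socket
import time

def timed_lookup(hostname):
    """Resolve a hostname and report how long the name lookup took."""
    start = time.perf_counter()
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # Collect the distinct IP addresses returned by the resolver.
    addresses = sorted({info[4][0] for info in results})
    return addresses, elapsed_ms

addresses, elapsed_ms = timed_lookup("localhost")
print(f"localhost -> {addresses} in {elapsed_ms:.2f} ms")
```

Lookups for names your resolver has never seen, served by a distant authoritative server, can take orders of magnitude longer than a cached local one.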
The best way to limit DNS-lookup latency from your application is to minimize the number of different DNS lookups that the client needs to make, especially lookups that delay the initial loading of a page (discussed in the following section).
There are many DNS services that offer authoritative DNS services in the cloud (Google search: “cloud dns”). These services distribute your authoritative DNS entries to strategic points of presence around the globe. This provides for improved performance, scalability and reliability.
An additional benefit comes from the security these services provide. Many DDoS attacks target a website’s DNS infrastructure, but these cloud services make the DNS infrastructure less vulnerable to a pointed attack.
2. Optimize your page design
A. Maximize caching (browser and proxy)
Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. To take advantage of the full benefits of caching consistently across all browsers, configure your web server to explicitly set caching headers and apply them to all cacheable static resources, not just a small subset (such as images).
Set caching headers aggressively for all static resources. Specify one freshness header (Expires or Cache-Control: max-age) and one validator (Last-Modified or ETag) for every cacheable resource.
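As a sketch of what those headers might look like, here they are generated with Python's standard library. The year-long lifetime and the MD5-based ETag are illustrative choices, not requirements; pick values that suit your release cycle:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime
import hashlib

def caching_headers(body: bytes, max_age_days: int = 365) -> dict:
    """Build aggressive caching headers for a static resource."""
    expires = datetime.now(timezone.utc) + timedelta(days=max_age_days)
    return {
        # Freshness: either header lets the browser skip the request entirely.
        "Cache-Control": f"public, max-age={max_age_days * 86400}",
        "Expires": format_datetime(expires, usegmt=True),
        # Validation: lets the browser revalidate cheaply with a 304 response.
        "ETag": '"' + hashlib.md5(body).hexdigest() + '"',
    }

headers = caching_headers(b"body { color: #333; }")
for name, value in headers.items():
    print(f"{name}: {value}")
```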
B. Use versioning
If your objects are subject to change, do not reuse the same names for new versions. Preserving old names creates many opportunities for stale content to reach end-users:
browsers cache old files, ISPs run local caches, and so on. The only way to ensure that new content is served is to use different names for different content. Even if you leverage a CDN that can actively purge old content from its caches, proper versioning is the only way to ensure that your users always receive the right content.
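One common approach is to embed a short hash of the file's content in its name, so the name changes exactly when the content does. A minimal sketch in Python (the eight-character digest length is an arbitrary choice):

```python
import hashlib
from pathlib import PurePosixPath

def versioned_name(filename: str, content: bytes) -> str:
    """Embed a content hash in the filename, e.g. app.css -> app.3f2a9c1b.css."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    path = PurePosixPath(filename)
    return f"{path.stem}.{digest}{path.suffix}"

# The same name with changed content yields a new, cache-busting filename.
print(versioned_name("app.css", b"body { color: red; }"))
print(versioned_name("app.css", b"body { color: blue; }"))
```

Because the hash is derived from the content, unchanged files keep their names and stay cached, while any edit automatically produces a fresh name.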
C. Minimize HTTP requests
One of the simplest and most effective ways to deliver faster web pages is to reduce the number of HTTP Requests. You can do so by employing the following methods:
- Combine JavaScript: Combining external scripts into as few files as possible cuts down on round trip times (RTTs) and delays in downloading other resources.
- Combine CSS: Combining external stylesheets into as few files as possible also reduces RTTs and delays in downloading other resources.
- Combine Images using CSS Sprites: Combining images into as few files as possible using CSS sprites reduces the number of round-trips and delays in downloading other resources, reduces request overhead, and can reduce the total number of bytes downloaded by a web page.
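The combining step for scripts and stylesheets can be as simple as concatenation in your build pipeline. A minimal sketch (the file names are placeholders; real bundlers also minify and rewrite references):

```python
import tempfile
from pathlib import Path

def combine_assets(sources, destination):
    """Concatenate several CSS or JS files into one, returning the combined size."""
    parts = []
    for source in sources:
        # A comment marker keeps the combined file debuggable; /* */ is valid in both CSS and JS.
        parts.append(f"/* --- {source.name} --- */\n{source.read_text()}\n")
    combined = "".join(parts)
    destination.write_text(combined)
    return len(combined)

# Demonstration with throwaway files.
workdir = Path(tempfile.mkdtemp())
(workdir / "base.css").write_text("body { margin: 0; }")
(workdir / "promo.css").write_text(".promo { color: #c00; }")
size = combine_assets([workdir / "base.css", workdir / "promo.css"], workdir / "site.css")
print(f"Combined into one file of {size} bytes (one HTTP request instead of two)")
```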
D. Minimize redirects
Redirects (301s or 302s) trigger an additional HTTP request-response cycle and add round-trip-time latency. It's important to minimize the number of redirects issued by your application.
This is especially problematic for content needed for starting up a page. Therefore, restrict your use of redirects to only those cases where it's absolutely technically necessary.
E. Minimize cookie size
HTTP cookies are used for a variety of reasons, such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It's important to keep the size of cookies as low as possible to minimize the impact on the user's response time.
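To see why size matters, consider that the Cookie header travels with every request to the domain. A quick back-of-the-envelope calculation (the 40-requests-per-page figure is purely illustrative):

```python
def cookie_overhead_bytes(cookie: str, requests_per_page: int) -> int:
    """Upstream bytes added by a cookie across all requests for one page view.

    The browser sends the Cookie header with every request to the domain,
    so the overhead is multiplied by the number of objects on the page.
    """
    header = f"Cookie: {cookie}\r\n"
    return len(header.encode()) * requests_per_page

session = "sessionid=a1b2c3d4e5f6; lang=en; currency=USD"
print(cookie_overhead_bytes(session, requests_per_page=40))
```

A few hundred bytes of cookie per request adds up quickly on object-heavy pages, particularly on slow upstream links.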
F. Use cookie-less domains for static objects
Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.
This technique is especially useful for pages referencing large volumes of rarely cached static content, such as frequently changing image thumbnails, or infrequently accessed image archives.
G. Maximize parallel downloads while minimizing DNS lookups
The HTTP 1.1 specification states that browsers should allow at most two concurrent connections per hostname. You can work around this restriction by configuring multiple subdomains as CNAMEs to a single record and then referencing objects through the different subdomains (e.g. a.foo.com/pic.gif, b.foo.com/pic2.gif).
This "tricks" the browser into parallelizing downloads from the same host, which leads to faster page load times. However, the technique also increases DNS lookup and TCP connection latency, so applying it requires special care and some trial-and-error.
This trick is recommended for any page that serves more than 10 objects from a single host. The optimal number of hosts is believed to be between 2 and 5, depending on different factors.
Please note that the latest browsers, such as IE8 or Firefox 3.5, no longer follow the HTTP 1.1 guideline and open six simultaneous connections to the same domain.
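One way to split objects across shards is a deterministic hash of the object path, so a given object always loads from the same hostname and the browser cache is not defeated by the same image appearing under two names. A sketch, with hypothetical shard hostnames:

```python
import hashlib

# Hypothetical shard hostnames, each a CNAME to the same origin record.
SHARDS = ["static1.example.com", "static2.example.com"]

def shard_for(path: str) -> str:
    """Deterministically map an object path to one shard hostname."""
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

print(f"https://{shard_for('/img/beach.jpg')}/img/beach.jpg")
```

Because the mapping depends only on the path, every page that references `/img/beach.jpg` fetches it from the same shard, keeping cached copies reusable.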
H. Put stylesheets at the top, JavaScript at the bottom
Moving stylesheets to the document HEAD makes pages appear to be loading faster, because it allows the page to render progressively, displaying content as it becomes available.
When the browser loads the page progressively, the header, navigation bar, and top logo all serve as visual feedback for the user. For an even better user experience, move as many scripts as possible to the bottom of the page. Browsers won’t start any parallel downloads while a script is downloading.
3. Streamline the content of your pages
A. Compress Content
Most web servers can compress files in gzip format before sending them over the network. This simple technique dramatically improves Web performance by minimizing the size of packets that must traverse the Internet.
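The savings are easy to demonstrate; repetitive markup like HTML compresses especially well. A small illustration using Python's gzip module:

```python
import gzip

# Repetitive markup stands in for a typical listing or results page.
html = ("<div class='promo'><a href='/deals'>Summer deals</a></div>\n" * 200).encode()
compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
```

Real pages are less repetitive than this example, but reductions of 60-80% on text resources are typical for gzip.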
B. Minimize payload whenever possible
This is also a simple and frequently overlooked technique, which can be achieved in a few easy steps:
- Remove unused CSS to avoid transferring unnecessary bytes
- Compact and minify CSS
- Compact and minify your JavaScript and HTML
- Optimize images and serve correctly sized images
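Minification mostly means stripping comments and collapsing whitespace. The sketch below is deliberately naive and only for illustration; production minifiers handle many edge cases (strings, data URIs, hacks) that this one does not:

```python
import re

def minify_css(css: str) -> str:
    """A deliberately simple CSS minifier: strips comments, collapses whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # drop comments
    css = re.sub(r"\s+", " ", css)                        # collapse runs of whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)          # trim around punctuation
    return css.strip()

source = """
/* promo banner */
.promo {
    color: #c00;
    font-weight: bold;
}
"""
print(minify_css(source))
```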
You should perform any and all optimization on images, including the following:
- Crop unnecessary space
- Reduce color depth to the lowest acceptable level
- Remove image comments and other metadata
- Use advanced compression programs
Furthermore, properly sizing and scaling images before they are served will save unnecessary bytes from going over the Internet.
You should also specify image dimensions to eliminate unnecessary repaints.
4. Measure your Web performance
Measure before and after making your changes. After employing all the different tips and tricks, you must monitor, measure and then improve your web performance.
You should also benchmark your site’s performance against that of your competitors and your industry.
There are many website performance measurement tools and services available from tier-1 providers like Gomez, Keynote or Webmetrics.
For those organizations on a tight budget, there are also some less expensive alternatives from smaller vendors, such as Catchpoint Systems, Inc. and Pingdom AB.
Organizations that incorporate the above best practices into their website design and delivery infrastructure commonly realize performance gains of 20-50%.
For websites targeting the most challenging regions of the world, such as China and Russia, improvements of 50-200% are not unrealistic, particularly when serving dynamic content.