The Knowledge Graph is a two-year-old feature that summarizes information in boxes placed above some of Google's organic, or non-paid, search results (see the example for "St Lucia Tourism" in the screenshot above).
The latest change is that KG results can pull from regular sites in the index and display the quoted information above the organic results.
Given the pace of Google's Knowledge Graph, how can travel brands keep up with the changes, adjusting their content marketing tactics accordingly?
NB: This is a viewpoint by Matthew Barker of consultancy I&I Travel Media.
NB: The examples shown in this article assume that you're using Google.com (US) and that you are signed out of your personalized search results. Your results may vary. Knowledge Graph is being rolled out slowly to other users worldwide. Pete Meyers of the SEO blog Moz says the latest tweaks to KG are live but are still low frequency, and mostly US-only, in how they appear.
Even if most users haven’t heard the phrase, they’ve probably seen Knowledge Graph (KG) in action in all the information and answer boxes that have been popping up among the search results. For example, type in "CEO of Expedia" and you might see this result:
This new format of search result is an early signal of a deeper change with profound implications for publishers and content/SEO strategy alike.
Underlying this move is Google’s desire to deliver richer results based on entities and inferred context as opposed to query strings, or simply matching up relevant keywords.
In other words, Google’s technology can now comprehend real-world objects rather than just the keywords that describe them, and it can understand the intent behind search queries to help answer questions faster and more effectively. Earlier this month, KG was described by Google software engineer Amit Singhal as a “Swiss Army Knife” that's superior to publishers’ "corkscrews."
By connecting data to real world entities, the search engine is able to create a much richer set of results for its users, or in Google’s words:

The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query.
This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do.
It’s this technology that, for example, allows Google to take a search query like “Smithsonian opening hours” and instantly display this box right at the top of the page:
Why send users to another webpage (and lose potential ad clicks) when you could just answer their question instantaneously?
Much has been said about the threat that this poses to publishers, particularly the sites whose data is being scraped by the KG without their permission (e.g. Wikipedia), but also a huge range of informational sites that may find their search traffic drying up as Google starts to answer their users’ questions for them.
On the other hand, the KG could also present significant opportunities for publishers who are able to adapt their content strategies and make it easier for Google to understand connections between their content and the entities it describes.
A signal of things to come has been the quiet inclusion of other sources of content in KG results. Google is no longer only pulling (scraping) from big reference sites like Wikipedia; it is starting to display content from websites in the rest of its enormous index too, in a way that could revolutionise organic search.
In an article yesterday for Moz.com, Pete Meyers gives a few examples of how this might work.
Let me add another one: Say you type in "cheapest city in the world." You may see this answer box:
Note that the first KG result is pulling from a page on CNN.com, which previously sat at #5 in the organic results.
These types of KG results seem to be in early experimentation, and so far we’ve been unable to reproduce them with any travel-specific queries. But the nature of the industry is such that we have a huge number of potential entities which could begin to appear in KG results: destinations, events, local businesses, personalities and more are all on the KG radar.
Additionally, although KG results are still fairly new and likely (guaranteed) to change in coming months, it’s safe to assume that the KG will continue to focus on answering the short, quick and simple questions while Google will maintain its emphasis on quality, detail and authority in the “regular” search results.
The introduction of In Depth articles reflects this trend, as does the frequency with which KG results offer “related topics” or “also searched for” queries.
Staying ahead of the curve
Given the experimental nature of these results, it’s probably not wise to start trying to optimise for this new environment just yet. However, there’s no doubt that these are all early, tentative steps towards a changing model for how search (and therefore SEO and content strategy) will work in the near future.
Many publishers have a justifiable grievance over the way Google is pulling this data from their sites (see this hilarious tweet for example) but the only alternative is to block Googlebot from your site entirely and wave goodbye to a massive chunk of organic search traffic. A more pragmatic course would be to think about how your existing and future content can be adapted to keep up with this changing search landscape.
AJ Kohn has written a detailed introduction to the nascent concept of “Knowledge Graph Optimisation” which delves deep into the theory as well as providing some practical steps publishers can implement now to prepare:
Use nouns in your content: similar in concept to the use of keywords in traditional SEO, make it clear what entities your content describes by clearly and unambiguously using nouns and actual names.
Build contextual links: forget the old concept of hoarding “link juice”; linking to other relevant and contextual sites allows search engines to build connections between related entities and understand your content better.
Mark up your data: old meta tags have evolved into “structured data”, which allows you to provide the search engines with much more context on the specific entities that your content represents. Schema.org is a good starting point; see this previous article on applications for travel sites, and the example sketch after this list.
Get listed: the KG’s big data sources are currently Google+, Wikipedia and Freebase, all of which allow you to create and maintain an informational page on your brand (some more easily than others). Include as many references and links as possible, particularly reviews on G+, links to other authoritative and relevant sites on your Wikipedia page, and all your social media URLs on Freebase.
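To make those points a little more concrete, here is a minimal sketch of what they could look like on a hypothetical destination page: noun-rich copy, a contextual outbound link, and schema.org markup in the JSON-LD format. The attraction, URLs and property values are invented purely for illustration; only the vocabulary (TouristAttraction, PostalAddress, sameAs and so on) comes from schema.org.

```html
<!-- Hypothetical destination page snippet. All names, URLs and details
     below are placeholders used for illustration only. -->

<!-- Noun-rich copy with a contextual outbound link to a relevant authority page -->
<p>
  The Pitons are twin volcanic peaks near Soufriere on the southwest coast of
  <a href="http://en.wikipedia.org/wiki/Saint_Lucia">St Lucia</a>.
</p>

<!-- schema.org structured data describing the same entity, embedded as JSON-LD -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "TouristAttraction",
  "name": "The Pitons",
  "description": "Twin volcanic peaks on the southwest coast of St Lucia.",
  "url": "http://www.example.com/st-lucia/pitons",
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Soufriere",
    "addressCountry": "LC"
  },
  "sameAs": "http://en.wikipedia.org/wiki/Pitons"
}
</script>
```

Marked up this way, the page states explicitly which real-world entity it describes rather than leaving Google to infer it from keywords alone, and the sameAs property ties that entity back to an existing reference source.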
To that I would add one final, and rather inconvenient, observation: so far KG results and In Depth articles have pulled exclusively from the highest-authority sites. This obviously makes sense from Google’s perspective, but it does put a dampener on any notion that the above points are all you need.
The above points are about providing context and subject matter to the search engine. There’s no doubt you’ll still need to graft your way to earning sufficient authority for Google to take notice of your content.
NB: This viewpoint is by Matthew Barker, founder of I&I Travel Media, a US-based digital media marketing agency.
EARLIER: Where is Google heading next? [VIDEO interview with CEO Larry Page]
ELSEWHERE: The Knowledge Graph: Should Your Content & Business Strategy Change? [SearchEngineWatch]
A very thorough analysis of the theory of the KG and tactical suggestions for optimization [SEO Skeptic]