Monthly Archives: June 2007

SEO Tip – Meta Tags – Part 5 – Other Meta Tags

  • Character Set
    • This tells the browser which character set (encoding) to use when displaying the text on your web page
    • <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
  • Language
    • This indicates to search engines the language of the page’s content.
    • This helps search engines display language-specific versions of your page to the right users
  • Author
    • The Author Tag should contain the name of the company that owns the site. This tag can help the page rank well for searches on your company’s name.
    • <META name="author" content="Bristol-Myers Squibb">
  • Expires
    • This tag should really only be used if you have a dynamically driven site, or if your content changes so frequently that you do not want it to be cached
    • <META name="Expires" content="Mon, 22 Jan 1973 12:58:00 GMT">
  • Link
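The miscellaneous tags above can also be generated programmatically. Here is a minimal Python sketch; the helper name and its defaults are my own, and the past-dated Expires value follows the post's convention of forcing caches to treat the page as stale:

```python
from email.utils import formatdate

def other_meta_tags(author, charset="iso-8859-1", lang="en"):
    """Build the miscellaneous meta tags discussed above as HTML strings."""
    return [
        f'<meta http-equiv="Content-Type" content="text/html; charset={charset}">',
        f'<meta http-equiv="Content-Language" content="{lang}">',
        f'<meta name="author" content="{author}">',
        # A date in the past tells caches the page is always stale.
        f'<meta name="Expires" content="{formatdate(0, usegmt=True)}">',
    ]

for tag in other_meta_tags("Bristol-Myers Squibb"):
    print(tag)
```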

SEO Tip – Meta Tags – Part 4 – Robots

  • Allows or disallows indexing into search engines by robots or crawlers on a page-by-page basis
  • This is very different from the robots.txt file. The Robots Meta tags will not be seen if the robots.txt file blocks indexing, since the crawler will never get that far.
  • You can find out more about this meta tag at:
  • Here is a list of all the attributes for the robots tag:
    • NOINDEX – prevents the page from being included in the index.
    • NOFOLLOW – prevents crawlers from following any links on the page. (Note that this is different from the link-level NOFOLLOW attribute, which prevents crawlers from following an individual link.)
    • NOARCHIVE – prevents a cached copy of this page from being available in the search results.
    • NOSNIPPET – prevents a description from appearing below the page in the search results, as well as prevents caching of the page.
    • NOODP – blocks the Open Directory Project description of the page from being used in the description that appears below the page in the search results.
    • NOYDIR – tells Yahoo not to use Yahoo Directory information to build the title and/or description for your web page listings
  • Here is a standard sample of a tag that allows all robots to index the page (Note: this is the default behavior if the tag is not included on the page)
    • <META name="ROBOTS" content="INDEX,FOLLOW">
  • Here is a sample of robots tags that are targeted at each of the four major search crawlers:
    • <meta name="TEOMA" content="NOINDEX">
    • <meta name="GOOGLEBOT" content="NOARCHIVE">
    • <meta name="MSNBOT" content="NOODP">
    • <meta name="SLURP" content="NOFOLLOW">
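To see how a crawler might read these tags, here is a small Python sketch using the standard library's HTMLParser. The class name and the list of recognized crawler names are my own, matching the four crawlers above:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect robots directives from meta tags, including crawler-specific ones."""
    def __init__(self):
        super().__init__()
        self.directives = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        if name in ("robots", "teoma", "googlebot", "msnbot", "slurp"):
            # Split "INDEX,FOLLOW" style values into normalized directives.
            self.directives[name] = [d.strip().upper()
                                     for d in (a.get("content") or "").split(",")]

html = '''<head>
<meta name="ROBOTS" content="INDEX,FOLLOW">
<meta name="GOOGLEBOT" content="NOARCHIVE">
</head>'''
p = RobotsMetaParser()
p.feed(html)
print(p.directives)  # {'robots': ['INDEX', 'FOLLOW'], 'googlebot': ['NOARCHIVE']}
```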

SEO Tip – Meta Tags – Part 3 – Keywords

  • Words or categories that identify what the page is about
  • As time has progressed, the misuse of keywords has encouraged search engines to rely less and less upon them. There is still debate about how much of an impact keywords have on SEO.
  • Some SEO experts have recommended putting your important keywords first. This can’t hurt.
  • Google and many other search engines store location information for every hit, and so make extensive use of keyword proximity in search.
  • Include common plural forms of your keywords
  • Include common misspellings of your keywords
  • A good rule of thumb for the keyword tag is 1000 characters or less
  • Repeating your keywords too many times can do more damage than good. Using a keyword once or twice is fine, especially with different spellings or common misspellings.
  • <meta name="keywords" content="HTML meta tags metatags tag search engines internet directory web searching index catalog catalogue serch seach search engine optimization techniques optimisation ranking positioning promotion marketing">

SEO Tip – Meta Tags Part 2 – Description

  • The Description Meta Tag should be a brief summary of the contents of the page
  • Keep it concise; if the description gets too long, search engines may truncate it.
  • A good rule of thumb for the description tag is 200 characters or less
  • Here is a good example:
    • <META name="description" content="Search Engine Optimization Best Practices">

SEO Tip – Meta Tags – Part 1 – Overview

  • Meta tags are page elements that help a search engine to categorize your page properly. They are inserted into the HEAD tag of the page, but a user cannot directly see them (other than by viewing the HTML source of the page).
  • Meta tags should be applied to each page, should be unique to the page, and should match the page’s contents
  • Any keyword phrases you use that do not appear in your other tags or page copy are unlikely to have enough prominence to help your listings for that phrase
  • Meta tags are not the be-all and end-all of SEO, and they are not a magic bullet. However, they are one tool in an entire toolbox that you can use together to optimize your pages
  • Overuse or misuse of Meta tags can do more damage than good. Keep meta tags simple, relevant, and concise

Web Analytics Life Cycle – Phases

1. (Re)Define

While reading Avinash Kaushik’s blog, I found this article on defining the business purpose of your web site –

Defining the business goals of your web site boils down to answering one simple question – What do you want your visitors to do on the web site? Here are some questions to help you answer that question…

  1. Why does your web site exist?
    • E-commerce
    • Promotional material
    • Contests
    • Etc.
  2. What are your top three web strategies that you are working on?
    • paid campaigns
    • registered users
    • affiliates
    • updating content on the site
    • trying to get digg’ed
    • effective merchandising
    • etc.
  3. What do you think should be happening on your web site?
    • This is where you define your key performance indicators. Your KPIs need to correlate directly to the web strategies that you have defined.
    • I will spend more time talking about KPIs in another post, but here are three basic questions you should be answering with your key performance indicators:
      • How many visitors are coming to your web site?
      • Where are they coming from?
      • What are they actually doing?
    • Your key performance indicators also are an indication of how mature your web analytics process is. I will take the time in another post to discuss the Web Analytics Maturity Model.

When you go through this phase after the first iteration, take this opportunity to re-evaluate and re-define your business goals, your KPIs, and their definitions.
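As a toy illustration of the three basic KPI questions, here is a Python sketch over hypothetical visit records; the record shape and the sample values are invented for the example:

```python
from collections import Counter

# Hypothetical visit records: (visitor_id, referrer, action)
visits = [
    ("v1", "google", "purchase"),
    ("v2", "direct", "browse"),
    ("v1", "google", "browse"),
    ("v3", "digg",   "register"),
]

unique_visitors = len({v[0] for v in visits})   # How many visitors are coming?
sources = Counter(v[1] for v in visits)         # Where are they coming from?
actions = Counter(v[2] for v in visits)         # What are they actually doing?

print(unique_visitors)              # 3
print(sources.most_common(1))       # [('google', 2)]
print(actions.most_common(1))       # [('browse', 2)]
```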

2. Collect

There are lots of tools to help you collect your Web Analytics data. There will be many decisions that you will have to make regarding the collection of your data. The KPIs should be at the heart of your decision on how to collect data. You will also need to keep in mind who your users are.

Tools are split into two major categories – web logs and site tagging. Web logs obviously measure the activity on the server, based on the requests of your site’s pages, images, PDFs, etc. Common tools for web log data analysis are WebTrends and ClickTracks. Site tagging measures actual user activity on the physical web site itself in their browser. Common tools for site tagging are Google Analytics and CoreMetrics. I will take a deeper dive into the differences between the collection methods and a review of the different tools at another time.
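To make the web-log side of that split concrete, here is a Python sketch that parses one line of the NCSA Common Log Format, the kind of server record a log-based tool analyzes. The regex covers only the basic fields, and the sample line is invented:

```python
import re

# Regex for the basic NCSA Common Log Format fields:
# host, identity, user, [timestamp], "request", status, size
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

line = '192.0.2.1 - - [10/Jun/2007:13:55:36 -0400] "GET /index.html HTTP/1.0" 200 2326'
hit = CLF.match(line).groupdict()
print(hit["path"], hit["status"])  # /index.html 200
```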

When you enter this phase of the process beyond the first iteration, take the time to re-evaluate whether your tools are satisfying your needs, and how the tool collects your data.

3. Analyze

Now that you have collected your data, you will need to analyze it. You should define reports that correlate back to your key performance indicators, and to your business goals. The first time you go through this cycle will be your benchmark. Future iterations should be geared toward optimizing and improving your results.

During your analysis phase, you should review:

  • your business goals
  • which KPIs you collect
  • the definitions of the KPIs
  • whether the tools are right for your needs
  • how the tool collects your data
  • whether the results are better or worse than expected
  • how your data is presented
  • who sees your results.

Once all the analysis is complete, you should develop a list of recommended changes to each of these areas. These recommendations should be both technical and business in nature.

4. Adjust

In this phase, you should take each of the areas that were reviewed in the Analyze phase, and the recommendations that were made, and start to make adjustments as necessary. This could be redefining your business goals, adjusting your KPIs, making changes to your tool set, or rebuilding your reports.

Each iteration through the Adjust phase will be different. As you iterate through the lifecycle, the changes that are made in this phase will typically decrease in size and complexity.

Web Analytics Life Cycle

I got the idea one day in the car as I was driving home that Web Analytics is a continuous improvement process. This is not a profound idea, but struck me at the time as being very important. It is not a process that you go through once. The value of Web Analytics is to cycle through the process more than once. This is what makes your web sites better at achieving their goals. Going through the cycle just once and getting the results has almost no value.

I have looked online, and I have not found anyone who has defined a lifecycle for applying Web Analytics. So I have put one together here, very briefly. It is based on lots of other methodologies, such as the software development lifecycle and iterative development methodologies. The standard Deming Continuous Improvement Cycle phases are Plan, Do, Check, Act. I have mirrored these steps in my idea of a Web Analytics Lifecycle:

  1. (Re)Define business goals
  2. Collect data to measure those goals
  3. Analyze the results of the metrics
  4. Adjust your strategy depending on your results

My next few posts will be discussing each of the phases in more detail.

Please leave feedback with your ideas about this fairly new concept. It is still in its infancy, and your constructive ideas are very important.

New Book – Web Analytics: An Hour A Day by Avinash Kaushik

Avinash Kaushik is a leading Web Analytics expert and practitioner. His first book has been highly anticipated and well received. You can go to the book’s web site at , or read reviews and buy the book on Amazon at

He is also the author of a well-known Web Analytics blog called Occam’s Razor at . I plan on both getting the book and subscribing to the blog.

SEO Tip – The Title Tag

  • The title tag is one of the most important SEO tools in the toolbox.
  • Changing the title tag is one of the easiest changes you can make to improve page rankings
  • The title of your page is stored in the HEAD tag of your HTML page
  • It should describe the specific contents of the page, and be as unique as possible
  • This will be the title of the page that is shown by the Search Engines to the users
  • Important things for the title tag to contain are company names or brand names
  • Other important things to include are keywords from the keywords meta tag that are relevant to the page and fit naturally in the title
  • Here is a good example:
    • <title>SEO Article – Make a title tag that search engines will like</title>

SEO Tip – Use robots.txt file

  • Robots.txt files tell Search Engines what should and should not be crawled
  • NOTE – This is very different from the Robots Meta Tag. The crawler will see this file before it tries to call the page, so this file will override the Robots Meta tags on the pages.
  • Robots.txt files should be stored in the root directory
  • Remember, the point of the robots.txt file is to exclude pages from being crawled. If a page or directory is disallowed, the crawler never even sees the code on those pages, so nothing on the pages themselves can change the bot’s behavior. In effect, robots.txt overrides the Robots Meta tags on the page.
  • More information regarding robots.txt can be found at
  • Sample robots.txt file to allow all pages to be crawled:
    • User-agent: *
      Disallow:
  • With one minor adjustment, you can prevent all robots from indexing your site:
    • User-agent: *
      Disallow: /
  • Here is a sample that will not index a specific directory for the Googlebot crawler:
    • User-agent: googlebot
      Disallow: /seo/
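Python's standard library ships a parser for exactly this file format, so the Googlebot sample above can be checked directly. The example.com URLs are placeholders:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# parse() accepts the file's lines directly, so no network fetch is needed here.
rp.parse([
    "User-agent: googlebot",
    "Disallow: /seo/",
])

print(rp.can_fetch("googlebot", "http://example.com/seo/page.html"))  # False
print(rp.can_fetch("googlebot", "http://example.com/index.html"))     # True
```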