SEO Tip – Meta Tags – Part 5 – Other Meta Tags

  • Character Set
    • This tells the browser what character set to use to display the characters, or letters, on your web page
    • <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
  • Language
    • This indicates to the search engines what language the page's content is written in.
    • This helps search engines display language-specific versions of your page to the right users
    • <meta http-equiv="Content-Language" content="en-US">
  • Author
    • The Author tag should contain the name of the company that owns the site. This tag can help your page rank well for searches on your company's name.
    • <meta name="author" content="Bristol-Myers Squibb">
  • Expires
    • This tag should only really be used if you have a dynamically driven site, or if your content changes so frequently that you do not want it to be cached
    • <meta http-equiv="Expires" content="Mon, 22 Jan 1973 12:58:00 GMT">
    • <meta http-equiv="Pragma" content="no-cache">
  • Link
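  • Putting these together, here is a sketch of how these tags might appear in a page's HEAD section (the values below simply reuse the examples above and are illustrative, not required):
    • <head>
      <!-- character set the browser should use to render the page -->
      <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
      <!-- language of the page content -->
      <meta http-equiv="Content-Language" content="en-US">
      <!-- company that owns the site -->
      <meta name="author" content="Bristol-Myers Squibb">
      <!-- discourage caching of frequently changing content -->
      <meta http-equiv="Expires" content="Mon, 22 Jan 1973 12:58:00 GMT">
      <meta http-equiv="Pragma" content="no-cache">
      </head>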

SEO Tip – Meta Tags – Part 4 – Robots

  • Allows or disallows indexing by search engine robots or crawlers on a page-by-page basis
  • This is very different from the robots.txt file. The Robots Meta tags will not be seen if the robots.txt file blocks the page from being crawled, since the crawler will never get that far.
  • You can find out more about this meta tag at http://www.robotstxt.org/meta.html
  • Here is a list of all the attributes for the robots tag:
    • NOINDEX – prevents the page from being included in the index.
    • NOFOLLOW – prevents crawlers from following any links on the page. (Note that this is different from the link-level NOFOLLOW attribute, which prevents crawlers from following an individual link.)
    • NOARCHIVE – prevents a cached copy of this page from being available in the search results.
    • NOSNIPPET – prevents a description from appearing below the page in the search results, as well as prevents caching of the page.
    • NOODP – blocks the Open Directory Project description of the page from being used in the description that appears below the page in the search results.
    • NOYDIR – tells Yahoo not to use Yahoo Directory information to create the title and/or description for your web page listings
  • Here is a standard sample of a tag that allows all robots to index the page and follow its links (note that this is the default behavior if the tag is not included on the page):
    • <meta name="robots" content="INDEX,FOLLOW">
  • Here is a sample of robots tags that are targeted at each of the four major search crawlers:
    • <meta name="TEOMA" content="NOINDEX">
    • <meta name="GOOGLEBOT" content="NOARCHIVE">
    • <meta name="MSNBOT" content="NOODP">
    • <meta name="SLURP" content="NOFOLLOW">
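  • Multiple directives can also be combined in a single tag by separating them with commas. For example, a page that should not be indexed, have its links followed, or be cached might use something like:
    • <meta name="robots" content="NOINDEX,NOFOLLOW,NOARCHIVE">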

SEO Tip – Meta Tags – Part 3 – Keywords

  • Words or categories that identify what the page is about
  • As time has progressed, the misuse of keywords has encouraged search engines to rely less and less upon them. There is still debate about how much of an impact keywords have on SEO.
  • Some SEO experts have recommended putting your important keywords first. This can’t hurt.
  • Google and many other search engines store location information for all hits, and so make extensive use of keyword proximity in search.
  • Include common plural forms of your keywords
  • Include common misspellings of your keywords
  • A good rule of thumb for the keyword tag is 1000 characters or less
  • Repeating your keywords too many times can do more damage than good. Once or twice is fine, with different spellings or with misspellings.
  • <meta name="keywords" content="HTML meta tags metatags tag search engines internet directory web searching index catalog catalogue serch seach search engine optimization techniques optimisation ranking positioning promotion marketing">

SEO Tip – Meta Tags – Part 2 – Description

  • The Description Meta Tag should be a brief summary of the contents of the page
  • Keep it concise; if it gets too long, it may be truncated in the search results.
  • A good rule of thumb for the description tag is 200 characters or less
  • Here is a good example:
    • <meta name="description" content="Search Engine Optimization Best Practices">

SEO Tip – Meta Tags – Part 1 – Overview

  • Meta tags are page elements that help a search engine to categorize your page properly. They are inserted into the HEAD tag of the page, but a user cannot directly see them (other than by viewing the HTML source of the page).
  • Meta tags should be applied to each page, should be unique to the page, and should match the page’s contents
  • Any keyword phrase you use that does not also appear in your other tags or page copy is unlikely to have enough prominence to help your listings for that phrase
  • Meta tags are not the be-all and end-all of SEO, and they are not a magic bullet. However, they are one tool in an entire toolbox, and the tools work best when used together to optimize your pages
  • Overuse or misuse of Meta tags can do more damage than good. Keep meta tags simple, relevant, and concise
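  • As a simple illustration, the HEAD section of a hypothetical page might look something like this (the page, title, and meta values are assumed examples):
    • <head>
      <!-- hypothetical page used only for illustration -->
      <title>Widget Care Guide - Acme Widgets</title>
      <meta name="description" content="How to clean, store, and maintain Acme widgets">
      <meta name="keywords" content="widgets, widget care, widget maintenance">
      </head>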

SEO Tip – The Title Tag

  • The title tag is one of the most important SEO tools in the toolbox.
  • Changing the title tag is one of the easiest changes you can make to improve page rankings
  • The title of your page is stored in the HEAD tag of your HTML page
  • It should describe the specific contents of the page, and be as unique as possible
  • This will be the title of the page that is shown by the Search Engines to the users
  • Important things to include in the title tag are company names or brand names
  • Other important things to include are keywords from the keywords meta tag that are relevant to the page and fit naturally in the title
  • Here is a good example:
    • <title>SEO Article – Make a title tag that search engines will like</title>

SEO Tip – Use robots.txt file

  • Robots.txt files tell Search Engines what should and should not be crawled
  • NOTE – This is very different from the Robots Meta Tag. The crawler reads this file before it requests any pages, so this file will override the Robots Meta tags on the pages.
  • Robots.txt files should be stored in the root directory
  • Remember, the point of the robots.txt file is to exclude pages from being crawled. If a page or directory is excluded, the crawler never even sees the code on those page(s), so no code on those pages can change the bot's behavior and cause it to re-index them. In other words, robots.txt overrides the meta robots tags on the page.
  • More information regarding robots.txt can be found at http://www.robotstxt.org
  • Sample robots.txt file to allow all pages to be crawled:
    • User-agent: *
      Disallow:
  • With one minor adjustment, you can prevent all robots from indexing your site:
    • User-agent: *
      Disallow: /
  • Here is a sample that prevents the Googlebot crawler from crawling a specific directory:
    • User-agent: googlebot
      Disallow: /seo/
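  • These rules can be combined in a single file. Here is a sketch that keeps all crawlers out of two directories (the directory names are assumed examples):
    • # applies to every crawler
      User-agent: *
      # keep these directories out of the crawl
      Disallow: /cgi-bin/
      Disallow: /tmp/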

SEO Tip – Use Sitemap.xml files

  • Sitemap files are a standard supported by the major search engines. You create an XML file as part of your site, search engine crawlers can find it, and the file helps define your pages and their relation to each other
  • More information is available at http://www.sitemaps.org
  • The Sitemap file should typically go in the root directory of your site
  • Each Sitemap file that you provide must have no more than 50,000 URLs and must be no larger than 10MB
  • If you need to provide more than 50,000 URLs or the file goes over 10MB, you must provide more than one Sitemap file. You will then need to list each Sitemap file in a Sitemap index file. Sitemap index files may not list more than 1,000 Sitemaps and must be no larger than 10MB.
  • All URLs listed in a Sitemap must use the same protocol (for example, http) and reside on the same host as the Sitemap
  • Some ideas on automating the creation of the sitemap.xml file
    • These pages can be generated dynamically as part of the build process
    • These files can also be submitted dynamically to the major search engines for indexing
    • These files are a huge benefit to Search Engine optimization
    • This may be something that can be built into the continuous integration process
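  • Here is a sketch of a minimal Sitemap file containing a single URL entry, following the format described at sitemaps.org (the location, date, and optional values are assumed examples):
    • <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <!-- full URL of the page, same protocol and host as the Sitemap -->
          <loc>http://www.example.com/</loc>
          <!-- optional: date the page was last modified -->
          <lastmod>2008-01-01</lastmod>
          <!-- optional: how frequently the page is likely to change -->
          <changefreq>monthly</changefreq>
          <!-- optional: priority of this URL relative to other URLs on your site -->
          <priority>0.8</priority>
        </url>
      </urlset>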

SEO Tip – Hyperlinks

Hyperlinks are the nervous system of a crawler. Crawlers follow these links to determine which pages of your site should be crawled. If your hyperlinks are broken or unusable, this will prevent your pages from being crawled.

  • Keep the links on a given page to a reasonable number (fewer than 100).
  • JavaScript for navigation is bad, as I described in the JavaScript section. Dynamic links created by JavaScript cannot be followed by a crawler.
  • Image maps for navigation are also bad. Since the navigation is based on an image, crawlers may not be able to process the links properly, and will not follow them to their destination pages
  • Broken links are obviously bad as well. If the link is broken, that page will not be crawled or indexed, and will not be found by your search users. You should check your site with a Link Tester to prevent this from happening.
  • The search engines basically figure that whatever you link to from your page is likely to be closely related to the content of your page. For that reason, some of the engines actually look for keywords in the hyperlinks and in any text immediately surrounding them. What this means for you is that, where you can, you should include your most important keyword phrases in the link itself and possibly in the surrounding text.
  • The text of your link should be natural content text, not “click here.” You should also try to incorporate your keywords into your hyperlinks, but do it without being artificial.
  • Changing the style of your links with a cascading style sheet will not affect crawling or PageRank; however, blue underlined text is a usability convention, and departing from it may confuse your users.
  • Just as the robots meta tag can prevent links from being followed, each individual link can do the same, using the rel="nofollow" attribute on the anchor tag.
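  • To illustrate the last few points, here is a sketch of the difference (the URLs and anchor text are assumed examples):
    • <!-- weak: the anchor text tells the crawler nothing about the target page -->
      <a href="http://www.example.com/seo-tips.html">Click here</a>
      <!-- better: natural, keyword-rich anchor text -->
      <a href="http://www.example.com/seo-tips.html">search engine optimization tips</a>
      <!-- a link that crawlers are asked not to follow -->
      <a href="http://www.example.com/login.html" rel="nofollow">Log in</a>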

SEO Tip – Querystrings

The treatment of querystrings is a controversial topic amongst SEO experts. This should add a bit of insight into how querystrings really affect SEO.

  • If Google and other search engines couldn’t traverse dynamic sites, then huge swaths of the Internet such as online databases, blogs, threaded discussion forums, and e-commerce sites (to name a few) would go unlisted
  • Querystrings were frowned on initially by search engines because querystrings could define an infinite number of pages, and the crawler could choke. Google has never had this problem, and querystrings, if managed correctly, can be used successfully on any site.
  • Google’s Webmaster Guidelines say not to use "&id=" as a parameter in your URLs, as Google does not include these pages in its index.
  • According to Matt Cutts, the number of querystring parameters should be limited to one or two
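  • As a hypothetical illustration (the domain, paths, and parameter names are assumed examples):
    • Crawler-friendly, with only two parameters: http://www.example.com/products?category=shoes&sort=price
    • Riskier, with many parameters including "id": http://www.example.com/item?session=8f3a&sort=price&view=full&id=1234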