External links, or outbound links, are links from within your site pointing to external domains (other websites). They are not to be confused with back links, which we address in another article.

Although not an intrinsic component of all websites, there are nonetheless best practices to be adhered to or considered.

There are several types of ‘link’ in general:


Below is an example of the code used to link to another website:

<a href="">Example Anchor Text</a>


There are two main types and formats for external links, which we have detailed and described below:


A text link is the most common type of link and this is simply where some text has been turned into a link to another site. The code for this is shown above. Unless specified otherwise, Google and other crawlers will follow these.


This is an image that has been turned into a clickable link; the example code for this is below:

<a href=""><img src=""></a>

There are other parameters we have omitted from this such as width, height and alt tags. Also consider that in responsive sites with responsive images the code will be different.


All of the code used above is HTML; this is the most common way to implement a link and makes it readable and crawlable by search engines.


Neither JavaScript nor Flash can be crawled by search engines and as such Google will not follow these links. This can be used deliberately if you want to prevent links from being crawled, although there are more efficient ways of doing so, which we describe below.


There are a few components to a link that should be considered; we have detailed these below:


Anchor text is the text used (in a text link) to link to a target page. For example:

<a href="">Home Page</a>

In the above code, "Home Page" is the anchor text for that link and is what will be visible. The styling of links varies between sites, but typically they are underlined either permanently or on mouse-over.

The anchor text used when linking out to another site is not as much of a concern as internal link and back link anchor text, as it does not affect your own website.


All links to external sites should be absolute links, using the full domain name and including the http:// component. Omitting the domain could confuse crawlers, which may think the link refers to an internal page.


Follow links leak PageRank (which we explain in more detail below); therefore the impact of having follow links from your site can be that it reduces your PageRank. Nofollow links do not pass PageRank on and hence have no impact from this perspective.

We cover in more detail things like linking to untrustworthy sites and irrelevant content, but for the most part linking (when done properly) is something every site should do.


So much has been written about whether to ‘follow’ or ‘nofollow’ outbound links, all guided by differing and opposing philosophies. We have set out here what we think and why we believe these are the best practices when it comes to whether to follow or not.


These allow PageRank to flow through to the target page. This is the default for links and doesn’t require any modification; when adding a link, unless specified otherwise, it will be a ‘follow’ link. Below is an example of how a standard HTML link to a home page would look:

<a href="">Home Page</a> – This is the code.

Home Page = This is how it would appear on the page.


These do not allow PageRank to flow through to the target page. To make a link ‘nofollow’, you will need to add some code:

<a href="" rel="nofollow">Home Page</a> – This is the code.


As is explained in more detail in the guide on back links: a back link is what the site being linked to calls the link; the site linking out calls it an outbound or external link. Broadly speaking, the more back links a site has the better… This contributes to PageRank and overall website authority, and is a very powerful direct ranking factor.

Consequently people will try to game this (to the max!), which led to a world of pain for everyone. Spammers would try to get links from any location possible, with no limits…

The result was the introduction of nofollow links; a spammer has no interest in these, as they provide little to no value for the target site. Hence if all blog comment links are automatically converted to nofollow, this reduces the amount of spam comments and links.

The same logic applies to nofollow links generally.

The main areas where nofollow links should be used are:

  • Paid links
  • Comment / blog links (broadly speaking)
  • Forums
  • Linking to untrustworthy sites
  • Footer links

As with most things, there is nuance to this and these are broad generalisations, but a good rule of thumb.

Although they do not pass on PageRank, nofollow links are still a source of referral traffic and hence do provide value when used properly. They can also improve brand recognition and reputation if the links are on a good site and in a good location.

Follow links leak PageRank to the target site and as such if you do not set the nofollow parameter on a link, it will leak PageRank. Authority from back links is hard to get and so you want to be sparing when passing it on. Consequently, linking to an external site from a template region such as the footer or main menu will leak PageRank from every page of your site.

If the site is relevant, trustworthy, and a credible source of useful info to your users, and you are not linking out much, there is no reason not to make it a follow link.


There are a couple of things to consider when linking out to other content; we describe the most important factors below:


If someone clicks an external link on your website, it will take them away from your site, and as such you may lose the user. Consider this when adding links.

To help mitigate this risk, the links should open in a new window; the code for this is below.

When adding this to a standard text link we get the following code:

<a href="" target="_blank">Home Page</a>

This at least means that a tab with your site remains in the user’s web browser.


You should only link to sites where the content being linked to is relevant to the user reading your content. Linking to irrelevant content is pointless and provides no benefit to you.

Linking to relevant content can be advantageous though; it builds overall relevance between the page on which the link is located and the nature of the content. It is not a direct ranking factor though.


This is very important: linking to poor quality sites or, worse still, spammy websites can damage your credibility, as it associates you with an undesirable site. It could also annoy users who end up on a website that they would not want to visit, thus damaging your reputation.


Having no links to other sites is not a great idea, although it may seem the ideal way to protect PageRank. Linking out to relevant, credible sources of content makes your site a more credible source of information. A site with no external links is essentially the end of the road for the user: if they cannot find what they want, there is nowhere else to go.

Adding outbound links can make your site seem like a superior resource for web users.
That said, there is a limit to this: having hundreds or thousands of external links can appear spammy and is largely a waste of time, as it doesn’t provide a usable resource to users.

Be rational and reasonable when linking out; do so where it improves the user experience or provides valuable information.


Broken links are links that target a page that either does not exist or is inaccessible for some reason. These have a number of implications, none of which are good for users, robots, and website performance. We discuss this in more depth in this article on broken links.


Internal links deliver PageRank and relevancy to target pages and are vital to internal website navigation, the user experience and also to demonstrate what the most important pages on a website are.

Improper use of internal linking can result in an algorithmic penalty from Google. At best poor internal linking means that a site will not benefit from the advantages of optimised internal links and may not be crawled properly.

Internal linking is not difficult to understand or get right, but there is some nuance to it that makes it well worth understanding properly.
There are several types of ‘link’ in general:

We look at just internal links in this article, but you can click either of the links above to learn more about the other types of link. This article is part of a series on content; for more articles relevant to this topic, please follow the links below:


Internal links provide five primary functions on a website, which are:


Probably the most obvious function is how internal links facilitate website navigation… Unless a website has just one page, internal linking is required to provide the mechanism for users to move between pages. All website navigation relies on this to work; therefore, internal links are one of the most prevalent components throughout a website.


Links also assist robots or crawlers in crawling your website, allowing them to identify the location of all your web pages. Hence website navigation should address the needs of both humans and machines.


Each link on a page allows PageRank to flow through it to the page being linked to, thus website authority is disseminated through its links to its pages.

Page authority (or PageRank) that flows through each link is divided by the number of links on a page. So, if a page has 100 links, then each individual link passes on 1% of the available page authority; if a page has 1,000 links, then each individual link passes on 0.1% of the available page authority.

The reason why we say ‘available page authority’ is that even a page with only one link on it can pass on only between 85% and 90% of its PageRank, meaning that the upper limit on how much PageRank can flow from one page to another is capped below 100%.

Hence if a page has 100 links, each link will pass on between 0.85% and 0.9% of the actual PageRank.
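This per-link arithmetic can be sketched in a few lines of Python; the 0.85 damping factor below is an assumption standing in for the ‘85% to 90%’ range described above:

```python
# Sketch of the per-link PageRank arithmetic described above, assuming
# a damping factor of 0.85 (i.e. roughly 85% of a page's authority is
# available to pass on through its links).
def pagerank_per_link(page_authority, num_links, damping=0.85):
    """Authority passed on through each individual link on a page."""
    available = page_authority * damping  # only ~85% is available to flow out
    return available / num_links          # split equally between all links

# A page with 100 links passes 0.85% of its authority through each link;
# a page with a single link passes the full available 85% through it.
share = pagerank_per_link(1.0, 100)
```

This is only a simplification of how PageRank distribution is usually described, not Google's actual implementation.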

This is why internal linking strategies are so important, because the links act like valves and pipes distributing authority to where it is required.


In a similar way to how authority flows through a link, so does relevance. The difference is that there is no finite amount of relevance that can pass through a link. Instead of a volume of relevance, it is contextual and limited to the text (or alt tag in images) being used.

Using relevant anchor text (described later in this article) can assist in delivering relevance (to a keyword) to a target page. This is done by optimising anchor text to target keywords.


If, for the sake of simplicity, we consider a website with 13 pages, we can construct some scenarios to describe how different internal linking configurations help to define structure and hierarchy.

Flat Structure: If every page of this 13-page site were linked to an equal number of times, this would indicate to Google that all of the pages are roughly of equal importance.

Pyramid: If the site were structured such that it had a home page and 3 category pages, each with 3 sub-category pages, and each set of pages had the following share of the internal links:

  • Home Page 33% (33% per page)
  • Category Pages 33% (11% per page)
  • Sub Category Pages 34% (3.78% per page)

This would describe the home page as the most important page, then the category pages, then the sub category pages.

Reverse Pyramid: If the site were structured in the same way described above but instead each page type had the following percentage of the total links:

  • Home Page 5% (5% per page)
  • Category Pages 15% (5% per page)
  • Sub Category Pages 80% (8.89% per page)

This would describe the sub-category pages as the most important, then the category pages, then the home page.

Remember that what we are describing here is how the PageRank is distributed among the pages through links by looking at the number of links each page has as a percentage of the total internal links on the website.

There are other factors that will determine what the actual PageRank is of a page outside of this. So even with a reverse pyramid structure you may still find that the home page has the most authority.
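As a quick sketch, the per-page shares in the pyramid example can be computed from each tier's share of the site's total internal links; the tier sizes and percentages below mirror the example above:

```python
# Hypothetical sketch of the pyramid example: a home page, 3 category
# pages and 9 sub-category pages, with each tier's share of the site's
# total internal links split evenly between the pages in that tier.
tiers = {
    "Home Page":          {"pages": 1, "share_of_links": 0.33},
    "Category Pages":     {"pages": 3, "share_of_links": 0.33},
    "Sub Category Pages": {"pages": 9, "share_of_links": 0.34},
}

per_page_share = {
    name: tier["share_of_links"] / tier["pages"] for name, tier in tiers.items()
}

for name, share in per_page_share.items():
    print(f"{name}: {share:.2%} of internal links per page")
```

Running this reproduces the 33% / 11% / 3.78% per-page figures quoted above; swapping in the reverse-pyramid percentages works the same way.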


As described above, the ratio of internal links to each page on a website helps to define the priority of webpages to crawlers. Mapping out all of the links to each page of a website helps to understand what this structure is and how it might be informing Search Engines of webpage priority.

There are a number of tools such as Raptor that can crawl a site and provide this information. Once you have this data, you can start to analyse it, putting this in a table (see example below) helps you to see what pages are linked to most.

The below table is a truncated and basic example of how to map out internal links.


*To calculate the ‘% of Links’ cells, simply divide the ‘No. Of links’ for a page by the total ‘No. Of links’.

It can also be useful on larger sites and ecommerce sites to group pages together and look at the aggregated data for these groups. For example, grouping pages into products, services, informational, blog, etc. can provide a good top-level view.

It is also worth looking at the number of pages that link to each page of the site and mapping this against the previous data, for example:

In the above table we have also included a new metric ‘Ratio of Links to Pages’, which is calculated by dividing the ‘No. Of Links’ by the ‘No. of Pages Linking’ for each page, for example:

Home Page: 56/48 = 1.17

What this means is that, on average, each page that links to the home page has 1.17 links to the home page on it. As mentioned elsewhere in this article, Google only counts the first link to any one place on a page… So in a scenario where a page has a lot of links, but few linking pages, you would not rank this as a priority page.
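The two calculations above can be sketched as follows; the Home Page figures mirror the 56/48 example, while the ‘Contact’ row is invented for illustration:

```python
# Illustrative sketch of the '% of Links' and 'Ratio of Links to Pages'
# metrics described above. The crawl numbers here are made up; the real
# figures would come from a crawler.
pages = {
    "Home Page": {"links": 56, "linking_pages": 48},  # the 56/48 example
    "Contact":   {"links": 24, "linking_pages": 24},  # invented row
}
total_links = sum(p["links"] for p in pages.values())

for name, p in pages.items():
    pct_of_links = p["links"] / total_links      # the '% of Links' column
    ratio = p["links"] / p["linking_pages"]      # 'Ratio of Links to Pages'
    print(f"{name}: {pct_of_links:.1%} of internal links, ratio {ratio:.2f}")
```

The Home Page row prints a ratio of 1.17, matching the worked example above.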

The internal linking diagram below is a visual map of the internal linking structure of a site, these can become large and complicated the more pages are included. These can be useful when analysing a site’s navigational structure and to understand where links are coming from, this can often be difficult to show in any functional way in a spreadsheet.

Equally diagrams can become cumbersome for practical usage by an SEO after a point, thus a blend of visual diagrams and tables can help build a complete picture of the internal linking structure.

Once the site has been mapped out, you can begin the process of defining a new internal linking structure. Setting out the ratio and number of links to each of the pages on a website. Mapping and tracking these metrics can help you to identify the cause of certain issues that can arise. If for example you notice a drop in rankings for a group of pages and you can correlate this with a significant change in the ratio of links to those pages; you may be able to establish causation and implement a solution.


As described above, internal linking distributes PageRank throughout the site; and PageRank is a major ranking factor. Therefore, creating an internal linking strategy relevant to the website goals and objectives is critical to maximising website performance in the SERPs.

In short, well structured internal linking will improve rankings and poorly structured linking will impede rankings.


There are a lot of places on a website from which you can link, and also different ways of modifying links to produce a different effect; we describe each of these below.


All websites should have a main menu for navigation and / or a sidebar with additional navigational options. Typically, the main navigation is replicated on every page of the site and remains consistent throughout. Secondary navigation can vary to reflect the section of a site in which it is located.

Main navigation should ideally provide the most comprehensive navigation for users, often linking to all 1st tier pages.

For more information on website navigation please follow the link.


The footer often contains a limited number of important links to locations such as:

  • Sitemap
  • Home Page
  • Legal info
  • Category pages
  • Contact us
  • Find us
  • Social media pages (external links)

Please follow the link for more information on footer links.


Breadcrumb navigation typically sits underneath the header and above the main body of content on a page and provides a structured list of links showing where the current page sits in the site structure. For example, if we consider a product page, breadcrumb navigation could look like this:

Home > Category > Product

Note that this defines where a page sits within the site and helps the user navigate (back) to related content one level up from where they are. There are a number of benefits to using breadcrumbs beyond the user experience, especially when using structured data to mark up the breadcrumb code.

This can actually act as a description of site structure for Search Engines, even if the URL structure is flat (all pages located directly after the domain root).

Please follow the links for more information on Breadcrumb Navigation or Breadcrumb Structured Data.
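To illustrate, here is a minimal sketch of how BreadcrumbList structured data for a Home > Category > Product trail could be generated; the example.com URLs are placeholders, not a prescribed format:

```python
import json

# Hedged sketch: build schema.org BreadcrumbList JSON-LD for a
# breadcrumb trail. The URLs below are placeholder values.
def breadcrumb_jsonld(trail):
    """trail: list of (name, url) pairs, ordered from the home page down."""
    return {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(trail, start=1)
        ],
    }

markup = breadcrumb_jsonld([
    ("Home", "https://www.example.com/"),
    ("Category", "https://www.example.com/category/"),
    ("Product", "https://www.example.com/category/product/"),
])
print(json.dumps(markup, indent=2))
```

The resulting JSON would normally be embedded in the page inside a script tag of type application/ld+json.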


Contextual linking is when a link sits within relevant content on a page. As with this article, you can see that we link out to other pages from within the main body of content… This is called contextual linking, and each link therefore sits within the context of the content that surrounds it.

The more relevant the content is to the page being linked to, the more relevance flows through to the target page.

The more relevant a link is the more powerful it is for improving rankings. Additionally, this also provides relevant information to users, at the point at where it is most relevant for them (if done well).


Follow: These allow PageRank to flow through to the target page. This is the default for links and doesn’t require any modification; when adding a link, unless specified otherwise, it will be a ‘follow’ link. Below is an example of how a standard HTML link to a home page would look:

<a href="">Home Page</a> – This is the code.

Home Page = This is how it would appear on the page.

Nofollow: These do not allow PageRank to flow through to the target page. To make a link ‘nofollow’, you will need to add some code:

<a href="" rel="nofollow">Home Page</a> – This is the code.

Home Page = This is how it would appear on the page (note that the nofollow tag does not change the appearance of the link).

Nofollow links are more often used for external links (links to other websites); there is little need to use nofollow links when linking internally. Here is what Google have to say about the nofollow attribute. We discuss this topic in more detail in the external linking article of this knowledge base.

Nofollow internal links can be used to limit PageRank flow to pages that you do not need to deliver PageRank to, such as login pages or pages that you do not want indexed. Use these sparingly, as there is little benefit to creating nofollow links at scale throughout a website. In 2009, Matt Cutts discussed this topic; you can watch the video here.


When the alt tag and anchor text are equally optimised for the target keyword, an image link will typically pass on less relevance to the target page, but an equal amount of PageRank, compared to a text link. This is one of the main reasons why we recommend using text links in website navigation rather than images.

That said, images often provide a different set of advantages: for example, images stand out on a page and draw people’s attention to offers or other valuable content that you want to promote. As such, image links are often implemented for different reasons, but they can also be coupled with a text link to the same page with optimised anchor text.


Sitemaps provide links to every page on a website (that you want to be found, indexed or both). HTML sitemaps help users navigate and XML sitemaps help robots crawl your website. Please follow the links for more information on HTML Sitemaps or XML Sitemaps.


Anchor text is the text used (in a text link) to link to a target page. For example:

<a href="">Home Page</a>

In the above code, “Home Page” is the anchor text for that link and is what will be visible. The styling of links varies between sites, but typically they are underlined either permanently or on mouse-over.


Identify keywords (targeted by web pages) throughout your website’s content and turn the text into a link to the page targeting that keyword. For example: when a keyword relevant to another webpage is mentioned on a page, use that term to link to the relevant page.

This improves the relevancy of the target page to the keyword it targets.


An anchor text ratio is the number of iterations of a given text used in anchor text as a percentage of all the anchor text used to link to a page. For example, if a page is linked to 100 times throughout a website, you could analyse the number of times each anchor text is used as a percentage of those 100 links.
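As a hypothetical sketch (the anchor texts and counts below are invented for illustration), the ratio calculation looks like this:

```python
from collections import Counter

# Invented example: a page linked to 100 times across the site, with
# three different anchor texts. Each phrase's ratio is its share of
# all links to the page.
anchors = ["Home Page"] * 70 + ["click here"] * 20 + ["Red Widgets"] * 10

ratios = {text: n / len(anchors) for text, n in Counter(anchors).items()}

for text, ratio in ratios.items():
    print(f'"{text}": {ratio:.0%} of anchor text')
```

With real data, the anchor list would be extracted from a site crawl rather than typed by hand.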

This is a much more important factor when considering back links to your website rather than internal links within the website. Due to the nature of template regions of a site containing links, it is likely that the most commonly used anchor text to your main pages will be that used in the main navigation.

Google is easily able to understand this and it is factored into the algorithm. It is only in extreme cases, where thousands of internal links are created to a page in an unnatural way, all using identical anchor text, that this could become an issue. Matt Cutts confirms this in a video from 2013.

That said, it is always useful to vary the anchor text used to include variations of keywords; if you are running the risk of overusing a keyword in anchor text, consider using generic anchor text on occasion, such as “click here” or “for more information”.

Only in extreme cases are you likely to incur some kind of algorithmic penalty from Google as a result of unnatural anchor text usage, though it can happen.


It is best practice to use ‘absolute’ rather than ‘relative’ links, meaning that the complete URL, including everything from ‘http’ onwards, should be used when linking to a page, rather than a relative form such as:

/page.html – Relative link (omits domain)

Whether a link is absolute or relative is not a direct ranking factor; the primary reason for using absolute links is that relative links can cause certain problems for the SEO of your site. Although not a common issue, relative links can lead to indexing and crawling problems.

Because staging sites frequently use relative links (due to the nature of a staging site), this is often overlooked when launching; these links should all be changed to absolute URLs.
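How crawlers resolve relative links against the page they were found on can be sketched with Python's standard urllib.parse.urljoin; the example.com URLs are placeholders:

```python
from urllib.parse import urljoin

# Sketch of relative-link resolution against the page a link was found
# on. example.com is a placeholder domain.
base = "https://www.example.com/blog/post.html"

absolute = urljoin(base, "https://www.example.com/page.html")  # used as-is
root_rel = urljoin(base, "/page.html")   # resolved against the domain root
plain_rel = urljoin(base, "page.html")   # resolved against the current folder

print(absolute)   # https://www.example.com/page.html
print(root_rel)   # https://www.example.com/page.html
print(plain_rel)  # https://www.example.com/blog/page.html
```

The last case shows why relative links are fragile: the same link text resolves to different URLs depending on where it appears, which is how duplicate-URL and crawl-loop problems can start.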


There are a number of reasons why using absolute URLs is better than using relative ones; we have listed these below:

Crawler Time: Google wants to spend as little time as possible crawling a website, so the easier you make this for them the more likely you are to have your website crawled more efficiently.

Duplicate Content: Using absolute URLs can help prevent duplicate content issues, especially when linking from canonical tags. This also creates consistency across the site, which helps to prevent multiple URLs from being indexed for the same content.

Prevent Errors: Indexing errors can creep in when relative URLs are used; in the worst cases we have seen infinite loops and exponentially growing volumes of errors stacking up in Google Webmaster Tools. Absolute URLs help prevent this type of error from occurring.


You should not use custom tracking parameters such as UTM code on internal links as this can cause all kinds of problems in Google Analytics.


Broken links are links that target a page that either does not exist or is inaccessible for some reason. These have a number of implications, none of which are good for users, robots, and website performance. We discuss this in more depth in this article on broken links.


As mentioned above, absolute URLs should be used whenever linking to any page or resource on a website.


Before 2009 there was good reason to keep the number of links on a page to fewer than 100, due to limitations with Google’s ability to crawl pages larger than 100KB. This limit has increased with the increasing file size of web pages over the years. Hence you are unlikely to have a page truncated during indexing because it contains too many links.

As always there is the common sense component to this, thousands of links on a page are at best unnecessary and many may be ignored if the page is consequently too big. There is no advantage from an SEO or UX perspective to have this many links on a page.

We have previously discussed the way in which PageRank flows through links; the more links there are, the smaller the flow of PageRank through each link.

Therefore, it follows that keeping the number of links to just what is necessary for the User is a good rule of thumb.

Matt Cutts discusses this topic and supports what we say in this article, you can click here to see the video.


Website navigation is the gateway to your website’s content and as such is a critical component to the success of any website. Poor website navigation can impact the user experience significantly and in some instances can prevent robots from effectively crawling the site. Hence, proper and effective navigation can impact rankings and website performance on almost all levels.

Website navigation should reflect the site’s structure and hierarchy promoting the most important pages and providing a great user experience to site visitors.

This article looks at best practices and optimisation recommendations that apply to all website navigation. This article is a part of a wider range of guides on website navigation which we have linked below:


A breadcrumb trail is a set of links that can help a user understand and navigate a website’s hierarchy. Breadcrumbs provide a great opportunity to add structure and hierarchy to a website; even if the URL structure is flat, breadcrumbs can be used to provide the structure missing from the URLs.

This has a significant impact on user experience and when implemented with mark-up data this can also have an impact on how a site appears within the SERPs.

This is discussed in more detail in another article in our Knowledge Base specifically about breadcrumb navigation.

    • Ensure that breadcrumb navigation is implemented for all pages on the site (except the homepage). Breadcrumbs should follow the folder hierarchy of the site and include the page name / menu link.


Arguably the most important component of navigation (certainly from an SEO perspective) is accessibility. Using JavaScript or Flash menus could make the site inaccessible to robots through the menu. Google cannot read JavaScript or Flash, so if a menu is entirely composed of JS or Flash, Google will not be able to follow or identify the links. Using JavaScript to control a menu is fine, provided that the links themselves are HTML.

Google cannot determine what images are either, so using images instead of text as links loses some of the benefit of HTML text links. If images are used, ensure that they have alt attributes that reflect the pages they are linking to.

Images may not display properly, and Flash and JavaScript may not load on some devices; hence it is important to test whether a site’s navigation works properly with these components disabled.

  • Ensure site navigation and access to all critical content is possible with JavaScript, Images and Cookies disabled. To check this, click on the settings icon, select “Set User Agent” and choose “Googlebot”. Reload the page and analyse the results.

This is one of the most common issues that can cripple a website and is something we have seen many times. Typically the impact has been that website accessibility is crippled and consequently all pages are removed from the index.

This may not be immediately obvious, as from a user perspective the links work and the site functions. It is only when traffic nosedives that the problem comes to light.


When linking to internal pages from the main navigation, ensure that the links are all working and that they are not linking to pages that are being redirected to another page.

Ensure that the navigational links match the actual page URL, so as to ensure that no unnecessary redirections take place.

Ensure that URLs are consistent with the canonical versions that are being enforced by redirections or used in the target page’s canonical tags.


Ensure site navigation is consistent throughout the site to maintain a consistent core linking structure. This is to limit the potential for ‘section orphaning’ whereby a section of a website becomes isolated and inaccessible from most of the site.

Consistency also impacts the user experience; when a user clicks through to a page and the menu changes or is removed, they may leave the site or be unable to navigate to where they are trying to go.


These are discussed in more detail in the Footer Links article on the Knowledge Base.


Best practices for any given webpage used to be that a page should have no more than around 100 links (as the total number of internal and external links). This is no longer a recommendation by Google and hasn’t been since around 2008, however there are some other factors to consider.

For example a page divides the link authority it passes on by the number of links it has on the page, so having 50 links rather than 100 would mean that twice the amount of authority is passed on to those 50 pages. Linking out to a lot of pages (especially high up on a page) will leak a larger percentage of that authority to other websites.

Another consideration is the user experience; if there are 500 links on a page, it may be hard, without great navigation, for them to find what they want but also for the site to funnel them to where you want.

If a site is particularly spammy and has thousands of links on hundreds of pages, it may be another indicator to Google that the site is in need of a penalty. This is very unlikely and requires some work to achieve!


Try to use the target keyword of the destination page as the anchor text in a text link, and avoid the use of “Read More” or “Click Here”; such generic anchor text is not preferable to keyword-rich anchor text most of the time. However, some websites use these anchor texts to avoid unnatural anchor text ratios.

Varying anchor text throughout the site is now a very important component to your internal linking strategy. Read more about internal linking by clicking the link.


Check for instances of multiple links on one page pointing to the same page, as only the first link (and its anchor text) are counted. The only exception to this is where a link that exists in the main navigation is also present in the main body of content.

More is covered on this topic in the previously mentioned, internal linking guide.


It is best practice to use ‘absolute’ rather than ‘relative’ links, meaning that the complete URL should be used when linking to a page, for example: – Absolute link – Relative link

Whether a link is absolute or relative is not a direct ranking factor, the primary reason for using absolute links is that relative links can cause certain problems for the SEO of your site, also discussed in the internal linking guide.


To ensure that the homepage is viewed as the most authoritative page of the website according to Google, undertake a site: search.

Search for your domain using the site: operator in Google (e.g. site:yourdomain.com) and ensure that the homepage is ranked in the #1 position. If it is not, this may indicate a major problem with the website’s structure and hierarchy as seen by Google.


Similar to the above, to ensure that a specific page is viewed as the most authoritative & relevant for a particular keyword, undertake a site: search including a keyword (e.g. “ keyword1”) and ensure that the intended page is ranked in the #1 position.


Footer links have, in the past, been a big opportunity for SEOs to optimise, providing authority and keyword-rich anchor text links to internal pages. Times have changed, though, and the footer now carries less benefit from an SEO perspective.

There are still quite a few things to consider when designing or reviewing footer navigation as part of a website audit, which we cover here.


The footer provides an opportunity for the site to link to internal website pages and should primarily be targeted towards improving the user experience. For example, links to the terms & conditions, sitemap, contact, privacy, and ‘find us’ pages are all valuable links to put into the footer for the user.

It may be that all pages of the site require legal information or disclaimers; if so, these can be added to the footer without impacting the user experience.
Anything that can assist the user journey or deliver a better user experience can reasonably be added to the footer.

Another user experience factor to consider with footers, as with all visible site components, is how they appear on different devices. Mobile and tablet traffic are higher than ever, constituting the majority of traffic to many websites, and this will only increase as the technology becomes cheaper and more readily available.

Large footers with lots of links and text need to be tested on smaller screens.


Additionally, linking to popular content or categories on a website can provide valuable navigation for users, but it is important to keep this natural and not too ‘exact match keyword’ targeted. For example, if a popular page on the site sells red widgets, good anchor text could be “Red Widgets”, while spammy anchor text would be “Buy Red Widgets”.

This is especially true if the link is located on hundreds or thousands of pages and is accompanied by additional anchor text like “Buy Yellow Widgets”, “Buy Cheap Widgets”, etc.

Another example: if you compare financial products, listing them in the footer as links to the most relevant page for each product is fine. E.g.

Home Loans
Credit Cards
Personal Loans

But adding “compare” or “comparison” to each product could be considered keyword stuffing in the footer.

Do not use the footer simply to stuff keyword-rich anchor text links to 30 pages targeting your top tier keywords. There should always be some consideration for SEO, but user experience is a better guide as to what should be in a footer.
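A sketch of a user-focused footer along these lines (all page paths and the company name are hypothetical): utility pages first, plus one naturally worded category link rather than a wall of keyword-stuffed anchors.

```html
<footer>
  <nav>
    <a href="/contact/">Contact</a>
    <a href="/privacy/">Privacy Policy</a>
    <a href="/terms/">Terms &amp; Conditions</a>
    <a href="/sitemap/">Sitemap</a>
    <!-- A popular category, with natural rather than 'Buy...' anchor text -->
    <a href="/red-widgets/">Red Widgets</a>
  </nav>
  <p>&copy; Example Ltd. All rights reserved.</p>
</footer>
```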


Ensure that the footer isn’t used as a ‘link-dump’ or sitemap, as this can be viewed negatively and dilutes the link authority around the website. All sites should have a primary source of navigation, therefore adding countless links into the footer is both unnecessary and potentially harmful.

Internal linking is a powerful tool, but adding 100 links to the footer will devalue the authority that each link passes on. See our article on Internal Linking here.


Do not link out to external sites from the footer with keyword-rich anchor text; this will create potentially hundreds or thousands of identical exact match anchor text links. Google could see this as unnatural, and the site being linked to could be negatively impacted.

Also consider adding a rel="nofollow" attribute to external links to prevent leaking authority from every page of your website.
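A nofollow external link looks like this (the URL is a placeholder):

```html
<!-- rel="nofollow" tells crawlers not to follow the link
     or pass authority through it -->
<a href="https://www.example.com/" rel="nofollow">Example Site</a>
```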


There is research to suggest that Google devalue footer links as part of their algorithm, and as such the footer may not provide as much value as it once did.

Linking to pages that are already linked to from the main navigation or elsewhere on the page will have little to no SEO value, as Google primarily counts the first link on a page. Thus duplicating pre-existing links need not be done for the purpose of optimisation.

Because the footer is, by definition, at the bottom of a page, it will usually contain the last links that Google will crawl; as such the value of those links is significantly reduced.

Footer links also typically have a very low CTR (Click Through Rate); given that they are at the bottom of the page, they are less likely to be seen and hence clicked.

The longer a page, the fewer people will scroll to the bottom of it; thus the footer, although useful, is not a very sharp tool in the SEO’s toolbox!


Ensure that the footer doesn’t obscure sidebars or content elsewhere on a page, especially when being viewed on mobile devices. Although unlikely to incur a penalty from Google, this does impact the user experience.


Broken links are links to pages that do not exist or that, for some reason or combination of reasons, are not accessible. You will likely have experienced them at some point: you click a link and end up on an error 404 page saying something like “Sorry, the page you are trying to access cannot be found”.

Broken links can manifest on your website for a number of reasons, such as URLs being changed, pages being moved, human error, and pages being removed from your website. As such, it is not uncommon for a website, especially a large or growing site, to have at least some broken links.

Broken links are split into two main categories: internal and external. We examine both of these in more detail in this article. We can further split ‘external broken links’ into two further categories, “from search engines” and “from other websites”; these are also covered here in detail.


Broken links are website errors; please refer to the error handling guide, also located in this knowledge base. It contains important information about implementing safeguards in case of problems such as broken links.


Broken links are detrimental to any website from a user experience perspective, but depending on which pages are missing, they can also directly affect rankings. Having excessive numbers of broken links can potentially harm your website’s rankings even if the missing pages are not valuable pages.


We mentioned some ways in which broken links can occur, but in more serious cases, more often seen on larger, more complex sites, it is possible for pages or URLs to be created in an automated and systematic process. We have seen cases where a CMS (Content Management System) got stuck in an infinite loop, creating URLs or links to pages that do not exist.

Consequently, this can lead to a volume of broken links greater than the volume of accessible pages on the site. It is worth noting that broken links can, in some instances, be the symptom of a wider problem.

In cases where a site has too many broken links, Google’s crawler (Googlebot) can get caught following links to, and attempting to index, non-existent website content. This can prolong the time it takes for Google to index your actual content.

Another pitfall of large quantities of broken links, especially now that user experience is of growing importance to Google, is that your site could be seen as unnavigable, which could directly affect your website’s rankings.


As mentioned, broken links can be an indicator of a wider problem. If your broken links point to top-level pages, product pages, or any page that ranks well for a target keyword, then once Google detects that these pages are missing, you are on a timer to get them fixed (restored) as quickly as possible.

Google will not rank pages that are inaccessible, as this provides their users with a poor experience. If your pages are not restored, typically within two weeks of Google detecting that they are missing, it becomes exponentially more difficult to regain lost rankings.

In a worst case scenario it is possible to lose the investment of time and money spent in acquiring organic keyword rankings.

In any case, fixing broken links is a must for any website administrator.


Detecting these is easy with almost any crawler tool, such as Raptor. There are also various online tools that will crawl up to a certain number of web pages in one-off checks, and some CMSs have built-in functionality for alerting you to broken links.

Google Webmaster Tools / Search Console also has a number of features for identifying missing pages or pages with errors. It is worth noting, though, that if Google is telling you about the problem, you have lost the chance to fix it before they found out! We always recommend regular checks to ensure that a website is working as intended.


There are a few common mistakes that often cause broken links, and that can be easily overcome with some planning.

Please refer to the error handling article, also located in this knowledge base for information on custom error 404 pages.

Also refer to the articles covering moving or removing pages and site-wide changes for further safeguarding measures.


These are links on a website that point to pages within the same site that are not accessible. They are the easiest of all broken links to fix, as they fall under the remit of the web developer or whoever manages your website.



If the missing page has become inaccessible for any reason, making the missing page accessible again will resolve the problem.

If you are experiencing widespread problems, you may need to implement a temporary solution if the actual fix is going to take days or weeks. In this scenario, you should typically implement temporary 302 redirects, redirecting the broken URLs to the next most relevant page on your site.

For more information on redirects, please follow the link.
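As a sketch, if your site runs on Apache (an assumption; equivalent mechanisms exist for other servers), a temporary redirect can be set up in the .htaccess file; the paths below are hypothetical examples:

```apache
# Temporary (302) redirect from a broken URL
# to the next most relevant live page
Redirect 302 /old-broken-page/ https://www.example.com/most-relevant-page/
```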


If the link is broken because a page has legitimately been removed, you should do the following things:

  • Remove links to the removed page on every page where one occurs
  • Remove references to the removed page in any sitemaps & resubmit the XML sitemap to Google through Webmaster Tools.


If a URL has changed or a page has been moved to a new location, then it is vital that the following tasks are completed as quickly as possible if you have any SEO investment in the page/s. Failure to do this will typically result in loss of rankings and page authority.

  • Set up a permanent 301 redirect to the new URL of the moved page
  • Update all links to the page with the new URL
  • Update sitemap references to the page & resubmit the XML sitemap to Google through Webmaster Tools.

Pages carry authority acquired in several ways, all of which take time. To preserve that authority and your investment in the page, it must be properly redirected.
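Again assuming an Apache server (the paths are hypothetical), a permanent redirect is a one-line change in .htaccess:

```apache
# Permanent (301) redirect, preserving most of the
# moved page's authority for the new URL
Redirect 301 /old-url/ https://www.example.com/new-url/
```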


It may be that pages that have been removed, but are still indexed by Google, are appearing within the SERPs. These pages, once clicked, will lead to an error 404 page of some kind, and hence the user experience is impacted significantly. In addition, any authority and ranking that the page has can be lost once Google reindexes it.

These pages are very important to fix or redirect, because the ranking and authority that they have will be lost within a few weeks of being detected as missing by Google.

Detecting these is more difficult and requires scraping search results and comparing them in Excel (or a similar tool) with known live web pages.


These can be harder to fix at source, as it requires identifying the broken link and then contacting the website admin or relevant person to request they change it (given that you do not have access to the site to change the link yourself).

These can be detected by Google and identified within Google Webmaster Tools, but can also be identified by comparing live pages against your live backlink profile. This requires the use of a tool such as Raptor to get a picture of all existing backlinks.


The simplest way to fix these is simply to remove the link from your site, or update it to point to the next best page on the web.


URL structure is important for a number of reasons and has several components that need to adhere to best practices, all of which are discussed in this article.
Website URLs should be logical, consistent, in line with the site structure & hierarchy, and relate to the page content.

Hence you should reflect the site’s architecture within the site’s URL structure. This makes the site easier to crawl and helps build relevance between similar groups of content.


Because of some search engines’ limited ability to crawl parameter-driven URLs, it is recommended that URLs contain no more than 2-3 search parameters. Google Webmaster Tools offers some parameter handling functionality, but this is discussed in another article within the knowledge base.


Where possible, keywords should be used within the URL. They can be used at any level after the TLD, for example:
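For instance (these URLs are hypothetical), a keyword can sit at the directory level or the page level:

```
https://www.example.com/red-widgets/           (keyword as a directory)
https://www.example.com/widgets/red-widgets/   (keyword as the page name)
```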

Using keywords within the URL will assist in ranking for those keywords, even if only as an indirect ranking factor; the keyword appears within the SERPs and can be bolded by Google, which can enhance CTR and hence drive rankings.

This also makes for a more useful URL for users, compared to a random alphanumeric string of characters or poor file names like ‘page-1’ for example.


There really isn’t a need to have a keyword more than once in an entire URL. Over-optimised URLs can look spammy and could be flagged by Google for over-optimisation. Keep URLs relevant and natural, and avoid keyword stuffing.

A poor URL could look like this, for example: /red-widgets-buy-red-widgets/


Breaking up words in URLs provides a readability advantage both to users and to Google, who see separators as a way of differentiating between words. A URL where the words run together with no separators is much harder to read than one where they are separated.

Ensure that hyphens have been used as word separators instead of underscores; Google recommend hyphens over underscores in their guidelines.
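To illustrate (hypothetical URLs):

```
www.example.com/redwidgetssale/    (no separators: hard to read)
www.example.com/red_widgets_sale/  (underscores: not treated as separators)
www.example.com/red-widgets-sale/  (hyphens: easy to read, recommended)
```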


No page should be more than four clicks from the home page. Although URLs may reflect an architecture that includes, for example, five sub-directories, the page at the bottom should still be accessible within four clicks of the home page.


URL length should not exceed 100 characters where possible. Very long URLs are harder for people to read and, in some instances, to share on sites like Twitter (without using URL shorteners), but length is not a direct ranking factor.

Also consider that if any AdWords PPC / SEM advertising is to be undertaken, the maximum character limit for a display URL is 35 characters. In cases where the domain is longer than 35 characters, exceptions can be made.


Avoid using numbers or irrelevant words within URLs; instead use more descriptive terms and include a keyword or a relevant descriptive word. This will improve relevance and ultimately provide more value to users. URLs with long sequences of numbers as directories or page names provide no value to your site, your rankings or the user, and should therefore be avoided.


Groups of content on a similar topic can be linked together internally and structured with marked-up breadcrumb navigation, but keeping content grouped by URL structure is the most common way of grouping / clustering content.

Clustering content creates a wider / broader relevance to a group of keywords across multiple pages / URLs. We cover the benefits of this in more detail in an article on content strategy.


There is a large range of potential issues that could cause duplicate content problems on a site, many of which stem from poorly created URLs. We list some of these below and link to articles from our canonicalization and duplicate content section.


Session IDs appear in URLs as alpha-numeric strings attached to a URL, unique to the ‘session’ of a specific user. Click the link for more info on Session IDs.
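A session ID URL might look like this (the parameter name and value are hypothetical); every visitor generates a different URL for the same content, which is why these cause duplicate content problems:

```
https://www.example.com/products/?sessionid=A1B2C3D4E5
```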


A common issue on sites, especially those that remove the document type suffix (.html, .aspx, .php) from URLs, is that trailing slashes can be present at the end of a URL.

There is no ‘SEO’ reason to force URLs to include or exclude trailing slashes; what matters is consistency, so just choose which you prefer. Click the link for more info on trailing slashes.


The URL to any webpage should be created in lowercase characters and should include no uppercase characters. Click the link for more info on upper vs. lower case characters.


‘Breadcrumbs’ are a navigation path that illustrates to site visitors where they are within the website’s hierarchical structure. This is useful for navigating to other relevant pages higher up in the hierarchy.


Breadcrumb trails assist visitors in site navigation by providing links to previous hierarchical levels, allowing them to navigate up through relevant category pages. It is this improvement to the user experience that provides an indirect SEO benefit: Google look at bounce rate, time on site, exit rate and other engagement metrics, and breadcrumbs can assist in improving engagement through improving the user experience. It is through this that an indirect benefit to rankings could be received.


In addition to this indirect benefit, breadcrumbs offer internal linking opportunities; they help crawlers read your site’s hierarchy better, and Google can display breadcrumbs as rich-snippet-like links within the search results.

All of these factors have a direct or indirect effect on the overall SEO of your site. For these reasons it is highly beneficial to have optimised and efficient breadcrumbs throughout your site.


Breadcrumb navigation trails should be located at the top of the page, underneath the header but above the main body of content.

Although there are several reasons for this, the primary one is usability: breadcrumbs more often than not exist in this location, and hence users expect to find them here. Putting them at the bottom of the page or anywhere else may lead users to believe that there is no breadcrumb navigation.

If a page has a lot of content and is thus very long, consider adding a ‘back to top’ link so that users can scroll quickly to the top of the page rather than duplicating the breadcrumb navigation at the bottom of the page.


There are several models that can be used for creating and structuring breadcrumbs:

  • Location-based: displays the current page the user is on relevant to the entire site structure (recommended)

Home > Category > Page 1

  • Path-based: displays the current page the user is on based on the path taken to reach the page. These are dynamically generated based on the user’s navigation and do not necessarily reflect the true hierarchy of the site and its structure.

Home > Category > Page 1
Home > Page 1
Home > Page 2

  • Attribute-based: displays a list of attributes of the current page

Keyword 1 / keyword 2

Aside from the current page component of the breadcrumb trail, all of the other parts should be clickable links that lead through to the associated page within the site’s hierarchy. All breadcrumbs should start with the ‘Home’ page of the site. For example:

Home > Category > Page
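A location-based trail of this kind might be marked up as follows (a sketch; the paths are placeholders and the current page is plain text rather than a link):

```html
<nav aria-label="Breadcrumb">
  <a href="/">Home</a> &gt;
  <a href="/category/">Category</a> &gt;
  <span aria-current="page">Page</span>
</nav>
```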

It is also possible to have more than one breadcrumb trail; a good example of where this could add value to the user experience is a service page that has two obvious associations.


Keyword usage within breadcrumbs is not of paramount importance; as much as it could assist in building relevance, it could also damage rankings if the breadcrumbs appear on hundreds of pages with the same exact keyword-rich anchor text. For example, the home page should appear as ‘Home’ within the breadcrumb navigation, not as a primary home page keyword.


This is discussed in another article of this knowledge base.


Google have been automatically creating breadcrumb links within the URL component of a SERPs listing for some time, often determining them from the URL structure. Originally this was reserved for large sites and ecommerce sites with vast numbers of products and categories.

They have however improved and expanded this and now they try to use this for most sites, even smaller sites without breadcrumb navigation.

You can see this in the example below (taken from Google Webmaster Central), where the left image is how it looked before and the right image is how it looked after the change:


It is not wise to rely on Google to do this, and like many Google features it can change.


The following is some basic advice on general best practices that should be adhered to when creating breadcrumbs:

  • Only use breadcrumbs that help the user
  • Do not link to the current page
  • Do not replace the main navigation or other primary navigation systems with breadcrumbs
  • Ensure the usage of breadcrumbs is consistent throughout the site
  • Do not use breadcrumbs in the title tag or meta data
  • Do not present different breadcrumbs to Google or hide the breadcrumbs from the user
  • Mark-up the breadcrumbs to make them visible in search result pages (SERPs)
  • Do not stuff keywords into breadcrumbs
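One way to mark up breadcrumbs so they can appear in the SERPs is schema.org’s BreadcrumbList vocabulary in JSON-LD (a sketch; microdata and RDFa are alternative formats, and the names and URLs below are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home",
      "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Category",
      "item": "https://www.example.com/category/" },
    { "@type": "ListItem", "position": 3, "name": "Page" }
  ]
}
</script>
```

The final item represents the current page, so its "item" URL can be omitted.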


In this guide, we are looking at how authority delivered by your site’s backlink profile flows through your site.

There are a number of issues that can heavily impact a site’s ability to allow authority to flow where you want it to go. So, we also cover common obstacles to authority flow and how to prevent authority leakage from your site in this article.


There are various metrics that exist to describe or attempt to quantify the ‘authority’ of a webpage or whole domain… Google used to provide a PageRank score which is now no longer used or updated.

Authority is one of two broad components used in Google’s algorithm (the other being relevance) to determine who should rank where and for what search. Consequently, authority is an essential commodity for every website.

Websites acquire authority through backlinks, and we cover this topic in more detail in another section of the knowledge base… What we want to look at here is what that authority does once it enters your site.


Authority is like light in many ways: it doesn’t sit still, so when it enters your site, for the most part, it keeps moving… Authority is passed on in a number of different ways, none of them perfect; each is described below:


Redirects actually have a varied range of impacts; if you are using a JavaScript redirect that Google cannot follow, it will not pass on any authority to the target page.

Conversely, if you are using a standard 301 redirect correctly, you could pass on the vast majority of the authority to the target page; we estimate this to be:

90% to 99% of authority passes through a proper 301 redirect.

Redirects will typically override other components; for example, on a page with 100 links and a canonical tag pointing to another page, all of these will be ignored if a redirect is in place. This is because the redirect takes the crawler away from the page, never allowing it to see the links or canonical tag.

We cover these concepts in more depth in our guide to redirections or our guide to 301 & 302 redirects.


Google have said in the past that canonical tags work like redirects for the purposes of authority, meaning that they pass between 90% and 99% of a page’s authority onto the target page (if not self-referential).

That said, we have seen non-canonical pages outrank their canonical counterparts, so a canonical tag is just a hint and is definitely not as strong a factor as a redirect; by contrast, Google will not rank a page that redirects to another if they detect the redirect.

Google will generally ignore the links on a page if it has a non-self-referential canonical link present. This is because you are telling Google that this is not the canonical version of the content.

We cover this concept in much more depth in our guide to canonical tags.
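For reference, a canonical tag is a single line in the page’s head (the URL here is a placeholder); it sits on the duplicate page and points at the canonical version:

```html
<!-- Placed in the <head> of the duplicate / non-canonical page -->
<link rel="canonical" href="https://www.example.com/canonical-page/">
```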


Internal links pass between 85% and 90% of a page’s authority… but this is divided by the number of links on the page. For example, each link on a page with 10 links will pass 8.5% to 9% of the page’s authority to the target page.

This means that the more links you have on a page the less valuable each one is, and obviously the converse is also true.

You can stop the flow of authority through a link by adding the ‘nofollow’ attribute to it, or by adding a nofollow meta tag to a page to prevent the flow of authority through all links on that page.
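Both forms look like this (the paths are hypothetical):

```html
<!-- Nofollow a single link -->
<a href="/low-priority-page/" rel="nofollow">Low Priority Page</a>

<!-- Nofollow every link on the page via a robots meta tag (in <head>) -->
<meta name="robots" content="nofollow">
```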

We cover these concepts in more depth in our guide to internal linking.


Authority Distribution is essential to every site; this is the process of ensuring that the pages that need the most authority have it and those that don’t need it so much have less. Due to the influence that authority has on a page’s ability to rank for its target keyword/s, understanding this flow can be critical to optimising your site.

Backlinks are a huge part of this and we look at backlink distribution analysis in another guide, if you’re interested.


The benefits of good authority distribution can be a hugely significant factor in ranking well. The impacts of poor authority distribution can be catastrophic to a site in the most extreme cases. We cover some of the most common problems later in this guide, which will provide some context for this.


This all depends on what you want, which pages are the highest value, etc… Typically we want to see a strong correlation between the most valuable pages and the number of followed internal links that they have: more valuable pages should have a larger share of the internal links.

Although the reality is much more complex than this, it is a good start. Raptor helps you to understand this with our authority distribution tool.


There are a number of techniques that we can employ to assess whether your site is distributing authority as desired. Depending on why you are reading this, or what you are trying to achieve, the following techniques should help.


The first thing to understand, or to choose, is the metrics; depending on what tools you use, you will have a range of metrics available. We are not going to get into all of these here, as we discuss many of them in more detail in other guides.

The most common authority metrics are Domain and Page Authority (Moz) or Trust & Citation Flow (Majestic). You can also look at the number of backlinks to any given page and weight this by authority.


Our distribution analysis tool calculates how much authority is passed on, based on whether each link is a canonical link, internal link, nofollow link, or a redirect (and, if a redirect, of what type). We assess the entry points for backlinks and the authority that those links pass into your site, and we also calculate how much authority is lost due to problems or external links.

This is presented primarily as a visual, interactive map of your site, colour coded to show the high and low authority pages. Without a tool like ours, this is a very complex and processor-intensive task.

We look at distribution analysis techniques in much more depth in another guide.


Auditing your site on a regular basis can help to identify most of the problems that we detail in the following section. Again, Raptor automates this for you, but it’s more than possible to use crawl data, a few basic tools, Chrome plugins and a spreadsheet to identify these issues.


Looking at ranking and Google Search Console data can reveal things like the wrong page ranking for a keyword, or a group of pages ranking for similar terms. This is a strong indicator that authority distribution is not working in your favour.

Review the section on solutions to see how to deal with this particular issue.


As mentioned, building a table and some charts to look at the most and least linked-to pages can help you understand any systemic issues. Often these are caused by menus and templated navigational links, or by a lack of contextual (in-content) links.

We discuss link structures in more depth as part of another guide.


This is more off-page than on-page, but the lines are blurry in this area… Looking at which URLs have backlinks is essential to understanding where authority is entering your site. These pages should be noted and dealt with differently from other pages.

For example, retiring a page on your site that has 20% of the deep links (backlinks) and just letting it 404 would be a bad idea… Instead, it would be better to update the page or canonicalize it to another page. Allowing a high value entry point page to return a status other than 200 can lead to people removing the links to that resource: if you saw that you had a bunch of links pointing to error 404 pages or 301 redirects, you would (or should) consider updating or removing those links.

If you notice a drop in your rankings, it is worth checking your entry point pages first to see if they are still functioning properly.


Problems with authority distribution can result from a broad range of issues; typically those affecting larger numbers of pages, or the home page, are the most severe. You work hard to earn backlinks and authority, so it’s worth ensuring that you aren’t wasting that investment with poor flow or distribution.
Though perhaps not exhaustive, the following is a pretty good list of the most common causes of authority distribution problems.


This represents a leak rather than a block or obstacle to authority flow & distribution. A canonical chain is where a series of pages are canonicalized, for example:

Page A – Canonically links to Page B
Page B – Canonically links to Page C
Page C – Canonically links to itself (self-referential, ending the chain)

In the example above, if we assume that 95% of the authority is passed on through each canonical link, we see a loss / leak of authority:

Page A – 100% authority
Page B – 95% authority (1 x 0.95 = 0.95)
Page C – 90.25% authority (0.95 x 0.95 = 0.9025)

So, you can see that roughly 5% of the original authority is lost at each link in the chain; the more links in the chain, the more authority is lost:

No of Links in Chain    Authority Retained (at 95% per link)
1                       95%
2                       90.25%
3                       85.74%
4                       81.45%
5                       77.38%
A canonical link should only point to an indexable, canonical page.


This is the same as with canonical chains, except that instead of canonical links we are talking about redirects. Assuming that the redirects are crawlable by Google and that they pass on authority, we end up with the same problem: authority leaking out through unnecessary links in the chain.

A redirect should only point to an indexable, canonical page.


Just for fun, there are variants of the above that will have similar problems, such as:

  • A redirect pointing to a page that is canonically linking to a different page
  • A canonical link pointing to a page that redirects to another page
  • Extended combinations of the above


This is characterised as a chain (see any of the above), a canonical link or a redirect that points to an inaccessible page. There are many ways you can create a dead end; any canonical link or redirect, in a chain or not, that points to a page that does any of the following is bad:

  • Error page (any kind, 404, 501, etc)
  • A non-indexable page (noindex tag)
  • A page that can’t be crawled (disallowed from robots.txt)

In any of the above examples, the authority will be lost as it ends up on a page that cannot pass it on and does not require it.

You could create a legitimate dead end where the final page has a nofollow meta tag on it and a self-referential canonical link, thus preventing any authority leaking out of the page. This could be done to create a one-way flow of authority to your top tier, high value product pages, for example.
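Sketched in markup, such a page’s head might contain the following (the URL is a placeholder):

```html
<!-- Self-referential canonical: this page is its own canonical version -->
<link rel="canonical" href="https://www.example.com/red-widgets/">

<!-- Keep the page indexable but stop authority flowing out via its links -->
<meta name="robots" content="index, nofollow">
```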


There are as many types of loops as there are chains: canonical loops & redirect loops. The simplest form of a redirect loop is:

Page A – Redirects to Page B
Page B – Redirects to Page A

This would make both pages inaccessible (to users and robots), thus preventing them from passing on any authority, as well as wasting the authority they have.

A canonical loop will at least allow both pages to be accessed by users and robots, but will still create a black hole for authority… as well as potentially causing neither page to rank.

Loops can also include a chain component such as in the example below:

Page A – Redirects to Page B
Page B – Redirects to Page C
Page C – Redirects to Page A
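The loop in the example above can be detected mechanically. This is a sketch over a hypothetical URL-to-redirect-target mapping (not a real crawl): follow the redirects from a starting page and report the cycle if one is hit.

```python
# Hypothetical data: a mapping of URL -> redirect target.
def find_loop(start, targets):
    """Follow redirects from `start`; return the cycle if one is hit, else None."""
    seen = []
    url = start
    while url in targets:
        if url in seen:
            return seen[seen.index(url):]  # the cycle itself
        seen.append(url)
        url = targets[url]
    return None

redirects = {"/a": "/b", "/b": "/c", "/c": "/a"}
print(find_loop("/a", redirects))  # ['/a', '/b', '/c']
```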

Loops of all kinds should be avoided at all costs. If a loop only affects a couple of low value pages, the impact will be quite small. However, if it affects a high value page, access to a branch of the site, or the home page… This could cause major problems for your site in terms of authority flow, distribution and indexation.


In addition to the above, indexation issues can also affect the flow and distribution of authority throughout your site. Depending on the specific issue, your pages may not be indexable or crawlable by Google. A page that is not indexed will not pass on authority, as Google will not know about it.

Again, this may be relatively minor, but in the case of a category page it could prevent a whole directory from receiving any link juice.

For more information on this topic read our guide to indexation.


As mentioned, canonical tags implemented badly can adversely affect your site… Canonical duplication issues are characterised by the same content being accessible from multiple URLs. Failing to control how your content is canonicalized can result in leaking authority; we cover the principal mechanism for this in the next section on inconsistent linking.

For more information on this topic, read the canonicalization section of our knowledge base.


The above issues are often coupled with internal linking inconsistencies that allow them to become a problem. If a site is accessible both with and without the www, but not a single internal link points to the www version of the URL, the problem will be pretty minor at worst.

However, if you are linking to non-canonical pages, you will be leaking authority, as this qualifies as a kind of chain. If all of your links point to the http and www version of your site, they might pass through two redirects before reaching any page, meaning you lose around 5% of your authority before you’ve even started!

Once you add in other redirects, you start to lose even more authority… Ensure link consistency throughout your site to maximise authority flow.
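The arithmetic behind the “around 5%” figure above can be sketched as follows. The per-hop retention figure here is our own back-of-envelope assumption (roughly 97.5% retained per redirect), not a published Google number.

```python
# Assumption: each redirect hop retains ~97.5% of the authority passed through it.
retention_per_hop = 0.975
hops = 2  # e.g. http -> https, then www -> non-www
remaining = retention_per_hop ** hops
print(f"~{1 - remaining:.1%} of authority lost over {hops} redirects")
```

Under this model, every additional hop compounds the loss, which is why consistent, direct links matter.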


If you must link to non-canonical, non-indexable or uncrawlable pages, use a nofollow link to prevent losing authority to a page that does not require it. This can be a major leak on large sites.

You should only link to resources that redirect under specific circumstances where it is unavoidable or has little impact… In general, you should always link directly to the resource you are pointing users to.


On large sites, it’s easy to end up with isolated pages that are either not linked to or only linked to from nofollow links, making them what we call ‘island pages’.
These pages are unlikely to rank due to the lack of authority flowing into them. Island pages do not leak or lose authority; they are the product of a lack of it.
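Island pages can be found from crawl data. This is a sketch over hypothetical link data (our own tuple format, not a real crawler’s output): any known page with no incoming follow links at all is an island.

```python
# Hypothetical crawl data: links as (source, target, is_follow) tuples.
def island_pages(all_pages, links):
    """Return pages with no incoming follow links, sorted for readability."""
    linked = {target for _, target, follow in links if follow}
    return sorted(set(all_pages) - linked)

pages = ["/", "/about", "/orphan"]
links = [
    ("/", "/about", True),
    ("/about", "/", True),
    ("/about", "/orphan", False),  # nofollow only: still an island
]
print(island_pages(pages, links))  # ['/orphan']
```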


This is more relevant to off-page SEO and backlinks but is worth a mention as it can have various effects on your site.

Most sites will typically have more links pointing to the home page than to all of the other pages put together. If your site has almost no deep links (links to internal pages), you are relying on all of the authority flowing from the home page to the rest of the site.

The further a page is from the home page, the less authority will flow into it. As we mentioned in the section on canonical chains, the last page in the chain (in this case a click chain) receives a very small amount of authority.
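The thinning of authority with click depth can be illustrated with a toy model. This is our own simplification, not a Google formula: assume each page splits its authority evenly across its links.

```python
# Toy model (assumption): each page splits its authority evenly across its links.
def authority_at_depth(home_authority, links_per_page, depth):
    """Authority reaching a page `depth` clicks from home under the even-split model."""
    return home_authority / links_per_page ** depth

# With 100 links per page, a page 3 clicks from home receives one millionth
# of the home page's authority under this model.
print(authority_at_depth(1.0, 100, 3))
```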


If you have a lot of external links all set to follow on high value pages you could be unnecessarily leaking authority to external resources.

If you’ll recall, we said that a link passes authority to other pages in rough proportion to its share of all the links on a page… So, if you have a page with only a few links, each of those links can send authority away from your site in larger quantities.

There is nothing wrong with a (follow) external link, just be aware of where they are and how they affect the flow of authority off-page.
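The even-split idea above can be made concrete with a small example. The figures are illustrative only, under the same assumption as before: each follow link gets an equal share of the authority the page passes on.

```python
# Assumption: each follow link on a page gets an equal share of its authority.
def authority_leaked(page_authority, total_follow_links, external_follow_links):
    """Authority sent off-site by external follow links under the even-split model."""
    per_link = page_authority / total_follow_links
    return per_link * external_follow_links

# A page with 5 follow links, 2 of them external, sends 40% of the
# authority it passes on away from the site under this model.
print(authority_leaked(10.0, 5, 2))  # 4.0
```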


We have mentioned this tag before in this guide. If applied to all or most pages, it will prevent authority from flowing between them, essentially creating a site of island pages. You can legitimately use this tag on pages with lots of external links, for example, but it should be used sparingly and only when necessary.


Because, on most sites, the home page is the entry point for the vast majority of the authority the site receives, accessibility issues with the home page can prevent all or most of that authority from being passed on.

Obviously, an inaccessible home page is a problem regardless, but if left unchecked this can have a knock-on effect on the rest of the site. Bear in mind that inaccessibility doesn’t just mean the page returns a 404 error… A JavaScript or Flash menu could just as easily prevent the home page from passing on authority.


This is really just an extension of the above: any page that is inaccessible will not pass on authority or rank for its target keywords. In sites with directories and more complex structures, the inaccessibility of a few pages could prevent authority from flowing into huge segments of the site.


We have already covered the basics, common problems and some techniques for identifying those problems… Here we discuss a few best practices that will help to prevent issues from creeping in, and some solutions for fixing them if they do.


Some tools (like Raptor!) will send alerts to your phone or email if certain critical issues occur. This can help you to identify and resolve these before they are picked up by Google, making it a great semi-preventative measure.


This is an audit of sorts: using ranking data and a URL-to-keyword map of your site, you can see which pages are ranking for which keywords. Where you find many pages ranking for similar terms that shouldn’t be, or the wrong page ranking, this indicates that those pages are either more relevant or have more authority (or both) than the desired landing page. This type of analysis is therefore useful for identifying distribution and flow problems.

Consider consolidating this content using redirects and canonical tags to push a single preferred page above all of the others… This can often have a very positive impact on your rankings for head terms.


We cover this in more detail in our guide to canonical duplication and canonical tags, so read either of those guides for more information on this topic. For the purposes of this guide, just ensure that canonical tags and paginated series are set up correctly. As mentioned earlier, avoid canonical chains and loops.


Ensure that you have a properly formatted and structured XML sitemap containing consistent URLs and only canonical, indexable and crawlable pages. This helps to prevent some indexation problems.

Also ensure that you have an HTML sitemap listing all canonical, indexable and crawlable pages, as this provides at least one followed, keyword-rich HTML anchor text link to every page. This ensures that every page that needs authority has at least some.
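A basic check of the sitemap rules above could be sketched as follows. The page fields here are our own assumptions (not a real crawler’s or sitemap validator’s API): only canonical, indexable, crawlable URLs should appear in the XML sitemap, so anything else is flagged.

```python
# Assumed page fields: "canonical", "noindex", "robots_blocked".
def sitemap_issues(sitemap_urls, pages):
    """Flag sitemap entries that should not be in the XML sitemap."""
    issues = []
    for url in sitemap_urls:
        page = pages.get(url)
        if page is None:
            issues.append((url, "not found in crawl"))
        elif page.get("canonical", url) != url:
            issues.append((url, "non-canonical"))
        elif page.get("noindex", False):
            issues.append((url, "noindex"))
        elif page.get("robots_blocked", False):
            issues.append((url, "blocked by robots.txt"))
    return issues

pages = {
    "/good": {},
    "/dupe": {"canonical": "/good"},
    "/hidden": {"noindex": True},
}
print(sitemap_issues(["/good", "/dupe", "/hidden"], pages))
```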


We have discussed these previously in this guide, but this is one of the simplest and easiest types of analysis to perform. Simply sum all follow links by type pointing to each page and sort from highest to lowest; you will easily be able to see problem pages or an undesirable order emerging.

This is a good starting point to begin a wider and more granular analysis of authority distribution and flow around your site.
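The counting described above can be sketched in a few lines. The link data format is hypothetical (our own tuple layout, not a real tool’s export): count incoming follow links per page and sort from highest to lowest.

```python
from collections import Counter

# Hypothetical crawl data: links as (source, target, is_follow) tuples.
def follow_link_counts(links):
    """Incoming follow links per page, sorted highest first."""
    counts = Counter(target for _, target, follow in links if follow)
    return counts.most_common()

links = [
    ("/", "/products", True),
    ("/blog", "/products", True),
    ("/", "/terms", True),
    ("/", "/old", False),  # nofollow: not counted
]
print(follow_link_counts(links))  # [('/products', 2), ('/terms', 1)]
```

Pages at the bottom of the list with high commercial value, or pages you did not expect near the top, are the candidates for the more granular analysis mentioned above.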


One of the most effective ways to mitigate problems, especially on larger sites, is to perform regular checks and audits. Often there is no better way to determine whether (or what) problems your site has than a thorough technical audit.

Regular checks allow you to compare data to see if problems are increasing and whether there is a pattern to them, helping you determine whether things are getting better or worse.





