Technical SEO Audit Guide for Webmasters

A technical SEO audit is an analysis done by an SEO professional or a webmaster to check the technical aspects of a website, i.e. its on-page SEO. It involves checking the health of a website and finding out what fixes might be needed to improve it.

How do you conduct a technical SEO audit?

This guide will help SEO professionals and agencies complete a technical SEO audit and suggest recommendations based on the findings. The point of an SEO audit is to help clients identify errors or issues within their website that prevent search engines from efficiently accessing the site, negatively impacting organic search. The goal of a technical assessment is to increase site traffic, enhance the user experience, and improve crawl efficiency.

This is a technical SEO audit tutorial for any given website, guiding analysis and recommendations for technical site health. Before a search engine can rank site content for relevant queries, it must first be able to crawl or access that content, and it must be able to include the content in its index, the database from which search results are populated. This guide helps you examine issues with the site's ability to be crawled and indexed and offers recommendations to resolve them.

Technical SEO Audit Approach

  • Crawlability: This section examines elements that affect search engines' ability to crawl your website, such as robots.txt, header status, site architecture, sitemaps and markup.
  • Content: This examines how relevant your content currently is to users' searches. The elements impacting content are URL structure, canonical issues, page robots head tags, meta tags and page content.
  • Authority: This examines backlinks, page load time, mobile friendliness, social & rich media integration and local SEO.

Crawlability

This section focuses on items that have a sitewide impact. Most of these recommendations will improve the search engine's ability to index the site.

 Indexability:

Issues and how to fix them:

JavaScript: The question is: are bots able to find URLs and understand the website's architecture?

There are two important elements here:

1. Blocking search engines from the website's JavaScript (even accidentally) can create problems.

2. Using proper internal linking, rather than leveraging JavaScript events as a replacement for HTML anchor tags.

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end-user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Recommended: Fetch as Google and TechnicalSEO.com's robots.txt and Fetch and Render testing tools can help identify resources that Googlebot is blocked from crawling.

The easiest way to solve this problem is by providing search engines access to the resources they need to understand the website's user experience.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an <a href="https://www.example.com"> HTML tag) rather than leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s click events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.
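A minimal illustration of the difference, using a hypothetical URL:

  Crawlable internal link (regular anchor tag):
  <a href="https://www.example.com/category/widgets/">Widgets</a>

  Not reliably treated as a link (JavaScript click event instead of an anchor):
  <span onclick="window.location='/category/widgets/'">Widgets</span>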

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.

Not recommended:

The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server and will cause the page to automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of "#" in URLs.

Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (one Google now wants to avoid and only Bing still supports). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL serving the user experience co-existed with an equivalent escaped_fragment HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:

Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).

Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with "_escaped_fragment_" and serves the HTML snapshot. It is called the ugly URL because it's long and looks like (and for all intents and purposes is) a hack.

Recommended: pushState History API – pushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JavaScript sites to leverage "clean" URLs. pushState is currently supported by Google when it is paired with browser navigation for client-side or hybrid rendering.
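A rough sketch of the pushState pattern (the URL, fragment endpoint and container selector are hypothetical):

  // Update the address bar to a clean URL without a full page reload
  history.pushState({ page: 'widgets' }, '', '/category/widgets/');
  // Then fetch and render only the content that needs to change
  fetch('/fragments/widgets.html')
    .then(response => response.text())
    .then(html => { document.querySelector('#main').innerHTML = html; });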

Also compare the number of results from a site: search on Google with the number of pages found in a Screaming Frog crawl; a large discrepancy between the two points to indexation problems.

 

 Robots.txt:

How to check Robots.txt file:

Open the robots.txt URL directly in the browser using the following syntax:

http://www.example.com/robots.txt

Critical issues in Your Robots.txt:

Issues and how to fix them:

Default robots.txt (allowing access to everything): Having no robots.txt file for your site means it is completely open for any spider to crawl. Recommended: It's highly recommended to have a robots.txt file in the root directory and to disallow the pages on the website that should not be crawled.
Disallow all: The most damaging line of code in SEO is "Disallow: /", which means search engines can't crawl any content on your site.

Recommended: Check in Google Search Console whether the robots.txt blocks the entire site. Removing this directive will ensure that the website is open for crawling.

Conflict in robots.txt & sitemap: If the sitemap.xml file contains URLs explicitly blocked by your robots.txt, there will be a conflict.

Recommended: It's important to remember that URLs blocked in robots.txt can still be indexed in the search engines if they are linked to either internally or externally. A robots.txt merely stops the search engines from seeing the content of the page. A 'noindex' meta tag (or X-Robots-Tag) is a better option for removing content from the index. There are two ways to find conflicting URLs. 1. This error is relatively easy to spot using Google Search Console: submit an XML sitemap and view the report of Google crawling it via the Sitemaps section. This report shows which pages have been blocked by the robots.txt file. 2. Screaming Frog can also export a list of URLs blocked by robots.txt, found under the 'Bulk Export > Response Codes > Blocked by Robots.txt Inlinks' report.

Using the wrong wildcard characters: Wildcard characters, symbols like "*" and "$", are a valid option to block out batches of URLs. They are a way to block access to some deep URLs without having to list them individually, but this can sometimes cause issues. Blocking unknown areas may result in Google not being able to crawl valid URLs.

Recommended: Check the complete list of URLs blocked by the wildcards used in robots.txt.

Accidentally disallowing the wrong content from crawlers: Make sure that no content or sections of the website which should be crawled are blocked.

Recommended: Remove the disallow rules for these areas.

Wrong syntax: robots.txt is a simple text file and can easily be created using a text editor. An entry in the robots.txt file always consists of two parts: the first part specifies the user agent to which the instruction should apply (e.g. Googlebot), and the second part contains commands, such as "Disallow", followed by a list of all sub-pages that should not be crawled. For the instructions in the robots.txt file to take effect, correct syntax must be used.

Recommended: Like any other URL, paths in the robots.txt file are case-sensitive, so it's recommended to allow/disallow URLs with the proper case.

Missing sitemap: A feature that can be utilized in the robots.txt file is the XML Sitemap declaration. Since search engine bots start crawling a site by checking the robots.txt file, it provides an opportunity to notify them of your XML Sitemap.

Recommended: Add the sitemap to robots.txt. This tells search engine robots the location of the XML sitemap.

Sitemap directive URLs: Not including the complete sitemap URL is again misleading.

Recommended: For example, using /sitemap.xml would be misleading; best practice is https://www.example.com/sitemap.xml.
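For reference, a minimal robots.txt sketch illustrating correct syntax and a full sitemap URL (the domain and paths are placeholders):

  User-agent: *
  Disallow: /search/
  Disallow: /private-pdfs/

  Sitemap: https://www.example.com/sitemap.xml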

Robots.txt to block access to sensitive areas: Do not use robots.txt to prevent sensitive data (like private user information) from appearing in the SERP. Because other pages may link directly to the page containing private information (thus bypassing the robots.txt directives on your root domain or homepage), it may still get indexed.

Recommended: If you want to block a page from search results, use a different method such as password protection or the noindex meta directive.

Subdomain robots.txt: Robots.txt files should avoid including rules spanning different subdomains and protocols. Each subdomain and protocol on a domain requires its own separate robots.txt file.

Recommended: For example, separate robots.txt files should exist for both blog.example.com and example.com (at blog.example.com/robots.txt and example.com/robots.txt)

Important links blocked on disallowed pages: Links on pages blocked by robots.txt will not be followed. This means: 1.) Unless they're also linked from other search engine-accessible pages (i.e. pages not blocked via robots.txt, meta robots, or otherwise), the linked resources will not be crawled and may not be indexed. 2.) No link equity can be passed from the blocked page to the link destination.

Recommended: If there are pages to which you want equity to be passed, use a different blocking mechanism other than robots.txt.

Internal search results pages: There can be an infinite number of internal search pages, which provide a poor user experience and waste crawl budget.

Recommended: Disallow internal search pages to prevent these pages from showing up in the SERP.

PDF files: Preventing search engines from indexing certain files on your website (PDFs). This depends on the requirements: sometimes there are important PDF files which a site owner does not want to be publicly available, but puts on the website for selected users.

Recommended: In such cases, disallow the PDF files.

Crawl delay: The Crawl-delay directive is an unofficial directive used to prevent search engines from overloading the server with too many requests.

Recommended: Adding Crawl-delay to your robots.txt file is only a temporary fix.

Blocking JS and CSS files: If Googlebot and other bots are not allowed to read your JavaScript and CSS files, they won't be able to render your site and give you full credit. Google may see this as cloaking, especially if the JavaScript dramatically changes the user experience.

Recommended: Make sure you allow crawling of all JS and CSS files.

WordPress configurations: When WordPress sites are deployed, it's very common for the developer to forget to uncheck the "Discourage search engines from indexing this site" option under Settings -> Reading. This mistake causes WordPress to disallow all search engines from crawling the site.

Recommended: It's always good practice to add a meta noindex tag to the pages you do not want indexed rather than blocking the entire website from indexing. If you are using WordPress and have issues with robots.txt, check the Reading Settings first.

 

XML Sitemap

How to check sitemap:

Look for the following mistakes and use the recommendations below to resolve them.

Issues and how to fix them:

Empty sitemap: The XML sitemap of a website informs search engines about its structure and makes it easy to index. That doesn't mean the website won't be indexed without a sitemap; it all depends on the website's construction. If the website is complex and some of the subpages are deeply embedded, it might have issues with indexing, so it's a best practice to have a sitemap to make it easier for the search engine to index all the website's pages.

Recommended: Ensure that the URLs are properly tagged in the sitemap.xml file, which should be present in the root directory.

HTTP error: This simply means that the sitemap couldn't be processed, or the bots came across an HTTP error when trying to download the sitemap. Note that some errors, like 404, are critical, but others might just reflect temporary problems such as server issues.

Recommended: Check whether the sitemap URL and all child sitemaps (if present) are working correctly. Confirm that the specified sitemap URL is accurate and available at the location indicated. Once you are sure, simply resubmit the sitemap in Google Search Console.

Relevant links vs non-relevant links in the sitemap: Have all active, relevant URLs in sitemaps (include canonical versions only).

Recommended: Avoid duplicate links, noindexed pages and dead links. A best practice is to regenerate sitemaps at least once a week or month (depending on how often website content is generated) to minimize the number of broken links in sitemaps. Check for links that return 404 and 302 and remove them.

Consolidate sitemaps: Avoid too many XML sitemaps per site. Ideally, have only one sitemap index file listing all relevant sitemap files and sitemap index files.
XML sitemap within robots.txt: Tell search engines where the XML sitemap(s) are located by referencing them in the robots.txt file.
Canonical links: Include only URLs that are the rel=canonical targets of your webpages. This is a strong hint to search engines about your preferred version to index among duplicate pages on the web.
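A minimal sitemap.xml sketch for reference (URLs and dates are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>https://www.example.com/</loc>
      <lastmod>2023-01-15</lastmod>
    </url>
    <url>
      <loc>https://www.example.com/category/widgets/</loc>
      <lastmod>2023-01-10</lastmod>
    </url>
  </urlset>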

Status Codes

How to check header status of the complete website:

You need to crawl the complete website, using tools like Screaming Frog, DeepCrawl (for larger websites), or SEMrush (for small websites).
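For spot-checking a single URL, the response headers can also be fetched from the command line (the URL is a placeholder):

  # -I requests only the headers, so the status code (200, 301, 404, ...) appears at the top of the output
  curl -I https://www.example.com/some-page/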

Common status code classes:

  • 1xx – Informational responses: The server is thinking through the request.
  • 2xx – Success! The request was successfully completed, and the server gave the browser the expected response.
  • 3xx – Redirection: You got redirected somewhere else. The request was received, but there's a redirect of some kind.
  • 4xx – Client errors: Page not found. The site or page couldn’t be reached. (The request was made, but the page isn’t valid — this is an error on the website’s side of the conversation and often appears when a page doesn’t exist on the site.)
  • 5xx – Server errors: Failure. A valid request was made by the client, but the server failed to complete the request.

Look for the following mistakes and use the recommendations below to resolve them.

Issues and how to fix them:

Code 302: A 302 redirect is similar to a 301 in that visitors and bots are passed to the new page, but it's a temporary redirect and does not pass link equity.

Recommended: Use 301 redirects for permanent changes; only use 302 redirects if the page will be down for a week or less.

Code 404: This means the file or page that the browser is requesting wasn't found by the server. 404s don't indicate whether the missing page is missing permanently or only temporarily.

Recommended: Every site will have some pages that return 404 status codes. These pages don't always have to be redirected; there are other options. One common misconception is that it's an SEO best practice to simply 301 redirect pages that return a 404 status code to the homepage of the given domain. This is a bad idea in the majority of cases, because it can confuse users who may not realize that the webpage they were trying to access doesn't exist.

If the pages returning 404 codes are high-authority pages with lots of traffic or have an obvious URL that visitors or links are intended to reach, you should employ 301 redirects to the most relevant page possible.

Also, internal links should not point to redirecting URLs; update them to point directly to the final destination.

Code 500: A 500 is a server error and will affect access to your site. Human visitors and bots alike will be lost, and your link equity will be lost as well.

Recommended: Search engines prefer sites that are well maintained, so you’ll want to investigate these status codes and get these fixed as soon as you encounter them.

 

Content

This examines how relevant your content currently is to users' searches. The elements impacting content are URL structure, canonical issues, page robots head tags, meta tags and page content.

URL Structure:

Issues and how to fix them:

Directory structure: There should be a logical URL structure, which not only helps users understand how all the pages relate to each other, but also helps search engine spiders crawl the site efficiently.

Recommended: Have a proper hierarchical site structure, with logical "buckets" for each category in the URLs. An added benefit of using well-crafted semantics is that Google can pull these into a SERP and display them in place of a sometimes-confusing URL string.

Dynamic AJAX content: While looking into PDP pages, if the website uses AJAX to generate pages with dynamic content using the hash parameter, these URLs will never be indexed properly.

Recommended: To allow dynamic pages to be indexed, the exclamation mark token ("!") needs to be added after the hash ("#") within an AJAX URL. Indexable AJAX URL: http://www.example.com/news.html#!latest

Secure HTTPS pages: A potential duplicate content issue that very often goes unnoticed is HTTP and HTTPS pages rendering the same content.

Recommended: Use the SEMrush Site Audit to identify URLs with this issue. Update all internal links to HTTPS and 301 redirect all insecure URLs.

Category IDs: Many sites utilize category IDs within their URLs, generated most of the time by their CMS. In a nutshell, a load of numbers, letters and symbols in a URL means absolutely nothing to either a human visitor or a search engine spider.

Recommended: In order to maximize the site's SEO impact and meet the advice of including keywords within the URL and logical semantics, these IDs need to be turned into relevant descriptive text.

Trailing slash conundrum: This is another duplicate content issue, but a very subtle one. Again, many CMS platforms address this straight out of the box, but you need to be aware of it just in case. Both http://www.example.com/category/product and http://www.example.com/category/product/ render individually but will contain exactly the same content.

Recommended: This can be addressed with a simple 301 redirect rule for all pages without a trailing slash, pointing to the version with a trailing slash.

Session IDs: Many e-commerce sites track visitors' activities, such as adding products to shopping baskets, by appending session IDs to the end of the URLs. These IDs are necessary for visitors to interact with functionality that is user-specific; however, they can result in duplicate content issues. As each ID must be unique to each visitor, this potentially creates an infinite number of duplicate pages.

Recommended: This can be solved by removing the session IDs from the URL string and replacing them with a session cookie. This cookie works in the same manner as the ID but is stored on the user's machine, not impacting the URL.

Lack of keywords: Keyword-rich, descriptive URLs help inner pages rank for specific search terms, rather than leaving those queries to the homepage, which historically ranked only because of its higher overall authority.

Good URL Example: http://www.jessops.com/compact-system-cameras/Sony/NEX-5-Black-18-55-lens/

Bad URL Example: http://www.jessops.com/online.store/products/77650/show.html

www and non-www URLs: There can be two different issues. The first is that the non-www URL is invalid, returning a 404 'page not found' error. The second is that the non-www URL renders the same as the www version, effectively creating two exact copies of the same page.

Recommended: Ensure that the non-www version is 301 redirected into the www version.
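A hedged sketch of such a redirect, assuming an Apache server with mod_rewrite enabled (adjust the domain and rules for your own setup):

  RewriteEngine On
  RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
  RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]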

Index file rendering: A website will sometimes render both the root directory URL and the root appended with the index file (index.html, index.php, index.aspx, etc.). When this happens, both get treated as individual pages by a search engine, resulting in duplicative content.

Recommended: A 301-redirect rule needs to be established from the file name URL to the clean URL.

Capital letters: URL paths are case-sensitive, so mixed-case and lowercase versions of the same path can both render and create duplicate pages. Recommended: Standardize on lowercase URLs and 301 redirect any uppercase variants to them.

 

Canonicals & Robot Head tags:

How to check Canonicals & robots head tags:

Go to the browser and open different template pages

After pages are loaded, open the source code by pressing Ctrl+U

Check whether the page has a rel=canonical tag in it.
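For reference, a canonical tag in the page's <head> looks like this (the URL is a placeholder):

  <link rel="canonical" href="https://www.example.com/category/widgets/" />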

Issues and how to fix them:

Noindex, nofollow: The NOINDEX value tells search engines NOT to index a page, so the page should not show up in search results. Developers will sometimes add the NOINDEX, NOFOLLOW meta robots tag on development websites so that search engines don't accidentally start sending traffic to a website that is still under construction. For example, if you're redesigning your site, your designer may set up a "development" or "dev" site at a temporary location. That lets you see the design and make changes before it goes live. It's really important to keep Google (and other search engines) from indexing development, testing, or staging sites; otherwise, you may end up with pages from that domain in the search results, which can cause duplicate content issues.

Recommended: Check the meta tags before launching a website.
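For reference, the meta robots tag described above, plus the equivalent HTTP response header (X-Robots-Tag) that can be used for non-HTML files:

  <meta name="robots" content="noindex, nofollow">
  X-Robots-Tag: noindex, nofollow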

Use tags cautiously (nofollow, noarchive, noodp, noydir): The meta robots tag is a powerful way to instruct search engines on how to deal with your content, so it becomes very important to use this mechanism wisely. If you don't want a page indexed by search engines, use noindex/nofollow tags, but use them cautiously. There should never be a situation where pages have been blocked from indexing accidentally.
Broken canonical link: This data can be found under the Site Audit tool in SEMrush. This check is triggered if a rel="canonical" link you have specified is broken and thus leads nowhere.

Recommended: Update these links to the proper page.

Multiple canonical URLs: This check is triggered if you've specified more than one rel="canonical" link in your page's markup. Seeing multiple canonical URLs, search engines can't identify which URL is the actual canonical page and will likely ignore all the canonical elements or pick the wrong one.

Recommended: Place only one canonical URL on each page.

Missing canonical URLs: This checks whether canonical tags are missing from the webpage. Without canonical tags, search engines can't identify which version of the same URL is the primary one.

Recommended: Place a canonical tag on every webpage of the website.

Metadata:

Look for the following mistakes and use the recommendations below to resolve them.

Issues and how to fix them:

Missing metadata: Metadata such as title tags, description tags, H1/H2 tags and alt tags is what primarily helps visitors and search engines understand what a page is about.

Recommended: It's highly recommended to have custom content added for all pages; tools that help create such content can be used for large websites. It's important to do extensive keyword research before preparing meta tags. The title tag, description tag, H1 and image alt tags are some of the best places on a page to put these keywords. Note – using keywords in the meta description does not directly influence rankings, although it has an interesting indirect impact: when visitors see bolded keywords in results, they tend to click on those results more. This increases your click-through rate (CTR), which in turn plays a role in determining the rank for your page. A high CTR is seen by search engines as a signal that a page is very relevant for a query.

Metadata that is too short or too long: Metadata that is too short may not contain relevant information and might be replaced by Google with its own text. If the metadata is too long, it might be truncated.

Recommended: The length of metadata should follow the standards specified by Google, where applicable. Title tag length: minimum 30 characters, maximum 60 characters.

Description tag length: minimum 70 characters, maximum 155 characters.
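A sketch of a page head with metadata within those ranges (the brand, product and wording are placeholders):

  <title>Compact System Cameras – Sony NEX-5 | Example Store</title>
  <meta name="description" content="Browse the Sony NEX-5 compact system camera with 18-55mm lens. Free delivery, expert advice and customer reviews at Example Store.">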

Duplicate metadata: If metadata is duplicated, the affected pages compete with each other for the same keywords, decreasing CTR and increasing bounce rates by making it difficult for visitors to identify the page they want to visit. For example:

www.example.com/my-favorite-content/

www.example.com/my-favorite-content-again/

If you have multiple unique URLs with the same meta tags, change the meta tags of the pages to distinguish between the duplicates.

Recommended: Try to create unique metadata content for every page. If that is not possible, use metadata content that is auto-generated by Google or by similar tools.

Missing brand name in title tags: Associating your brand name with the products/services on your website builds trust and popularity.

Recommended: The brand name must be included in all titles, following a fixed syntax/structure throughout all pages. A good example of how to use the brand name in title tags: Primary Keyword – Secondary Keyword | Brand Name

Meta descriptions that lack logic: This applies mainly to large websites.

Recommended: Identify a meta-description type/syntax for every page template to be implemented throughout the website. These can then be customized to suit the contents of each page.

Lack of relevant keywords: Titles and meta descriptions must contain the most relevant keywords that convey the summary of the page.
Keyword stuffing: Writing unimpressive title tags and description tags, or stuffing in too many keywords, gets you nowhere. A common mistake when creating description tags is writing meaningless copy, which decreases users' intent to click on your webpage; creative copy that plays up to your users' expectations will not only help you stand out in the crowd, but will also increase traffic, CTR, and conversions.

Authority

Page Load time:

How to check Page load time:

Use free page speed tools (such as Google PageSpeed Insights) to check website load time, then look for the following issues:

Issues and how to fix them:

High HTTP requests: Reducing the number of requests will speed up site load time. Look through the website's files and see if any are unnecessary. Recommended: In Google Chrome, use the browser's Developer Tools to see how many HTTP requests your site makes. The "Time" column shows how long it takes to load each file; find and optimize the files that take the longest to load.
Minify and combine files: You can reduce the number of files by minifying and combining them. This reduces the size of each file, as well as the total number of files.

This is especially important if you use a templated website builder. These make it easy to build a website but can sometimes create messy code that can slow down your site considerably.

Recommended: Minifying a file involves removing unnecessary formatting, whitespace, and code.

CSS and JavaScript files: Scripts like CSS and JavaScript can be loaded in two different ways: synchronously or asynchronously.

If scripts load synchronously, they load one at a time, in the order they appear on the page. If scripts load asynchronously, they load simultaneously. Recommended: Loading files asynchronously can speed up your pages. When a browser loads a page, it moves from top to bottom; if it gets to a CSS or JavaScript file that is not asynchronous, it stops loading until it has fully loaded that particular file. If that same file were asynchronous, the browser could continue loading other elements on the page at the same time.

Large JavaScript loading: Defer JavaScript loading. Deferring a file means preventing it from loading until after other elements have loaded. If you defer larger files, like JavaScript, you ensure that the rest of your content can load without delay. Recommended: Defer the loading of large, non-critical JavaScript files.
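A sketch of deferred and asynchronous script loading in the page markup (file names are placeholders):

  <!-- defer: downloads in parallel, executes only after the HTML has been parsed -->
  <script src="/js/app.js" defer></script>
  <!-- async: downloads in parallel, executes as soon as it is ready -->
  <script src="/js/analytics.js" async></script>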
Image optimization: These are the top reasons why images slow down page load time: too-large image files, synchronous loading of elements, and too many images & HTTP requests.

Recommended:

Too-large images – Decide on an optimal width and height for image display on different end-user devices (desktop, laptop, tablet, smartphone). Do not use one large image and scale the display size with the width and height attributes of the IMG element. Instead, generate and store different sized files and use conditional logic to serve the appropriately sized image file depending on the user's device.

Synchronous loading – Defer loading images. Instead, load them asynchronously after the necessary CSS and HTML have been rendered. Images placed below the fold can be loaded conditionally using JavaScript when the visitor scrolls down to the relevant portion of the page.

Too many images and HTTP requests – Reduce the number of images and use a CDN to place your files on servers distributed around the world. Image file requests from your web page then go to external servers, and your server could be serving just the base HTML file.
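As one way to implement the image advice above, the native srcset/sizes attributes and the loading="lazy" attribute can stand in for hand-rolled conditional logic and scroll-based JavaScript (file names and sizes are placeholders):

  <img src="/images/product-800.jpg"
       srcset="/images/product-400.jpg 400w, /images/product-800.jpg 800w, /images/product-1600.jpg 1600w"
       sizes="(max-width: 600px) 400px, 800px"
       loading="lazy"
       alt="Product name">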

Mobile-friendliness:

Issues and how to fix them:

Mobile friendly: There are three ways you can optimize a site for mobile-friendliness: 1. Create a mobile version of your current site. 2. Use a mobile-first responsive design. 3. Use adaptive web design.

Recommended: Check if the website is mobile-friendly using Google's mobile-friendly test tool. The best part about this tool is that if the website isn't mobile-friendly, it provides reasons why. Alternatively, the Chrome browser itself can check mobile-friendliness: right-click in the browser window, select Inspect, and you will see the options to test multiple viewports there.

Content does not fit the screen: The page's dimensions must adjust to suit the screen of any device. The contents of the page must also be scaled accordingly.

Recommended: Specify a viewport using the viewport meta tag.
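For reference, the standard viewport declaration:

  <meta name="viewport" content="width=device-width, initial-scale=1">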

Touch elements not spaced well: Ensure that touch-driven elements such as buttons and links are appropriately spaced on the screen so that users can tap them without accidentally tapping the neighbouring elements. Also, these CTAs must be large enough to be comfortably identified and tapped with fingertips.
Optimize mobile-specific functionalities: Functionality such as prompting users to dial a number when a phone number on the website is tapped improves the user experience.
Small font size: Small font sizes make for a bad user experience, forcing users to zoom in to read the contents of the page.

Recommended: Set a font size that scales correctly with the size of the screen

Slow mobile pages: The load time of mobile pages must be as short as possible. Higher wait times drive down rankings and increase bounce rates.

Recommended: Use recommendations from Google PageSpeed Insights to optimize mobile content and improve page speed.

Unplayable content: Some videos rely on players, such as Flash, that are not available on mobile devices.

Recommended: Use video-embedding that is playable on all devices

Link Profile:

What to check in the link profile:

  • The data points that need to be checked – broken backlinks, lost links, broken links on the website, new & lost backlinks, referring domain authority, anchor text, outgoing broken links, etc.

How to check the link profile:

  • Login to ahrefs.com
  • Add a new project by entering domain URL
  • Check for broken links under the Backlinks section on the left. The tab gives the number of broken links from external websites, along with a list of them.
  • Now check for lost links under the Backlinks section on the left. The tab gives the number of lost links from external websites, along with a list of them.
  • To check broken internal links, click on Broken Links under the Outgoing Links section on the left.
  • Other data points can be viewed from the Overview section at the top left.

Look for the following mistakes and use the recommendations below to resolve them.

Issues and how to fix them:

Broken backlinks: Broken backlinks are inbound links from other websites to your site that no longer work. A backlink may be broken because a page with existing backlinks was deleted or moved, or because the linking site made a mistake when linking to you (e.g., accidentally adding an extra, unwanted character to the URL).

Recommended: How to fix broken backlinks – 1. Redirect (301) the broken page to its new location. 2. Recreate and replace the content at the broken URL. 3. Redirect (301) the broken page to another relevant page on your website.

Broken links: Broken links are links on your site that point to non-existent resources; these can be either internal (i.e., to other pages on your domain) or external (i.e., to pages on other domains).

Recommended: Replace the broken links with live links, or remove them.

Anchor text: For inbound links, it is great when the PR of the site you are getting links from is high, but when the anchor text is "Click here!" or something like that, the link is barely useful. Keywords in the anchor text are vital, so if the backlink doesn't have them, it isn't as valuable.

Recommended: Create natural-looking anchor text and sometimes use keywords instead of the brand name or "click here".

Backlinks from a single domain: A large number of backlinks from a single domain is not a healthy way to build backlinks.

Recommended: Create backlinks from quality websites and from different domains. 

Nofollow vs dofollow: A dofollow link will pass the SEO strength, or "PageRank", of the page to the site that it links to. A nofollow link, in theory, will not do this. However, it's recommended to have a good ratio between the two. Even nofollow links coming from an authoritative website have the capacity to improve your website's authoritativeness.

Recommended: There should be a higher ratio of dofollow links in the link profile.

Backlinks from only high-UR domains (Ahrefs): Ahrefs' URL Rating (UR) is a metric that shows how strong the backlink profile of a target URL is, on a scale from 1 to 100.

Recommended: Links with a UR lower than 30 are basically very low authority; a UR above 30 is considered a good backlink.

Link velocity: Don't try to build a lot of links at a time, as your site may look like a link farm in the eyes of the search engines.
Linking to sites with a bad reputation: Never link to a site that has a bad reputation, i.e., a spam site or duplicate content site (auto-blogging, illegal sites, porn sites).

Recommended: Check the backlinks and the categories of their pages. Check whether the linking website ranks well in the search engine; this can clarify whether it is a badly reputed website. You should also review who's linking to your site, since link equity flows both ways. If you think a spammy backlink to your site is negatively impacting you, create a manual disavow file and upload it to Google Search Console periodically.

Lost links: If the backlink profile shows a high number of lost links in short intervals, it means the backlinks created earlier were not from reputable websites, or may have been from spam websites.

Recommended: At short intervals, keep a check on the lost links report in the SEMrush data and avoid creating links from such websites.

Link diversity: A site with a backlink profile made up entirely of one type of link would certainly be at risk of a penalty in the search engines.

Recommended: The quality of links and the sources on which they're built – heck, even the TLDs (top-level domains) of your referring domains and their various PageRanks – are all factors that play into maintaining good link diversity.

Link equity: Common link equity flow issues:

1. A few pages on a large site get all the external links: only a few pages are earning any substantial quantity of external links.

Recommended:

Identify the most important pages that are not earning links: which pages do we want to rank that are not yet ranking for the terms and phrases they're targeting?

Then optimize internal links. Use Open Site Explorer (Top Pages) or Ahrefs to find the pages that have earned the most links and the most link equity. To pass link equity, create internal links from those high-link-equity pages to the low-link-equity pages.

2. Only the homepage of a site gets any external links.

A lot of small businesses have this type of presence, where only the homepage gets any link equity at all.

Recommended:

Make sure that the homepage targets and serves the most critical keywords.

Consider creating new content pages when the website does not have many link-worthy pages.

3. Mid/long-tail keyword-targeting pages are hidden from the site navigation

This happens on large websites, where pages target keywords that don't get a ton of volume but are still important. They could really boost the value we get from the website because they're hyper-targeted to good customers. In this case, one of the challenges is that they're hidden by the information architecture: the top-level navigation, and maybe even the secondary-level navigation, just doesn't link to them. So they're buried deep down in the website.

Recommended:

I. Find semantic and user-intent relationships – Try creating semantic relationships with the pages that are in the top-level navigation. Linking related content pages helps users and spiders navigate to those pages easily.

II. Consider new top-level or second-level pages.

 

4. Subdomains dilute link equity:

A subdomain is equal and distinct from a root domain.  This means that a subdomain’s keywords are treated separately from the root domain.

Suppose xyz.com is already a popular online platform for the keyword "shoes" because the domain has many external links with the anchor text "shoes", giving it higher authority. That link equity is not easily passed on to a different domain. In such a case, pages that rank for xyz.com wouldn't automatically rank for blog.xyz.com, because each domain has its own domain authority. The lesson here is that link equity is diluted across subdomains. Each additional subdomain decreases the likelihood that any particular domain ranks in a given search, and a high-ranking subdomain does not imply your root domain ranks well. Subdomains suffer from link equity dilution.

Recommended: Consider subdirectories over subdomains. Boosting the authority of the root domain should be a universal goal of any organization. The subdirectory strategy concentrates the link equity onto a single domain, while the subdomain strategy spreads the link equity across multiple distinct domains. In short, the subdirectory strategy results in better root domain authority. Higher domain authority leads to better search rankings, which translate to more engagement and traffic.

Page Markup – Rating & Reviews:

How to check Page markups:

  • Schema.org can be a reference point to check whether the markup data has been implemented correctly.
  • Use a Structured Data Testing Tool to Find Errors

Structured data tools help ensure that search engines understand your marked-up content. They’re also a great way to double-check your pages for valid markup and find errors that are easy to fix.

Google's rich snippets, rich cards, and enriched results are all known by one umbrella term: "rich results." Rich results can include a wide variety of items, including blog posts, videos, courses, local businesses, music, product info, job postings, and more. The Rich Results Testing Tool is an easy way to determine whether your structured data is eligible to be displayed as a rich result.

Look for the following mistakes and use the recommendations below to resolve them.

Issues and how to fix them:

Missing or inappropriate structured data: This can happen when the wrong structured data type is chosen, for example using structured data designed for a Product when the company is offering a service, or when structured data is missing entirely from some pages.

Recommended: Implementing page markup as per the Schema.org guidelines.

Structured data doesn't match on-page content: Whatever information is marked up, such as prices, must be reflected exactly the same in the on-page content that's visible to users.
Failure to read developer pages for specific data types: Failure to follow the guidelines for specific data types can lead to a variety of errors. Those errors can result in a manual penalty.

Recommended: It's very important to double-check the examples on Google's developer pages for what is and what is not appropriate for a particular page or website.

Product markup: A product markup should include: name, image, price, aggregate rating, description, and availability (InStock). If any of these elements are missing, flag that and implement them on all PDP pages.
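A hedged JSON-LD sketch of such product markup, following Schema.org's Product type (all values are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "image": "https://www.example.com/images/widget.jpg",
    "description": "A short description of the Example Widget.",
    "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.5", "reviewCount": "27" },
    "offers": {
      "@type": "Offer",
      "price": "19.99",
      "priceCurrency": "USD",
      "availability": "https://schema.org/InStock"
    }
  }
  </script>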
Reviews: Misapplying review and aggregate rating markup – using review markup to display "name" content that is not a reviewer's name or an aggregate rating. If your markup includes a single review, the reviewer's name must be an actual organization or person. Other types of content, like "50% off," are considered invalid data for the "name" property.

Recommended: The only UGC (user-generated content) review content you should mark up is reviews that are displayed on your website and generated there by your users.

Local SEO:

Issues and how to fix them:

No Google My Business profile: When there is no Google My Business profile page, the business misses out on the large number of users who are searching for local businesses.

Recommended: Alongside focusing on SEO best practices for your business website, set up a Google My Business listing for your business as well. Ensure that you verify your business's listing, as this confirms your ownership of the business page. Consequently, no one except you can make any edits to the local listing on your behalf.

Duplicate listings: A common mistake is having duplicate Google My Business profile pages. It creates a bad experience for users to see the same information twice in their results, and it's a waste of Google's resources to analyze and store duplicate information. That's why creating duplicate listings is against Google's terms of service. Each business location should have only one Google My Business profile page.

Recommended: Delete all duplicate listings from Google My Business.

Inconsistent NAP information online: Incorrect Name, Address, or Phone number information. It's important to have the same contact information listed on both your website and your Google My Business page.

Recommended: Ensure that the contact details that you add on your business listings are the same as the ones on your website. Google tends to investigate your authenticity by comparing the contact info that you add at various places. Therefore, this information snippet needs to be accurate and consistent across all online pages.

Social & Rich Media Integration:

How to check social media tags:

Facebook:

  • Test your markup with Open Graph Object Debugger – a built-in tool by Facebook that will list all the implemented tags and show you a preview of your URL.
  • Post a link on Facebook – preview your link and all the extracted data

Twitter:

  • Go to Cards Validator and include a link from your website to see if the tags were implemented correctly.

Pinterest:

LinkedIn:

  • LinkedIn doesn’t have a dedicated tool where you could test your snippets, but it is compatible with Open Graph. Consequently, there is no need to add some extra tags to your web pages.
Issues and how to fix them:

Missing social media or rich snippet tags: Check with the above tools whether the tags are missing from the website. Recommended: Implement social media rich tags.
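A sketch of basic Open Graph and Twitter Card tags in the page head (values are placeholders):

  <meta property="og:title" content="Page Title">
  <meta property="og:description" content="A short description of the page.">
  <meta property="og:image" content="https://www.example.com/images/share.jpg">
  <meta property="og:url" content="https://www.example.com/page/">
  <meta name="twitter:card" content="summary_large_image">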
