Intermediate-Level SEO Best Practices

SEO is fascinating because it offers a glimpse into how some of the world's most data-driven companies design their algorithms and plan their services. Any intermediate-level SEO student will know that although Google has some clever tricks, its crawlers work best with plenty of help from webmasters. Here are two best practices for webmasters looking to improve their results.

Schema Markup

Schema markup is a form of on-page optimization that gained popularity when Google started making heavy use of rich snippets. Schema markup tells search engines what the data on your website means, not just what it says, allowing them to categorize the information you give them more accurately.

For example, if you list opening hours on your website without markup, a crawler only sees numbers near the words “opening” and “hours”, so it could return your site when somebody searches for your niche together with those exact words. With schema markup, search engines know the actual times you are open and can display your site in the SERP when somebody local searches for “open now”.

Schema markup involves knowing your code, but you can use a schema markup generator to produce JSON-LD and Microdata for you, taking the hard part out of the process so you can focus on the actual data you’re presenting.
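As a rough illustration, the JSON-LD output from such a generator might look something like this for opening hours (the business name, URL, and hours below are placeholders, not values for any real site):

```html
<!-- JSON-LD schema markup describing a local business's opening hours.
     All values are placeholder examples. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bakery",
  "url": "https://www.example.com",
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "08:00",
    "closes": "17:00"
  }]
}
</script>
```

The script block sits in your page’s HTML and is invisible to visitors; only crawlers read it.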

Robots Meta Directives

If you have a good understanding of beginner SEO, you should already know about the robots.txt file: the little piece of text that suggests to a bot how it should crawl your site by telling it where to look. Robots meta directives are different. They can be meta tags that are part of the actual HTML or part of the HTTP header, and they are used to communicate with search engine crawlers.

You can use the meta name “robots” in your meta tags and issue various directives. It’s important to note that these don’t command crawlers; they merely guide them.

  • Follow – tells the crawler to follow all links on a page and pass link juice to the linked pages.
  • Nofollow – tells the crawler not to follow any links on the page.
  • Noindex – tells the crawler not to index the page.
  • Noarchive – tells the crawler never to show a cached link to the page on the SERP.
  • Unavailable_after – tells the crawler to stop showing the page in results after a date that you provide. This is very useful when you have a lot of content that changes on a regular basis.
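In the HTML, these directives go in a meta tag inside the page’s head. A minimal sketch (the date is just an example):

```html
<head>
  <!-- Guide crawlers: don't index this page, but do follow its links -->
  <meta name="robots" content="noindex, follow">
  <!-- Ask crawlers to stop showing the page in results after this date
       (example date only) -->
  <meta name="robots" content="unavailable_after: 31-Dec-2025 23:59:59 GMT">
</head>
```

Multiple directives can also be combined in a single tag by separating them with commas.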

These directives can also be sent in the HTTP header, where they take the form of an “X-Robots-Tag” set in your server configuration (for example, an .htaccess file) or in your application code (such as a .php file). This is useful for advanced directives like influencing how non-HTML content is indexed (e.g. if your page is video or Flash-based). It can also apply indexing rules conditionally. This can be super helpful if you have a community on your site, as you can choose to index certain contributors’ pages only after they have passed milestones like contributing to 100 pages.
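For instance, on an Apache server with the mod_headers module enabled, an .htaccess rule along these lines (a sketch, assuming you want to keep PDF downloads out of the index) would attach the header to every matching file:

```apache
# .htaccess sketch: send an X-Robots-Tag header for all PDF files.
# Requires Apache's mod_headers module to be enabled.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, noarchive"
</FilesMatch>
```

Because the header travels with the HTTP response rather than the HTML, it works for file types that have no head section to put a meta tag in.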
