
SEO 101: Site Crawling

As a business owner who is unfamiliar with the complexities of websites and search engine optimization, you are probably wondering how search engines discover websites and determine their rankings. This discovery process is known as crawling.

Search engines, such as Google, have computers running automated software called bots. These bots scour the internet constantly, discovering newly registered domain names and changes to content on existing ones.
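To make the idea concrete, here is a minimal sketch of how a crawler follows links from page to page, written in Python using only the standard library. The URL and page limit are placeholders for this example; real search-engine bots are vastly more sophisticated, respecting robots.txt rules, rate limits, and enormous scheduling systems.

```python
# A minimal, illustrative crawler: fetch a page, collect its links,
# and queue those links to visit next (breadth-first).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Visit pages breadth-first, following the links each page contains."""
    queue = deque([start_url])
    seen = set()
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip pages that fail to load
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            queue.append(urljoin(url, link))  # resolve relative links
        print(f"Crawled: {url} ({len(parser.links)} links found)")
    return seen

# Example (hypothetical starting point):
# crawl("https://example.com")
```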

As the bots crawl your website, they examine its contents. This includes page text and images, meta tags, links to other websites, how the website is coded, and whether it is accessible when viewed from a smartphone or tablet.
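As an illustration, the sketch below shows how a program might record some of those on-page elements: the page title, meta tags (including the viewport tag, one hint of mobile-friendliness), outbound links, and images missing descriptive alt text. The sample HTML and business name are made up for this example.

```python
# An illustrative inspector for the on-page elements a crawler examines.
from html.parser import HTMLParser

class PageInspector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}             # e.g. description, viewport
        self.links = []            # links to other pages and websites
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1  # missing alt text hurts accessibility

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Hypothetical sample page for demonstration:
inspector = PageInspector()
inspector.feed("""<html><head><title>Acme Bakery</title>
<meta name="description" content="Fresh bread daily.">
<meta name="viewport" content="width=device-width">
</head><body><a href="/menu">Menu</a><img src="cake.jpg"></body></html>""")
print(inspector.title)                 # Acme Bakery
print(inspector.meta.get("viewport"))  # width=device-width
print(inspector.images_missing_alt)    # 1 image without alt text
```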

After looking at everything on your website, they determine what your top keywords are, then compare your website to other websites competing for those keywords. The purpose of these comparisons is to determine the usefulness and uniqueness of each website and rank them accordingly. Websites that appear less useful, less accessible, or less original (don’t steal content from other websites!) receive lower search rankings.

Websites that update or post new content frequently, such as blogs, tend to get crawled more often than websites that remain unchanged for long periods, so having an active blog on your website is definitely a good idea!