It’s the goal of every SEO and digital marketer to have their web pages rank on a search engine results page — preferably on page one so their content is more likely to be seen and clicked.
But once content has been created and published, what exactly does the process look like for it to get there?
To answer this question, we must first look at the two types of agents that browse the web in search of information: humans and bots.
Human agents are, of course, people like you and me who use search engines to find information relevant to our search queries. Before we can search for and receive that relevant information, bots, also known as crawlers, first have to navigate, or crawl, the internet to learn about and store that information.
So, what exactly is a crawler?
A web crawler is an online bot that explores web pages on the internet to learn about them and their content, all in order to serve this information to online searchers when they pose a query on a respective search engine.
Because the internet is also known as the World Wide Web, it’s fitting that these bots came to be known as crawlers, though they go by other names, too, including SEO spider and website crawler.
As crawlers peruse the internet and its web pages, they pick up information and add it to their index.
The index is a database of web pages that the crawler has discovered. This database is what a search engine like Google pulls its search results from. So, when you search for something on Google, the results you see aren’t being generated live — rather, the search engine is sifting through its existing index.
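As a loose illustration, and not how any real search engine actually stores its data, you can picture an index as a mapping from terms to the pages that contain them:

```python
from collections import defaultdict

# Toy "inverted index": each term maps to the set of pages that mention it.
# This is only a conceptual sketch; a real search index is far more complex,
# and the URLs below are placeholders.
crawled_pages = {
    "https://example.com/": "welcome to our example homepage",
    "https://example.com/pricing": "pricing plans for the example product",
}

index = defaultdict(set)
for url, text in crawled_pages.items():
    for term in text.split():
        index[term].add(url)

# A search is a lookup against the stored index, not a live crawl of the web.
print(index["pricing"])  # {'https://example.com/pricing'}
```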
Google is a great example of a search engine that relies on a crawler. The bot that Google uses is fittingly called Googlebot.
The organic search process can’t be complete unless a crawler has access to your site.
Remember, your goal as an SEO is to have your web pages rank on a search engine’s results page. For your pages to appear on that results page, in any rank position, a crawler first needs to visit your site.
Whether a crawler can access your site also reveals whether any search engine indexing issues are present.
The importance of a crawler doesn't stop there: crawlers are directly connected to technical SEO and other factors that affect the overall user experience.
For example, in a site audit, crawlers surface duplicate content, status codes, the presence of noindex tags, redirect issues, and other HTML or page information.
These various site factors can be uncovered with an SEO audit — an evaluation of a site's technical performance — but a site audit can't be run without a crawler.
Without crawlers, the internet would be a jumbled, scattered mess of information. Crawlers sort through that information and categorize it appropriately so that users have the best possible experience when searching online.
Because crawlers are so important, stay alert to crawlability issues: they point to SEO problems that keep your site from ranking to its full potential.
As mentioned above, Googlebot (for desktop and mobile) is the Google crawler most people are familiar with, but Google uses a number of other agent types, too, such as Googlebot Image, Googlebot News, and Googlebot Video.
It’s not just Google that has crawlers. Each search engine has its own crawler feeding its own index. There are crawlers like Bingbot (Bing), Slurp Bot (Yahoo!), and more.
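If you’re curious which of these bots visit your own site, one rough way to check is to look for their names in the User-Agent strings recorded in your server’s access log. The sketch below assumes a hypothetical access.log file whose lines contain the User-Agent string; keep in mind that user agents can also be spoofed.

```python
from collections import Counter

# Rough sketch: count visits from well-known crawlers in a web server access log.
# "access.log" is a placeholder path, and the exact log format depends on your
# server. Note that a matching User-Agent string is not proof of a real bot.
KNOWN_CRAWLERS = ["Googlebot", "Bingbot", "Slurp", "DuckDuckBot"]

counts = Counter()
with open("access.log", encoding="utf-8", errors="ignore") as log:
    for line in log:
        for bot in KNOWN_CRAWLERS:
            if bot in line:
                counts[bot] += 1

print(counts)
```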
A crawler is always looking to find new URLs on the web to become a part of the index (and the search results). So, how does a crawler navigate a website?
Once a crawler like Googlebot lands on a website, it uses that website’s internal links to navigate to other web pages on that site.
Internal links are those clickable (often blue text) links — there’s actually one in the previous sentence on the words “internal links”!
The crawlers pick up information along this page-to-page journey and add it to their indexes.
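To make that page-to-page journey concrete, here is a minimal sketch of a link-following bot, assuming a small site on a single domain; it fetches a page with Python’s standard library, extracts its links, and queues any internal URLs it hasn’t seen yet. Real crawlers also respect robots.txt, throttle their requests, and weigh many more signals.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(start_url, max_pages=10):
    domain = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    crawled = 0
    while queue and crawled < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to load
        crawled += 1
        print("crawled:", url)
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)
            # Follow only internal links, i.e. links on the same domain.
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)

crawl("https://example.com/")  # placeholder start URL
```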
If your site is new and doesn’t have an established interlinking strategy, you can submit your URL on Google Search Console to have Googlebot come to your site. You’ll also want to create a sitemap and submit it to Google.
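A sitemap is simply an XML file listing the URLs you want search engines to discover. As a minimal sketch, using placeholder URLs, you could generate one with Python’s standard library and then upload the resulting sitemap.xml in Google Search Console:

```python
import xml.etree.ElementTree as ET

# Placeholder URLs: replace these with the pages you want search engines to find.
urls = [
    "https://example.com/",
    "https://example.com/pricing",
    "https://example.com/blog/what-is-a-web-crawler",
]

# The sitemap protocol is plain XML: a <urlset> of <url>/<loc> entries.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in urls:
    entry = ET.SubElement(urlset, "url")
    ET.SubElement(entry, "loc").text = url

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```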
Although it’s important to give the crawler proper access to your site so you can appear in search results, it’s not always necessary to have the crawler access every page of your site.
Not every URL on your site needs to make it onto the SERPs. A log-in page, for example, is a niche page for users who have an account with your company, so there’s no need for a crawler to learn more about this page.
Plus, by blocking the crawler’s access to those pages, you save crawl budget.
What exactly is a crawl budget? A search engine bot's time and resources are finite, so the crawl budget defines how many pages a bot will crawl within a specific amount of time.
By limiting access to unimportant pages, you stretch your crawl budget and make it more likely that your important pages (i.e., those that convert) will make it into the index.
The primary factors that Google considers when allocating crawl budget are crawl capacity (how many connections your server can handle without slowing down) and crawl demand (how popular your URLs are and how often they change).
There are a few ways to optimize the crawl budget; a short sketch of how a bot reads each of these signals follows the list:
“Noindex” Tag: A “noindex” tag tells search engine bots which pages not to include in their index. Implementing this tag removes pages from the index, but a portion of the crawl budget is still spent on them, since the bot has to crawl a page to see the tag.
Canonical Tag: A canonical tag tells Google which version of a group of similar pages is the preferred one to show in the SERPs.
Robots.txt: Robots.txt is a file that a search engine bot reads before it crawls your site. This file sets parameters on which pages should and should not be crawled.
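As a rough illustration of how a bot encounters these three signals, the sketch below checks a placeholder URL against robots.txt using Python’s standard library and then looks for noindex and canonical hints in the page’s HTML; real crawlers interpret these rules in far more nuanced ways.

```python
import re
import urllib.robotparser
from urllib.parse import urljoin
from urllib.request import urlopen

url = "https://example.com/some-page"  # placeholder URL

# 1. Robots.txt: is this URL allowed to be crawled at all?
robots = urllib.robotparser.RobotFileParser()
robots.set_url(urljoin(url, "/robots.txt"))
robots.read()
print("allowed by robots.txt:", robots.can_fetch("Googlebot", url))

# 2. and 3. Noindex and canonical: hints inside the page's HTML <head>.
# Attribute order varies on real pages, so these regexes are only rough checks.
html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
noindex = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I)
canonical = re.search(
    r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html, re.I
)
print("noindex tag present:", bool(noindex))
print("canonical target:", canonical.group(1) if canonical else None)
```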
Use seoClarity's customizable site crawler to conduct a site audit and find technical issues that impact user experience.