Crawlability describes how smoothly search engine spiders can crawl a website to gather information about it and index it. In other words, it is how easily search engines can crawl a site without missing essential content or having their crawlers blocked. Crawlers, or spiders, are the search engines' bots that crawl a website to collect information about its content so it can be ranked appropriately.
If a search engine's bot crawls the website correctly and fetches all of its information, the site and its pages will be indexed successfully.
However, broken links or an incorrectly configured sitemap can cause crawlability issues, leaving the search engine's spider unable to access, crawl, and index specific content on the site.
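One way to catch such problems before a crawler does is to audit the URLs listed in the sitemap. The sketch below is a minimal example of that idea: it fetches a sitemap, requests each listed page, and reports any that return an error status. The sitemap location, the example domain, and the use of the Python requests library are illustrative assumptions, not recommendations from this checklist.

```python
# Minimal sketch: flag broken links listed in a sitemap.
# Assumes the sitemap lives at /sitemap.xml on a hypothetical domain
# and that the "requests" package is installed.
import xml.etree.ElementTree as ET

import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # hypothetical site
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}  # standard sitemap namespace


def broken_links(sitemap_url):
    """Return (url, status_code) pairs for pages that respond with 4xx/5xx."""
    sitemap = requests.get(sitemap_url, timeout=10)
    sitemap.raise_for_status()
    root = ET.fromstring(sitemap.content)

    broken = []
    for loc in root.findall(".//sm:loc", NS):
        url = loc.text.strip()
        # HEAD is enough to check availability without downloading the page body.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code >= 400:
            broken.append((url, response.status_code))
    return broken


if __name__ == "__main__":
    for url, status in broken_links(SITEMAP_URL):
        print(f"{status} -> {url}")
```

A report like this only covers URLs the sitemap already knows about; it complements, rather than replaces, fixing the sitemap itself and the internal links on the site.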
To ensure a site can be crawled properly and smoothly, review the following list of actions to avoid, as each of them can prevent spiders from crawling: