In SEO, crawling is the process by which search engine spiders gather data about a website in order to index and rank it. The spider scans the site's content, titles, images, videos, hyperlinks, anchor texts, keywords, alt tags, and site structure.
This helps the search engine understand what kind of website it is and how relevant it is, and lets it discover updated content on the web: new sites or pages, changes to existing content, or broken links.
All the collected information is processed and indexed by the search engine and then ranked in the SERPs accordingly.
If a website is not adequately crawled by the search engine spider, its pages cannot be indexed. This means they will not appear in the search engine results pages when someone searches for a query on the same topic as the site.
To understand how crawling works, imagine a spider that fetches a few web pages, then follows the links on those pages and fetches them as well. It then follows the new links and fetches the pages they point to, and so on, until it has built an updated list of web addresses. This happens in the blink of an eye or even less.
The crawling process begins with a list of web addresses from past crawls, plus sitemaps that website owners have provided to the search engines.
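The fetch-and-follow behavior described above is essentially a breadth-first traversal of the link graph. The sketch below is a toy model, not a real search engine crawler: the "web" is a hand-made dictionary mapping hypothetical URLs to the links found on each page, and real crawlers add many layers on top (politeness delays, robots.txt handling, deduplication at scale).

```python
from collections import deque

def crawl(seed_urls, get_links):
    """Breadth-first crawl: fetch a page, queue its unseen links, repeat."""
    seen = set(seed_urls)
    queue = deque(seed_urls)
    order = []                      # the order in which pages were "fetched"
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in get_links(url):
            if link not in seen:    # skip pages already discovered
                seen.add(link)
                queue.append(link)
    return order

# A toy "web": each URL maps to the hyperlinks found on that page.
# These addresses are illustrative, not real pages.
toy_web = {
    "example.com/": ["example.com/about", "example.com/blog"],
    "example.com/about": ["example.com/"],
    "example.com/blog": ["example.com/blog/post-1"],
    "example.com/blog/post-1": [],
}

print(crawl(["example.com/"], lambda u: toy_web.get(u, [])))
# → ['example.com/', 'example.com/about', 'example.com/blog', 'example.com/blog/post-1']
```

Note how the seed list plays the role of the past-crawl list and sitemaps mentioned above: the crawler can only discover pages reachable by links from those starting points.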