A crawler (also called a spider) is a program that systematically surfs the web, following every link it comes across. When it visits a website, it checks its own database to see whether the site is already listed. If the site is listed, it records any changes and recalculates the site's search engine ranking. If the site has not been listed before, the crawler records all of the important information, adds the website to the database, and assigns it a ranking.
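The behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a production crawler: the `fetch` function is supplied by the caller (so the example runs without real network access), link extraction uses a naive regex, and ranking is omitted.

```python
from urllib.parse import urljoin
import re

def crawl(start_url, fetch, max_pages=10):
    """Visit pages starting at start_url, following links as they appear.

    `fetch(url)` is a caller-supplied function that returns a page's HTML.
    Returns a database mapping each visited URL to its stored content.
    """
    database = {}          # url -> recorded page content
    queue = [start_url]
    while queue and len(database) < max_pages:
        url = queue.pop(0)
        try:
            html = fetch(url)
        except Exception:
            continue       # unreachable page: skip it
        if url in database:
            if database[url] != html:
                database[url] = html   # already listed: note the changes
        else:
            database[url] = html       # new site: add it to the database
        # follow any and all links found on the page
        for link in re.findall(r'href="([^"]+)"', html):
            absolute = urljoin(url, link)
            if absolute not in database and absolute not in queue:
                queue.append(absolute)
    return database
```

A tiny simulated web shows it in action:

```python
pages = {
    "http://a.example/":  '<a href="/b">b</a>',
    "http://a.example/b": '<a href="/">home</a>',
}
db = crawl("http://a.example/", pages.__getitem__)
# db now lists both pages exactly once
```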