# robots.txt
# Created 5/15/23
# David White
#
# This file prevents web crawlers and spiders, such as those run by Google,
# from crawling certain parts of our site. By telling these "robots" where
# not to go, we save bandwidth and server resources. Note that robots.txt
# controls crawling, not indexing: a disallowed page can still be indexed
# if other sites link to it.
#
# This file must live at the site root. Crawlers ignore a robots.txt file
# placed in any other directory.

User-agent: *
# Crawl-delay is non-standard: Bing and Yandex honor it, but Googlebot ignores it.
Crawl-delay: 10

Sitemap: https://sanjac.edu/sitemap.xml

# Sections
Disallow: /noindex/
Disallow: /_showcase/
Disallow: /_resources/data/
Disallow: /_resources/_design/
Disallow: /_resources/api/
Disallow: /_resources/dmc/
Disallow: /_resources/external_xml/
Disallow: /_resources/includes/
Disallow: /_resources/js/
Disallow: /_resources/ldp/
Disallow: /_resources/ou/
Disallow: /_resources/php/
Disallow: /_resources/search/
Disallow: /_resources/xsl/
Disallow: /programs/credentials/
Disallow: /about-san-jac/economic-impact/
Disallow: /programs/areas%20of%20study/health/nursing/next-steps
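
# Commented-out sketch (not active): group matching in robots.txt is
# most-specific-wins, not cumulative. A crawler that matches a named
# User-agent group obeys ONLY that group and skips the * group above,
# so a per-bot group must restate every rule it still needs. The group
# below is a hypothetical example for this site, not a current rule:
#
# User-agent: Googlebot
# Disallow: /noindex/
# Disallow: /_showcase/
#
# If activated as written, Googlebot would obey only these two Disallow
# lines and none of the other rules in the * group.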