You have two options to work around this problem and ensure your audit is up and running.
Option 1 is to bypass the disallow directives in robots.txt and in the robots meta tag. This involves uploading a .txt file, which we will provide you, to the root folder of your website.
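To check whether a disallow rule is what is blocking the crawler in the first place, here is a minimal sketch using Python's built-in urllib.robotparser. The domain and page path are placeholders; "SemrushBot" is the user-agent name the Site Audit crawler announces itself with.

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain: point this at your own site's robots.txt.
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Placeholder URL for a page in the restricted part of the site.
page = "https://www.example.com/private/report.html"

if parser.can_fetch("SemrushBot", page):
    print("robots.txt allows SemrushBot to crawl this page")
else:
    print("robots.txt disallows SemrushBot here; this is the block Option 1 works around")
```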
Option 2 is to scan with your credentials. In this case, all you need to do is enter the username and password you use to access the hidden part of your website, and SemrushBot will use them to perform the audit (a quick sketch of how password-protected pages typically respond is shown below). The final step is to tell us how often you want us to audit your site: weekly, daily, or just once. Regular audits are definitely a good idea to keep an eye on your site's health.
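As a rough illustration of what the credential check in Option 2 amounts to at the HTTP level, here is a minimal sketch using Python's requests library. The URL and login details are placeholders, and your site may use a different scheme (a login form rather than HTTP basic authentication, for example).

```python
import requests
from requests.auth import HTTPBasicAuth

protected_url = "https://www.example.com/staging/"  # placeholder URL

# Without credentials, a basic-auth protected area answers 401 Unauthorized,
# which is why an unauthenticated crawl can't reach these pages.
anonymous = requests.get(protected_url, timeout=10)
print(anonymous.status_code)

# With the same username and password you would give the Site Audit tool
# (placeholders here), the server returns the page normally.
authenticated = requests.get(
    protected_url,
    auth=HTTPBasicAuth("audit-user", "audit-password"),
    timeout=10,
)
print(authenticated.status_code)
```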
And that’s it! You’ve learned how to crawl a site with the Site Audit tool.
Examine Web Crawler Data with Semrush
All data about your web pages collected during scans is recorded and saved in the Site Audit section of your project.
Here you can find your site health score, along with the total number of crawled pages broken down into "healthy," "broken," and "problematic." This view practically cuts the time it takes to identify and fix problems in half.
URLs from file: This is where you can get really specific and focus on exactly the pages you want to check. You just need to have the URLs saved as a .csv or .txt file on your computer, and you can upload them directly to Semrush.
This option is great when you don’t need a big picture overview. For example, when you’ve made spot changes to specific pages and just want to see how they’re performing. This can save you some crawl budget and give you the information you really want to see.
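If you would rather script the list than assemble it by hand, here is a minimal sketch that writes a few placeholder URLs into both of the accepted formats. The file names and addresses are assumptions for illustration.

```python
import csv

# Placeholder URLs: swap in the pages you actually changed.
urls = [
    "https://www.example.com/blog/updated-post/",
    "https://www.example.com/pricing/",
    "https://www.example.com/contact/",
]

# .txt upload: one URL per line.
with open("urls-to-audit.txt", "w", encoding="utf-8") as txt_file:
    txt_file.write("\n".join(urls) + "\n")

# .csv upload: one URL per row.
with open("urls-to-audit.csv", "w", newline="", encoding="utf-8") as csv_file:
    writer = csv.writer(csv_file)
    for url in urls:
        writer.writerow([url])
```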