Complete and up-to-date list of all Google crawler IP addresses and networks (CIDR ranges). Data is fetched directly from Google Search Central’s official API and updates daily.
| Network (CIDR) | Version | Type | Reverse DNS |
|---|---|---|---|
| 2001:4860:4801:10::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:12::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:13::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:14::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:15::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:16::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:17::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:18::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:19::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
| 2001:4860:4801:1a::/64 | IPv6 | Common Crawlers | crawl-***.googlebot.com |
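These tables are built from Google's machine-readable JSON feeds. As a minimal sketch, the Common Crawlers list can be pulled directly from the official endpoint (the URL is Google's published googlebot.json feed; the parsing assumes its documented prefixes structure):

```python
import json
import urllib.request

# Official feed for Common Crawlers; Google publishes special-crawlers.json
# and user-triggered-fetchers.json under the same path.
GOOGLEBOT_FEED = "https://developers.google.com/static/search/apis/ipranges/googlebot.json"

def fetch_googlebot_networks(url: str = GOOGLEBOT_FEED) -> list[str]:
    """Download the feed and return every CIDR range it lists."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    networks = []
    for prefix in data.get("prefixes", []):
        # Each entry carries either an ipv4Prefix or an ipv6Prefix key.
        cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
        if cidr:
            networks.append(cidr)
    return networks

if __name__ == "__main__":
    for cidr in fetch_googlebot_networks():
        print(cidr)
```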
Why Do You Need This List?
If you manage a website, application, or server, it’s important to identify legitimate Googlebot requests:
- Security - Block fake bots pretending to be Googlebot
- Optimization - Prioritize legitimate Google requests
- Monitoring - Identify Googlebot crawl patterns in logs
- Debugging - Troubleshoot indexing issues in Search Console
Google’s 3 Types of Crawlers
Google uses 3 main types of crawlers, each with its own purpose and IP list:
1. Common Crawlers (Regular Googlebot)
Regular crawlers used for Google products like Google Search. Always respect robots.txt.
- Reverse DNS: crawl-***-***-***-***.googlebot.com or geo-crawl-***-***-***-***.geo.googlebot.com
- Quantity: 164 networks (128 IPv6 + 36 IPv4)
2. Special-Case Crawlers
Crawlers that perform specific functions (like AdsBot) where there’s an agreement between the site and the product. May ignore robots.txt.
- Reverse DNS: rate-limited-proxy-***-***-***-***.google.com
- Quantity: 280 networks (128 IPv6 + 152 IPv4)
3. User-Triggered Fetchers
Tools and features where the end user triggers the fetch (like Google Site Verifier). They ignore robots.txt because the fetch is user-initiated.
- Reverse DNS: ***-***-***-***.gae.googleusercontent.com or google-proxy-***-***-***-***.google.com
- Quantity: 1,351 networks (724 IPv6 + 627 IPv4)
How to Use This List
Check Using the On-Page Tool
Enter an IP address in the search field above and the tool will automatically check if it’s in one of the official networks.
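The same check can be done offline with Python's standard ipaddress module. A minimal sketch; the two hardcoded ranges are just samples from this page, not the full list:

```python
import ipaddress

# Sample ranges from this page; in practice, load the full list
# from Google's official JSON feeds.
GOOGLEBOT_NETWORKS = [
    ipaddress.ip_network("66.249.64.0/19"),
    ipaddress.ip_network("2001:4860:4801:10::/64"),
]

def is_googlebot_ip(ip: str) -> bool:
    """Return True if the address falls inside any known Googlebot network."""
    addr = ipaddress.ip_address(ip)
    # Membership tests across mismatched IP versions simply return False.
    return any(addr in net for net in GOOGLEBOT_NETWORKS)

print(is_googlebot_ip("66.249.66.1"))  # True: inside 66.249.64.0/19
print(is_googlebot_ip("203.0.113.7"))  # False: not a Googlebot range
```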
Check Using Reverse DNS
```bash
# Linux / macOS
host 66.249.66.1
# Windows
nslookup 66.249.66.1
```
If it's legitimate Googlebot, you'll get a result like:
```
1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com
```
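This verification can also be scripted end to end. A sketch of the forward-confirmed reverse DNS check Google recommends: reverse lookup, domain check, then a forward lookup to make sure the PTR record isn't spoofed (IPv4 only here; IPv6 would need socket.getaddrinfo):

```python
import socket

# Domains Google documents for its crawlers' reverse DNS records.
GOOGLE_DOMAINS = (".googlebot.com", ".google.com", ".googleusercontent.com")

def verify_googlebot(ip: str) -> bool:
    """Forward-confirmed reverse DNS check for a single IPv4 address."""
    try:
        # Step 1: reverse lookup the IP to get its hostname.
        host, _, _ = socket.gethostbyaddr(ip)
        # Step 2: the hostname must belong to a Google crawler domain.
        if not host.endswith(GOOGLE_DOMAINS):
            return False
        # Step 3: forward lookup the hostname and confirm it resolves
        # back to the original IP, which defeats spoofed PTR records.
        _, _, addrs = socket.gethostbyname_ex(host)
        return ip in addrs
    except (socket.herror, socket.gaierror):
        return False

print(verify_googlebot("66.249.66.1"))  # True for genuine Googlebot
```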
Block/Allow in Firewall
If you want to block or allow only legitimate Googlebot, use the CIDR ranges from this list. Example for iptables:
```bash
# Allow Googlebot
iptables -A INPUT -s 66.249.64.0/19 -j ACCEPT
```
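Because the list changes, the rules are better generated than typed by hand. A sketch that turns the fetched CIDR list into iptables/ip6tables commands, reusing the fetch_googlebot_networks helper sketched earlier (the fetch_networks module name is hypothetical):

```python
import ipaddress

# Hypothetical module containing the fetch helper sketched earlier.
from fetch_networks import fetch_googlebot_networks

def emit_firewall_rules() -> None:
    """Print an ACCEPT rule for every official Googlebot network."""
    for cidr in fetch_googlebot_networks():
        # ip6tables handles IPv6 ranges, iptables handles IPv4.
        tool = "ip6tables" if ipaddress.ip_network(cidr).version == 6 else "iptables"
        print(f"{tool} -A INPUT -s {cidr} -j ACCEPT")

if __name__ == "__main__":
    emit_firewall_rules()
```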
Use in Nginx
```nginx
# geo block to identify Googlebot
geo $is_googlebot {
    default 0;
    66.249.64.0/19 1;
    # ... other networks
}
```
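Hand-maintaining that geo map doesn't scale to hundreds of networks, so it can also be generated from the feed. A sketch that writes an includable nginx file (googlebot_geo.conf and the fetch_networks module are assumed names from the earlier sketches):

```python
# Hypothetical module containing the fetch helper sketched earlier.
from fetch_networks import fetch_googlebot_networks

def write_nginx_geo(path: str = "googlebot_geo.conf") -> None:
    """Write a geo map that sets $is_googlebot to 1 for official networks."""
    with open(path, "w") as f:
        f.write("geo $is_googlebot {\n    default 0;\n")
        for cidr in fetch_googlebot_networks():
            f.write(f"    {cidr} 1;\n")
        f.write("}\n")

if __name__ == "__main__":
    write_nginx_geo()
```

The generated file can be pulled into the http block with an include directive and refreshed on the same daily schedule as the feed.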
Sources
- Google Search Central, "Verifying Googlebot and other Google crawlers": https://developers.google.com/search/docs/crawling-indexing/verifying-googlebot
- Official IP range feeds: googlebot.json, special-crawlers.json, and user-triggered-fetchers.json under https://developers.google.com/static/search/apis/ipranges/
Last updated: Data updates daily from Google's official API.
