Complete Googlebot IP Address List - Updated Daily

Nati Elimelech·November 8, 2025·4 min read

Complete and up-to-date list of all Googlebot, Special Crawlers, and User-Triggered Fetchers IP addresses and networks. Includes interactive IP checker tool.


Complete and up-to-date list of all Google crawler IP addresses and networks (CIDR ranges). Data is fetched directly from Google Search Central’s official API and updates daily.

Check IP Address
Network (CIDR)         | Version | Type            | Reverse DNS
2001:4860:4801:10::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:12::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:13::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:14::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:15::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:16::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:17::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:18::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:19::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
2001:4860:4801:1a::/64 | IPv6    | Common Crawlers | crawl-***.googlebot.com
Total: 1,937 IP networks (first 10 shown above)

Why Do You Need This List?

If you manage a website, application, or server, it’s important to identify legitimate Googlebot requests:

  1. Security - Block fake bots pretending to be Googlebot
  2. Optimization - Prioritize legitimate Google requests
  3. Monitoring - Identify Googlebot crawl patterns in logs
  4. Debugging - Troubleshoot indexing issues in Search Console

Google’s 3 Types of Crawlers

Google uses 3 main types of crawlers, each with its own purpose and IP list:

1. Common Crawlers (Regular Googlebot)

Regular crawlers used for Google products like Google Search. Always respect robots.txt.

  • Reverse DNS: crawl-***-***-***-***.googlebot.com or geo-crawl-***-***-***-***.geo.googlebot.com
  • Quantity: 164 networks (128 IPv6 + 36 IPv4)

2. Special-Case Crawlers

Crawlers that perform specific functions (like AdsBot) where there’s an agreement between the site and the product. May ignore robots.txt.

  • Reverse DNS: rate-limited-proxy-***-***-***-***.google.com
  • Quantity: 280 networks (128 IPv6 + 152 IPv4)

3. User-Triggered Fetchers

Tools and functions where the end user triggers the fetch (like Google Site Verifier). Ignore robots.txt because they’re user-initiated.

  • Reverse DNS:
    • ***-***-***-***.gae.googleusercontent.com
    • google-proxy-***-***-***-***.google.com
  • Quantity: 1,351 networks (724 IPv6 + 627 IPv4)
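
All three lists are published as machine-readable JSON files on Google Search Central, which is where the data on this page comes from. The sketch below is a minimal Python example that downloads them and counts the networks in each; the URLs and the JSON shape (a "prefixes" array whose entries carry either an "ipv4Prefix" or an "ipv6Prefix" key) follow Google's published documentation, but verify them against the docs before relying on this.

import json
import urllib.request

# Official JSON endpoints documented on Google Search Central.
LISTS = {
    "Common Crawlers": "https://developers.google.com/static/search/apis/ipranges/googlebot.json",
    "Special-Case Crawlers": "https://developers.google.com/static/search/apis/ipranges/special-crawlers.json",
    "User-Triggered Fetchers": "https://developers.google.com/static/search/apis/ipranges/user-triggered-fetchers.json",
}

def fetch_prefixes(url):
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Each entry holds either an "ipv4Prefix" or an "ipv6Prefix" key.
    return [p.get("ipv4Prefix") or p.get("ipv6Prefix") for p in data["prefixes"]]

for name, url in LISTS.items():
    print(f"{name}: {len(fetch_prefixes(url))} networks")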

How to Use This List

Check Using the On-Page Tool

Enter an IP address in the search field above and the tool will automatically check if it’s in one of the official networks.
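
If you'd rather script the same check offline, the logic is a plain CIDR membership test. A minimal Python sketch, assuming you've already downloaded one of the JSON lists (see the fetch example above):

import ipaddress

def ip_in_networks(ip, cidrs):
    addr = ipaddress.ip_address(ip)
    for cidr in cidrs:
        net = ipaddress.ip_network(cidr)
        # Containment is always False across versions (IPv4 vs IPv6),
        # so passing in a mixed list is safe.
        if addr.version == net.version and addr in net:
            return True
    return False

print(ip_in_networks("66.249.66.1", ["66.249.64.0/19"]))  # True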

Check Using Reverse DNS

# Linux / macOS
host 66.249.66.1

# Windows
nslookup 66.249.66.1

If it’s legitimate Googlebot, you’ll get a result like:

1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com
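
A reverse DNS result alone can be spoofed by anyone who controls their own PTR records, so follow it with a forward lookup and confirm the hostname resolves back to the original IP. A minimal Python sketch of that two-step check; the allowed hostname suffixes follow Google's verification documentation:

import socket

ALLOWED_SUFFIXES = (".googlebot.com", ".google.com", ".googleusercontent.com")

def verify_googlebot(ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    except OSError:
        return False
    if not hostname.endswith(ALLOWED_SUFFIXES):
        return False
    try:
        # Forward-confirm: the hostname must resolve back to the same IP,
        # otherwise the PTR record could be forged.
        forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except OSError:
        return False
    return ip in forward_ips

print(verify_googlebot("66.249.66.1"))  # True for genuine Googlebot IPs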

Block/Allow in Firewall

If you want to block or allow only legitimate Googlebot traffic, use the CIDR ranges from the list above. Example for iptables:

# Allow Googlebot
iptables -A INPUT -s 66.249.64.0/19 -j ACCEPT
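
One rule per network quickly becomes unmanageable by hand, so in practice you'd generate the rules from the downloaded JSON. A hedged Python sketch (the chain and ACCEPT policy here are illustrative, not a recommendation):

import ipaddress

def iptables_rules(prefixes):
    rules = []
    for cidr in prefixes:
        # IPv4 ranges go to iptables; IPv6 ranges need ip6tables.
        tool = "iptables" if ipaddress.ip_network(cidr).version == 4 else "ip6tables"
        rules.append(f"{tool} -A INPUT -s {cidr} -j ACCEPT")
    return rules

for rule in iptables_rules(["66.249.64.0/19", "2001:4860:4801:10::/64"]):
    print(rule)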

Use in Nginx

# geo block to identify Googlebot
geo $is_googlebot {
    default 0;
    66.249.64.0/19 1;
    # ... other networks
}
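
As with the firewall rules, you wouldn't type nearly 2,000 geo entries by hand. A small Python sketch that renders the prefix list into an includable nginx config file (the output filename is illustrative):

def render_geo_block(prefixes):
    lines = ["geo $is_googlebot {", "    default 0;"]
    lines += [f"    {cidr} 1;" for cidr in prefixes]
    lines.append("}")
    return "\n".join(lines)

with open("googlebot_geo.conf", "w") as f:
    f.write(render_geo_block(["66.249.64.0/19", "2001:4860:4801:10::/64"]))

You can then branch on $is_googlebot elsewhere in the config, for example to exempt Googlebot from rate limiting.
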
Frequently Asked Questions

How can I verify a request really comes from Googlebot?
Use the IP checker tool at the top of this page: enter the IP address and check whether it appears in Google's official list. Additionally, perform a reverse DNS lookup and verify that the hostname ends with googlebot.com or google.com.

How many IP addresses does Googlebot use?
Google uses hundreds of different IP networks. The list includes approximately 1,900 prefixes (CIDR blocks) covering both IPv4 and IPv6, and it updates automatically from Google's official API.

What's the difference between Common Crawlers and Special Crawlers?
Common Crawlers (regular Googlebot) always respect robots.txt rules. Special Crawlers (like AdsBot) may ignore robots.txt in some cases, depending on the agreement with the site owner.

Does Googlebot always respect robots.txt?
It depends on the crawler type: Common Crawlers always respect robots.txt. Special-case crawlers may ignore it in certain cases. User-triggered fetchers ignore robots.txt because they're initiated by user request.

Sources

  • Verifying Googlebot and other Google crawlers (Google Search Central): https://developers.google.com/search/docs/crawling-indexing/verifying-googlebot
  • Official crawler IP lists in JSON: googlebot.json, special-crawlers.json, and user-triggered-fetchers.json, linked from the page above

Last updated: Data updates daily from Google’s official API.

AUTHOR
Nati Elimelech
Leading TECH SEO expert with 20+ years of experience. Former Director of SEO at Wix, where I built SEO systems serving millions of websites. I specialize in solving complex technical SEO challenges at enterprise scale and translating SEO requirements into language that product and engineering teams understand.