Looking for a Funda scraper in Python? The Dutch real estate market moves fast. Whether you're tracking price trends, monitoring competitor listings, or building a data pipeline, this guide shows you how to build a Funda scraper in Python and get structured property data at scale using ScrapingBot's API.
1. Why scrape Funda?
Funda is the leading real estate platform in the Netherlands, with over 30,000 active listings at any given time. It's used by buyers, sellers, agents, and investors alike. Scraping it gives you access to:
- Price history and market trends by neighborhood
- Listing density and time-on-market data
- Rental vs. sale inventory ratios
- Competitor agent activity
2. Technical challenges
Before diving into the code, it's important to understand why scraping Funda isn't as simple as a basic HTTP request:
- Anti-bot protection — Funda detects headless browsers and blocks requests with suspicious headers.
- JavaScript rendering — Listing data is loaded dynamically, so basic HTTP requests won't capture it.
- IP rate limiting — Too many requests from one IP triggers temporary bans.
- Pagination — Search results span hundreds of pages that need systematic traversal.
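The rate-limiting issue in particular is worth understanding even if ScrapingBot handles it for you. A standard mitigation is exponential backoff: when a server answers with HTTP 429 (Too Many Requests), wait and retry with a growing delay. Here's a minimal sketch; the `fetch` callable and the retry parameters are illustrative, not Funda's actual behavior:

```python
import time

def fetch_with_backoff(fetch, url, max_retries=4, base_delay=1.0):
    """Retry a fetch callable with exponential backoff on rate limiting.

    `fetch` is any callable taking a URL and returning an object with a
    `status_code` attribute; 429 is treated as "slow down and retry".
    """
    for attempt in range(max_retries):
        response = fetch(url)
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, 8s...
    raise RuntimeError(f"Still rate-limited after {max_retries} retries: {url}")
```

Passing the fetcher in as a callable keeps the backoff logic testable without making real network requests.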
3. How ScrapingBot handles them
ScrapingBot's Real Estate API abstracts all of this complexity: it rotates IPs automatically, renders JavaScript, and returns clean structured JSON — no browser automation needed on your end.
4. Step-by-step: build your Funda scraper in Python
Install the library
```
pip install requests
```

Basic request
```python
import requests

# Your ScrapingBot credentials
USERNAME = "your_username"
API_KEY = "your_api_key"

# The Funda listing you want to scrape
TARGET_URL = "https://www.funda.nl/koop/amsterdam/huis-12345678/"

def scrape_funda(url):
    api_url = "https://api.scraping-bot.io/scrape/real-estate"
    payload = {"url": url}
    response = requests.post(
        api_url,
        json=payload,
        auth=(USERNAME, API_KEY),
    )
    if response.status_code == 200:
        return response.json()
    raise Exception(f"Error {response.status_code}: {response.text}")

data = scrape_funda(TARGET_URL)
print(data)
```

Scraping multiple listings (with pagination)
```python
import time

BASE_SEARCH = "https://www.funda.nl/koop/amsterdam/p{page}/"

def scrape_pages(n_pages=5):
    results = []
    for page in range(1, n_pages + 1):
        url = BASE_SEARCH.format(page=page)
        data = scrape_funda(url)
        results.extend(data.get("listings", []))
        time.sleep(1)  # polite delay between pages
    return results

listings = scrape_pages(n_pages=3)
print(f"Collected {len(listings)} listings")
```

5. Sample output data
Here's what a typical response looks like for a single Funda listing:
| Field | Example value | Type |
|---|---|---|
| address | Keizersgracht 123, Amsterdam | string |
| price | € 875,000 | string |
| price_per_m2 | € 7,291 | string |
| surface_m2 | 120 | integer |
| rooms | 4 | integer |
| listing_date | 2023-02-10 | string |
| energy_label | B | string |
| agent | Makelaar Amsterdam BV | string |
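Note that the price fields come back as formatted strings rather than numbers. Before doing any analysis, you'll likely want to normalize them. A small helper, assuming the `€ 875,000` format shown in the table above:

```python
import re

def parse_euro_price(value):
    """Convert a formatted price string like '€ 875,000' to an integer.

    Returns None when the string contains no digits (e.g. a
    "price on request" listing).
    """
    digits = re.sub(r"[^\d]", "", value)
    return int(digits) if digits else None
```

For example, `parse_euro_price("€ 875,000")` yields `875000`, ready for sorting or aggregation.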
6. Going further
Once you have the raw data from your Funda scraper, you can pipe it into a CSV with pandas, store it in a PostgreSQL database, or feed it into a price-trend dashboard. ScrapingBot also supports Airbnb, Zillow, and Rightmove through the same API interface.
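The CSV step needs nothing beyond the standard library. A sketch, using the field names from the sample output table above (adjust `FIELDS` to match the actual response you get):

```python
import csv

# Columns to keep, taken from the sample output table
FIELDS = ["address", "price", "surface_m2", "rooms", "energy_label", "agent"]

def listings_to_csv(listings, path):
    """Write a list of listing dicts to CSV, ignoring unknown keys."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(listings)
```

`extrasaction="ignore"` means extra keys in the API response won't break the export if ScrapingBot adds fields later.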
Ready to try it? Get 1,000 free API calls when you sign up for ScrapingBot.
Try ScrapingBot for free →