How to Clone Your Best Customers: Pipedrive Python Code Tutorial

What if you could take your best customers, find companies just like them, and turn that insight into a list of enriched, qualified prospects—ready to engage, right inside your CRM? Sounds like a dream, right? Well, Surfe’s Contact Enrichment API is here to make dreams come true (within reason).

In this tutorial, you’ll build a Python script that analyzes your recently closed deals in Pipedrive, identifies lookalike companies using Surfe’s matching engine, enriches them with verified data, and creates fully populated prospect records—complete with scoring, field mapping, and pipeline assignment.

By the end of this tutorial, you’ll have a fully functioning Python script that:

  • Fetches recently closed deals directly from Pipedrive
  • Extracts company domains and patterns from successful customers
  • Finds lookalike companies using Surfe’s advanced matching algorithms
  • Creates enriched prospect organizations in Pipedrive with custom fields
  • Prioritizes prospects based on similarity scores and company metrics
  • Automatically assigns prospects to sales pipelines for immediate follow-up

Let’s get into it.

Want the full script? Jump to the bottom or view it on GitHub!

Prerequisites

1. Python 3.x Installation

Most modern operating systems come with Python 3 pre-installed. To check if Python is installed on your system:

Windows: Open Command Prompt (Win + R, type cmd, press Enter) and run:

py --version

macOS/Linux: Open Terminal and run:

python3 --version

If Python is not installed, download it from the official Python website and follow the installation instructions for your OS.

2. Basic Python Programming Knowledge

You should be comfortable with basic Python concepts like functions, variables, and API calls.

3. Pipedrive Account with API Access

To fetch deals and create prospects, you’ll need a Pipedrive account with API access. Here’s how to find it.

4. Surfe Account and API Key

To use Surfe’s API, you’ll need to create an account and obtain an API key. You can find the API documentation and instructions for generating your API key in the Surfe Developer Docs.

Setting Up Your Environment

Let’s begin by setting up your development environment and installing the necessary dependencies.

Creating a Virtual Environment (Optional but Recommended)

Creating a virtual environment is recommended to keep your project dependencies organized:

# macOS/Linux
python3 -m venv env
source env/bin/activate

# Windows
py -m venv env
env\Scripts\activate

Installing Required Packages

Install the necessary Python packages:

# macOS/Linux
python3 -m pip install requests python-dotenv

# Windows
py -m pip install requests python-dotenv

Storing Your API Keys Securely

Never hardcode API keys in your script. Instead, store them as environment variables.

Create a file named .env in your project’s root directory:

# Pipedrive Configuration
PIPEDRIVE_API_KEY=your_pipedrive_api_token
PIPEDRIVE_PIPELINE_ID=your_target_pipeline_id_optional
PIPEDRIVE_STAGE_ID=your_target_stage_id_optional
PIPEDRIVE_DEFAULT_OWNER_ID=your_default_owner_id_optional

# Surfe Configuration
SURFE_API_KEY=your_surfe_api_key

Create a Python script file named main.py and add the necessary imports, then create a main function and load all the environment variables:

import os
import sys
import time
from datetime import datetime
from dotenv import load_dotenv
def main():
    # Load environment variables
    load_dotenv()
    # Get API keys from environment
    pipedrive_api_key = os.getenv("PIPEDRIVE_API_KEY")
    surfe_api_key = os.getenv("SURFE_API_KEY")
    # Pipedrive configuration (optional)
    pipeline_id = os.getenv("PIPEDRIVE_PIPELINE_ID")
    stage_id = os.getenv("PIPEDRIVE_STAGE_ID")
    owner_id = os.getenv("PIPEDRIVE_DEFAULT_OWNER_ID")
    # Configuration
    days_back = 30       # Look at deals closed in the last 30 days
    max_lookalikes = 10  # Maximum lookalike companies to find
    # Validate required environment variables
    if not pipedrive_api_key or not surfe_api_key:
        print("❌ Error: Missing API keys. Please check your .env file.")
        return

Step 1: Creating the Pipedrive Service

We’ll start by creating a Pipedrive service with all the methods needed to interact with the Pipedrive API. This service will handle fetching and creating deals, creating organizations, and formatting data for Surfe’s API. The first step is to initialize the class and create a core request helper method:

import json
import requests
from datetime import datetime, timezone, timedelta

class PipedriveService:
    def __init__(self, api_key, api_base_url="https://api.pipedrive.com/api/v2"):
        self.api_key = api_key
        self.api_base_url = api_base_url
        self._org_custom_fields = None  # Cache for custom field lookups

    def _make_request(self, method, endpoint, params=None, data=None, json_data=None):
        url = f"{self.api_base_url}/{endpoint}"

        # Ensure the API token is included in all requests
        if params is None:
            params = {}
        params["api_token"] = self.api_key

        payload = json.dumps(json_data) if json_data else data if data else None
        headers = {
            "Accept": "application/json",
            "Content-Type": "application/json",
        }
        response = requests.request(
            method=method,
            url=url,
            params=params,
            headers=headers,
            data=payload,
        )

        response.raise_for_status()
        return response.json()

Fetching Recently Closed Deals

This method retrieves deals that were recently closed as won, which represent your successful customers:

def get_deals(self, limit=100, status=None, sort_by=None, sort_direction='asc', owner_id=None, stage_id=None, filter_id=None):
    params = {
        "limit": limit,
        "status": status,
    }
    
    # Add optional parameters if provided
    if sort_by:
        params["sort_by"] = sort_by
    if sort_direction:
        params["sort_direction"] = sort_direction
    if owner_id:
        params["owner_id"] = int(owner_id)
    if stage_id:
        params["stage_id"] = int(stage_id)
    if filter_id:
        params["filter_id"] = filter_id
    
    response = self._make_request("GET", "deals", params=params)
    
    if not response.get("success"):
        raise Exception(f"Failed to get deals: {response.get('error')}")
    
    return response.get("data", [])

def get_recently_closed_deals(self, days_back=30, limit=100):
    
    # Get won deals
    deals = self.get_deals(
        limit=limit,
        status="won",
        sort_by="update_time",
        sort_direction="desc"
    )
    
    # Filter by date
    cutoff_date = datetime.now(timezone.utc) - timedelta(days=days_back)
    recent_deals = []
    
    for deal in deals:
        # Check if deal was updated recently (this includes when it was won)
        if deal.get("update_time"):
            update_time = datetime.fromisoformat(deal["update_time"].replace("Z", "+00:00"))
            if update_time >= cutoff_date:
                # Enrich deal with organization data if not already included
                if deal.get("org_id") and not isinstance(deal["org_id"], dict):
                    try:
                        org_data = self.get_organization_by_id(deal["org_id"])
                        deal["org_id"] = org_data
                    except Exception:
                        # If we can't get org data, skip this deal
                        continue
                recent_deals.append(deal)
    
    return recent_deals
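The date filter above hinges on parsing Pipedrive’s ISO-8601 `update_time` values: `datetime.fromisoformat` rejects a trailing `Z` before Python 3.11, hence the `replace("Z", "+00:00")` workaround. Here’s that logic pulled out into a small standalone check (the `is_recent` helper name is ours, just for illustration):

```python
from datetime import datetime, timedelta, timezone

def is_recent(update_time_str, days_back=30, now=None):
    """Return True if an ISO-8601 timestamp falls within the last `days_back` days."""
    now = now or datetime.now(timezone.utc)
    # fromisoformat() rejects a trailing "Z" before Python 3.11, so normalize it
    update_time = datetime.fromisoformat(update_time_str.replace("Z", "+00:00"))
    return update_time >= now - timedelta(days=days_back)

# Fixed "now" so the example is deterministic
now = datetime(2024, 6, 30, tzinfo=timezone.utc)
print(is_recent("2024-06-15T12:00:00Z", days_back=30, now=now))  # True
print(is_recent("2024-01-01T12:00:00Z", days_back=30, now=now))  # False
```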

Step 2: Building the Surfe Integration

Now we’ll create the Surfe service to handle company domain extraction and lookalike discovery:

class SurfeService:
    def __init__(self, api_key, version="v1"):
        self.version = version
        self.api_key = api_key
        self.base_url = f"https://api.surfe.com/{version}"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }

    def extract_companies_from_deals(self, deals_data):
        """
        Extract unique company domains from deals data

        Args:
            deals_data: List of deals from Pipedrive

        Returns:
            List of unique company domains
        """
        domains = set()
        for deal in deals_data:
            # Try to extract domain from organization data
            if deal.get("org_id") and isinstance(deal["org_id"], dict):
                org_data = deal["org_id"]
                # Check for direct domain field
                if org_data.get("domain"):
                    domains.add(org_data["domain"])
                # Extract domain from website URL
                elif org_data.get("website"):
                    website = org_data["website"]
                    if website.startswith(("http://", "https://")):
                        domain = website.split("//")[1].split("/")[0]
                    else:
                        domain = website.split("/")[0]
                    domains.add(domain)
            # Extract from person email if organization domain not available
            if deal.get("person_id") and isinstance(deal["person_id"], dict):
                person_data = deal["person_id"]
                emails = person_data.get("email", [])
                if isinstance(emails, list):
                    for email_obj in emails:
                        if isinstance(email_obj, dict) and email_obj.get("value"):
                            email = email_obj["value"]
                        else:
                            email = str(email_obj)
                        if "@" in email:
                            domain = email.split("@")[1]
                            # Skip common email providers
                            if domain not in ["gmail.com", "outlook.com", "yahoo.com", "hotmail.com"]:
                                domains.add(domain)
        return list(domains)

    def search_company_lookalikes(self, company_domains, limit=10):
        """
        Search for companies similar to the provided company domains

        Args:
            company_domains: List of company domains to find lookalikes for
            limit: Maximum number of lookalike companies to return

        Returns:
            Lookalike company search results
        """
        url = f"{self.base_url}/organizations/lookalikes"
        payload = {
            "domains": company_domains,
            "maxResults": min(max(limit, 1), 10),
        }
        response = requests.post(url, headers=self.headers, json=payload)
        response.raise_for_status()
        return response.json()
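The exact response payload depends on your plan and data, but the rest of this tutorial relies only on the `organizations` array and a few fields on each entry (`name`, `domain`, `industries`). A small parsing sketch — note that `summarize_lookalikes` and the sample payload are hypothetical illustrations shaped like the fields used later, not taken verbatim from the API docs:

```python
def summarize_lookalikes(response):
    """Flatten a lookalike search response into 'Name (domain)' strings."""
    summaries = []
    for org in response.get("organizations", []):
        name = org.get("name", "Unknown")
        domain = org.get("domain", "n/a")
        summaries.append(f"{name} ({domain})")
    return summaries

# Hypothetical response, shaped like the fields this tutorial uses downstream
sample = {
    "organizations": [
        {"name": "Acme Corp", "domain": "acme.com", "industries": ["Software"]},
        {"name": "Globex", "domain": "globex.io", "industries": ["SaaS"]},
    ]
}
print(summarize_lookalikes(sample))  # ['Acme Corp (acme.com)', 'Globex (globex.io)']
```

Using `.get(...)` with defaults keeps the parsing resilient if a field is missing from a given organization.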

Step 3: Implementing Custom Fields

Pipedrive doesn’t have standard fields for website and industry information, so we need to create and manage custom fields:

def get_organization_custom_fields(self):
    """Get and cache organization custom fields"""
    if self._org_custom_fields is None:
        response = self._make_request("GET", "organizationFields")
        if response.get("success"):
            self._org_custom_fields = {}
            for field in response.get("data", []):
                name = field.get("name", "").lower()
                key = field.get("key", "")
                if name and key:
                    self._org_custom_fields[name] = key
        else:
            self._org_custom_fields = {}
    return self._org_custom_fields

def create_website_custom_field(self):
    """Create a custom field for website/domain if it doesn't exist"""
    custom_fields = self.get_organization_custom_fields()

    # Check if a website field already exists
    website_key = custom_fields.get("website") or custom_fields.get("domain")
    if website_key:
        return website_key

    # Create a new website custom field
    field_data = {
        "name": "Website",
        "field_type": "varchar",
        "add_visible_flag": True,
    }
    response = self._make_request("POST", "organizationFields", json_data=field_data)
    if response.get("success"):
        field_key = response["data"]["key"]
        self._org_custom_fields["website"] = field_key
        return field_key
    else:
        raise Exception(f"Failed to create website custom field: {response.get('error')}")

def create_category_custom_field(self):
    """Create a custom field for industry/category if it doesn't exist"""
    custom_fields = self.get_organization_custom_fields()

    # Check if a category field already exists
    category_key = (custom_fields.get("category") or
                    custom_fields.get("industry") or
                    custom_fields.get("industries"))
    if category_key:
        return category_key

    # Create a new industry custom field
    field_data = {
        "name": "Industry",
        "field_type": "varchar",
        "add_visible_flag": True,
    }
    response = self._make_request("POST", "organizationFields", json_data=field_data)
    if response.get("success"):
        field_key = response["data"]["key"]
        self._org_custom_fields["industry"] = field_key
        return field_key
    else:
        raise Exception(f"Failed to create industry custom field: {response.get('error')}")

Step 4: Creating Prospects from Lookalikes

Now we’ll implement the core functionality to create prospect organizations and deals:

def create_prospects_from_lookalikes(self, lookalike_companies, pipeline_id=None, stage_id=None, owner_id=None):
    """
    Create prospects in Pipedrive from lookalike companies

    Args:
        lookalike_companies: List of company data from Surfe
        pipeline_id: Pipeline ID for deals (optional)
        stage_id: Stage ID for deals (optional)
        owner_id: User ID to assign deals to (optional)

    Returns:
        List of created prospects with deal and organization IDs
    """
    created_prospects = []

    # Get or create custom fields for website and category
    try:
        website_field_key = self.create_website_custom_field()
        category_field_key = self.create_category_custom_field()
    except Exception as e:
        print(f"Warning: Could not create custom fields: {e}")
        website_field_key = None
        category_field_key = None

    for company in lookalike_companies:
        try:
            # Check if the organization already exists
            existing_org = None
            if company.get("name"):
                existing_org = self.search_organization({
                    "term": company["name"],
                    "fields": "name",
                })

            if not existing_org or not existing_org.get("items"):
                # Create a new organization
                org_data = {
                    "name": company.get("name", "Unknown Company"),
                }
                # Add website using the custom field
                if company.get("domain") and website_field_key:
                    domain = company["domain"]
                    if not domain.startswith(("http://", "https://")):
                        domain = f"https://{domain}"
                    org_data[website_field_key] = domain
                # Add industry using the custom field
                if (company.get("industries") and
                        isinstance(company["industries"], list) and
                        category_field_key):
                    org_data[category_field_key] = company["industries"][0]
                created_org = self.create_organization(org_data)
                org_id = created_org.get("id")
            else:
                org_id = existing_org["items"][0]["item"]["id"]

            # Create a deal for this prospect
            deal_data = {
                "title": f"Lookalike Prospect - {company.get('name', 'Unknown')}",
                "org_id": org_id,
                "value": 5000,  # Default prospect value
                "currency": "USD",
            }
            if pipeline_id:
                deal_data["pipeline_id"] = pipeline_id
            if stage_id:
                deal_data["stage_id"] = stage_id
            if owner_id:
                deal_data["user_id"] = owner_id

            created_deal = self.create_deal(deal_data)

            prospect_info = {
                "company": company,
                "organization_id": org_id,
                "deal_id": created_deal.get("id"),
            }
            created_prospects.append(prospect_info)
        except Exception as e:
            print(f"Failed to create prospect for {company.get('name', 'Unknown')}: {str(e)}")
            continue

    return created_prospects

Step 5: Putting It All Together

Now let’s combine everything into the main orchestration function:

def main():
    """Main function to orchestrate the lookalike company discovery process"""
    load_dotenv()

    # Get API keys from environment
    pipedrive_api_key = os.getenv("PIPEDRIVE_API_KEY")
    surfe_api_key = os.getenv("SURFE_API_KEY")

    # Pipedrive configuration
    pipeline_id = os.getenv("PIPEDRIVE_PIPELINE_ID")
    stage_id = os.getenv("PIPEDRIVE_STAGE_ID")
    owner_id = os.getenv("PIPEDRIVE_DEFAULT_OWNER_ID")

    days_back = 30
    max_lookalikes = 10

    if not pipedrive_api_key or not surfe_api_key:
        print("❌ Error: Missing API keys. Please check your .env file.")
        return

    try:
        # Initialize services
        print("🚀 Initializing services...")
        pipedrive_service = PipedriveService(pipedrive_api_key)
        surfe_service = SurfeService(surfe_api_key, version="v1")

        # Step 1: Get recently closed deals from Pipedrive
        print(f"📊 Fetching recently closed deals from last {days_back} days...")
        recent_deals = pipedrive_service.get_recently_closed_deals(days_back=days_back, limit=100)

        if not recent_deals:
            print(f"❌ No closed deals found in the last {days_back} days.")
            return

        print(f"✅ Found {len(recent_deals)} recently closed deals")

        # Step 2: Extract company domains from deals
        print("🔍 Extracting company domains from closed deals...")
        company_domains = surfe_service.extract_companies_from_deals(recent_deals)

        if not company_domains:
            print("❌ No company domains could be extracted from recent deals.")
            return

        print(f"✅ Extracted {len(company_domains)} unique company domains")
        print("📋 Sample domains:", ", ".join(company_domains[:5]))

        # Step 3: Find lookalike companies using Surfe API
        print(f"🎯 Searching for lookalike companies (max {max_lookalikes})...")
        lookalike_results = surfe_service.search_company_lookalikes(
            company_domains=company_domains,
            limit=max_lookalikes
        )
        lookalike_companies = lookalike_results.get("organizations", [])

        if not lookalike_companies:
            print("❌ No lookalike companies found.")
            return

        print(f"✅ Found {len(lookalike_companies)} lookalike companies")

        # Step 4: Create prospects in Pipedrive
        create_prospects = input("📝 Create these prospects in Pipedrive? (y/N): ").lower().strip()

        if create_prospects == 'y':
            print("\n💼 Creating prospect organizations and deals in Pipedrive...")
            created_prospects = pipedrive_service.create_prospects_from_lookalikes(
                lookalike_companies=lookalike_companies,
                pipeline_id=pipeline_id,
                stage_id=stage_id,
                owner_id=owner_id
            )
            if created_prospects:
                print(f"✅ Successfully created {len(created_prospects)} prospects in Pipedrive")

                # Calculate total pipeline value
                total_value = len(created_prospects) * 5000

                print("\n📊 INTEGRATION SUMMARY:")
                print("=" * 50)
                print(f"Source deals analyzed: {len(recent_deals)}")
                print(f"Company domains extracted: {len(company_domains)}")
                print(f"Lookalike companies found: {len(lookalike_companies)}")
                print(f"Prospects created in Pipedrive: {len(created_prospects)}")
                print(f"Total pipeline value: ${total_value:,}")
                print(f"Time period analyzed: Last {days_back} days")
            else:
                print("❌ No prospects were created in Pipedrive")
        else:
            print("ℹ️  Prospect creation skipped. You can run the script again to create prospects.")
    except Exception as e:
        print(f"❌ Error: {str(e)}")

if __name__ == "__main__":
    main()

Step 6: Running the Script

Execute the Code

Save your complete script as main.py and run it:

# macOS/Linux
python3 main.py

# Windows
py main.py

Expected Output

When you run the script successfully, you should see output similar to this:

🚀 Initializing services...
📊 Fetching recently closed deals from last 30 days...
✅ Found 15 recently closed deals
🔍 Extracting company domains from closed deals...
✅ Extracted 12 unique company domains
📋 Sample domains: acme.com, techcorp.com, innovate.io, globaldyn.com, nextgen.co
🎯 Searching for lookalike companies (max 10)...
✅ Found 8 lookalike companies
📝 Create these prospects in Pipedrive? (y/N): y

💼 Creating prospect organizations and deals in Pipedrive...
✅ Successfully created 8 prospects in Pipedrive

📊 INTEGRATION SUMMARY:
==================================================
Source deals analyzed: 15
Company domains extracted: 12
Lookalike companies found: 8
Prospects created in Pipedrive: 8
Total pipeline value: $40,000
Time period analyzed: Last 30 days

Advanced Features

Priority Scoring

You can enhance the script with priority scoring based on company characteristics:

def calculate_prospect_priority(company, similarity_score):
    """Calculate prospect priority based on company data and similarity score"""
    priority = similarity_score * 100  # Base priority from similarity

    # Boost priority based on employee count
    employee_count = company.get("employeeCount", 0)
    if employee_count > 1000:
        priority += 20
    elif employee_count > 100:
        priority += 10
    elif employee_count > 10:
        priority += 5

    # Boost priority based on revenue
    revenue = company.get("revenue", "")
    if isinstance(revenue, str):
        if "100M+" in revenue or "billion" in revenue.lower():
            priority += 25
        elif "50M" in revenue or "10M" in revenue:
            priority += 15
        elif "1M" in revenue:
            priority += 10

    return min(priority, 100)  # Cap at 100

Filtering and Validation

Add filtering to focus on high-quality prospects:

def filter_quality_prospects(companies, min_similarity=0.7, min_employees=10):
    """Filter companies based on quality criteria"""
    filtered = []
    for company in companies:
        similarity = company.get("similarityScore", 0)
        employees = company.get("employeeCount", 0)
        if similarity >= min_similarity and employees >= min_employees:
            filtered.append(company)
    return filtered
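Filtering and scoring slot together naturally: drop weak matches first, then sort what remains so your team works the best prospects first. A minimal sketch of that combination — `rank_prospects` is our own illustrative helper, assuming each company dict carries the `similarityScore` and `employeeCount` fields used above:

```python
def rank_prospects(companies, min_similarity=0.7, min_employees=10):
    """Filter out weak matches, then sort the rest by similarity-based priority."""
    qualified = [
        c for c in companies
        if c.get("similarityScore", 0) >= min_similarity
        and c.get("employeeCount", 0) >= min_employees
    ]
    # Highest similarity first; employee count breaks ties
    return sorted(
        qualified,
        key=lambda c: (c.get("similarityScore", 0), c.get("employeeCount", 0)),
        reverse=True,
    )

# Hypothetical lookalike results
companies = [
    {"name": "Globex", "similarityScore": 0.9, "employeeCount": 50},
    {"name": "Initech", "similarityScore": 0.6, "employeeCount": 500},  # filtered: low similarity
    {"name": "Umbrella", "similarityScore": 0.8, "employeeCount": 5},   # filtered: too small
    {"name": "Acme", "similarityScore": 0.95, "employeeCount": 200},
]
ranked = rank_prospects(companies)
print([c["name"] for c in ranked])  # ['Acme', 'Globex']
```

You could run this between Step 3 and Step 4 of the main flow, passing only the ranked list into `create_prospects_from_lookalikes`.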

Complete Code for Easy Integration

import os
import sys
from dotenv import load_dotenv

# Add core directory to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))

# Import core services
from core.surfe import SurfeService
from core.integrations.pipedrive import PipedriveService

def main():
    load_dotenv()

    # Get API keys from environment
    pipedrive_api_key = os.getenv("PIPEDRIVE_API_KEY")
    surfe_api_key = os.getenv("SURFE_API_KEY")

    # Pipedrive configuration
    pipeline_id = os.getenv("PIPEDRIVE_PIPELINE_ID")    # Optional: specific pipeline for prospects
    stage_id = os.getenv("PIPEDRIVE_STAGE_ID")          # Optional: specific stage for prospects
    owner_id = os.getenv("PIPEDRIVE_DEFAULT_OWNER_ID")  # Optional: default owner for prospects

    days_back = 30
    max_lookalikes = 10

    if not pipedrive_api_key or not surfe_api_key:
        print("❌ Error: Missing API keys. Please check your .env file.")
        return

    try:
        # Initialize services
        print("🚀 Initializing services...")
        pipedrive_service = PipedriveService(pipedrive_api_key)
        surfe_service = SurfeService(surfe_api_key, version="v1")

        # Step 1: Get recently closed deals from Pipedrive
        print(f"📊 Fetching recently closed deals from last {days_back} days...")
        recent_deals = pipedrive_service.get_recently_closed_deals(days_back=days_back, limit=100)

        if not recent_deals:
            print(f"❌ No closed deals found in the last {days_back} days.")
            return

        print(f"✅ Found {len(recent_deals)} recently closed deals")

        # Step 2: Extract company domains from deals
        print("🔍 Extracting company domains from closed deals...")
        company_domains = surfe_service.extract_companies_from_deals(recent_deals)

        if not company_domains:
            print("❌ No company domains could be extracted from recent deals.")
            return

        print(f"✅ Extracted {len(company_domains)} unique company domains")
        print("📋 Sample domains:", ", ".join(company_domains[:5]))
        if len(company_domains) > 5:
            print(f"   ... and {len(company_domains) - 5} more")

        # Step 3: Find lookalike companies using Surfe API
        print(f"🎯 Searching for lookalike companies (max {max_lookalikes})...")
        lookalike_results = surfe_service.search_company_lookalikes(
            company_domains=company_domains,
            limit=max_lookalikes
        )

        lookalike_companies = lookalike_results.get("organizations", [])

        if not lookalike_companies:
            print("❌ No lookalike companies found.")
            return

        print(f"✅ Found {len(lookalike_companies)} lookalike companies")

        # Step 4: Create prospects in Pipedrive
        create_prospects = input("📝 Create these prospects in Pipedrive? (y/N): ").lower().strip()

        if create_prospects == 'y':
            print("\n💼 Creating prospect organizations and deals in Pipedrive...")

            created_prospects = pipedrive_service.create_prospects_from_lookalikes(
                lookalike_companies=lookalike_companies,
                pipeline_id=pipeline_id,
                stage_id=stage_id,
                owner_id=owner_id
            )

            if created_prospects:
                print(f"✅ Successfully created {len(created_prospects)} prospects in Pipedrive")

                # Calculate total pipeline value
                total_value = len(created_prospects) * 5000  # Default $5000 per prospect

                print("\n📊 INTEGRATION SUMMARY:")
                print("=" * 50)
                print(f"Source deals analyzed: {len(recent_deals)}")
                print(f"Company domains extracted: {len(company_domains)}")
                print(f"Lookalike companies found: {len(lookalike_companies)}")
                print(f"Prospects created in Pipedrive: {len(created_prospects)}")
                print(f"Total pipeline value: ${total_value:,}")
                print(f"Time period analyzed: Last {days_back} days")
            else:
                print("❌ No prospects were created in Pipedrive")
        else:
            print("ℹ️  Prospect creation skipped. You can run the script again to create prospects.")

    except Exception as e:
        print(f"❌ Error: {str(e)}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    main()

Final Notes: Credits, Quotas, and Rate Limiting

Credits & Quotas

Surfe’s API uses a credit system for people enrichment. Retrieving email, landline, and job details consumes email credits, while retrieving mobile phone numbers consumes mobile credits. There are also daily quotas, such as 2,000 people enrichments per day and 200 organization look-alike searches per day. For more information on credits and quotas, please speak to a Surfe representative to discuss a tailored plan that works for you and your business needs. Quotas reset at midnight (local time), and additional credits can be purchased if needed. For full details, refer to the Credits & Quotas documentation.

Rate Limiting

Surfe enforces rate limits to ensure fair API usage. Users can make up to 10 requests per second, with short bursts of up to 20 requests allowed. The limit resets every minute. Exceeding this results in a 429 Too Many Requests error, so it’s recommended to implement retries in case of rate limit issues. Learn more in the Rate Limits documentation.
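One simple way to handle 429s is a retry wrapper with exponential backoff around any request call. A minimal sketch — the `with_rate_limit_retry` helper below is our own, not part of either API client:

```python
import time
import requests

def with_rate_limit_retry(request_fn, max_retries=3, base_delay=1.0):
    """Call request_fn(), retrying with exponential backoff on HTTP 429."""
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except requests.exceptions.HTTPError as err:
            status = getattr(err.response, "status_code", None)
            if status == 429 and attempt < max_retries:
                time.sleep(base_delay * (2 ** attempt))  # waits 1s, 2s, 4s, ...
            else:
                raise

# Usage with the SurfeService from this tutorial:
# result = with_rate_limit_retry(
#     lambda: surfe_service.search_company_lookalikes(company_domains, limit=10)
# )
```

This works because `raise_for_status()` (used throughout the tutorial) raises `HTTPError` with the response attached, so the wrapper can inspect the status code and back off only on rate-limit errors.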


Your Best Customers Just Became Your Best Leads

Ready to find more companies like your top clients? Fire up Surfe’s Contact Enrichment API and turn customer insights into pipeline.

Contact Enrichment API FAQs

What’s the Benefit of Finding Lookalike Companies?

Your closed-won deals already tell you exactly which kinds of companies buy from you. Lookalike search turns that insight into a ready-made list of similar companies, so your sales team starts from proven patterns instead of cold guesswork, without wasting time on research or bad-fit accounts.

How Does This Script Help with Prospecting?

This script fetches your recently closed-won deals from Pipedrive, extracts the company domains behind them, finds lookalike companies using Surfe’s API, and creates fully-formed organizations and deals back in Pipedrive. That means no CSV exports, manual copy-pasting, or spreadsheet research: it’s one streamlined flow from closed deal to new pipeline.

Do I Need API Access for Pipedrive and Surfe?

Yes. You’ll need a Pipedrive API token and a Surfe API key. Setting these up usually takes just a few minutes through your Pipedrive account settings and the Surfe Developer Docs. We walk you through everything you need in the tutorial.

Can I Run This Script Without a Developer?

If you’re comfortable with basic Python or know how to follow setup instructions, you can absolutely run this on your own. The tutorial is written for non-devs and includes environment setup, API key loading, and all required packages.

What Kind of Data Does Surfe’s API Return for Lookalike Companies?

Surfe’s lookalike search returns company details like the name, domain, and industries used in this tutorial, along with metrics such as employee count, revenue range, and a similarity score you can use for prioritization. The better your input domains, the more relevant your matches will be.

What Happens If a Prospect Already Exists in Pipedrive?

The script includes a check to avoid creating duplicates. Before creating an organization, it searches Pipedrive by company name; if a match is found, it reuses the existing organization and simply attaches the new deal to it, keeping your CRM clean.

How Often Should I Run the Script?

That’s up to your workflow. You can run it after every webinar, on a weekly schedule, or as soon as new registrants appear. Just make sure to monitor your API credit usage if you’re processing large lists frequently.