
How to Automate Upwork Proposals with Python (Real Code Inside)

DEV Community · by Michael Garcia · April 4, 2026 · 9 min read

Last month I sent 47 proposals on Upwork. I personally wrote 3 of them.

The other 44 were drafted by Claude AI, filtered through a scoring algorithm I built over two weekends, and delivered to my inbox via Telegram before most freelancers even saw the job posting. My response rate on those AI-assisted proposals? 31%. Higher than my hand-written average from the previous quarter.

This article shows you exactly how I built that system.

The Real Problem With Upwork Proposals

If you've freelanced on Upwork for more than a month, you know the grind. You refresh the job feed. You see something promising. You spend 20 minutes writing a tailored proposal. You hit submit. Nothing. Meanwhile, the client already hired someone who responded 4 minutes after posting.

The platforms reward speed and volume. A thoughtful proposal submitted 6 hours late loses to a mediocre one submitted in 6 minutes. That's not a hot take — it's arithmetic.

The naive solution is to write faster. The engineering solution is to build a system that monitors the feed continuously, filters out garbage jobs automatically, and generates a tailored first draft the moment something good appears.

Here's the architecture:

  • RSS feed monitor — Upwork exposes RSS feeds for saved searches. We poll these.

  • Scoring engine — Each job gets a score based on keyword match, budget range, and client history signals.

  • Claude AI proposal generator — High-scoring jobs get a tailored draft generated via the Anthropic API.

  • Telegram notifier — The draft and job details land in my Telegram within seconds.

I review, adjust, and submit. The system handles discovery and first drafts. I handle judgment and the final send.
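The notifier step can be sketched with nothing beyond the standard library. The `TELEGRAM_BOT_TOKEN` / `TELEGRAM_CHAT_ID` variable names and the message format here are my choices, not from the system described above; `sendMessage` is the standard Telegram Bot API endpoint.

```python
import json
import os
import urllib.request

def build_notification(title: str, url: str, score: float, draft: str) -> str:
    """Format the job summary and proposal draft into one Telegram message."""
    return (
        f"New job ({score:.1f}/100)\n"
        f"{title}\n{url}\n\n"
        f"--- Draft proposal ---\n{draft}"
    )

def send_telegram(text: str) -> None:
    """POST the message to the Telegram Bot API sendMessage endpoint."""
    token = os.environ["TELEGRAM_BOT_TOKEN"]
    chat_id = os.environ["TELEGRAM_CHAT_ID"]
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{token}/sendMessage",
        data=json.dumps({"chat_id": chat_id, "text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # Telegram returns the sent message as JSON; ignore it
```

Keeping the formatting separate from the HTTP call makes the message layout easy to tweak without touching credentials.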

Important note on Upwork TOS: Upwork's Terms of Service prohibit automated bidding — meaning you cannot auto-submit proposals programmatically. This system does not do that. It automates monitoring and drafting, not submission. You review everything before it goes anywhere. Know the rules, stay inside them.

Step 1: Parsing the Upwork RSS Feed

Upwork generates RSS feeds for your saved searches. Log into Upwork, save a search for your niche, and grab the RSS URL from the feed icon. It looks like:

https://www.upwork.com/ab/feed/jobs/rss?q=python+automation&sort=recency&paging=0%3B10&api_params=1&securityToken=YOUR_TOKEN&userUid=YOUR_UID&orgUid=YOUR_ORG


The token is tied to your session, so treat it like a password.
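Since the URL embeds your session token, one way to keep it out of source control is to load it from an environment variable. The `UPWORK_FEED_URLS` name is illustrative, not anything Upwork defines:

```python
import os

def load_feed_urls(env_var: str = "UPWORK_FEED_URLS") -> list[str]:
    """Read whitespace-separated saved-search RSS URLs from the environment."""
    return [u for u in os.environ.get(env_var, "").split() if u]
```

Export the variable in your shell profile (or a `.env` file you never commit) and the token stays off GitHub.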

Here's the RSS parser and job monitor:

```python
import feedparser
import hashlib
import json
import time
import logging
from pathlib import Path
from dataclasses import dataclass, field
from typing import Optional

logging.basicConfig(level=logging.INFO, format="%(asctime)s — %(levelname)s — %(message)s")
logger = logging.getLogger(__name__)

SEEN_JOBS_FILE = Path("seen_jobs.json")
POLL_INTERVAL_SECONDS = 300  # 5 minutes — don't hammer the feed


@dataclass
class UpworkJob:
    title: str
    url: str
    description: str
    published: str
    budget: Optional[str] = None
    job_type: Optional[str] = None
    skills: list[str] = field(default_factory=list)
    job_id: str = ""

    def __post_init__(self):
        self.job_id = hashlib.md5(self.url.encode()).hexdigest()


def load_seen_jobs() -> set:
    if SEEN_JOBS_FILE.exists():
        return set(json.loads(SEEN_JOBS_FILE.read_text()))
    return set()


def save_seen_jobs(seen: set):
    SEEN_JOBS_FILE.write_text(json.dumps(list(seen)))


def parse_budget_from_description(description: str) -> Optional[str]:
    """
    Upwork embeds budget info in the description HTML, e.g.:
    Budget: $500.00-$1,000.00  or  Hourly Range: $25.00-$50.00/hr
    """
    import re
    patterns = [
        r"Budget:\s*\$?([\d,]+\.?\d*)\s*[-–]\s*\$?([\d,]+\.?\d*)",
        r"Hourly Range:\s*\$?([\d,]+\.?\d*)\s*[-–]\s*\$?([\d,]+\.?\d*)",
        r"Budget:\s*\$?([\d,]+\.?\d*)",
    ]
    for pattern in patterns:
        match = re.search(pattern, description, re.IGNORECASE)
        if match:
            return match.group(0)
    return None


def parse_skills_from_description(description: str) -> list[str]:
    import re
    match = re.search(r"Skills?:\s*([^\n<]+)", description, re.IGNORECASE)
    if match:
        skills_raw = match.group(1)
        return [s.strip() for s in re.split(r"[,;]", skills_raw) if s.strip()]
    return []


def fetch_jobs(feed_url: str) -> list[UpworkJob]:
    feed = feedparser.parse(feed_url)

    if feed.bozo:
        logger.warning(f"Feed parse warning: {feed.bozo_exception}")

    jobs = []
    for entry in feed.entries:
        description = entry.get("summary", "")
        job = UpworkJob(
            title=entry.get("title", "No title"),
            url=entry.get("link", ""),
            description=description,
            published=entry.get("published", ""),
            budget=parse_budget_from_description(description),
            skills=parse_skills_from_description(description),
        )
        jobs.append(job)

    logger.info(f"Fetched {len(jobs)} jobs from feed")
    return jobs


def monitor_feed(feed_urls: list[str], callback):
    """
    Continuously polls feed URLs and calls callback(job) for new jobs.
    """
    seen = load_seen_jobs()

    while True:
        for url in feed_urls:
            try:
                jobs = fetch_jobs(url)
                new_jobs = [j for j in jobs if j.job_id not in seen]

                for job in new_jobs:
                    logger.info(f"New job found: {job.title}")
                    callback(job)
                    seen.add(job.job_id)

                save_seen_jobs(seen)

            except Exception as e:
                logger.error(f"Error fetching feed {url}: {e}")

        logger.info(f"Sleeping {POLL_INTERVAL_SECONDS}s until next poll...")
        time.sleep(POLL_INTERVAL_SECONDS)
```


A few things worth noting about this implementation:

feedparser handles malformed XML gracefully, which matters because Upwork's RSS occasionally has encoding issues — I've seen bozo_exception set on feeds that nevertheless parse fine. The hashlib.md5 job ID means you won't process the same listing twice, even across restarts. And the 5-minute poll interval is deliberate: aggressive polling will get your IP rate-limited.
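To see why the md5 id deduplicates across restarts: it depends only on the job URL, so the same listing always hashes to the same id in every run. A quick check (the URL below is made up):

```python
import hashlib

def job_id(url: str) -> str:
    """Stable id: same URL, same hash, no matter when the monitor runs."""
    return hashlib.md5(url.encode()).hexdigest()

first_run = job_id("https://www.upwork.com/jobs/example_~021234567890")
second_run = job_id("https://www.upwork.com/jobs/example_~021234567890")
```

Since the id is derived rather than stored state, `seen_jobs.json` only needs to persist the set of hashes.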

Step 2: The Scoring Algorithm

Not every job deserves a proposal. The scoring engine is where you encode your professional judgment into math.

My scoring weights are tuned for Python automation work. You'll adjust these based on your niche, but the structure transfers directly:

```python
import re
from dataclasses import dataclass


@dataclass
class ScoringConfig:
    must_have_keywords: list[str]
    nice_to_have_keywords: list[str]
    dealbreaker_keywords: list[str]
    min_budget_fixed: float
    min_budget_hourly: float
    max_budget_fixed: float  # avoid scope monsters
    keyword_match_weight: float = 0.5
    budget_weight: float = 0.35
    recency_weight: float = 0.15


DEFAULT_CONFIG = ScoringConfig(
    must_have_keywords=["python", "automation", "api", "scraping", "bot", "pipeline"],
    nice_to_have_keywords=["anthropic", "claude", "openai", "fastapi", "postgresql", "aws", "trading"],
    dealbreaker_keywords=["wordpress", "shopify", "wix", "php", "react native", "unity", "c#", "java"],
    min_budget_fixed=300.0,
    min_budget_hourly=25.0,
    max_budget_fixed=50000.0,
)


def extract_budget_value(budget_str: str) -> tuple[float, str]:
    """
    Returns (mid_point_value, job_type) where job_type is 'fixed' or 'hourly'.
    """
    if not budget_str:
        return 0.0, "unknown"

    is_hourly = "hr" in budget_str.lower() or "hour" in budget_str.lower()
    numbers = re.findall(r"[\d,]+\.?\d*", budget_str)
    values = [float(n.replace(",", "")) for n in numbers]

    if not values:
        return 0.0, "hourly" if is_hourly else "fixed"

    midpoint = sum(values) / len(values)
    return midpoint, "hourly" if is_hourly else "fixed"


def score_job(job, config: ScoringConfig = DEFAULT_CONFIG) -> dict:
    text = f"{job.title} {job.description}".lower()
    scores = {}

    # --- Dealbreaker check ---
    for kw in config.dealbreaker_keywords:
        if kw in text:
            return {
                "total": 0.0,
                "disqualified": True,
                "reason": f"Dealbreaker keyword: '{kw}'",
                "breakdown": {},
            }

    # --- Keyword scoring ---
    must_have_hits = [kw for kw in config.must_have_keywords if kw in text]
    nice_to_have_hits = [kw for kw in config.nice_to_have_keywords if kw in text]

    must_have_ratio = len(must_have_hits) / len(config.must_have_keywords)
    nice_ratio = len(nice_to_have_hits) / max(len(config.nice_to_have_keywords), 1)

    keyword_score = (must_have_ratio * 0.7) + (nice_ratio * 0.3)
    scores["keywords"] = round(keyword_score * 100, 1)

    # --- Budget scoring ---
    budget_val, job_type = extract_budget_value(job.budget or "")
    budget_score = 0.0

    if job_type == "fixed":
        if budget_val < config.min_budget_fixed:
            budget_score = 0.0
        elif budget_val > config.max_budget_fixed:
            budget_score = 0.2  # red flag: scope too large or unrealistic
        else:
            # Normalize: sweet spot is $1k-$10k
            normalized = min(budget_val / 10000, 1.0)
            budget_score = 0.4 + (normalized * 0.6)
    elif job_type == "hourly":
        if budget_val >= config.min_budget_hourly:
            normalized = min((budget_val - config.min_budget_hourly) / 75, 1.0)
            budget_score = 0.5 + (normalized * 0.5)

    scores["budget"] = round(budget_score * 100, 1)

    # --- Composite score ---
    total = (
        keyword_score * config.keyword_match_weight
        + budget_score * config.budget_weight
    )

    # Recency handled upstream by feed sort=recency; give partial credit
    total += 0.1 * config.recency_weight  # baseline recency bonus

    total_clamped = min(round(total * 100, 1), 100.0)

    return {
        "total": total_clamped,
        "disqualified": False,
        "reason": None,
        "breakdown": {
            "keyword_score": scores["keywords"],
            "budget_score": scores["budget"],
            "must_have_hits": must_have_hits,
            "nice_to_have_hits": nice_to_have_hits,
            "budget_value": budget_val,
            "job_type": job_type,
        },
    }
```


When I run this against a real job feed, the output looks like:

```
2024-01-15 09:23:11 — INFO — Fetched 10 jobs from feed
2024-01-15 09:23:11 — INFO — New job found: Python Developer Needed for Trading Bot Automation
Score result: {'total': 78.2, 'disqualified': False, 'breakdown': {'keyword_score': 83.3, 'budget_score': 71.0, 'must_have_hits': ['python', 'automation', 'bot'], 'nice_to_have_hits': ['trading'], 'budget_value': 2500.0, 'job_type': 'fixed'}}

2024-01-15 09:23:11 — INFO — New job found: Shopify Theme Customization
Score result: {'total': 0.0, 'disqualified': True, 'reason': "Dealbreaker keyword: 'shopify'", 'breakdown': {}}
```


Jobs scoring above 60 go to the proposal generator. Jobs below that get logged and skipped. You can tune that threshold — I've found 60 catches genuinely relevant work without drowning me in noise.
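That threshold gate is what the monitor's callback runs. One way to sketch it, with the scorer, generator, and notifier passed in as parameters so the gate stays testable on its own — `dispatch` and its parameter names are mine, not from the article:

```python
SCORE_THRESHOLD = 60.0

def dispatch(job, scorer, generator, notifier, threshold=SCORE_THRESHOLD):
    """Score the job; only draft and notify when it clears the threshold."""
    result = scorer(job)
    if result["disqualified"] or result["total"] < threshold:
        return False  # skipped: disqualified or below threshold
    draft = generator(job, result)
    notifier(job, result, draft)
    return True
```

In the real pipeline you'd pass `score_job` and the proposal generator in; keeping them as arguments means you can unit-test the gate with stubs and never hit the Anthropic API by accident.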

Step 3: Generating Tailored Proposals with Claude

This is where the time savings stack up. The proposal generator takes the scored job, pulls relevant context from my profile template, and produces a draft that's actually specific to the posting — not a mail-merge.

```python
import anthropic
import os

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

MY_PROFILE = """
Name: Mike G.
Core skills: Python automation, API integrations, web scraping, data pipelines, trading bots
Years of experience: 8
Notable projects: Built a crypto arbitrage system processing 50k ticks/minute; scraped and
structured 2M+ product records for an e-commerce client; built a Telegram trading signal
bot with live P&L tracking
Tone: Direct, technical, no fluff. I explain what I'll build and why my approach works.
Availability: 20hrs/week. Based in US Eastern timezone.
"""

PROPOSAL_SYSTEM_PROMPT = """
You are writing a freelance proposal on behalf of a senior Python engineer.

Rules:
- Open with a direct reference to the specific problem described in the job post.
  Never use generic openers like "I saw your posting" or "I would love to help."
- Demonstrate you understood the technical requirements by briefly describing your approach.
- Reference 1-2 relevant past projects (from the profile provided) that map to this job.
- Keep it under 200 words. Clients skim proposals. Respect their time.
- End with one specific clarifying question that shows you thought about scope.
- Do NOT use bullet points. Flowing paragraphs only.
- Do NOT say "I am a senior Python engineer" or state your title. Show, don't tell.
"""


def generate_proposal(job, score_result: dict) -> str:
    job_context = f"""
Job Title: {job.title}
Job Description: {job.description[:2000]}
Budget: {job.budget or 'Not specified'}
Skills mentioned: {', '.join(job.skills) if job.skills else 'Not listed'}
Keyword matches: {', '.join(score_result['breakdown'].get('must_have_hits', []))}
"""

    prompt = f"""
My profile:
{MY_PROFILE}

Job details:
{job_context}

Write a proposal for this job following all rules in your instructions.
"""

    message = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=512,
        system=PROPOSAL_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

