How to Automate Upwork Proposals with Python (Real Code Inside)
Last month I sent 47 proposals on Upwork. I personally wrote 3 of them.
The other 44 were drafted by Claude AI, filtered through a scoring algorithm I built over two weekends, and delivered to my inbox via Telegram before most freelancers even saw the job posting. My response rate on those AI-assisted proposals? 31%. Higher than my hand-written average from the previous quarter.
This article shows you exactly how I built that system.
The Real Problem With Upwork Proposals
If you've freelanced on Upwork for more than a month, you know the grind. You refresh the job feed. You see something promising. You spend 20 minutes writing a tailored proposal. You hit submit. Nothing. Meanwhile, the client already hired someone who responded 4 minutes after posting.
The platform rewards speed and volume. A thoughtful proposal submitted 6 hours late loses to a mediocre one submitted in 6 minutes. That's not a hot take — it's arithmetic.
The naive solution is to write faster. The engineering solution is to build a system that monitors the feed continuously, filters out garbage jobs automatically, and generates a tailored first draft the moment something good appears.
Here's the architecture:

- RSS feed monitor — Upwork exposes RSS feeds for saved searches. We poll these.
- Scoring engine — Each job gets a score based on keyword match, budget range, and client history signals.
- Claude AI proposal generator — High-scoring jobs get a tailored draft generated via the Anthropic API.
- Telegram notifier — The draft and job details land in my Telegram within seconds.
I review, adjust, and submit. The system handles discovery and first drafts. I handle judgment and the final send.
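Glued together, the flow looks like this. This is a minimal sketch, not the exact wiring: `handle_new_job` and the callback signatures are illustrative stand-ins for the real `score_job`, proposal generator, and Telegram sender built out in the steps below.

```python
# Stubs stand in for score_job, generate_proposal, and the Telegram sender,
# which are built in the steps below. The 60-point cutoff matches the routing
# threshold used later in the article.
SCORE_THRESHOLD = 60.0

def handle_new_job(job, score_fn, draft_fn, notify_fn) -> str:
    """Callback the feed monitor fires once per unseen job."""
    result = score_fn(job)
    if result.get("disqualified") or result["total"] < SCORE_THRESHOLD:
        return "skipped"
    # High scorers get a draft, and the draft lands in Telegram for manual review.
    notify_fn(job, draft_fn(job))
    return "notified"
```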
Important note on Upwork TOS: Upwork's Terms of Service prohibit automated bidding — meaning you cannot auto-submit proposals programmatically. This system does not do that. It automates monitoring and drafting, not submission. You review everything before it goes anywhere. Know the rules, stay inside them.
Step 1: Parsing the Upwork RSS Feed
Upwork generates RSS feeds for your saved searches. Log into Upwork, save a search for your niche, and grab the RSS URL from the feed icon. It looks like:
https://www.upwork.com/ab/feed/jobs/rss?q=python+automation&sort=recency&paging=0%3B10&api_params=1&securityToken=YOUR_TOKEN&userUid=YOUR_UID&orgUid=YOUR_ORG
The token is tied to your session, so treat it like a password.
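One way to keep that token out of source control is to load the whole feed URL from the environment. A small sketch; the `UPWORK_FEED_URL` variable name is my own convention, not anything Upwork defines:

```python
import os

def load_feed_url(var: str = "UPWORK_FEED_URL") -> str:
    # The saved-search URL embeds your securityToken, so read it from the
    # environment rather than committing it to the repo.
    url = os.environ.get(var)
    if not url:
        raise RuntimeError(f"Set {var} to your Upwork saved-search RSS URL")
    return url
```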
Here's the RSS parser and job monitor:
```python
import feedparser
import hashlib
import json
import re
import time
import logging
from datetime import datetime
from pathlib import Path
from dataclasses import dataclass, field
from typing import Optional

logging.basicConfig(level=logging.INFO, format="%(asctime)s — %(levelname)s — %(message)s")
logger = logging.getLogger(__name__)

SEEN_JOBS_FILE = Path("seen_jobs.json")
POLL_INTERVAL_SECONDS = 300  # 5 minutes — don't hammer the feed


@dataclass
class UpworkJob:
    title: str
    url: str
    description: str
    published: str
    budget: Optional[str] = None
    job_type: Optional[str] = None
    skills: list[str] = field(default_factory=list)
    job_id: str = ""

    def __post_init__(self):
        self.job_id = hashlib.md5(self.url.encode()).hexdigest()


def load_seen_jobs() -> set:
    if SEEN_JOBS_FILE.exists():
        return set(json.loads(SEEN_JOBS_FILE.read_text()))
    return set()


def save_seen_jobs(seen: set):
    SEEN_JOBS_FILE.write_text(json.dumps(list(seen)))


def parse_budget_from_description(description: str) -> Optional[str]:
    """
    Upwork embeds budget info in the description HTML, e.g.
    Budget: $500.00-$1,000.00 or Hourly Range: $25.00-$50.00/hr
    """
    patterns = [
        r"Budget:\s*\$?([\d,]+\.?\d*)\s*[-–]\s*\$?([\d,]+\.?\d*)",
        r"Hourly Range:\s*\$?([\d,]+\.?\d*)\s*[-–]\s*\$?([\d,]+\.?\d*)",
        r"Budget:\s*\$?([\d,]+\.?\d*)",
    ]
    for pattern in patterns:
        match = re.search(pattern, description, re.IGNORECASE)
        if match:
            return match.group(0)
    return None


def parse_skills_from_description(description: str) -> list[str]:
    match = re.search(r"Skills?:\s*([^\n<]+)", description, re.IGNORECASE)
    if match:
        skills_raw = match.group(1)
        return [s.strip() for s in re.split(r"[,;]", skills_raw) if s.strip()]
    return []


def fetch_jobs(feed_url: str) -> list[UpworkJob]:
    feed = feedparser.parse(feed_url)

    if feed.bozo:
        logger.warning(f"Feed parse warning: {feed.bozo_exception}")

    jobs = []
    for entry in feed.entries:
        description = entry.get("summary", "")
        job = UpworkJob(
            title=entry.get("title", "No title"),
            url=entry.get("link", ""),
            description=description,
            published=entry.get("published", ""),
            budget=parse_budget_from_description(description),
            skills=parse_skills_from_description(description),
        )
        jobs.append(job)

    logger.info(f"Fetched {len(jobs)} jobs from feed")
    return jobs


def monitor_feed(feed_urls: list[str], callback):
    """
    Continuously polls feed URLs and calls callback(job) for new jobs.
    """
    seen = load_seen_jobs()

    while True:
        for url in feed_urls:
            try:
                jobs = fetch_jobs(url)
                new_jobs = [j for j in jobs if j.job_id not in seen]

                for job in new_jobs:
                    logger.info(f"New job found: {job.title}")
                    callback(job)
                    seen.add(job.job_id)

                save_seen_jobs(seen)

            except Exception as e:
                logger.error(f"Error fetching feed {url}: {e}")

        logger.info(f"Sleeping {POLL_INTERVAL_SECONDS}s until next poll...")
        time.sleep(POLL_INTERVAL_SECONDS)
```
A few things worth noting about this implementation:
feedparser handles malformed XML gracefully, which matters because Upwork's RSS occasionally has encoding issues — I've seen bozo_exception set on feeds that nevertheless parse fine. The hashlib.md5 job ID means you won't process the same listing twice even across restarts. And the 5-minute poll interval is deliberate — aggressive polling will get your IP rate-limited.
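That restart-safe dedupe rests entirely on the URL hash being deterministic. A quick sanity check — `job_id_for` here is a standalone mirror of the dataclass's `__post_init__`, written out for illustration:

```python
import hashlib

def job_id_for(url: str) -> str:
    # Same listing URL always hashes to the same ID, so the IDs stored in
    # seen_jobs.json still match after a process restart.
    return hashlib.md5(url.encode()).hexdigest()
```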
Step 2: The Scoring Algorithm
Not every job deserves a proposal. The scoring engine is where you encode your professional judgment into math.
My scoring weights are tuned for Python automation work. You'll adjust these based on your niche, but the structure transfers directly:
```python
import re
from dataclasses import dataclass


@dataclass
class ScoringConfig:
    must_have_keywords: list[str]
    nice_to_have_keywords: list[str]
    dealbreaker_keywords: list[str]
    min_budget_fixed: float
    min_budget_hourly: float
    max_budget_fixed: float  # avoid scope monsters
    keyword_match_weight: float = 0.5
    budget_weight: float = 0.35
    recency_weight: float = 0.15


DEFAULT_CONFIG = ScoringConfig(
    must_have_keywords=["python", "automation", "api", "scraping", "bot", "pipeline"],
    nice_to_have_keywords=["anthropic", "claude", "openai", "fastapi", "postgresql", "aws", "trading"],
    dealbreaker_keywords=["wordpress", "shopify", "wix", "php", "react native", "unity", "c#", "java"],
    min_budget_fixed=300.0,
    min_budget_hourly=25.0,
    max_budget_fixed=50000.0,
)


def extract_budget_value(budget_str: str) -> tuple[float, str]:
    """
    Returns (mid_point_value, job_type) where job_type is 'fixed' or 'hourly'.
    """
    if not budget_str:
        return 0.0, "unknown"

    is_hourly = "hr" in budget_str.lower() or "hour" in budget_str.lower()
    numbers = re.findall(r"[\d,]+\.?\d*", budget_str)
    values = [float(n.replace(",", "")) for n in numbers]

    if not values:
        return 0.0, "hourly" if is_hourly else "fixed"

    midpoint = sum(values) / len(values)
    return midpoint, "hourly" if is_hourly else "fixed"


def score_job(job, config: ScoringConfig = DEFAULT_CONFIG) -> dict:
    text = f"{job.title} {job.description}".lower()
    scores = {}

    # --- Dealbreaker check ---
    for kw in config.dealbreaker_keywords:
        if kw in text:
            return {
                "total": 0.0,
                "disqualified": True,
                "reason": f"Dealbreaker keyword: '{kw}'",
                "breakdown": {},
            }

    # --- Keyword scoring ---
    must_have_hits = [kw for kw in config.must_have_keywords if kw in text]
    nice_to_have_hits = [kw for kw in config.nice_to_have_keywords if kw in text]

    must_have_ratio = len(must_have_hits) / len(config.must_have_keywords)
    nice_ratio = len(nice_to_have_hits) / max(len(config.nice_to_have_keywords), 1)

    keyword_score = (must_have_ratio * 0.7) + (nice_ratio * 0.3)
    scores["keywords"] = round(keyword_score * 100, 1)

    # --- Budget scoring ---
    budget_val, job_type = extract_budget_value(job.budget or "")
    budget_score = 0.0

    if job_type == "fixed":
        if budget_val < config.min_budget_fixed:
            budget_score = 0.0
        elif budget_val > config.max_budget_fixed:
            budget_score = 0.2  # red flag: scope too large or unrealistic
        else:
            # Normalize: sweet spot is $1k-$10k
            normalized = min(budget_val / 10000, 1.0)
            budget_score = 0.4 + (normalized * 0.6)
    elif job_type == "hourly":
        if budget_val >= config.min_budget_hourly:
            normalized = min((budget_val - config.min_budget_hourly) / 75, 1.0)
            budget_score = 0.5 + (normalized * 0.5)

    scores["budget"] = round(budget_score * 100, 1)

    # --- Composite score ---
    total = (
        keyword_score * config.keyword_match_weight
        + budget_score * config.budget_weight
    )
    # Recency handled upstream by feed sort=recency; give partial credit
    total += 0.1 * config.recency_weight  # baseline recency bonus

    total_clamped = min(round(total * 100, 1), 100.0)

    return {
        "total": total_clamped,
        "disqualified": False,
        "reason": None,
        "breakdown": {
            "keyword_score": scores["keywords"],
            "budget_score": scores["budget"],
            "must_have_hits": must_have_hits,
            "nice_to_have_hits": nice_to_have_hits,
            "budget_value": budget_val,
            "job_type": job_type,
        },
    }
```
When I run this against a real job feed, output looks like:
```
2024-01-15 09:23:11 — INFO — Fetched 10 jobs from feed
2024-01-15 09:23:11 — INFO — New job found: Python Developer Needed for Trading Bot Automation
Score result: {'total': 78.2, 'disqualified': False, 'breakdown': {'keyword_score': 83.3, 'budget_score': 71.0, 'must_have_hits': ['python', 'automation', 'bot'], 'nice_to_have_hits': ['trading'], 'budget_value': 2500.0, 'job_type': 'fixed'}}

2024-01-15 09:23:11 — INFO — New job found: Shopify Theme Customization
Score result: {'total': 0.0, 'disqualified': True, 'reason': "Dealbreaker keyword: 'shopify'", 'breakdown': {}}
```
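The budget_value fields in that output come from the midpoint averaging inside extract_budget_value. A standalone check of that arithmetic (`budget_midpoint` is an illustrative extraction, not a function from the system):

```python
import re

def budget_midpoint(budget_str: str) -> float:
    # Same idea as extract_budget_value: pull every number out of the budget
    # string and average them, so a low-high range becomes its midpoint.
    values = [float(n.replace(",", "")) for n in re.findall(r"[\d,]+\.?\d*", budget_str)]
    return sum(values) / len(values) if values else 0.0
```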
Jobs scoring above 60 go to the proposal generator. Jobs below that get logged and skipped. You can tune that threshold — I've found 60 catches genuinely relevant work without drowning me in noise.
Step 3: Generating Tailored Proposals with Claude
This is where the time savings stack up. The proposal generator takes the scored job, pulls relevant context from my profile template, and produces a draft that's actually specific to the posting — not a mail-merge.
```python
import anthropic
import os

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

MY_PROFILE = """
Name: Mike G.
Core skills: Python automation, API integrations, web scraping, data pipelines, trading bots
Years of experience: 8
Notable projects: Built a crypto arbitrage system processing 50k ticks/minute; scraped and
structured 2M+ product records for an e-commerce client; built a Telegram trading signal
bot with live P&L tracking
Tone: Direct, technical, no fluff. I explain what I'll build and why my approach works.
Availability: 20hrs/week. Based in US Eastern timezone.
"""

PROPOSAL_SYSTEM_PROMPT = """
You are writing a freelance proposal on behalf of a senior Python engineer.

Rules:
- Open with a direct reference to the specific problem described in the job post. Never use generic openers like "I saw your posting" or "I would love to help."
- Demonstrate you understood the technical requirements by briefly describing your approach.
- Reference 1-2 relevant past projects (from the profile provided) that map to this job.
- Keep it under 200 words. Clients skim proposals. Respect their time.
- End with one specific clarifying question that shows you thought about scope.
- Do NOT use bullet points. Flowing paragraphs only.
- Do NOT say "I am a senior Python engineer" or state your title. Show, don't tell.
"""


def generate_proposal(job, score_result: dict) -> str:
    job_context = f"""
Job Title: {job.title}
Job Description: {job.description[:2000]}
Budget: {job.budget or 'Not specified'}
Skills mentioned: {', '.join(job.skills) if job.skills else 'Not listed'}
Keyword matches: {', '.join(score_result['breakdown'].get('must_have_hits', []))}
"""

    prompt = f"""
My profile:
{MY_PROFILE}

Job details:
{job_context}

Write a proposal for this job following all rules in your instructions.
"""

    message = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=512,
        system=PROPOSAL_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```
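The last piece of the architecture is the Telegram notifier. A minimal sketch, assuming the standard Bot API sendMessage endpoint; the `TELEGRAM_BOT_TOKEN` and `TELEGRAM_CHAT_ID` environment-variable names are my own conventions, and the message layout is just one way to format the draft:

```python
import json
import os
import urllib.request

TELEGRAM_API = "https://api.telegram.org/bot{token}/sendMessage"

def build_notification(title: str, url: str, score: float, draft: str) -> dict:
    # sendMessage payload; Telegram caps message text at 4096 characters.
    text = f"New job ({score:.1f}/100): {title}\n{url}\n\n--- Draft ---\n{draft}"
    return {"chat_id": os.environ.get("TELEGRAM_CHAT_ID", ""), "text": text[:4096]}

def send_telegram(title: str, url: str, score: float, draft: str) -> None:
    token = os.environ["TELEGRAM_BOT_TOKEN"]  # issued by @BotFather
    req = urllib.request.Request(
        TELEGRAM_API.format(token=token),
        data=json.dumps(build_notification(title, url, score, draft)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
```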