Building a scoring engine with pure TypeScript functions (no ML, no backend)
We needed to score e-commerce products across multiple dimensions: quality, profitability, market conditions, and risk.
The constraints:
- Scores must update in real time
- Must run entirely in the browser (Chrome extension)
- Must be explainable (not a black box)
We almost built an ML pipeline — training data, model serving, APIs, everything.
Then we asked a simple question:
Do we actually need machine learning for this?
The answer was no.
We ended up building several scoring engines in pure TypeScript. Each one is a single function, under 100 lines, zero dependencies, and runs in under a millisecond.
What "pure function" means here
Each scoring engine follows 3 rules:
- No I/O → no network, no DB, no files
- Deterministic → same input = same output
- No side effects → no global state, no mutations
This makes them:
- Easy to test
- Easy to reason about
- Portable (browser, Node.js, anywhere)
Core pattern: weighted scoring
```typescript
interface ScoringInput {
  qualityScore: number | null;
  profitScore: number | null;
  marketScore: number | null;
  riskScore: number | null;
}

type Verdict = 'strong_buy' | 'buy' | 'hold' | 'pass';

function computeScore(input: ScoringInput) {
  const quality = input.qualityScore ?? 50;
  const profit = input.profitScore ?? 50;
  const market = input.marketScore ?? 50;
  const risk = input.riskScore ?? 50;

  const overall = Math.round(
    quality * 0.3 +
    profit * 0.3 +
    market * 0.2 +
    risk * 0.2
  );

  let verdict: Verdict;
  if (overall >= 80) verdict = 'strong_buy';
  else if (overall >= 60) verdict = 'buy';
  else if (overall >= 40) verdict = 'hold';
  else verdict = 'pass';

  return { overall, verdict };
}
```
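Determinism is what makes these engines trivial to test. A standalone sketch (the engine restated so the snippet runs on its own; the sample inputs are illustrative):

```typescript
type Verdict = 'strong_buy' | 'buy' | 'hold' | 'pass';

interface ScoringInput {
  qualityScore: number | null;
  profitScore: number | null;
  marketScore: number | null;
  riskScore: number | null;
}

function computeScore(input: ScoringInput): { overall: number; verdict: Verdict } {
  // Missing dimensions fall back to a neutral 50.
  const quality = input.qualityScore ?? 50;
  const profit = input.profitScore ?? 50;
  const market = input.marketScore ?? 50;
  const risk = input.riskScore ?? 50;

  const overall = Math.round(quality * 0.3 + profit * 0.3 + market * 0.2 + risk * 0.2);

  let verdict: Verdict;
  if (overall >= 80) verdict = 'strong_buy';
  else if (overall >= 60) verdict = 'buy';
  else if (overall >= 40) verdict = 'hold';
  else verdict = 'pass';
  return { overall, verdict };
}

// Same input, same output: 27 + 25.5 + 14 + 12 = 78.5 -> 79
const a = computeScore({ qualityScore: 90, profitScore: 85, marketScore: 70, riskScore: 60 });
// a = { overall: 79, verdict: 'buy' }

// All-null input collapses to the neutral defaults.
const b = computeScore({ qualityScore: null, profitScore: null, marketScore: null, riskScore: null });
// b = { overall: 50, verdict: 'hold' }
```

No mocks, no fixtures, no network stubs: the whole test surface is "call the function, check the object".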
Handling missing data (critical)
All inputs are nullable.
We default to 50 (neutral).
Why not:
- Skip missing values → breaks comparability
- Default 0 → unfairly penalizes
- Default 100 → artificially inflates
Neutral = safest assumption.
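One detail worth calling out: the engines use nullish coalescing (`??`), not `||`. The difference matters here, because a legitimate score of 0 must not be treated as "missing". A minimal illustration:

```typescript
// ?? defaults only null/undefined; || would also replace a real 0.
const withNullish = (score: number | null) => score ?? 50;
const withOr = (score: number | null) => score || 50;

withNullish(null); // 50 — missing data -> neutral
withNullish(0);    // 0  — a genuinely terrible score is preserved
withOr(0);         // 50 — bug: a real 0 silently becomes neutral
```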
Normalization + clamp
All scores must be 0–100.
```typescript
function clamp(value: number, min: number, max: number) {
  return Math.max(min, Math.min(max, value));
}

const profitScore = clamp(marginPercent * 2, 0, 100);
const marketScore = clamp(100 - saturationPercent, 0, 100);
const riskScore = clamp(100 - rawRiskScore, 0, 100);
```
Without clamp:
- values can exceed bounds
- negative values break logic
- NaN propagates silently
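A caveat: the plain `clamp` above does not actually stop NaN, because `Math.max(min, Math.min(max, NaN))` is still NaN. If NaN inputs are a real risk, a guarded variant helps. This is our addition, not part of the original engine; the neutral-50 fallback is an assumption consistent with the missing-data rule:

```typescript
// Guarded clamp (an addition to the article's clamp): non-finite
// inputs (NaN, ±Infinity) fall back to a neutral 50 instead of
// propagating through the weighted sum.
function safeClamp(value: number, min: number, max: number): number {
  if (!Number.isFinite(value)) return 50;
  return Math.max(min, Math.min(max, value));
}

safeClamp(150, 0, 100); // 100
safeClamp(-5, 0, 100);  // 0
safeClamp(NaN, 0, 100); // 50 — a bare clamp would return NaN here
```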
Choosing weights
Not all dimensions are equal.
We weighted:
- Quality + Profit → higher (controllable)
- Market + Risk → lower (external factors)
We considered user-configurable weights but dropped the idea: too complex for non-technical users.
Threshold calibration
Initial thresholds (75 / 50 / 25) were too optimistic.
We:
- Scored hundreds of products
- Compared with human judgment
- Iterated
Lesson: Never guess thresholds — calibrate them.
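The calibration loop itself can be a pure function too. A sketch of what "compared with human judgment" might look like in code (the type, helper name, and sample data are ours, purely illustrative):

```typescript
// Hypothetical calibration check: for a candidate buy threshold,
// measure how often the engine's buy/no-buy call matches a human's.
type Labeled = { score: number; humanSaysBuy: boolean };

function agreementRate(samples: Labeled[], buyThreshold: number): number {
  const hits = samples.filter(
    s => (s.score >= buyThreshold) === s.humanSaysBuy
  ).length;
  return hits / samples.length;
}

// Sweep candidate thresholds over labeled products and keep the best.
const samples: Labeled[] = [
  { score: 82, humanSaysBuy: true },
  { score: 58, humanSaysBuy: false },
  { score: 65, humanSaysBuy: true },
  { score: 45, humanSaysBuy: false },
];
agreementRate(samples, 60); // 1 — all four calls agree at threshold 60
agreementRate(samples, 70); // 0.75 — the 65-scoring buy is now missed
```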
Composition > monolith
We built multiple small engines:
- Product score
- Market score
- Platform score
Then combine:
```typescript
function computeFinalVerdict(
  productScore: number | null,
  marketScore: number | null,
  platformScore: number | null
) {
  const product = productScore ?? 50;
  const market = marketScore ?? 50;
  const platform = platformScore ?? 50;

  const score = Math.round(
    market * 0.4 +
    product * 0.35 +
    platform * 0.25
  );

  const confidence = Math.round(
    Math.min(product, market, platform) * 0.8 + 20
  );

  const reasons: string[] = [];
  if (market >= 70) reasons.push('Favorable market conditions');
  if (market < 40) reasons.push('Challenging market');
  if (product >= 70) reasons.push('Strong product');
  if (product < 40) reasons.push('Weak product');

  return { score, confidence, reasons };
}
```
Key ideas:
- Confidence = weakest dimension
- Reasons = explainability
Example
Input:
- Quality: 75
- Profit: 84
- Market: 65
- Risk: 80
Result:
- Score: 77
- Verdict: buy
If profit increases → score crosses 80 → strong_buy
This kind of reasoning is trivial with pure functions, impossible with black-box ML.
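The example arithmetic checks out against the weighted sum (weights from `computeScore` earlier; the profit value needed to cross 80 is our back-of-envelope figure):

```typescript
// Re-deriving the worked example with computeScore's weights.
const overall = (q: number, p: number, m: number, r: number) =>
  Math.round(q * 0.3 + p * 0.3 + m * 0.2 + r * 0.2);

overall(75, 84, 65, 80); // 77 -> 'buy' (>= 60)
overall(75, 94, 65, 80); // 80 -> 'strong_buy' once profit reaches ~94
```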
When you SHOULD use ML
Use ML if:
- You analyze images or text
- You need pattern discovery
- You have high-dimensional data (50+ features)
Otherwise, pure functions are simpler, faster, and more transparent.
Key takeaways
- Start with pure functions
- Default missing data to neutral
- Always clamp values
- Weight by controllability
- Compose small engines
- Calibrate with real data
No training data. No APIs. No latency. Runs in-browser in under 1ms.
Not for every problem — but for structured scoring, it’s hard to beat.
Curious: Have you used similar scoring patterns? Or did you go with ML instead?
DEV Community
https://dev.to/cs_alishopping/building-a-scoring-engine-with-pure-typescript-functions-no-ml-no-backend-3hcl
