How DevTrends Works
Every score, ranking, and insight on this platform is computed from real data. Here's exactly what we measure, how we weight it, and where we know we fall short.
What We Measure
DevTrends tracks technologies across four signal types — developer community activity, job market demand, ecosystem health, and open-source momentum. These signals are collected from 9+ sources daily and combined into a single composite score from 0–100.
The goal is to answer three questions a developer actually cares about: Is this technology growing or shrinking? Is it in demand from employers? Is the community healthy?
We don't measure quality, suitability for a specific project, or personal preference. A high score means a technology is widely adopted, actively discussed, and currently hired for — not that it's the right tool for your situation.
Data Sources
We collect data from 9+ sources, organized into four signal categories:
GitHub Activity
- GitHub API — Stars, forks, issues, contributors, commit velocity
Community Buzz
- Hacker News — Mentions, upvotes, sentiment
- Reddit — Posts, engagement, sentiment
- Dev.to — Articles, reactions
- NewsAPI — Tech news coverage
Job Market
- Adzuna — Job postings across regions
- JSearch — Aggregated job board data
- Remotive — Remote job listings
Ecosystem Health
- npm / PyPI / crates.io — Weekly download counts and growth
- Stack Overflow — Question count and recent activity
Adaptive Composite Scoring
Each technology gets a composite score from 0 to 100 using weights that adapt based on the technology's category and maturity. A programming language and a CSS framework shouldn't be weighted the same way: the language matters more in the job market, while the framework matters more in community adoption.
Weight profiles by category
Languages & Databases
Jobs (35–40%) · Ecosystem (30–35%) · GitHub (15–20%) · Community (10–15%)
Frontend & Mobile Frameworks
Jobs (25–30%) · Ecosystem (25–30%) · GitHub (20–25%) · Community (20–25%)
AI / ML Tools
Community (30%) · GitHub (25%) · Jobs (25%) · Ecosystem (20%)
Maturity adjustments
New technologies (under 6 months of data) get boosted GitHub and community weights — because job market adoption lags behind developer interest by months. Mature technologies get boosted job and ecosystem weights, since those signals are more stable predictors of long-term relevance.
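Here's a simplified sketch of how the adaptive weighting fits together. The profile values below are midpoints of the published ranges, and the 5-point maturity shift is illustrative rather than the exact production constant:

```python
# Sketch of adaptive composite scoring. Profile values are midpoints of
# the published ranges; the maturity shift amount is illustrative.

WEIGHT_PROFILES = {
    "language": {"jobs": 0.375, "ecosystem": 0.325, "github": 0.175, "community": 0.125},
    "frontend": {"jobs": 0.275, "ecosystem": 0.275, "github": 0.225, "community": 0.225},
    "ai_ml":    {"jobs": 0.25, "ecosystem": 0.20, "github": 0.25, "community": 0.30},
}

def composite_score(sub_scores: dict[str, float], category: str,
                    months_of_data: float) -> float:
    """Combine four 0-100 sub-scores into one 0-100 composite."""
    weights = dict(WEIGHT_PROFILES[category])
    if months_of_data < 6:
        # New tech: shift weight toward GitHub and community, away from
        # jobs and ecosystem, since hiring lags developer interest.
        shift = 0.05
        weights["github"] += shift
        weights["community"] += shift
        weights["jobs"] -= shift
        weights["ecosystem"] -= shift
    total = sum(weights.values())  # renormalize in case shifts don't cancel
    return sum(sub_scores[k] * w for k, w in weights.items()) / total

print(composite_score(
    {"github": 72, "community": 64, "jobs": 81, "ecosystem": 77},
    category="language", months_of_data=18))  # -> 76.0
```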
Confidence grades
Every score includes a confidence grade (A–F) based on data completeness, sample size, recency, and source diversity. A score of 85 with grade A is more reliable than the same score with grade C — it means more sources agreed and the data is fresh.
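In sketch form, grading might look like the following. The factor definitions and letter cutoffs are illustrative, not the exact production thresholds:

```python
import math

# Hypothetical confidence grading: four factors in [0, 1], averaged and
# bucketed into letters. Definitions and cutoffs are illustrative.

def confidence_grade(completeness: float, sample_size: int,
                     age_hours: float, n_sources: int) -> str:
    factors = [
        completeness,                                # fraction of expected fields present
        min(1.0, math.log10(1 + sample_size) / 3),   # saturates near 1,000 samples
        max(0.0, 1 - age_hours / 48),                # recency decays to 0 over two days
        min(1.0, n_sources / 9),                     # diversity out of the 9+ sources
    ]
    score = sum(factors) / len(factors)
    for grade, cutoff in [("A", 0.85), ("B", 0.70), ("C", 0.55), ("D", 0.40)]:
        if score >= cutoff:
            return grade
    return "F"
```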
Normalization & Smoothing
Raw numbers aren't comparable across technologies — a framework with 50,000 GitHub stars and one with 500 stars aren't in the same league. We normalize each signal using z-score normalization, which measures how far a technology sits from the average across the full tracked population.
We then apply Bayesian smoothing to prevent low-sample technologies from gaming the rankings. Without it, a project with 10 stars gaining 10 more would score higher than one with 10,000 stars gaining 500 (100% vs 5% growth). Smoothing applies a confidence penalty that shrinks as sample size grows: the less data we have, the more we pull the score toward the population mean.
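Here's the idea in miniature. The prior strength is an illustrative constant, not our production value:

```python
from statistics import mean, stdev

PRIOR_N = 100  # pseudo-count: how hard small samples get pulled to the mean

def z_scores(values: list[float]) -> list[float]:
    """Distance from the population average, in standard deviations."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def smooth(raw_rate: float, sample_size: int, population_rate: float) -> float:
    """Bayesian shrinkage: low-sample rates move toward the population mean."""
    return (raw_rate * sample_size + population_rate * PRIOR_N) / (sample_size + PRIOR_N)

# 10 stars gaining 10 (100% growth) vs 10,000 gaining 500 (5% growth),
# against an assumed population mean growth of 3%:
print(smooth(1.00, 10, 0.03))      # ~0.118 -- heavily shrunk
print(smooth(0.05, 10_000, 0.03))  # ~0.050 -- barely moved
```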
GitHub Score
- Star velocity (40%) — new stars gained, normalized across tracked repos
- Fork count (20%) — indicator of real-world usage
- Issue close rate (20%) — maintenance health and responsiveness
- Contributor growth (20%) — expanding vs contracting developer interest
Community Score
- Hacker News mentions (35%) — normalized mention count
- Reddit posts (25%) — subreddit activity and engagement
- Dev.to articles (25%) — content ecosystem health
- Sentiment adjustment (±15 pts) — positive vs negative buzz weighting
Jobs Score
- Adzuna postings (40%) — normalized across all tracked technologies
- JSearch postings (40%) — secondary job board coverage
- Remotive postings (20%) — remote job demand signal
Ecosystem Score
- Package downloads (40%) — weekly downloads, normalized within each registry
- Download growth rate (25%) — adoption trajectory over 30 days
- Stack Overflow activity (35%) — question count and recent engagement
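All four sub-scores are weighted sums of normalized signals, so one helper captures the pattern. The weights are the published percentages; modeling sentiment as an additive adjustment on top of the community base (whose weights sum to 85%) is one plausible reading of the ±15-point spec:

```python
# Each input signal is assumed to already be normalized to a 0-100 scale.
GITHUB_W    = {"star_velocity": 0.40, "forks": 0.20, "issue_close": 0.20, "contributors": 0.20}
JOBS_W      = {"adzuna": 0.40, "jsearch": 0.40, "remotive": 0.20}
ECOSYSTEM_W = {"downloads": 0.40, "download_growth": 0.25, "stackoverflow": 0.35}
COMMUNITY_W = {"hn": 0.35, "reddit": 0.25, "devto": 0.25}  # sentiment handled below

def sub_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    return sum(signals[k] * w for k, w in weights.items())

def community_score(signals: dict[str, float], sentiment_pts: float) -> float:
    # Base weights sum to 0.85; sentiment fills the remainder as an
    # additive adjustment clamped to +/-15 points.
    base = sub_score(signals, COMMUNITY_W)
    adj = max(-15.0, min(15.0, sentiment_pts))
    return max(0.0, min(100.0, base + adj))
```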
Momentum Analysis
The composite score tells you where a technology stands today. Momentum tells you where it's heading. We use three exponential moving averages running in parallel to distinguish short-term noise from genuine directional change.
Three-window analysis
7-day EMA — Breaking news, launches, viral moments
30-day EMA — Monthly adoption trend
90-day EMA — Long-term trajectory
When short-term momentum contradicts long-term momentum — a 7-day spike against a 90-day decline, for example — the system flags a potential inflection point. These are worth investigating further before drawing conclusions.
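A compact sketch of the three-window comparison. The smoothing constants use the standard 2/(N+1) convention, and the inflection rule is one simple way to formalize "short-term contradicts long-term":

```python
def ema(series: list[float], window: int) -> float:
    """Exponential moving average with the conventional 2/(N+1) alpha."""
    alpha = 2 / (window + 1)
    value = series[0]
    for x in series[1:]:
        value = alpha * x + (1 - alpha) * value
    return value

def momentum(scores: list[float]) -> dict:
    current = scores[-1]
    short, mid, long_ = (ema(scores, w) for w in (7, 30, 90))
    return {
        "7d": current - short,    # breaking-news window
        "30d": current - mid,     # monthly trend
        "90d": current - long_,   # long-term trajectory
        # Opposite signs on short- and long-term momentum flag a
        # potential inflection point.
        "inflection": (current - short) * (current - long_) < 0,
    }
```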
Sentiment Analysis
Mention count alone is misleading. "React 19 is incredible" and "Why I'm abandoning React" are both mentions — but they carry opposite signals. Our sentiment engine combines lexicon-based analysis with tech-specific context rules to weight mentions by their actual signal direction.
Positive signals
"production-ready", "battle-tested", "game-changer", "hiring for", "just shipped" — phrases that indicate real adoption rather than curiosity.
Negative signals
"deprecated", "legacy", "abandoning", "security vulnerability", "rewrite in X" — phrases that indicate decline or active rejection.
Sarcasm detection
Phrases like "yeah right" and "sure buddy" flip the polarity of the surrounding sentiment to prevent misclassification of ironic praise as genuine endorsement.
Sentiment adjusts the community sub-score by up to ±15 points. High mention volume with predominantly negative sentiment will score meaningfully lower than moderate volume with positive sentiment.
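A toy version of the pipeline, using only the example phrases from this section as lexicons (the real rule set is far larger):

```python
POSITIVE = {"production-ready", "battle-tested", "game-changer", "hiring for", "just shipped"}
NEGATIVE = {"deprecated", "legacy", "abandoning", "security vulnerability"}
SARCASM  = {"yeah right", "sure buddy"}

def sentiment_points(mentions: list[str]) -> float:
    """Net sentiment scaled to the +/-15 point community adjustment."""
    total = 0
    for text in mentions:
        t = text.lower()
        score = sum(p in t for p in POSITIVE) - sum(n in t for n in NEGATIVE)
        if any(s in t for s in SARCASM):
            score = -score  # sarcasm flips the polarity of surrounding praise
        total += score
    pts = 15 * total / max(1, len(mentions))
    return max(-15.0, min(15.0, pts))
```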
AI-Generated Insights
Scores and charts show you what's happening. The AI insights explain why — drawing on the underlying data to surface patterns, flag anomalies, and provide context a raw number can't.
Quality checks
Every generated insight passes through six automated quality dimensions: factual grounding, relevance to the technology, completeness, clarity, actionability, and consistency with the underlying data. Insights that fail are regenerated rather than surfaced.
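In outline, the regenerate-on-failure loop is straightforward. The check implementations and retry cap here are illustrative:

```python
# The six quality dimensions an insight must pass before being surfaced.
CHECKS = ["grounding", "relevance", "completeness", "clarity",
          "actionability", "consistency"]

def validated_insight(generate, passes, max_attempts: int = 3):
    """generate() -> insight; passes(insight, dimension) -> bool."""
    for _ in range(max_attempts):
        insight = generate()
        if all(passes(insight, dim) for dim in CHECKS):
            return insight
    return None  # a failed insight is never surfaced
```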
Freshness
Insights are invalidated when the underlying data changes significantly. A cached insight about a technology that just saw a 40% score drop will be regenerated before being shown — you won't read stale analysis about current events.
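One way to express the invalidation rule; the threshold here is illustrative, not the actual trigger:

```python
SIGNIFICANT_DELTA = 0.10  # assumed: invalidate on a >10% relative score change

def insight_is_fresh(score_at_generation: float, current_score: float) -> bool:
    """Regenerate cached insights once the underlying score moves too far."""
    if score_at_generation == 0:
        return False  # no baseline to compare against; regenerate
    change = abs(current_score - score_at_generation) / score_at_generation
    return change <= SIGNIFICANT_DELTA
```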
Resilience
Insights are generated across multiple AI providers with automatic failover. If one provider is unavailable, the system routes to alternatives — insight availability is independent of any single provider's uptime.
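In outline, failover is an ordered fallback across providers. The provider interface here is a placeholder, not a real SDK:

```python
def generate_with_failover(prompt: str, providers: list) -> str:
    """Try each provider in order; return the first successful response."""
    last_error = None
    for provider in providers:          # each provider: (str) -> str
        try:
            return provider(prompt)
        except Exception as err:        # unavailable, rate-limited, timed out
            last_error = err
    raise RuntimeError("all insight providers failed") from last_error
```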
Status Labels
Each technology receives a human-readable status label derived from its score, momentum direction, and confidence grade taken together.
Update Schedule
Data collection and score recomputation run on a fixed schedule: sources are polled daily, and scores are recomputed as new data arrives.
Limitations
We'd rather tell you where this breaks than have you discover it the wrong way.
- Popularity ≠ quality. A high score means a technology is widely used, discussed, and hired for. It says nothing about whether it's the right tool for your specific situation.
- Enterprise blind spots. Technologies dominant in enterprises (Java, C#, SAP, Oracle) may score lower on community signals because enterprise discussions happen behind firewalls, not on Hacker News.
- English-language bias. All data sources are English-language. Technologies with large non-English communities — particularly in China, Japan, and Eastern Europe — may be underrepresented.
- AI-generated insights can be wrong. Insights are quality-checked but not fact-checked by humans. LLMs can misinterpret data or miss context. Use them as a starting point, not a final answer.
- Job data lags reality. Job postings are a lagging indicator. By the time a technology shows up heavily in job listings, the early-adopter window has often already closed.
- Sentiment accuracy is imperfect. Our tech-aware sentiment analysis is more accurate than generic approaches, but sarcasm detection and context understanding aren't solved problems. Ironic or nuanced community discussion may be misclassified.
What's Tracked
DevTrends currently tracks technologies across 8 categories: Languages, Frontend, Backend, Databases, DevOps, Cloud, Mobile, and AI/ML. Rankings and scores update continuously as new data arrives.