The Mathematics Behind Valomapped
A comprehensive deep-dive into the mathematical models, algorithms, and statistical methods powering my Valorant competitive analytics platform.
The core of this website is a custom Elo rating system, modified from the standard Elo formula made famous by chess. Each team is given a completely independent Elo rating for each map. Because Valorant teams can have dramatically different skill levels across maps, this per-map approach gives the rating system significantly greater predictive power.
A typical Elo formula first calculates the expected probability of winning the match for each player as follows:
expected_probability = 1 / (1 + 10^((opponent_rating - player_rating) / 400))

Then the Elo rating is updated based on the actual outcome of the match:

new_elo_rating = old_elo_rating + k * (actual_outcome - expected_probability)

k is a constant that controls the magnitude of the rating change.
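As a concrete example with classic chess parameters (k = 32, 400-point scale), a 1600-rated player facing a 1200-rated opponent has:

expected_probability = 1 / (1 + 10^((1200 - 1600) / 400)) ≈ 0.909

If the favorite loses anyway, their update is 32 * (0 - 0.909) ≈ -29 points, while a win nets them only 32 * (1 - 0.909) ≈ +3 points.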
Valomapped's Custom Elo Formula
expected_probability = 1 / (1 + 10^((opponent_rating - player_rating) / 1000))
elo_rating = elo_rating + 74 * margin_factor * (actual_outcome - expected_probability)

The margin factor is calculated as follows:

margin_factor = ln(5.95 * sqrt(score_difference + 1))
Parameters

Three main parameters can be adjusted to fit the needs of a specific use case.
1. Rating Scale
2. Margin of Victory Adjustment
3. K-Factor
1. Rating Scale
The rating scale acts as the divisor that controls the spread of Elo ratings in the system. Increasing this value widens the Elo gap required to achieve the same win probability. In traditional chess Elo (400-point scale), a 400-point gap yields ~91% win probability (e.g., 1600 vs 1200). In my 1000-point scale, that same 91% probability requires a 1000-point gap (e.g., 1600 vs 600).
I decided to increase the scale because all VCT teams are professionals. The skill gap between the best and worst pro team is far smaller than between the best and worst player on Chess.com. The wider scale makes rating differences more visually apparent and meaningful for users interacting with the website.
expected_probability = 1 / (1 + 10^((opponent_rating - player_rating) / 1000))

Parameter Optimization: Grid Search Methodology
For the remaining parameters, rather than selecting values arbitrarily, I used a systematic grid search to optimize the Elo system's predictive accuracy. This involved training multiple models with different combinations of parameters and evaluating their performance on unseen match data; a sketch of this sweep follows the list below.
Grid Search Process:
- Used GridSearchCV from scikit-learn to test multiple parameter combinations
- Varied K-factor values (32, 48, 64, 74, 80, 96)
- Tested different margin of victory scaling factors (0-10)
- Compared multiple Elo model architectures (discussed in Alternative Methods section)
- Split dataset into training matches and future test matches
- Evaluated models using Brier score (lower = better predictions)
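As a rough sketch, here is the equivalent parameter sweep written as a plain loop rather than the GridSearchCV wrapper the site actually used; run_elo_backtest (a hypothetical helper that replays the training matches with the given parameters and returns predicted win probabilities for the held-out matches), test_outcomes, and the 0.05 grid step are all assumptions:

import itertools

def brier_score(predictions, outcomes):
    # mean squared difference between predicted probabilities and 0/1 results
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

k_factors = [32, 48, 64, 74, 80, 96]
margin_multipliers = [i * 0.05 for i in range(1, 201)]  # 0.05 through 10.00

# run_elo_backtest and test_outcomes are hypothetical stand-ins
best_k, best_mult = min(
    itertools.product(k_factors, margin_multipliers),
    key=lambda kv: brier_score(run_elo_backtest(k=kv[0], margin_mult=kv[1]), test_outcomes),
)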
Optimal Results:
The best-performing model achieved the lowest Brier score with a K-factor of 74 and a margin multiplier of 5.95 in the logarithmic scaling function. This combination balanced rating responsiveness with predictive stability, outperforming both more conservative and more aggressive parameter choices.
2. Margin of Victory Adjustment
The margin of victory adjustment adds critical context by incorporating the score differential into the rating calculation. This ensures that dominant victories yield larger rating changes than narrow wins.
margin_factor = ln(5.95 × √(score_difference + 1))

The logarithmic scaling provides diminishing returns for larger score differences. For example, a 13-5 victory yields a greater reward than a 13-11 victory, but the marginal increase in reward from 13-11 to 13-10 is larger than the increase from 13-6 to 13-5. This prevents extreme rating swings from blowout matches while still recognizing dominant performances.

This logarithmic scaling ensures that dominant victories (13-0, 13-1) produce significantly larger rating changes than close matches (13-11), while preventing extreme outliers.
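A quick check of the scaling in Python makes the diminishing returns concrete:

import math

# margin factors for several 13-x final scores under the tuned 5.95 multiplier
for loser_score in (11, 10, 6, 5, 1, 0):
    score_difference = 13 - loser_score
    margin_factor = math.log(5.95 * math.sqrt(score_difference + 1))
    print(f"13-{loser_score}: margin_factor = {margin_factor:.2f}")

The jump from 13-11 (≈2.33) to 13-10 (≈2.48) is more than twice the jump from 13-6 (≈2.82) to 13-5 (≈2.88).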
3. K-Factor
The K-factor controls the maximum amount of rating change that can occur in a single match.
Tuning the K-factor involves finding a balance in how responsive the rating system is to new data.
A higher K-factor will result in more volatile rating changes, while a lower K-factor will result in more stable rating changes.
My grid search settled on a K-factor of 74, which is substantially higher than what is used in traditional systems like chess Elo.
My intuition is that Valorant is a rapidly evolving game (unlike chess, which hasn't been updated since the addition of castling), so it is beneficial for the ratings to incorporate new data more quickly.
Final Formula
expected_probability = 1 / (1 + 10^((opponent_rating - player_rating) / 1000))

new_rating = old_rating + 74 × ln(5.95 × √(score_difference + 1)) × (actual_outcome - expected_probability)
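As a minimal Python sketch, the full update for a single map is a direct transcription of this formula (the function name and signature are mine, not the site's actual code):

import math

def update_elo(team_elo, opp_elo, team_won, score_difference,
               k=74, margin_mult=5.95, scale=1000):
    # expected win probability on the 1000-point scale
    expected = 1 / (1 + 10 ** ((opp_elo - team_elo) / scale))
    # logarithmic margin-of-victory factor
    margin_factor = math.log(margin_mult * math.sqrt(score_difference + 1))
    actual = 1.0 if team_won else 0.0
    return team_elo + k * margin_factor * (actual - expected)

For example, update_elo(1200, 1400, True, 8) rewards a 13-5 upset win with roughly +131 rating points.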
Alternative Methods

There were several alternative methods I considered for the Elo system that are worth noting for completeness.
Hybrid Global Offset
The hybrid global offset method is a modification of my custom Elo formula that maintains a single global Elo rating for each team and adds a map-specific offset on top of it, rather than keeping completely independent ratings per map. In testing, this method did perform slightly better than the fully independent approach, but the improvement appeared exclusively at the beginning of seasons, when the global rating let the model adapt more quickly on maps with little or no data yet. The two models otherwise converged to the same results by the middle and end of each season. Because of this, I kept the completely independent method for the final model, for the sake of simplicity for users viewing the website.
Confidence-Based K-Factor Updating
I also experimented with a confidence-based K-factor, where the model dynamically adjusts the K-factor based on how much data it has and the amount of time between matches. The K-factor starts higher at the beginning of the season and gradually shrinks as more data arrives, so the model updates quickly while its confidence in a team's skill level is low and more slowly once it has evidence. The K-factor then increases again after long breaks within the season, since teams have time to make changes.
In testing, however, this model did not perform as well as the version without the confidence adjustment.
I was quite surprised by this result. My best guess is a combination of factors: the data is sparse as it is, so decaying the K-factor ultimately caused more harm than good, and teams can make significant changes to their playstyle and strategy even mid-season, which the reduced K-factor was slower to pick up.
Regional Elo Multiplier
I plan on testing a regional Elo multiplier for each map in the future. This is because teams play so many of their matches regionally that the model is at risk of not fully accounting for differences in skill level between regions.
Regional differences are currently resolved through international play: teams from stronger regions win more often and therefore "bring back" the gained Elo to their regional events.
I am not convinced, however, that this is sufficient to fully calibrate the regional differences. One idea I plan to explore is giving each region a multiplier that is updated by the result of each international match, adjusting the Elos of every team in the region rather than relying on individual team Elo gains and losses trickling down to the rest of the region. This multiplier would also be independent for each map, as regions may have better strategies on some maps and not others.
Probability calculations for Best-of-3 and Best-of-5 matches using probability theory.
Single Map Win Probability
P(Team A wins) = 1 / (1 + 10^((Elo_B - Elo_A) / 1000))

The probability of a team winning a given map is calculated directly from the win probability implied by our Elo ratings.
Best-of-3 Match Probability
P(BO3 Win) = P₁P₂ + P₁(1-P₂)P₃ + (1-P₁)P₂P₃

where P₁, P₂, P₃ are the win probabilities for each map in the match sequence.
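A direct transcription in Python:

def bo3_win_probability(p1, p2, p3):
    # win maps 1 and 2, or split the first two and take the decider
    return p1 * p2 + p1 * (1 - p2) * p3 + (1 - p1) * p2 * p3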
Best-of-5 Match Probability
A team wins a BO5 by winning 3 maps before their opponent does. There are 10 distinct scenarios for achieving this victory:
3-0 Victory (1 scenario):

P₁P₂P₃

3-1 Victory (3 scenarios):

P₁P₂(1-P₃)P₄ + P₁(1-P₂)P₃P₄ + (1-P₁)P₂P₃P₄

3-2 Victory (6 scenarios):

P₁P₂(1-P₃)(1-P₄)P₅ + P₁(1-P₂)P₃(1-P₄)P₅ + P₁(1-P₂)(1-P₃)P₄P₅ + (1-P₁)P₂P₃(1-P₄)P₅ + (1-P₁)P₂(1-P₃)P₄P₅ + (1-P₁)(1-P₂)P₃P₄P₅

Full Formula:

P(BO5 Win) = [sum of all 10 terms above]

where P₁, P₂, P₃, P₄, P₅ are the win probabilities for each map in the match sequence. Each scenario represents a unique path through the match where the team wins exactly 3 maps.
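Rather than hard-coding each expansion, the same quantity can be computed recursively for any series length; a minimal sketch:

def series_win_probability(map_probs, wins_needed, wins=0, losses=0, i=0):
    # probability of reaching `wins_needed` map wins before the opponent does
    if wins == wins_needed:
        return 1.0
    if losses == wins_needed:
        return 0.0
    p = map_probs[i]
    return (p * series_win_probability(map_probs, wins_needed, wins + 1, losses, i + 1)
            + (1 - p) * series_win_probability(map_probs, wins_needed, wins, losses + 1, i + 1))

Here series_win_probability([p1, p2, p3, p4, p5], 3) reproduces the 10-term BO5 sum, and the BO3 formula falls out with wins_needed=2.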
Game-theoretic approach to simulating realistic pick/ban processes in competitive Valorant.
Strategic Assumptions
- Teams ban maps where they have the lowest win probability
- Teams pick maps where they have the highest win probability
- Alternating selection order follows standard competitive rules
- Teams have perfect information about opponent strengths
BO3 Selection Process
1. Team A bans worst map (min probability)
2. Team B bans worst map (min probability)
3. Team A picks best map (max probability)
4. Team B picks best map (max probability)
5. Team A bans worst remaining map (min probability)
6. Team B bans worst remaining map (min probability)
7. Remaining map becomes decider

BO5 Selection Process
1. Team A bans worst map (min probability)
2. Team B bans worst map (min probability)
3. Team A picks best map (max probability)
4. Team B picks best map (max probability)
5. Team A picks best remaining map (max probability)
6. Team B picks best remaining map (max probability)
7. Remaining map becomes decider
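Under these assumptions, the whole BO3 veto reduces to a short greedy routine. This is a minimal sketch (map names and the standard 7-map pool are assumed), not the site's actual implementation:

def simulate_bo3_veto(prob_a):
    # prob_a: dict of map name -> Team A win probability on a 7-map pool
    # (zero-sum assumption: Team B's probability is 1 - prob_a[m])
    pool = dict(prob_a)

    def take(best_for_a):
        # the map that is best for Team A is, zero-sum, the worst for Team B
        m = max(pool, key=pool.get) if best_for_a else min(pool, key=pool.get)
        del pool[m]
        return m

    take(False)               # 1. Team A bans its worst map
    take(True)                # 2. Team B bans its worst map (Team A's best)
    pick_a = take(True)       # 3. Team A picks its best remaining map
    pick_b = take(False)      # 4. Team B picks its best remaining map
    take(False)               # 5. Team A bans its worst remaining map
    take(True)                # 6. Team B bans its worst remaining map
    (decider,) = pool         # 7. the last map becomes the decider
    return [pick_a, pick_b, decider]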
Zero-Sum Optimality
This algorithm assumes a perfectly zero-sum game. For any given map, Team A's win probability is exactly (1 - Team B's win probability). This creates a critical property: your worst map is always your opponent's best map, and vice versa.
This perfect opposition means the greedy algorithm (always ban your worst map, always pick your best) is provably optimal: there is no strategic scenario where deviating from it improves your expected win probability. When you ban your worst map (say, a 30% win probability), you are simultaneously banning your opponent's best map (a 70% win probability for them). Any other choice would help your opponent more than it helps you.
The only scenario where strategic deviation could be beneficial is if you expect your opponent to select suboptimally, but modeling opponent mistakes is beyond the scope of this current system, which assumes rational play from both teams.
A potential future addition would be to model each team's actual historical map choice patterns and provide an option to use predicted map selection based on past behavior, rather than assuming optimal selection.
With a model for each team's actual historical map choice patterns, we could create a predicted map pool for a given matchup for a user to view. We could also create an optimal map selection algorithm for a given team to exploit predicted suboptimal map selection by the opponent.
Large-scale statistical modeling of tournament outcomes using probabilistic match simulation.
Simulation Process
# High-level Monte Carlo loop (sketch; helper names are illustrative)
for i in range(n_simulations):                # default: 10,000
    for match in tournament.matches:
        result = simulate_match_result(match.team1, match.team2, match.match_type)
        record_outcome(i, match, result)
final_probabilities = aggregate_outcomes()

Now that we have a system that can predict the outcome of any match, we create a simulate_match_result function that simulates the map selection phase, assuming optimal map selection, and then gets the win probabilities for each map using our match probability calculator.
Then all that is left is to design the tournament structure and simulate the tournament using a Monte Carlo simulation approach.
For each iteration, we simulate each match using the given win probabilities from our simulate_match_result function. Using the calculated probabilities, we generate a random outcome from the distribution. We record that result for each match, moving on to the next until we complete the full tournament.
Once we simulate the tournament in its entirety N times (we currently use 10,000), we aggregate the number of times each team made it to each stage of the tournament, giving us the probability of each team making it to each stage of the tournament.
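For instance, aggregating championship odds might look like this sketch, where simulate_tournament is a hypothetical helper that runs one full iteration and returns the winner:

from collections import Counter

N_SIMULATIONS = 10_000
champion_counts = Counter()
for _ in range(N_SIMULATIONS):
    champion_counts[simulate_tournament()] += 1   # hypothetical helper

championship_odds = {team: count / N_SIMULATIONS for team, count in champion_counts.items()}

The same counting applies to any stage (playoffs, semifinals, and so on), not just the title.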
Accurate representation of complex tournament formats including group stages, playoffs, and bracket structures.
Bracket Traversal Algorithm
def simulate_bracket(teams, bracket_structure):
    current_round = teams
    for _ in bracket_structure:
        next_round = []
        # pair off adjacent teams in bracket order for this round's matches
        for team1, team2 in zip(current_round[::2], current_round[1::2]):
            winner = simulate_match(team1, team2)
            next_round.append(winner)
        current_round = next_round
    return current_round[0]  # tournament winner
Group Stage Modeling

For tournaments with group stages, I simulate round-robin play within each group, then advance teams based on win-loss records and tiebreakers (head-to-head, map differential).
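A minimal round-robin sketch, reusing the simulate_match helper from the bracket code above (the head-to-head and map-differential tiebreakers are omitted here for brevity):

from itertools import combinations

def simulate_group(teams, n_advance):
    wins = {team: 0 for team in teams}
    for team1, team2 in combinations(teams, 2):
        winner = simulate_match(team1, team2)
        wins[winner] += 1
    # sort by round-robin record; real tiebreakers would refine this ordering
    standings = sorted(teams, key=lambda team: wins[team], reverse=True)
    return standings[:n_advance]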
Dynamic Tournament Updates
My system can incorporate completed match results, updating probabilities in real-time as tournaments progress. This provides increasingly accurate predictions as more information becomes available.
Multiple Tournament Formats
- Single Elimination Brackets
- Double Elimination Brackets
- Swiss System Tournaments
- Round Robin Groups
- Hybrid Group + Bracket Formats
My player rating system is an attempt to create a Valorant version of the NBA's DARKO metric.
The DARKO metric uses a combination of classic statistics and modern machine learning techniques that updates after each game. You can read more about it in the link above.
Specifically, I am modeling after its DPM (Daily Plus-Minus) statistic.
Overview: VPM (Valorant Plus Minus)
VPM is a composite player rating metric that combines traditional box-score statistics with time-series modeling to estimate a player's true skill level, expressed as rounds won or lost for their team relative to an average replacement player.
The system processes every map a player has ever played and outputs a single number that represents their current skill level, normalized to a standard 24-round map. The rating updates after each game, adapting to recent form while maintaining historical context.
Step 1: Statistical Components
The foundation of VPM consists of 7 per-round box-score statistics that capture different aspects of player performance:
- KPR (Kills Per Round): kills / total_rounds
- DPR (Deaths Per Round): deaths / total_rounds
- APR (Assists Per Round): assists / total_rounds
- FK Attempt Rate: (first_kills + first_deaths) / total_rounds
- FK Win Rate: first_kills / (first_kills + first_deaths)
- ADR: average damage per round (already normalized)
- KAST: Kill/Assist/Survive/Trade percentage (0-1 scale)
These components were selected because they correlate strongly with winning rounds while remaining relatively independent from each other, capturing distinct skill dimensions.
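As a sketch, computing these seven components from one map's raw box score (the dict keys here are assumptions, not the site's actual schema):

def per_round_components(stats):
    rounds = stats["total_rounds"]
    fk_attempts = stats["first_kills"] + stats["first_deaths"]
    return {
        "kpr": stats["kills"] / rounds,
        "dpr": stats["deaths"] / rounds,
        "apr": stats["assists"] / rounds,
        "fk_att_rate": fk_attempts / rounds,
        "fk_win_rate": stats["first_kills"] / fk_attempts if fk_attempts else 0.0,
        "adr": stats["adr"],    # already per-round
        "kast": stats["kast"],  # already on a 0-1 scale
    }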
Step 2: Exponentially Weighted Moving Average (EMA)
Rather than treating all games equally, the system uses time-decay to emphasize recent performance. Each component maintains its own decay factor (β) tuned to how quickly that skill changes:
EMA_component = (Σ weight_i × value_i) / (Σ weight_i)

weight_i = rounds_played × β^(days_since_game)
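A minimal sketch of this weighted average, where each game is represented as a (value, rounds_played, days_since_game) tuple (an assumed shape, not the site's actual data model):

def ema_component(games, beta):
    # weight each game by rounds played, decayed by beta per day of age
    num = sum(rounds * beta ** days * value for value, rounds, days in games)
    den = sum(rounds * beta ** days for value, rounds, days in games)
    return num / den if den else 0.0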
Step 3: Linear Regression Model

The 7 EMA components are combined using Ridge regression weights trained on historical data. The model predicts team round-winning probability based on player statistics:
VPM_raw = w₁×EMA_kpr + w₂×EMA_dpr + w₃×EMA_apr + w₄×EMA_fk_att_rate + w₅×EMA_fk_win_rate + w₆×EMA_adr + w₇×EMA_kast

Ridge regularization (L2 penalty) prevents overfitting and ensures the model generalizes well to unseen matches. The weights are trained to maximize predictive accuracy on out-of-sample data.
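A sketch of the fit with scikit-learn; the feature-matrix layout and target definition are assumptions, and random data stands in for the real training set:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))   # rows: player-games; columns: the 7 EMA components
y = rng.normal(size=500)        # stand-in target: team round-win margin

model = Ridge(alpha=1.0)        # alpha sets the L2 penalty strength
model.fit(X, y)
vpm_raw = model.predict(X)      # raw VPM estimates before Kalman smoothing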
Step 4: Kalman Filtering & Smoothing
The raw VPM values are noisy due to small sample sizes and variance in individual game performance. A Kalman filter provides optimal smoothing by modeling player skill as a latent state that evolves over time:
State Evolution:

x_t = a × x_(t-1) + process_noise

Observation Model:

y_t = x_t + measurement_noise

Kalman Gain:

K_t = P_prior / (P_prior + R_t)

Update:

x_t = x_prior + K_t × (y_t - x_prior)

The filter accounts for variable game lengths (longer games provide more information) and time gaps between matches (uncertainty increases during inactivity). The Rauch-Tung-Striebel (RTS) smoother then performs a backward pass to refine historical estimates using future information.
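A minimal forward-pass sketch consistent with the equations above and the parameters listed below; the observation tuple format and the exact measurement-noise scaling with map length are assumptions:

def kalman_filter_vpm(observations, a=1.0, q=0.05, r0=1.0):
    # observations: list of (raw_vpm, rounds_played, days_since_last_game)
    x, P = 0.0, 1.0                      # prior mean/variance of latent skill
    estimates = []
    for y, rounds, days_gap in observations:
        x_prior = a * x                               # state evolution
        P_prior = a * a * P + q * days_gap            # uncertainty grows with inactivity
        R = r0 * 24.0 / max(rounds, 1)                # longer maps -> lower measurement noise
        K = P_prior / (P_prior + R)                   # Kalman gain
        x = x_prior + K * (y - x_prior)               # update toward the observation
        P = (1 - K) * P_prior
        estimates.append(x)
    return estimates

The RTS smoother would add a backward pass over these estimates; it is omitted here for brevity.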
Step 5: Centering & Normalization
The final VPM values are optionally centered by subtracting the league-average on each date. This ensures ratings are comparable across different eras and accounts for meta shifts that affect overall scoring levels. A VPM of +2.0 means the player provides 2 rounds worth of value above average per 24-round map.
Model Parameters
EMA Decay Factors (β):
- KPR, DPR, APR: 0.992
- FK Attempt Rate: 0.990
- FK Win Rate: 0.985 (most volatile)
- ADR: 0.993 (most stable)
- KAST: 0.991
Kalman Filter Parameters:
- a = 1.0 (state transition coefficient)
- q = 0.05 (process noise per day)
- r₀ = 1.0 (base measurement noise)
- use_days = true (time-aware dynamics)
Advantages Over Traditional Metrics
- Temporal Awareness: Recent performance weighted more heavily than old data
- Uncertainty Quantification: Confidence intervals provided for each rating
- Sample Size Adjustment: Smoothing prevents overreaction to small samples
- Multi-dimensional: Captures combat, trading, impact, and consistency
- Predictive: Trained to maximize correlation with team success
- Adaptive: Model can be retrained as the game evolves
Future Improvements
While the current VPM system provides strong predictive performance, several enhancements are planned:
- Map-specific models: Different weights for different maps (Cypher better on Icebox, etc.)
- Agent adjustments: Normalize for expected agent performance (duelists vs sentinels)
- Additional Features: Incorporate additional features such as clutch percentage (1vX), Utility Efficiency, and in-game economy data.
Quantifying the quality of strategic map selection decisions using Elo-based optimality metrics.
Overview
Every competitive Valorant match begins with a pick/ban phase where teams alternately select and eliminate maps. These decisions are critical—a poor veto can cost a team the match before a single round is played. The pick/ban analysis system evaluates the quality of these decisions by comparing actual choices to optimal ones based on historical Elo ratings.
This creates a "draft score" for each team, measuring how well they maximize their competitive advantage during the veto phase. Teams that consistently make suboptimal picks/bans leave Elo rating on the table, reducing their match win probability.
Optimal Pick Strategy
When a team picks a map, they should select the map where they have the largest Elo advantage over their opponent from the available pool:
Optimal Pick = argmax(Elo_Team(map) - Elo_Opponent(map))

Elo Advantage = Elo_Team(picked_map) - Elo_Opponent(picked_map)

If a team picks a suboptimal map, the Elo lost is calculated as the difference between the advantage they could have had and the advantage they actually gained.
Optimal Ban Strategy
When banning a map, teams should eliminate the map where they have the smallest Elo advantage (or largest disadvantage):
Optimal Ban = argmin(Elo_Team(map) - Elo_Opponent(map))

Elo Lost = Actual_Advantage - Optimal_Advantage

Banning your worst map prevents your opponent from exploiting your weakness. Teams that ban mediocre maps while leaving their worst map available are making strategic errors that can be quantified in Elo points.
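A sketch of scoring one veto decision under these definitions (the function shape and per-map rating dictionaries are assumptions):

def elo_lost(action, chosen_map, pool, team_elo, opp_elo):
    # per-map Elo advantage over the opponent across the remaining pool
    advantage = {m: team_elo[m] - opp_elo[m] for m in pool}
    if action == "pick":
        # picks should maximize advantage; measure what was left on the table
        return max(advantage.values()) - advantage[chosen_map]
    # bans should minimize advantage (remove your own worst map)
    return advantage[chosen_map] - min(advantage.values())

Both branches return 0 for an optimal choice and a positive Elo-lost value otherwise.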
Analysis Process
For each completed match:
- Retrieve both teams' Elo ratings for all maps at the time the match was played
- Replay the pick/ban phase step-by-step in chronological order
- At each step, calculate what the optimal choice would have been from available maps
- Compare the actual choice to the optimal choice
- Calculate Elo lost if suboptimal (zero if optimal)
- Track cumulative Elo lost across the entire veto phase
- Store analysis results for aggregation and visualization
Cumulative Elo Lost Metric
The cumulative Elo lost metric sums up all suboptimal decisions throughout the veto phase, providing a single number that represents how much competitive advantage a team surrendered. A team with 0 Elo lost made perfect strategic decisions; a team with a large Elo lost made significant strategic errors that materially reduced their win probability.
Teams can be ranked by their average Elo lost across all matches, identifying which organizations have the best strategic preparation.