By Patricio Gallardo
Special thanks to research team colleagues whose contributions significantly shaped my ideas on this subject.
TL;DR
Rootstock is getting faster, and that means better performance on the most secure Bitcoin DeFi layer.
Shorter block times will make transactions confirm more quickly and improve the overall experience for users and developers. But to make blocks arrive faster without causing issues, Rootstock needs to improve how it adjusts mining difficulty.
Right now, the network changes difficulty too often and based on too little information, which leads to delays and instability. It also gets confused by “uncle” blocks (valid blocks that didn’t make it into the main chain), causing unnecessary slowdowns.
The new proposal, RSKIP517, fixes this by using a smarter system: it looks at a group of recent blocks instead of just one, and only changes difficulty when there’s a clear trend. The result? More predictable performance, fewer delays, and a stronger foundation for future speed upgrades.
Introduction
Reducing average block time is a long-standing goal for improving Rootstock’s responsiveness and user experience. Faster blocks mean quicker confirmations, better throughput, and lower latency for transaction execution.
Through merged mining, Rootstock inherits Bitcoin’s decentralization and security properties—now supported by over 80% of Bitcoin’s total hashing power. As explored in our previous article, this shared security model provides a robust foundation, but block production stability depends on more than just hash rate.
Despite increased miner participation and improvements in block template refresh strategies, the difficulty adjustment still exhibits significant noise—fluctuating by more than 10% over the course of a single day, as illustrated in the chart below (Figure 1). These fluctuations are driven by short-term signals that do not always reflect sustained changes in network conditions, leading to instability in how difficulty evolves over time.
[Figure 1: Rootstock block difficulty over a single day, fluctuating by more than 10%]
Achieving shorter block intervals requires a more stable and targeted adjustment mechanism—one that reduces transient noise and better aligns difficulty with network conditions. The proposed RSKIP517 aims to meet this objective.
How difficulty adjustment works
Rootstock adjusts mining difficulty after every block, based on a comparison between the most recent block time and a target. The core equation governing this update is:
\[
D_{B_{N+1}} = D_{B_N} \cdot F \tag{1}
\]
Here, 𝐷 represents block difficulty and 𝐹 is a factor determined by whether block production was faster or slower than expected. The value of 𝐹 incorporates both block time and uncle rate. It is calculated as:
\[
F =
\begin{cases}
1 + \alpha & \text{if } \dfrac{1 + U_{B_N}}{T_{B_N}} > \dfrac{1}{T}\\[8pt]
1 - \alpha & \text{if } \dfrac{1 + U_{B_N}}{T_{B_N}} < \dfrac{1}{T}\\[8pt]
1 & \text{otherwise}
\end{cases}
\tag{2}
\]
Where:
- 𝐹: Adjustment factor applied to the current difficulty.
- 𝛼: Fixed increment used to increase or decrease difficulty (0.0025).
- 𝑇: Target block time in seconds (14s).
- 𝐵𝑁: Last block.
- 𝑈𝐵𝑁: Number of uncle blocks referenced by last block.
- 𝑇𝐵𝑁: Actual time taken to mine last block.
If blocks arrive too quickly, difficulty increases; if they’re slower, it decreases. However, as described below, this approach introduces two important limitations.
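For concreteness, the rule can be sketched in a few lines of Python. This is an illustrative reading of Equations 1 and 2 under the definitions above, not the node's actual implementation, and the function name is ours:

```python
ALPHA = 0.0025  # fixed adjustment increment (alpha)
T = 14.0        # target block time in seconds

def next_difficulty(difficulty: float, block_time: float, uncle_count: int) -> float:
    """Equation 1: D_{N+1} = D_N * F, with F as in Equation 2."""
    # The rule treats the last block and its referenced uncles as combined
    # block production within block_time seconds.
    per_block_time = block_time / (1 + uncle_count)
    if per_block_time < T:    # production faster than one block per T seconds
        return difficulty * (1 + ALPHA)
    if per_block_time > T:    # production slower than target
        return difficulty * (1 - ALPHA)
    return difficulty         # exactly on target: unchanged
```

Note how uncles enter the denominator: a block with two uncles counts as three blocks produced in the same interval, which is exactly the conflation discussed below.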
Current limitations
Rootstock’s existing difficulty adjustment algorithm is effective for maintaining network stability under its originally intended conditions—namely, a relatively conservative block time target and modest uncle rates. However, when the goal shifts toward progressively reducing average block time, the current mechanism reveals two key limitations:
- Uncle rate conflation: The algorithm does not distinguish between main chain blocks and uncles—it treats both as indicators of block frequency. As a result, difficulty is adjusted based on the combined production rate of all block types, not just finalized main chain blocks. This design, while safe under normal conditions, introduces instability when attempting to reduce difficulty. Increased uncle production—often a consequence of lowered difficulty—is interpreted as excess block output, prompting premature difficulty increases. This feedback loop undermines efforts to achieve consistently shorter block intervals.
- Per-block sensitivity: Difficulty is adjusted using data from only the most recent block. This narrow view is statistically fragile: block times follow an exponential distribution, which is asymmetric and highly variable. Making adjustments based on single-sample deviations leads to overcorrections and persistent drift. Theoretically, this produces a systematic error: because the exponential distribution is skewed, a single block time falls below the mean more often than above it, so comparisons against a fixed target are biased.
Simulation
To validate the theoretical “per-block sensitivity” limitation of the current adjustment model, we ran a simulation comparing two algorithms: Algorithm A, which uses a windowed average block time (as proposed in RSKIP517), and Algorithm B, which represents the current Rootstock mechanism that adjusts difficulty using only the most recent block. We simulated 20,000 blocks under constant hashrate conditions, sampling block times from an exponential distribution. The target block time was set to 14 seconds, and the base difficulty adjustment factor (𝛼) was 0.0025.
Difficulty was updated every 30 blocks in Algorithm A (the windowed model) and after every block in Algorithm B (the per-block model). In both cases, block times were sampled from an exponential distribution whose rate parameter was adjusted by the difficulty algorithm. We tracked the average block time produced over the full simulation to assess how closely each algorithm converged to the intended target. This allowed us to evaluate the long-term accuracy and stability of each adjustment method.
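A condensed version of this experiment fits in a short Python script. It is a sketch under the stated assumptions (constant hashrate, exponential block times, the simplified 1 ± α update, and no uncle modeling), not the full simulator used for the figures:

```python
# Minimal sketch comparing the windowed (A) and per-block (B) strategies.
import random

T = 14.0           # target block time in seconds
ALPHA = 0.0025     # adjustment factor
N_BLOCKS = 20_000  # simulated chain length
WINDOW = 30        # window size for Algorithm A
HASHRATE = 1.0     # constant hashrate (arbitrary units)

def simulate(windowed: bool, d0: float = 14.0) -> float:
    """Return the average block time produced over the simulation."""
    difficulty = d0
    times, window = [], []
    for _ in range(N_BLOCKS):
        # With constant hashrate, the expected block time is difficulty/hashrate.
        t = random.expovariate(HASHRATE / difficulty)
        times.append(t)
        if windowed:
            window.append(t)
            if len(window) == WINDOW:  # Algorithm A: adjust every WINDOW blocks
                avg = sum(window) / WINDOW
                difficulty *= (1 + ALPHA) if avg < T else (1 - ALPHA)
                window.clear()
        else:                          # Algorithm B: adjust after every block
            difficulty *= (1 + ALPHA) if t < T else (1 - ALPHA)
    return sum(times) / len(times)

print(f"Algorithm A (windowed):  {simulate(True):6.2f} s")
print(f"Algorithm B (per-block): {simulate(False):6.2f} s")
```

Runs of this sketch should settle near T/ln 2 ≈ 20 s for Algorithm B and near 14 s for Algorithm A, consistent with the results reported below.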
The results clearly illustrate the benefit of smoothing. As shown in Figure 2, Algorithm B (which adjusts difficulty after every block) exhibits a persistent deviation, producing an average block time of 20.07 seconds, substantially higher than the intended 14-second target. By contrast, Algorithm A (30-block window) converges closely to the target with an average block time of 14.05 seconds. This demonstrates that windowed averaging not only improves accuracy but also reduces volatility in the resulting difficulty values.
Figure 2: Comparison between Algorithm A (window-based adjustment) and Algorithm B (per-block adjustment). The three plots correspond to different starting difficulty values: 14 (top-left), 8 (top-right), and 25 (bottom). Despite varying initial conditions, both algorithms converge toward the same steady-state range.
Note: Readers not familiar with probability theory may skip the rest of the section, which provides a statistical explanation for the observed deviation.
The behavior of Algorithm B (average block time of ≈20 s) stems from a statistical asymmetry inherent to the exponential distribution of block times. Specifically, the probability that a block is faster than the mean is approximately 1 − 1/e ≈ 63.2%, while the probability of being slower is only 1/e ≈ 36.8%. As a result, the algorithm more frequently interprets blocks as being “too fast” and responds with upward difficulty adjustments. This consistent bias toward overestimating hashrate leads to a feedback loop that gradually inflates block times.
Although the adjustment process is inherently stochastic, it tends to stabilize around a predictable value that differs from the intended target. In particular, the system converges toward a block time of approximately T / ln(2), where T is the nominal target. This value reflects the point at which the exponential distribution produces equal probabilities of adjustment in either direction. In other words, the adjustment mechanism reaches equilibrium not at T, but at a higher effective block time where the distribution of sampled times is symmetric with respect to the adjustment rule. For example, with T = 14 seconds, the equilibrium block time becomes approximately 14 / ln(2) ≈ 20.2 seconds—closely matching the simulation result.
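The derivation is short enough to state explicitly:

```latex
% Equilibrium of the per-block rule when block times X follow an
% exponential distribution with mean mu:
%   P(X < T) = 1 - e^{-T/mu}
% Upward and downward adjustments balance when this probability is 1/2:
\[
1 - e^{-T/\mu} = \tfrac{1}{2}
\quad\Longrightarrow\quad
\mu = \frac{T}{\ln 2} \approx \frac{14}{0.693} \approx 20.2\ \text{s}
\]
```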
The behavior of Algorithm A, which converges closely to the target, is explained by the Central Limit Theorem: averaging block times over a window produces a distribution that is approximately normal. A normal distribution is symmetric, so the windowed average is equally likely to land slightly above or below the target. This symmetry removes most of the bias from the difficulty adjustment process, making it far less prone to consistent over- or underestimation.
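A quick Monte Carlo check makes the contrast concrete (illustrative code, not part of the proposal):

```python
# Illustrative check of the bias discussed above: a single exponential
# sample falls below its mean ~63% of the time, while a 30-sample average
# does so only ~52% of the time (the CLT removes most of the skew).
import random

MEAN, WINDOW, TRIALS = 14.0, 30, 100_000

single = sum(random.expovariate(1 / MEAN) < MEAN for _ in range(TRIALS)) / TRIALS
windowed = sum(
    sum(random.expovariate(1 / MEAN) for _ in range(WINDOW)) / WINDOW < MEAN
    for _ in range(TRIALS)
) / TRIALS

print(f"P(single block time < mean): {single:.3f}   # theory: 1 - 1/e = 0.632")
print(f"P(30-block average < mean):  {windowed:.3f}   # close to 0.5")
```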
The proposal
The two limitations mentioned before form the core motivation for the new approach proposed in RSKIP517. It builds on ideas introduced in RSKIP77, which smoothed difficulty adjustments using a window, but goes a step further by decoupling uncle rate from routine difficulty updates. This windowed approach is not unique to Rootstock—Bitcoin itself uses a similar strategy, adjusting difficulty based on the average block time over the last 2,016 blocks.
RSKIP517 proposes averaging block time and uncle rate over a window of N blocks, with adjustments occurring every N blocks. This approach smooths out short-term noise and ensures that difficulty adjustments are based on meaningful trends rather than isolated fluctuations. Figure 3 illustrates the composition of the window used for difficulty calculation, showing the last N blocks and their referenced uncles.
[Figure 3: Composition of the difficulty-calculation window: the last N blocks and their referenced uncles]
In this model, block time becomes the primary driver of difficulty changes. Uncle rate is still observed but only affects the adjustment when it exceeds a predefined threshold, signaling potential network instability. Within normal ranges, uncles are effectively ignored to prevent overreaction.
The difficulty update factor 𝐹 is calculated as defined in Equation 3:
\[
F =
\begin{cases}
1 + \alpha & \text{if } R > C \ \text{ or } \ A < 0.9\,T\\
1 - \alpha & \text{if } A > 1.1\,T \ \text{ and } \ R \le C\\
1 & \text{otherwise}
\end{cases}
\tag{3}
\]
Where:
- 𝛼: Adjustment factor constant.
- 𝐶: Uncle rate threshold (maximum tolerance).
- 𝑇: Target block time.
- 𝑅: Average uncle rate over the last N blocks.
- 𝐴: Average block time over the last N blocks.
Difficulty increases when either the uncle rate 𝑅 exceeds the threshold 𝐶, or the average block time 𝐴 falls more than 10 percent below the target 𝑇. It decreases when 𝐴 exceeds 𝑇 by more than 10 percent, provided the uncle rate remains acceptable. Otherwise, difficulty remains stable.
By combining a window with conditional uncle sensitivity, the algorithm gains both stability and responsiveness. It remains adaptive to genuine network conditions while avoiding the feedback loops that previously made it difficult to lower block times without unintended side effects.
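In code, the proposed factor can be sketched as follows, using the parameter values suggested in the next section. This illustrates Equation 3 and is not the reference implementation; the function name and structure are ours:

```python
# Sketch of the proposed update factor F (Equation 3).
ALPHA = 0.005  # adjustment factor constant (alpha)
C = 0.7        # uncle rate threshold (maximum tolerance)
T = 20.0       # target block time in seconds

def update_factor(A: float, R: float) -> float:
    """Return F from the window averages: A = block time, R = uncle rate."""
    if R > C or A < 0.9 * T:
        return 1 + ALPHA  # too many uncles, or blocks too fast: raise difficulty
    if A > 1.1 * T:
        return 1 - ALPHA  # blocks too slow (and uncles acceptable): lower it
    return 1.0            # within the +/-10% band: hold difficulty steady
```

Because the uncle check comes first, a high uncle rate always wins: difficulty is never lowered while the network shows signs of instability.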
Suggested parameters
To evaluate the real-world behavior of the new model, RSKIP517 proposes conservative initial parameters:
- Target block time (T): 20 seconds
- Uncle rate threshold (C): 0.7 (i.e., 70 uncles per 100 blocks)
- Difficulty adjustment factor (α): 0.005
- Window size (N): 30 blocks
These values are chosen to reduce average block time while minimizing the risk of excessive uncle generation. With a target of 20 seconds, the network is expected to progress gradually toward shorter intervals—without triggering the reactive increases in difficulty seen in the current model. This setup improves the predictability of difficulty progression and paves the way for further refinements based on empirical data.
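As a rough sanity check on how conservative this configuration is, note that difficulty can move by at most ±0.5% per 30-block window, which bounds its hourly drift (back-of-the-envelope arithmetic, not part of the proposal):

```python
# Illustrative bound on difficulty drift under the suggested parameters.
ALPHA, N, T = 0.005, 30, 20.0

seconds_per_window = N * T                        # 600 s per adjustment window
adjustments_per_hour = 3600 / seconds_per_window  # 6 adjustments per hour
max_hourly_drift = (1 + ALPHA) ** adjustments_per_hour - 1

print(f"Adjustments per hour: {adjustments_per_hour:.0f}")
print(f"Max difficulty drift per hour: {max_hourly_drift:.2%}")  # ~3.0%
```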
The window size of 𝑁=30 was selected based on simulation results evaluating how different values influence difficulty accuracy (see Figure 4 below). We measured both the mean absolute error (MAE) and root mean square error (RMSE) of the difficulty over time. The results show that as the window size increases, error metrics decrease significantly—indicating better stability. However, beyond 𝑁=30, the improvements become progressively smaller. This point represents a practical trade-off between responsiveness and smoothing, offering strong stability benefits without introducing excessive delay in the adjustment response.
[Figure 4: MAE and RMSE of difficulty as a function of window size N]
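A sweep of this kind can be sketched as follows; varying `alpha` instead of the window size turns the same harness into the α study shown in Figure 5 below. As before, this assumes constant hashrate and the simplified update rule, with error measured against the equilibrium difficulty implied by that hashrate:

```python
# Sketch of the window-size sweep behind Figure 4 (and, varying alpha,
# the study in Figure 5). Errors are measured against the equilibrium
# difficulty HASHRATE * T at which blocks average T seconds.
import random

T, HASHRATE, N_BLOCKS = 14.0, 1.0, 20_000
TARGET_D = HASHRATE * T

def sweep(window_size: int, alpha: float = 0.0025, d0: float = 10.0):
    """Return (MAE, RMSE) of difficulty relative to TARGET_D."""
    difficulty, window, errors = d0, [], []
    for _ in range(N_BLOCKS):
        window.append(random.expovariate(HASHRATE / difficulty))
        if len(window) == window_size:
            avg = sum(window) / window_size
            difficulty *= (1 + alpha) if avg < T else (1 - alpha)
            window.clear()
        errors.append(abs(difficulty - TARGET_D))
    mae = sum(errors) / len(errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return mae, rmse

for n in (1, 5, 15, 30, 60, 120):
    mae, rmse = sweep(n)
    print(f"N={n:4d}  MAE={mae:6.3f}  RMSE={rmse:6.3f}")
```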
To determine the optimal value for the difficulty adjustment parameter 𝛼, we simulated several configurations of Algorithm A with a fixed window size of 30 blocks. Each simulation began with an initial difficulty of 10, modeling a 40% increase in hashrate, while targeting a 14-second block interval. We evaluated each configuration based on how quickly it converged, the long-term deviation from the target, and post-convergence stability.
[Figure 5: Convergence and post-convergence stability of Algorithm A for different values of α (N = 30)]
As shown in Figure 5, 𝛼=0.005 reaches the target difficulty in approximately 3,990 blocks while maintaining a low mean absolute error (MAE = 3.89) and root mean square error (RMSE = 3.99). Although 𝛼=0.0025 results in slightly lower error (MAE = 3.80, RMSE = 3.94), it takes over 13,000 blocks to converge. Higher values (e.g., 0.01–0.025) achieve faster convergence but introduce more volatility after reaching the target.
With the final parameters selected for RSKIP517 (α = 0.005, window size = 30), the difficulty adjustment algorithm demonstrates the intended behavior under simulated conditions. As shown in Figure 6, the difficulty rises smoothly from an initial value of 10 and converges toward the target after approximately 4,050 blocks. Post-convergence, the adjustment remains stable with bounded oscillations around the target level—demonstrating responsiveness without overcorrection.
[Figure 6: Difficulty convergence and stability with the final RSKIP517 parameters (α = 0.005, N = 30)]
Future directions
While RSKIP517 introduces a more stable and time-centric difficulty adjustment, it also lays the foundation for future improvements in block production.
- Monitoring and lowering the block time target
With the new mechanism in place, we can safely monitor network behavior under the proposed parameters. If uncle rates remain within acceptable thresholds, we may gradually reduce the block time target (e.g., from 20s toward 10s) to further improve transaction throughput and responsiveness—while preserving security and finality.
- Enhancing mining coordination
Better block propagation and improved coordination among merged miners will complement the protocol-side improvements. Future work may explore incentives or protocol tweaks to minimize uncles, thereby improving the effectiveness of the proposed adjustment mechanism.
- Enhancing simulation fidelity
Future simulation work will incorporate chain reorganizations to better understand the risks introduced by lower block time targets. Modeling propagation delays and network-induced forks will provide more realistic assessments of finality and stability under accelerated block production.
Conclusion
RSKIP517 refines Rootstock’s difficulty adjustment by decoupling uncle rate from main block timing and applying changes based on statistically grounded trends. The new mechanism replaces reactive, noisy updates with a smoothed, time-centric strategy that enables more predictable and scalable block production. By selecting parameters grounded in simulation and statistical insight, this proposal prepares the network for faster block times without compromising stability.