Generating long-form content, such as minute-long videos and extended texts, is increasingly important for modern generative models. Block diffusion improves inference efficiency through block-wise causal attention, which enables KV caching, and has been widely adopted in diffusion language models and video generation. However, in long-context settings, block diffusion still incurs substantial overhead from repeatedly computing attention over an ever-growing KV cache. We identify an underexplored property of block diffusion: cross-step redundancy of attention within a block. Our analysis shows that attention outputs contributed by tokens outside the current block remain largely stable across diffusion steps, while block-internal attention varies significantly. Based on this observation, we propose FlashBlock, a cached block-external attention mechanism that reuses these stable attention outputs, substantially reducing attention computation and KV cache access without modifying the diffusion process. Moreover, FlashBlock is orthogonal to sparse attention and can be combined with it as a complementary residual-reuse strategy. When integrated, it substantially improves model accuracy under aggressive sparsification by offsetting much of the performance loss induced by sparsity. Experiments on diffusion language models and video generation demonstrate up to 1.44× higher token throughput and up to 1.6× lower attention time, with negligible impact on generation quality.
We empirically analyze attention behavior in block diffusion and find a clear separation: block-external attention (from tokens already generated) remains largely stable across diffusion steps, while block-internal attention (from tokens currently being denoised) varies significantly.
Method overview. We explicitly decompose attention into block-internal and block-external components. At the first diffusion step of a block, we compute and cache the output ($A_{\mathrm{out}}$) and log-normalizer ($L_{\mathrm{out}}$) from block-external tokens. In subsequent diffusion steps within the same block, we recompute attention only for the block-internal tokens ($A_{\mathrm{in}}$), reusing the cached block-external statistics and merging the two contributions in log-space.
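A minimal PyTorch sketch of this reuse is shown below, assuming single-head attention and per-row log-sum-exp statistics; the function names (`partial_attention`, `merge_attention`, `denoise_block`) and cache layout are illustrative, not the paper's actual implementation. Block-external attention is computed once per block with the first-step queries, and later steps merge the cached $(A_{\mathrm{out}}, L_{\mathrm{out}})$ with freshly computed block-internal attention via a numerically stable log-space combination.

```python
# Minimal sketch of FlashBlock-style cached block-external attention.
# Shapes are [batch, tokens, d_model]; single-head for clarity.
import torch
import torch.nn.functional as F

def partial_attention(q, k, v, scale):
    """Softmax attention over (k, v) plus the log-sum-exp of the logits."""
    logits = (q @ k.transpose(-2, -1)) * scale           # [B, Tq, Tk]
    lse = torch.logsumexp(logits, dim=-1)                 # [B, Tq]
    out = F.softmax(logits, dim=-1) @ v                   # [B, Tq, d]
    return out, lse

def merge_attention(a_out, l_out, a_in, l_in):
    """Merge two normalized partial attentions via their log-normalizers."""
    m = torch.maximum(l_out, l_in)                         # stabilizer
    w_out = torch.exp(l_out - m).unsqueeze(-1)
    w_in = torch.exp(l_in - m).unsqueeze(-1)
    return (w_out * a_out + w_in * a_in) / (w_out + w_in)

def denoise_block(q_steps, k_ext, v_ext, k_in_steps, v_in_steps, scale):
    """Run all diffusion steps of one block, reusing block-external attention."""
    a_ext = l_ext = None
    outputs = []
    for step, q in enumerate(q_steps):                     # diffusion steps in this block
        if step == 0:                                      # first step: fill the cache
            a_ext, l_ext = partial_attention(q, k_ext, v_ext, scale)
        a_in, l_in = partial_attention(q, k_in_steps[step], v_in_steps[step], scale)
        outputs.append(merge_attention(a_ext, l_ext, a_in, l_in))
    return outputs
```

Reusing $(A_{\mathrm{out}}, L_{\mathrm{out}})$ with later-step queries is an approximation; it is justified by the observed cross-step stability of block-external attention.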
FlashBlock consistently reduces per-step inference latency compared to the baseline, with the gap widening as context length increases. Notably, latency grows with context length at roughly half the rate of the original model, which implies a theoretical speedup upper bound of roughly 2×.
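As a back-of-the-envelope check of that bound (an illustrative model, not a measurement from the paper): if per-step latency is roughly affine in context length $L$, say $t_{\text{base}}(L) \approx a + bL$ for the baseline and $t_{\text{FB}}(L) \approx a + \tfrac{b}{2}L$ for FlashBlock (half the slope, as observed), then

$$\frac{t_{\text{base}}(L)}{t_{\text{FB}}(L)} = \frac{a + bL}{a + bL/2} \;\xrightarrow{\;L \to \infty\;}\; 2,$$

which is where the roughly 2× upper bound comes from.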
| Method | Block Size | TPS (tokens/s) | GSM8K | MATH500 | AIME | MBPP | HumanEval |
|---|---|---|---|---|---|---|---|
| Trado-8B-Thinking | 4 | 312 | 93.25 | 86.00 | 33.33 | 25.60 | 50.61 |
| Trado-8B-Thinking + FlashBlock | 4 | 451 | 93.12 | 85.80 | 33.33 | 33.60 | 51.22 |
| Trado-8B-Thinking | 8 | 532 | 91.74 | 82.00 | 26.67 | 29.00 | 54.27 |
| Trado-8B-Thinking + FlashBlock | 8 | 674 | 90.22 | 81.80 | 26.67 | 32.00 | 53.66 |
FlashBlock is orthogonal to sparse attention. When combined with sparse attention methods such as SparseD, it significantly improves accuracy by recovering information lost to sparsification, as shown in the table below (Table 2 of the paper); a sketch of one possible combination follows the table.
| Method | GSM8K Acc. | Δ | MATH500 Acc. | Δ | HumanEval Pass@1 | Δ |
|---|---|---|---|---|---|---|
| SparseD (d=20%) | 34.72 | – | 39.40 | – | 23.78 | – |
| SparseD + Ours | 42.68 | +7.96 | 46.80 | +7.40 | 33.54 | +9.76 |
| SparseD (d=30%) | 68.61 | – | 59.20 | – | 29.88 | – |
| SparseD + Ours | 72.25 | +3.64 | 64.60 | +5.40 | 36.59 | +6.71 |
| SparseD (d=40%) | 84.61 | – | 66.20 | – | 39.02 | – |
| SparseD + Ours | 87.26 | +2.65 | 69.40 | +3.20 | 44.51 | +5.49 |
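The paper describes the combination only at the level of a complementary residual-reuse strategy; the sketch below is one plausible instantiation, not the paper's exact formulation. It assumes the reused residual is the first-step gap between dense and sparse block-external outputs, and `dense_external_attention` / `sparse_external_attention` are placeholder helpers returning block-external attention outputs of shape [batch, block_len, d_model] for the current queries.

```python
# Hedged sketch of one possible residual-reuse combination with sparse attention.
def corrected_external_attention(q_steps, dense_external_attention,
                                 sparse_external_attention):
    """Yield a corrected block-external output for every diffusion step of a block."""
    residual = None
    for step, q in enumerate(q_steps):
        sparse_out = sparse_external_attention(q)          # cheap, computed every step
        if step == 0:
            # One dense pass per block: cache what sparsification dropped.
            residual = dense_external_attention(q) - sparse_out
        # Later steps reuse the cached residual; the result is then merged with
        # block-internal attention (e.g. via the log-space merge sketched above).
        yield sparse_out + residual
```

Under this reading, the cached residual supplies the information the sparse pattern discards, which is consistent with the accuracy recovered in the table above.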
Qualitative results show that FlashBlock maintains generation quality and temporal consistency while improving efficiency.