Given $$$k,m$$$ and $$$n$$$ numbers $$$a_i$$$, compute each $$$a_i\times k\bmod m$$$
I came up with a method that is almost twice as fast as the compiler implementation (when $$$m$$$ is a constant), which can effectively speed up the NTT and some DPs.
First of all,

$$$$$$a_i\times k\bmod m=m\left\{\frac{a_i\times k}m\right\}\tag1$$$$$$

where $$$\{x\}=x-\lfloor x\rfloor$$$ is the fractional part function. The principle behind equation $$$(1)$$$ is that $$$a_i\times k\bmod m$$$ depends only on the fractional part of $$$\frac{a_i\times k}m$$$.
Let $$$\frac km\approx\frac p{2^{64}}$$$, where $$$p$$$ is equal to either $$$\lfloor\frac km\times2^{64}\rfloor$$$ or $$$\lceil\frac km\times2^{64}\rceil$$$ (which one to choose, we will decide later). Then

$$$$$$a_i\times k\bmod m\approx m\left\{\frac{a_i\times p}{2^{64}}\right\}=\frac{(a_i\times p\bmod2^{64})\times m}{2^{64}}\tag2$$$$$$
There are two places where we need to choose between rounding up and rounding down:
- $$$p=\lfloor\frac km\times2^{64}\rfloor$$$ or $$$p=\lceil\frac km\times2^{64}\rceil$$$.
- The final formula in $$$(2)$$$ isn't always an integer, so we need to decide whether to round it up or down.
We choose $$$p=\lceil\frac km\times2^{64}\rceil$$$, which slightly enlarges the answer, while the final formula in $$$(2)$$$ is rounded down, which slightly lessens it. We'll prove that this pair of choices gives the correct answer as long as $$$a_i\le\frac{2^{64}}m$$$.
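Here is a sketch of the correctness argument (my reconstruction, assuming $$$0\le k<m$$$ so that $$$p$$$ fits in 64 bits):

```latex
% Write p = \lceil k\cdot 2^{64}/m \rceil, i.e. pm = k\cdot 2^{64} + e for some
% 0 \le e < m, and write a_i k = qm + r with r = a_i k \bmod m. Then
\[
  a_i p = \frac{a_i k\cdot 2^{64} + a_i e}{m}
        = q\cdot 2^{64} + \frac{r\cdot 2^{64} + a_i e}{m}.
\]
% If a_i \le 2^{64}/m, then a_i e < a_i m \le 2^{64}, so
% r\cdot 2^{64} + a_i e < ((m-1) + 1)\cdot 2^{64} = m\cdot 2^{64},
% hence the last fraction is an integer below 2^{64} and equals
% a_i p \bmod 2^{64}. Therefore
\[
  \left\lfloor \frac{(a_i p \bmod 2^{64})\, m}{2^{64}} \right\rfloor
  = \left\lfloor r + \frac{a_i e}{2^{64}} \right\rfloor
  = r = a_i k \bmod m.
\]
```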
const int P = 998244353;
void calc(int n, int k, int a[]) {
    unsigned long long p = (((unsigned __int128)k << 64) + P - 1) / P;
    for (int i = 0; i < n; i++)
        a[i] = (unsigned)a[i] * p * (unsigned __int128)P >> 64;
}
A few notes.

- The code uses `unsigned __int128`, so it can only be used on a 64-bit architecture.
- `(unsigned)a[i] * p` is automatically reduced modulo $$$2^{64}$$$.
- `* (unsigned __int128)P >> 64` is a 64-bit multiplication retaining only the high 64 bits (in the rdx register), the same speed as multiplying two `unsigned long long`s together.
- `a[i]` is converted to `unsigned` first because going from `int` to `unsigned` and then to `unsigned long long` is faster than going directly to `unsigned long long`, which requires sign extension.
Speed test.
It contains two parts.
- The first part measures reciprocal throughput: the time taken when the CPU can exploit instruction-level parallelism (modern CPUs execute independent instructions in parallel on a single core), for a total of $$$50000\times50000$$$ modular multiplications.
- The second part measures latency: the time taken when each modular multiplication must be performed sequentially, without parallelism, for a total of $$$50000\times25000$$$ modular multiplications.
Possible output:
Throughput test(50000 * 50000):
Compiler's signed modulo: 1954.83 ms
Compiler's unsigned modulo: 1746.73 ms
My modulo: 1160.47 ms
Latency test(50000 * 25000):
Compiler's signed modulo: 4329.33 ms
Compiler's unsigned modulo: 3945.29 ms
My modulo: 2397.97 ms
By the way, a few general facts.
- Constant modular multiplication is almost 4 times faster in parallel than in serial (and so is my modular multiplication).
- Converting `int` to `unsigned` and then to `long long` is faster than converting directly to `long long`, but negative numbers will give wrong results.
- `unsigned long long` modulo a constant is faster than `long long` modulo a constant.
Comparison with other methods
Comparison with Barrett reduction and Montgomery multiplication:
The purpose of my method is to compute $$$a\times b\bmod m$$$ for fixed $$$m$$$ and $$$b$$$, while Barrett reduction and Montgomery multiplication compute $$$a\times b\bmod m$$$ for fixed $$$m$$$ only. In that setting, my method is faster than the other two.
The derivation of my method is similar to Barrett reduction, so they both work when $$$m<2^{32}$$$, while Montgomery multiplication works when $$$m<2^{64}$$$ and $$$m$$$ is odd.
Extensions
It is also possible to compute $$$(a_1b_1+a_2b_2+\cdots+a_nb_n)\bmod m$$$, but $$$\sum a_i$$$ cannot exceed $$$\frac{2^{64}}m$$$.
Let $$$p_i=\lceil\frac{b_i}m\times2^{64}\rceil$$$.
Hey, thanks for your blog post! May I suggest that you place your proof and code in the blog post, perhaps under the spoiler tag, rather than making them separate blog posts? It is more convenient for the reader to see them without moving to another page, and also this way it consumes less space in "Recent actions" section.
Have you compared your idea with other existing ones, such as Montgomery multiplication or Barrett reduction? pajenegod has a very comprehensive article about them here.
Thanks for your suggestion. I've changed the external links to spoiler tags and compared my method to the others.
Daniel Lemire came up with this method about four years ago, and I describe it in my book. I heard there were some efforts to replace the algorithms used in GCC and LLVM with it, but I don't know if they went anywhere.
You can also use it for faster division-by-constant and divisibility checks, but it has the drawback that it needs 4x the machine word width for intermediates (Barrett reduction-like methods typically need 2x). It is also less efficient than Montgomery multiplication if you need to execute a chain of modular operations and not just one modular reduction. Many optimized number theory algorithms, including NTT, largely rely on (SIMDized versions of) Montgomery multiplication.
When $$$k=1$$$, my method is the same as Lemire reduction, but there are some differences:
Maybe my method is an improvement of Lemire reduction: it works for $$$m<2^{32}$$$ and is useful in competitive programming (common moduli are $$$10^9+7$$$ and $$$998244353$$$).
wow it is so strong. stO \platelet/ Orz