4

In Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman's book An Introduction to Mathematical Cryptography, 2nd edition, page 78, it says:

If we are working in the group $\mathbb F^∗_p$ and if we treat modular addition as our basic operation, then modular multiplication of two $k$-bit numbers takes (approximately) $k^2$ basic operations, so solving the DLP by trial-and-error takes a small multiple of $k^2\cdot2^k$ basic operations.

My question is: why does modular multiplication of two $k$-bit numbers take (approximately) $k^2$ basic operations?

kelalaka
  • 49,797
  • 12
  • 123
  • 211
fa william
  • 73
  • 4

2 Answers

4

TL;DR: The quote is wrong. Repair it by changing $k^2$ to $k$, or by counting bit operations rather than modular additions.


We can perform multiplication modulo $p$ of two $k$-bit numbers $A$ and $B$ with at most $2k$ modular additions modulo $p$, as follows:

  • let $C:=0$
  • for $i$ from $k-1$ down to $0$
    • let $C:=C+C\bmod p$
    • if bit $a_i$ of $A$ is set, where $A=\sum a_i2^i$
      • let $C:=C+B\bmod p$

Note: further, by keeping track of when $C=0$, the number of modular additions is reduced to at most $2k-2$.
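
Here is a minimal Python sketch of that procedure (the function name is my choice, and Python's arbitrary-precision integers stand in for the $k$-bit operands):

    def mulmod_by_additions(a: int, b: int, p: int) -> int:
        """Compute a*b mod p using only modular additions (double-and-add).

        For k-bit a this performs at most 2k modular additions:
        one doubling per bit of a, plus one addition per set bit of a.
        """
        c = 0
        for i in reversed(range(a.bit_length())):  # scan bits of a, most significant first
            c = (c + c) % p                        # doubling: one modular addition
            if (a >> i) & 1:                       # bit a_i of a = sum(a_i * 2^i)
                c = (c + b) % p                    # conditional add: one modular addition
        return c

    assert mulmod_by_additions(6, 5, 7) == (6 * 5) % 7   # 6*5 = 30, and 30 mod 7 = 2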

Hence, if in the question's quoted sentence we read the two occurrences of “basic operation” as the same thing (modular addition modulo $p$), then where the quote has “(approximately) $k^2$” we should read “(at most) $2k$”; and where it has “a small multiple of $k^2\cdot2^k$” we should read “a small multiple of $k\cdot2^k$”.

Note: A comparatively minor issue is that the quote does not bound the size of $p$. We could have $k$-bit numbers with $p$ much larger than $k$ bits, and then $k\cdot2^k$ no longer applies.


As an alternative, probably better and closer to the intent, we can keep the quote's formulas and repair its consistency by rewriting it as follows:

Assume we perform modular multiplication in the group $\mathbb F^*_p$ using modular additions modulo $p$. Assume $p-1$ is $k$-bit so that a member of the group has a $k$-bit representation. Then modular multiplication of two numbers takes at most $2k$ modular additions. Each such modular addition requires a small multiple of $k$ bit operations. Therefore modular multiplication performed using modular addition requires a small multiple of $k^2$ bit operations, and solving the DLP by trial-and-error takes a small multiple of $k^2\cdot2^k$ bit operations.
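
Spelling out the accounting in that rewritten sentence (with $c$ some small constant for the per-addition cost): one modular multiplication costs at most $2k$ modular additions, each about $c\,k$ bit operations, hence about $2c\,k^2$ bit operations; and trial-and-error, at roughly one modular multiplication per candidate exponent, costs a small multiple of $k^2\cdot2^k$ bit operations.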

If in that revised quote we replace my “bit operation” with “basic operation” in the sense of instructions of common computers with $w$-bit words (e.g. $w=64$), then the “small multiple of” is actually “significantly less than”. That's because modular addition modulo a $k$-bit number requires a small multiple of $k/w$ basic operations, thus modular multiplication decomposed as modular additions requires a small multiple of $k^2/w$ basic operations.
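
For instance (my numbers, just to illustrate the orders of magnitude): with $k=3072$ and $w=64$, one modular addition is about $k/w=48$ word operations, so a modular multiplication decomposed into $2k$ modular additions is about $2k\cdot k/w\approx3\cdot10^5$ word operations, i.e. a small multiple of $k^2/w$.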

And that goes further down to a small multiple of $k^2/w^2$ basic operations if we use the textbook algorithm for modular multiplication, that is, perform textbook multiplication, then textbook Euclidean division, and keep the remainder. In my opinion, that's what a textbook should consider, rather than positing an inefficient and (thus) seldom used modular multiplication algorithm, as the quote explicitly does with “if we treat modular addition as our basic operation”.
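
A rough Python sketch of that textbook approach (the function name is mine; Python's built-in `*` and `%` on integers stand in here for textbook multiplication and Euclidean division):

    def mulmod_textbook(a: int, b: int, p: int) -> int:
        """Textbook modular multiplication: compute the full (up to 2k-bit)
        product, then one Euclidean division by p, keeping the remainder."""
        product = a * b      # multiplication of two k-bit numbers
        return product % p   # Euclidean division by p; keep the remainder

    assert mulmod_textbook(6, 5, 7) == 2   # 30 = 4*7 + 2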

Note: There are yet less costly ways to perform modular multiplication, e.g. by combining Karatsuba multiplication with a one-time precomputation of $\left\lceil2^{2k}/p\right\rceil$. And there are algorithms for the DLP modulo $p$ that are not even exponential in $k$. But it's reasonable not to consider that in a textbook when introducing notation for the cost of algorithms. What's disputable is using modular addition as the unit of cost, as stated at the beginning of the quote. And it's wrong to silently fall back to the more common bit (or is it fixed-width?) operation.
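
As an illustration of that precomputation, here is a rough Barrett-style reduction sketch (my own, using the floor rather than the ceiling, and with Python's built-in multiplication standing in for Karatsuba):

    def barrett_setup(p: int):
        """One-time precomputation: m = floor(2^(2k)/p) for k = bit length of p."""
        k = p.bit_length()
        return k, (1 << (2 * k)) // p

    def barrett_reduce(x: int, p: int, k: int, m: int) -> int:
        """Reduce 0 <= x < 2^(2k) modulo p using multiplications and shifts only."""
        q = (x * m) >> (2 * k)   # underestimates x // p by at most 2
        r = x - q * p
        while r >= p:            # hence at most two correction subtractions
            r -= p
        return r

    k, m = barrett_setup(7)
    assert barrett_reduce(6 * 5, 7, k, m) == 2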

fgrieu
  • 149,326
  • 13
  • 324
  • 622
3
We want to find $x$ such that $g^x = h$, i.e. the discrete logarithm of $h$ to the base $g$ in $\mathbb F_p^*$, where $g$ has order $n$.

For trial-and-error (brute force), we compute $g^x$ for $x=1,2,3,\dots,n$ until we hit $h$. This is the first algorithm mentioned in the book.

The book first counts $n$ computations of $g^x$, since we iterate over the exponent $x$.

Since machines have no basic operation that computes $g^x$ for arbitrary bases, we need to count machine operations (addition, multiplication, etc.), and most of the time multiplication is considered the most expensive one.

Now, consider $n$ and $x$ as $k$-bit numbers, so their maximum value is $2^k-1$; we may take it as $2^k$ for simplicity. With the repeated-squaring algorithm or its variants (Section 1.3.2 of the book), computing $g^x$ takes a small multiple of $\log_2 x$ modular multiplications. So the trial-and-error search takes about $k\cdot 2^k$ multiplications.
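
A minimal Python sketch of that trial-and-error search (function names are mine, not from the book), with each $g^x$ computed by square-and-multiply:

    def power_mod(g: int, x: int, p: int) -> int:
        """Square-and-multiply: a small multiple of log2(x) modular multiplications."""
        result, base = 1, g % p
        while x:
            if x & 1:
                result = (result * base) % p   # multiply step for a set exponent bit
            base = (base * base) % p           # squaring step
            x >>= 1
        return result

    def dlog_trial_and_error(g: int, h: int, p: int, n: int):
        """Brute force: try x = 1, 2, 3, ..., n until g^x = h (mod p)."""
        for x in range(1, n + 1):
            if power_mod(g, x, p) == h:
                return x
        return None

    assert dlog_trial_and_error(3, 5, 7, 6) == 5   # 3^5 = 243 = 5 (mod 7)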

Now the book turns to modular multiplication and considers addition as the basic operation in $\mathbb F_p^*$: it uses the fact that modular multiplication of two $k$-bit numbers takes approximately $k^2$ basic operations (additions).

So in total we get $k^3\cdot 2^k$ basic operations, in contrast to the book.

My question is: why does modular multiplication of two $k$-bit numbers take (approximately) $k^2$ basic operations?

There are more efficient methods (integer multiplication is now $\mathcal{O}(n\log n)$); however, the book considers schoolbook multiplication on the bit representation of the numbers.

Consider $\mathbb F_{7}^*$, where you want to multiply $a,b \in \mathbb F_{7}^*$, say $a = 110_2 = 6$ and $b = 101_2 = 5$:

    110
x   101
-------    
    110
   000
+ 110
-------
  11110

We write a copy of the multiplicand, or a row of zeros, depending on the current bit of the multiplier, starting from the rightmost bit of the multiplier. Most of the time the cost of this step is omitted. The rest is the additions, and there are about $k^2$ bit additions (picture the $k$ partial products of $k$ bits each as a square to see that).
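
A small Python sketch of this schoolbook binary multiplication (the function name is mine), matching the layout above:

    def schoolbook_multiply(a: int, b: int) -> int:
        """Schoolbook binary multiplication: for each set bit of the multiplier b,
        add a shifted copy of the multiplicand a. For k-bit inputs this is about
        k additions of roughly k-bit rows, i.e. about k^2 bit operations."""
        result = 0
        for i in range(b.bit_length()):   # scan multiplier bits, rightmost first
            if (b >> i) & 1:
                result += a << i          # add the shifted copy of the multiplicand
        return result

    assert schoolbook_multiply(0b110, 0b101) == 0b11110   # 6 * 5 = 30, as in the layout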

This is not the whole story. The book omits the final or intermediate reductions modulo $p$. Can you see that $11110_2 = 30 > 7$ in our example? (The reduction gives $30 \bmod 7 = 2$.) The cost of the reduction is omitted there, and the whole thing is said to cost (approximately) $k^2$ basic operations. This is common for rough estimates with Big-O notation, which the book uses implicitly.

The “approximately” can hide big constants or small powers. Calculating the exact value is not required to show how big the brute force is, and usually the comparisons are made under the same settings. Sometimes the cost of an operation becomes important enough to affect the overall performance; then it is considered too.

For further study, keep this free book at your side for better algorithms.


Note: I've given an answer to why the book says approximately $k^2$. This is not optimal; it is actually an upper bound. There are much better methods.

Depending on the environment, one must be careful to select the correct algorithm. For example, for an ASIC, I've used one level of extended Karatsuba, then a Wallace tree for multiplication, and a Sklansky adder for the reduction, where the base was special.

kelalaka
  • 49,797
  • 12
  • 123
  • 211