TL;DR: The quote is wrong. Repair it by changing $k^2$ to $k$, or by counting bit operations rather than modular additions.
We can perform multiplication modulo $p$ of two $k$-bit numbers $A$ and $B$ with at most $2k$ modular additions modulo $p$, as follows:
- let $C:=0$
- for $i$ from $k-1$ down to $0$
  - let $C:=C+C\bmod p$
  - if bit $a_i$ of $A$ is set, where $A=\sum a_i2^i$, then let $C:=C+B\bmod p$
- output $C$, which is $A\cdot B\bmod p$
Note: further, by keeping track of when $C=0$ (the doubling can then be skipped, and the first addition of $B$ is a mere copy), the number of modular additions is reduced to at most $2k-2$.
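Here is a minimal Python sketch of that left-to-right double-and-add multiplication, with an explicit counter of modular additions (the function name and the counter are mine, purely for illustration):

```python
def mulmod_by_additions(A, B, p, k):
    """Compute A*B mod p using only modular additions modulo p.

    A and B are residues modulo p and A fits in k bits; at most 2k modular
    additions are performed, and at most 2k-2 once operations on C == 0
    are skipped (the doubling is a no-op and the first addition is a copy).
    """
    C = 0
    additions = 0
    for i in range(k - 1, -1, -1):       # scan bits of A, most significant first
        if C != 0:
            C = (C + C) % p              # doubling step: one modular addition
            additions += 1
        if (A >> i) & 1:                 # bit a_i of A is set
            if C != 0:
                C = (C + B) % p          # addition step: one modular addition
                additions += 1
            else:
                C = B                    # 0 + B is a copy, not a real addition
    return C, additions


# Quick check of correctness and of the 2k-2 bound.
p, k = 1000003, 20
A, B = 123456, 654321
C, n = mulmod_by_additions(A, B, p, k)
assert C == A * B % p and n <= 2 * k - 2
```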
Hence, if in the question's quoted sentence we read the two occurrences of “basic operation” as the same thing (modular addition modulo $p$), then where the quote has “(approximately) $k^2\,$” we should read “(at most) $2k\,$”; and where it has “a small multiple of $k^2\cdot2^k\,$” we should read “a small multiple of $k\cdot2^k\,$”.
Note: A comparatively minor issue is that the quote does not bound the size of $p$. We could have $k$-bit numbers with $p$ much larger than $k$ bits, and then the $k\cdot2^k$ bound no longer applies.
As an alternative, probably better and closer to the intent, we can keep the quote's formulas and repair its consistency by rewriting it as follows:
Assume we perform modular multiplication in the group $\mathbb F^*_p$ using modular additions modulo $p$. Assume $p-1$ is a $k$-bit integer, so that a member of the group has a $k$-bit representation. Then modular multiplication of two numbers takes at most $2k$ modular additions. Each such modular addition requires a small multiple of $k$ bit operations. Therefore modular multiplication performed using modular addition requires a small multiple of $k^2$ bit operations, and solving the DLP by trial-and-error takes a small multiple of $k^2\cdot2^k$ bit operations.
If in that revised quote we replace my “bit operation” with “basic operation” in the sense of instructions of common computers with $w$-bit words (e.g. $w=64$), then the “small multiple of” is actually “significantly less than”. That's because modular addition modulo a $k$-bit number requires a small multiple of $k/w$ basic operations, thus modular multiplication decomposed as modular additions requires a small multiple of $k^2/w$ basic operations.
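As an illustration of that $k/w$ count, here is a hedged sketch (the limb representation and helper names are my own, not any particular library's): modular addition of two residues stored as $w$-bit limbs takes a limb-wise add-with-carry, a comparison, and a conditional subtraction of $p$, each touching about $\lceil k/w\rceil$ words.

```python
W = 64                                   # word size w in bits
MASK = (1 << W) - 1

def to_limbs(x, n):
    """Little-endian list of n w-bit limbs of x."""
    return [(x >> (W * i)) & MASK for i in range(n)]

def addmod_limbs(a, b, p_limbs):
    """Modular addition of two residues given as limb lists.

    Word operations: about n for the add-with-carry, about n for the
    comparison, and about n for the conditional subtraction, with
    n = ceil(k/w); i.e. a small multiple of k/w in total.
    """
    n = len(p_limbs)
    ops = 0
    s, carry = [], 0
    for i in range(n):                   # limb-wise addition with carry
        t = a[i] + b[i] + carry
        s.append(t & MASK)
        carry = t >> W
        ops += 1
    ops += n                             # compare the sum with p (most significant limb first)
    if carry or s[::-1] >= p_limbs[::-1]:
        borrow = 0
        for i in range(n):               # conditional subtraction of p, with borrow
            t = s[i] - p_limbs[i] - borrow
            s[i] = t & MASK
            borrow = 1 if t < 0 else 0
            ops += 1
    return s, ops


# Quick check with an illustrative 127-bit modulus (n = 2 limbs of 64 bits).
p = (1 << 127) - 1
a, b = p - 5, p - 7
s, ops = addmod_limbs(to_limbs(a, 2), to_limbs(b, 2), to_limbs(p, 2))
assert sum(l << (W * i) for i, l in enumerate(s)) == (a + b) % p
```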
And that goes further down to a small multiple of $k^2/w^2$ basic operations if we use the textbook algorithm for modular multiplication, that is: perform textbook multiplication, then textbook Euclidean division, and keep the remainder. In my opinion, that's what a textbook should consider, rather than positing an inefficient and (thus) seldom-used modular multiplication algorithm, as the quote explicitly does with “if we treat modular addition as our basic operation”.
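Again a hedged illustration rather than a real bignum implementation: with $n=\lceil k/w\rceil$ limbs per operand, the schoolbook product takes $n^2$ single-word multiplications, and the Euclidean division (delegated to Python below, but itself a small multiple of $n^2$ word operations when done the textbook way) keeps only the remainder.

```python
W = 64                                    # word size w in bits
MASK = (1 << W) - 1

def mulmod_schoolbook(a, b, p):
    """Textbook modular multiplication: schoolbook multiply, then divide and keep the remainder.

    With n = ceil(k/w) limbs per operand, the product takes n*n single-word
    multiplications, i.e. a small multiple of k^2/w^2 word operations.
    """
    n = (p.bit_length() + W - 1) // W
    al = [(a >> (W * i)) & MASK for i in range(n)]   # little-endian limbs of a
    bl = [(b >> (W * i)) & MASK for i in range(n)]   # little-endian limbs of b
    word_muls = 0
    prod = 0
    for i in range(n):                    # schoolbook double loop: n*n word products
        for j in range(n):
            prod += (al[i] * bl[j]) << (W * (i + j))
            word_muls += 1
    # Euclidean division, keeping the remainder (delegated to Python's % here).
    return prod % p, word_muls


# With a 255-bit modulus and w = 64, n = 4, so 16 word products per multiplication.
p = (1 << 255) - 19
a, b = 3**100, 5**100
r, m = mulmod_schoolbook(a, b, p)
assert r == a * b % p and m == 16
```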
Note: There are yet less costly ways to perform modular multiplication, e.g. by combining Karatsuba multiplication and a one-time precomputation of $\left\lceil2^{2k}/p\right\rceil$. And there are algorithms for the DLP modulo $p$ that are not even exponential in $k$. But it's reasonable not to consider that in a textbook when introducing notation for the cost of algorithms. What's disputable is using modular addition as the unit of cost, as stated in the beginning of the quote. And it's wrong to silently get back to the more common bit (or fixed-width word) operation.
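To make that precomputation remark concrete, here is a sketch in the style of Barrett reduction (my own illustrative code; I use $\left\lfloor2^{2k}/p\right\rfloor$ rather than the ceiling, which keeps the correction to at most one subtraction, and I let Python's built-in integers, which use Karatsuba multiplication for large operands, handle the products):

```python
def barrett_setup(p):
    """One-time precomputation: mu = floor(2^(2k)/p), with k the bit length of p."""
    k = p.bit_length()
    return k, (1 << (2 * k)) // p

def barrett_mulmod(a, b, p, k, mu):
    """Multiply two residues 0 <= a, b < p with no division at multiplication time.

    The quotient estimate q underestimates the true quotient by at most 1,
    so a single correcting subtraction suffices (the loop is kept for clarity).
    """
    x = a * b                             # double-length product, x < p^2 < 2^(2k)
    q = (x * mu) >> (2 * k)               # quotient estimate: multiply and shift only
    r = x - q * p
    while r >= p:                         # at most one iteration
        r -= p
    return r


# Usage: precompute once per modulus, then reuse for every multiplication.
p = (1 << 255) - 19
k, mu = barrett_setup(p)
a, b = 3**100 % p, 5**100 % p
assert barrett_mulmod(a, b, p, k, mu) == a * b % p
```

After the one-time division that computes $mu$, each modular multiplication costs only multiplications, shifts, and at most one subtraction.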