The easiest way to summarize the difference is to look through the tables of contents of both and take the "diff". As I'm sure you can do that yourself (and it is hard to do here, as there is no publicly, legally available version of Katz and Lindell's book online), I will omit it.
The biggest technical difference between the two texts is probably how they quantify security. There are several ways in cryptography to formalize the statement
The protocol $\Pi$ is secure against strong (but not too strong) attackers.
The easiest to work with is typically known as the asymptotic approach. Rather than working with a single protocol $\Pi$, one works with an infinite family of protocols $\{\Pi_n\}_n$, where $n$ is the "security parameter". One then quantifies the effectiveness of attacking a member of this family of protocols using a family of algorithms $\{A_n\}_n$ by some notion of "advantage" $\mathsf{adv}_{\{\Pi_n\}_n}(\{A_n\}_n)$. For example, a common notion of advantage for key recovery games is success probability.
We then say that $\{\Pi_n\}_n$ is secure against key recovery if, for any family of adversaries $\{A_n\}_n$ whose running time $T(A_n) = \mathsf{poly}(n)$ is polynomial in the security parameter, the advantage $\mathsf{adv}_{\{\Pi_n\}_n}(\{A_n\}_n)$ is a "negligible" function, i.e. it is eventually smaller than the inverse of any polynomial. A common example is $1/2^n$, but $1/2^{(\log_2 n)^2}$ is also negligible under this definition (and only barely smaller than the inverse of a polynomial function).
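To make the definition concrete, here is a small numeric sketch (the helper names are my own) checking that both example functions eventually drop below a fixed inverse polynomial, while the quasi-polynomial one can exceed it for small $n$:

```python
import math

def inv_poly(n, c):
    """Inverse polynomial 1/n^c."""
    return n ** (-c)

def negl_exp(n):
    """1/2^n -- negligible."""
    return 2.0 ** (-n)

def negl_quasi(n):
    """1/2^{(log2 n)^2} = n^{-log2 n} -- also negligible, but barely."""
    return n ** (-math.log2(n))

# For any fixed c, both negligible functions eventually drop below 1/n^c.
c = 5
n = 2 ** 8  # log2(n) = 8 > c, so n^{-log2 n} = n^{-8} < n^{-5}
assert negl_exp(n) < inv_poly(n, c)
assert negl_quasi(n) < inv_poly(n, c)

# But "eventually" matters: for small n, the quasi-polynomial
# function is still larger than 1/n^c.
n_small = 2 ** 3  # log2(n) = 3 < c
assert negl_quasi(n_small) > inv_poly(n_small, c)
```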
This is the approach used by Katz and Lindell, and it is relatively simple to use. Rather than carrying complicated expressions around (say $n^5 + 12n^3 + n\log(n)$), one can simplify them to $O(n^5)$, or even just $\mathsf{poly}(n)$.
It has the downside that it is not useful for setting parameters. If RSA is secure in the above sense, how large must the semi-primes used in modern RSA implementations be? The above notion of security implies that "large enough RSA semi-primes will yield secure crypto". What is large enough, though?
Bellare and Rogaway, by contrast, employ what is known as concrete security (also known as $(t,\epsilon)$-security). Here, for one specific protocol $\Pi$, one lets $\epsilon(t)$ be the best advantage any adversary running in time $T(A) \leq t$ may achieve. This is mildly more annoying to work with (there is more bookkeeping, and more emphasis on proving tight bounds), but it can lead to expressions from which one can meaningfully set parameters.
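One common informal way to read a concrete $(t, \epsilon)$ statement is as a "bits of security" figure, $\log_2(t/\epsilon)$. This convention is my own illustration here, not taken from either textbook:

```python
import math

# Read a concrete (t, epsilon) statement as "bits of security",
# log2(t / epsilon). (An informal convention used for illustration,
# not a definition from either textbook.)
def bits_of_security(t, epsilon):
    return math.log2(t / epsilon)

# An adversary running in 2^80 steps achieving advantage 2^-48
# corresponds to 128 bits of security under this measure.
assert bits_of_security(2 ** 80, 2.0 ** -48) == 128.0
```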
It is easy to see the difference between the two based on how they analyze any particular construction. I'll discuss randomized CTR mode. Its security is discussed in Theorem 5.14 of Bellare and Rogaway, and Theorem 3.30 of Katz and Lindell. I would highly recommend you read through both of these to get some idea of how the two formally differ. I'll briefly note that the two differ even in the stated level of security achieved.
The result proved in Bellare and Rogaway is below.
Theorem 5.14: Let $F : \mathcal{K} \times \{0, 1\}^n \to \{0, 1\}^n$ be a blockcipher
and let $\mathcal{SE} = (\mathcal{K}, \mathcal{E}, \mathcal{D})$ be the corresponding CTR\$ symmetric encryption scheme as described in
Scheme 5.6. Let $A$ be an adversary (for attacking the IND-CPA security of $\mathcal{SE}$) that runs in time
at most $t$ and asks at most $q$ queries, these totaling at most $\sigma$ $n$-bit blocks. Then there exists an
adversary $B$ (attacking the PRF security of $F$) such that
$$\mathsf{Adv}_{\mathcal{SE}}^{\mathsf{ind}\text{-}\mathsf{cpa}}(A) \leq \mathsf{Adv}_{F}^{\mathsf{prf}}(B) + \frac{\sigma^2}{2\cdot 2^n}.$$
Furthermore, $B$ runs in time at most $t' = t + O(q + n\sigma)$ and asks at most $q' = \sigma$ oracle queries.
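To see how such a bound lets one set parameters, here is a small sketch that plugs hypothetical numbers (an AES-like blockcipher with $n = 128$, and an assumed-negligible PRF advantage) into the theorem's bound:

```python
import math

# Plug concrete numbers into the CTR$ bound:
#   Adv <= Adv_prf + sigma^2 / (2 * 2^n).
# Hypothetical parameters: an AES-like blockcipher with n = 128-bit
# blocks; adv_prf is the assumed PRF advantage of the blockcipher.
def ctr_advantage_bound(sigma, n=128, adv_prf=0.0):
    return adv_prf + sigma ** 2 / (2 * 2 ** n)

# Encrypting sigma = 2^48 blocks (4 PiB at 16 bytes/block) keeps the
# birthday term at 2^96 / 2^129 = 2^-33, a comfortable margin.
assert math.isclose(ctr_advantage_bound(2 ** 48), 2.0 ** -33)

# At sigma = 2^64 blocks the bound degrades to 1/2, i.e. it no longer
# guarantees anything -- so one must rekey well before that point.
assert math.isclose(ctr_advantage_bound(2 ** 64), 0.5)
```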
The result proved in Katz and Lindell is below.
THEOREM 3.30: If $F$ is a pseudorandom function, then randomized
counter mode (as described above) has indistinguishable encryptions under
a chosen-plaintext attack.
Note that the result of Bellare and Rogaway contains strictly more information than that of Katz and Lindell. For example, if we assume that $F$ is an (asymptotically) secure PRF, then for any polynomial-time $B$ we have $\mathsf{Adv}_F^{\mathsf{prf}}(B) \leq \mathsf{negl}(n)$. We also have that the running time of $A$ is at least $\sigma$ (it must write down all of its queries), so $\sigma \leq T(A_n) \leq \mathsf{poly}(n)$.
It follows that the bound of Bellare and Rogaway implies that $\mathsf{Adv}_{\mathcal{SE}}^{\mathsf{ind}\text{-}\mathsf{cpa}}(A) \leq \mathsf{negl}(n)$, which is precisely what Katz and Lindell are claiming.
The above shows that Bellare and Rogaway's result implies that of Katz and Lindell. As mentioned, it says something strictly stronger as well. In particular, it states that a block-size $n$ primitive can only stay meaningfully secure up to $2^{n/2}$ queries (rather than $2^n$, or any other quantity one might guess). This is a consequence of the birthday bound, and is fundamentally important in symmetric cryptography.
Katz and Lindell's asymptotic security analysis does not highlight this birthday-bound insecurity, as an adversary who makes $2^{n/2}$ queries is a super-polynomial time adversary, which is not allowed in the asymptotic formalism. Note that their book discusses the birthday bound in other places, so this isn't something they forgot. It's just something that directly appears in concrete security statements, but doesn't really appear in asymptotic statements.
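The birthday phenomenon behind the $\sigma^2/(2\cdot 2^n)$ term is easy to observe empirically. A toy simulation (with a deliberately tiny block size of 16 bits, and function names of my own choosing) shows collisions among uniform $n$-bit blocks appearing after roughly $2^{n/2}$ draws:

```python
import random

# Draw uniform n-bit values until two collide, and report how many
# draws that took. The birthday bound predicts about
# sqrt(pi/2 * 2^n) ~ 1.25 * 2^{n/2} draws on average.
def draws_until_collision(n_bits, rng):
    seen = set()
    while True:
        x = rng.getrandbits(n_bits)
        if x in seen:
            return len(seen) + 1
        seen.add(x)

rng = random.Random(1)  # fixed seed for reproducibility
n_bits = 16             # toy block size; 2^{n/2} = 256
trials = [draws_until_collision(n_bits, rng) for _ in range(200)]
avg = sum(trials) / len(trials)

# The average lands near 1.25 * 2^8 = 320, far below 2^16 = 65536.
assert 2 ** (n_bits // 2) / 2 < avg < 4 * 2 ** (n_bits // 2)
```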
Note that both notions of security are heavily used in cryptographic research. In general, more theoretical communities use asymptotic notions of security, and more applied communities use concrete notions. There is a general thought that any asymptotic security result can be refined (with more work) to give a concrete bound.
These concrete bounds are not always meaningful, though. Such reductions are known as "non-tight", and they can be a big point of disagreement between theoretical cryptographers and applied cryptographers. See, for example, the paper Another look at Tightness 2. In particular, its Section 6 has been a (large) point of disagreement among cryptographers regarding the usefulness of worst-case to average-case reductions in lattice-based cryptography. This is a very technical point though, so feel free to skip it, and only take away that "meaningful asymptotic security reductions do not always lead to meaningful concrete security reductions".