The companion matrix of a (monic) polynomial is a matrix whose characteristic polynomial is that polynomial.
The eigenvalues of a matrix are the roots of its characteristic polynomial. Be careful if one of the polynomials has a double (or higher-multiplicity) largest root: a companion matrix with a repeated root is not diagonalizable, which degrades the convergence of the method below. (It turns out that the multiplicity of the other, smaller, roots doesn't matter here.) Multiple roots can be detected: let $f'$ be the derivative of the polynomial $f$. Then the polynomial GCD, $\mathrm{gcd}(f,f')$, has exactly the multiple roots of $f$ as its roots. (Then use the method of this Answer to find the GCD's largest real root and check whether it coincides with the largest root of $f$, the case that causes trouble. See here for more about detecting and possibly removing multiple roots.)
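As a concrete sketch of that check (assuming SymPy for the exact polynomial GCD; $P_1$ is the polynomial from Example 1, with coefficients read off its companion matrix below):

```python
import sympy as sp

x = sp.symbols('x')
# P_1 read off the companion matrix c_1 below
P1 = x**5 - 20*x**4 + 147*x**3 - 505*x**2 + 784*x - 432

# gcd(f, f') has exactly the multiple roots of f as its roots
g = sp.gcd(P1, sp.diff(P1, x))
if sp.degree(g, x) == 0:
    print("P_1 is squarefree, so a multiple largest root cannot occur")
else:
    print("P_1 has multiple roots; check the largest real root of", g)
```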
Power iteration can be used to find the eigenvalue of largest magnitude, which here is the largest root. Note that the companion matrix is sparse (a subdiagonal of ones plus one dense column), so its products with a vector can be formed more cheaply than by general-purpose matrix multiplication.
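For instance, with the layout used below (ones on the subdiagonal, coefficients in the last column), a companion-matrix-times-vector product costs only $O(n)$ operations. A minimal sketch, assuming NumPy (the function name is mine):

```python
import numpy as np

def companion_times(last_col, v):
    """C @ v where C has ones on the subdiagonal and `last_col` as its
    last column -- O(n) work instead of the O(n^2) dense product."""
    w = np.empty_like(v, dtype=float)
    w[0] = last_col[0] * v[-1]              # first row touches only the last column
    w[1:] = v[:-1] + last_col[1:] * v[-1]   # shift v down, add last-column term
    return w
```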
For your Example 1, the companion matrices are
$$ c_1 = \begin{pmatrix}
0 & 0 & 0 & 0 & 432 \\
1 & 0 & 0 & 0 & -784 \\
0 & 1 & 0 & 0 & 505 \\
0 & 0 & 1 & 0 & -147 \\
0 & 0 & 0 & 1 & 20 \\
\end{pmatrix} $$
and
$$ c_2 = \begin{pmatrix}
0 & 0 & 0 & 0 & 420 \\
1 & 0 & 0 & 0 & -769 \\
0 & 1 & 0 & 0 & 498 \\
0 & 0 & 1 & 0 & -146 \\
0 & 0 & 0 & 1 & 20 \\
\end{pmatrix} $$
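These matrices can be generated mechanically from the coefficient lists. A small sketch, assuming NumPy (the helper name is mine; the coefficients are read off the last columns above):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix, in the layout above, of the monic polynomial
    x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[1]*x + coeffs[0]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)   # negated coefficients in the last column
    return C

# P_1(x) = x^5 - 20x^4 + 147x^3 - 505x^2 + 784x - 432
c1 = companion([-432.0, 784.0, -505.0, 147.0, -20.0])
# P_2(x) = x^5 - 20x^4 + 146x^3 - 498x^2 + 769x - 420
c2 = companion([-420.0, 769.0, -498.0, 146.0, -20.0])
```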
Starting with the random vector $\begin{pmatrix} -4.83668 \\ -1.11942 \\ 0.47052 \\ -0.410958 \\ -1.86399 \end{pmatrix}$ and applying power iteration ten times, we get the vectors
\begin{align*}
v_1 &= \begin{pmatrix} -0.453067 \\ 0.768942 \\ -0.439173 \\ 0.102499 \\ -0.00891463 \end{pmatrix} \\
v_2 &= \begin{pmatrix} -0.448205 \\ 0.769188 \\ -0.443133 \\ 0.104923 \\ -0.00929481 \end{pmatrix}
\end{align*}
Then, dividing elementwise, and also comparing Euclidean (vector) norms,
\begin{align*}
\frac{c_1 \cdot v_1}{v_1} &= \begin{pmatrix} 8.5001{\dots} \\ 8.4999{\dots} \\ 8.4999{\dots} \\ 8.5003{\dots} \\ 8.5021{\dots} \end{pmatrix} \\
\frac{||c_1 \cdot v_1||}{||v_1||} &= 8.5000{\dots} \\
\frac{c_2 \cdot v_2}{v_2} &= \begin{pmatrix} 8.7098{\dots} \\ 8.7098{\dots} \\ 8.7098{\dots} \\ 8.7102{\dots} \\ 8.7116{\dots} \end{pmatrix} \\
\frac{||c_2 \cdot v_2||}{||v_2||} &= 8.70989{\dots}
\end{align*}
The agreement of the elementwise division tells us that ten iterations was sufficient. The norm ratio tells us that the largest root of $P_1$ is near $8.50$ (keeping only the digits on which the elementwise ratios agree). (The actual largest root of $P_1$ is $8.50828316{\dots}$.) Likewise, the largest root of $P_2$ is near $8.71$ (actual: $8.71565660{\dots}$). So the iteration has told us that $P_2$ has the larger largest real root.
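An end-to-end sketch of the computation above, assuming NumPy (the helper and function names are mine; the coefficient lists are read off $c_1$ and $c_2$, and the start vector is the one shown). It should reproduce the ratios above up to rounding:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[0], laid out as above."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(coeffs)
    return C

def power_estimates(C, v0, iters=10):
    """Run `iters` normalized power-iteration steps from v0, then report the
    elementwise ratios (C v)/v and the norm ratio ||C v||/||v||."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        v = C @ v
        v = v / np.linalg.norm(v)
    w = C @ v
    return w / v, np.linalg.norm(w) / np.linalg.norm(v)

c1 = companion([-432.0, 784.0, -505.0, 147.0, -20.0])   # P_1
c2 = companion([-420.0, 769.0, -498.0, 146.0, -20.0])   # P_2
v0 = np.array([-4.83668, -1.11942, 0.47052, -0.410958, -1.86399])

for name, C in (("P_1", c1), ("P_2", c2)):
    ratios, norm_ratio = power_estimates(C, v0)
    print(name, "elementwise ratios:", ratios, "norm ratio:", norm_ratio)
```

With ten iterations the two norm-ratio estimates land near $8.50$ and $8.71$, matching the comparison above.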
Notes:
- We cannot guarantee that the elementwise division gives a range of values containing the root -- the ratios could all be to one side of the root. However, the root is no further from the ends of that interval than the square root of the degree of the polynomial times the width of that interval (so, with degree $5$ here, it is confined to an interval $1+2\sqrt{5}$ times wider than the range of ratios).
- One can feed the output of power iteration back in as the starting vector for further iterations, refining the eigenvector and hence the estimate of the largest eigenvalue (i.e., root). If the two estimated roots are too close (the intervals of estimated locations overlap), do this to refine the eigenvectors (shrinking the intervals until they no longer overlap).
- The "random vector" was five random floating point numbers chosen independently and uniformly from the interval $[-5,5]$. There is nothing magical about this particular interval and pretty much any interval (symmetric around $0$) can be used.
- Power iteration can behave badly if the initial vector is (nearly) perpendicular to the eigenvector of the largest eigenvalue. One way to guard against this is to try a few random starting vectors (and make sure they actually point in different directions), as in the sketch below.
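A sketch of that guard, assuming NumPy (function names are mine; each start vector is drawn uniformly from $[-5,5]$ in every coordinate, as in the example above):

```python
import numpy as np

rng = np.random.default_rng()

def dominant_eigenvalue(C, v, iters=10):
    """Norm-ratio estimate of the dominant eigenvalue by power iteration from v."""
    v = v / np.linalg.norm(v)
    for _ in range(iters):
        v = C @ v
        v = v / np.linalg.norm(v)
    return np.linalg.norm(C @ v)

def guarded_estimate(C, trials=3, iters=10):
    """Repeat from several independent random starts; if the estimates agree,
    an unlucky start (nearly) perpendicular to the dominant eigenvector is
    effectively ruled out."""
    starts = rng.uniform(-5.0, 5.0, size=(trials, C.shape[0]))
    return [dominant_eigenvalue(C, v, iters) for v in starts]
```

Applied to $c_2$ from the sketches above, the three estimates should cluster near $8.71$; a stray value would flag a bad start (or too few iterations).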