Just as for integers, we can use the gcd Bezout equation to compute modular inverses, and the Bezout equation is computable mechanically by the EEA = Extended Euclidean Algorithm. As for integers, it is usually much easier and less error-prone to run the EEA forward in augmented-matrix form, i.e. to propagate forward the representation of each remainder as a linear combination of the gcd arguments, rather than to compute these representations in backward order by back-substitution. E.g., following this answer, we compute the Bezout equation for $\gcd(f,g)\,$ over $\Bbb Q$.
$\!\begin{eqnarray}
[\![1]\!]&& &&f = x^3\!+2x+1 &\!\!=&\, \left<\,\color{#c00}1,\ \ \ \ \color{#0a0}0\,\right>\quad {\rm i.e.}\ \qquad f\, =\, \color{#c00}1\cdot f\, +\, \color{#0a0}0\cdot g\\
[\![2]\!]&& &&\qquad\ \, g =x^2\!+1 &\!\!=&\, \left<\,\color{#c00}0,\ \ \ \ \color{#0a0}1\,\right>\quad{\rm i.e.}\ \qquad g\, =\ \color{#c00}0\cdot f\, +\, \color{#0a0}1\cdot g\\
[\![3]\!]&:=&[\![1]\!]-x[\![2]\!]\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\qquad\qquad\ \ \:\! x+1 \,&\!\!=&\, \left<\,\color{#c00}1,\,\color{#0a0}{-x}\,\right>\ \ \ \ {\rm i.e.}\quad x\!+\!1 =\, \color{#c00}1\cdot f\ \color{#0a0}{-x}\cdot g\\
[\![4]\!]&:=&[\![2]\!]+(1\!-\!x)[\![3]\!]\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&\qquad\qquad\qquad\ 2 \,&\!\!=&\, \left<\,\color{#c00}{1-x},\,\ \color{#0a0}{1-x+x^2}\,\right>\\
\end{eqnarray}$
Therefore the prior line yields $\ 2\, =\, (\color{#c00}{1\!-\!x})f + (\color{#0a0}{1\!-\!x\!+\!x^2})g\ \ \ $ [Bezout equation]
Normalizing to a monic gcd: $\,\ \ \ 1\, =\, \dfrac{\color{#c00}{1\!-\!x}}{2}\,f \,+ \dfrac{\color{#0a0}{1\!-\!x\!+\!x^2}}2\,g\,\ $ by scaling the above by $1/2$.
Computing modular inverses from the Bezout equation works the same as for integers, e.g. reducing the prior Bezout equation $\!\bmod g\Rightarrow f^{-1}\equiv{\large \frac{\color{#c00}{1-x}}2}\pmod{\!g}$.
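To make the bookkeeping concrete, here is a minimal Python sketch of the forward (augmented) EEA above, using exact rational arithmetic via `fractions.Fraction`; the helper names (`trim`, `add`, `scale`, `mul`, `divmod_poly`, `ext_gcd`) are illustrative choices, not a standard API.

```python
from fractions import Fraction

# Polynomials over Q are coefficient lists, lowest degree first.

def trim(p):                      # drop trailing zero coefficients
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return trim([a + b for a, b in zip(p, q)])

def scale(p, c, k=0):             # c * x^k * p
    return trim([Fraction(0)] * k + [c * a for a in p])

def mul(p, q):
    if not p or not q:
        return []
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(p, q):            # p = quot*q + rem, with deg rem < deg q
    quot, rem = [], p[:]
    while len(rem) >= len(q):
        c, k = rem[-1] / q[-1], len(rem) - len(q)
        quot = add(quot, [Fraction(0)] * k + [c])
        rem = add(rem, scale(q, -c, k))
    return quot, rem

def ext_gcd(f, g):
    # forward table: every row (r, a, b) satisfies the invariant r = a*f + b*g,
    # exactly as in rows [1]-[4] above
    r0, a0, b0 = f, [Fraction(1)], []
    r1, a1, b1 = g, [], [Fraction(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, a0, b0, r1, a1, b1 = (r1, a1, b1,
                                  r,
                                  add(a0, scale(mul(q, a1), Fraction(-1))),
                                  add(b0, scale(mul(q, b1), Fraction(-1))))
    return r0, a0, b0             # gcd (up to a unit) and Bezout coefficients

f = [Fraction(c) for c in (1, 2, 0, 1)]   # f = x^3 + 2x + 1
g = [Fraction(c) for c in (1, 0, 1)]      # g = x^2 + 1
d, a, b = ext_gcd(f, g)
print(d, a, b)   # d = [2], a = [1, -1], b = [1, -1, 1]:  2 = (1-x)f + (1-x+x^2)g

# normalize by 1/2 to get the inverse:  ((1-x)/2) * f  ≡  1  (mod g)
f_inv = [c / d[0] for c in a]
assert divmod_poly(mul(f_inv, f), g)[1] == [1]
```

The same three division steps performed by hand in rows $[\![1]\!]$–$[\![4]\!]$ occur in the loop, one per iteration.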
The proof is also the same as for integers: by descent, using (Euclidean) division with remainder.
The set $I = fR[x]+gR[x]$ of polynomials of the form $\, a f + b g $ is an ideal, i.e. it is closed under addition and scaling, so it is closed under remainder = mod, since that is a composition of such operations: $f_i\bmod g_i = f_i - q\, g_i.\,$ Hence the least-degree $0\neq d\in I$ divides every $h\in I$ (else $0\neq h\bmod d\in I\,$ has smaller degree than $d).\,$ So $\,f,g\in I\,\Rightarrow\, d\,$ is a common divisor of $\,f,g,\,$ necessarily greatest by $\, c\mid f,g\,\Rightarrow\, c\mid d\!=\! a f + b g,\,$ so $\,\deg c\le \deg d.\,$ To make $d$ unique (over a field) the usual convention is to scale it to be monic (lead coef $=1),\,$ as we did above.
This algorithm is an efficient way to search the set $I$ for a polynomial $\,d\neq 0\,$ of minimal degree, while also keeping track of each element's representation as a linear combination of $\,f\,$ and $\,g.\,$ The proof shows further that $\,d\,$ is the gcd of all elements of $I$.
The same ideas work for any Euclidean domain (i.e. enjoying division with (smaller) remainder).
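For instance, over the Euclidean domain $\Bbb Z$ the identical forward table applies; here is a small self-contained Python sketch (the function name `ext_gcd_int` and the sample arguments are illustrative choices):

```python
def ext_gcd_int(f, g):
    # same forward table over Z: each row (r, a, b) satisfies r = a*f + b*g
    r0, a0, b0 = f, 1, 0
    r1, a1, b1 = g, 0, 1
    while r1:
        q = r0 // r1              # integer division with remainder
        r0, a0, b0, r1, a1, b1 = (r1, a1, b1,
                                  r0 - q*r1, a0 - q*a1, b0 - q*b1)
    return r0, a0, b0

d, a, b = ext_gcd_int(240, 46)
print(d, a, b)                    # 2 -9 47, i.e. 2 = -9*240 + 47*46
assert d == a*240 + b*46
```

Only the division step changes between domains: polynomial division with smaller-degree remainder above, integer division with smaller remainder here.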
Remark $ $ Generally (for hand calculations) the above method is much less error-prone than the commonly presented alternative "back-substitution" method; further, it is simpler to memorize.
This is a special case of Hermite/Smith row/column reduction of matrices to triangular/diagonal normal form, using the division/Euclidean algorithm to reduce entries modulo pivots. Though one can understand this knowing only the analogous linear-algebra elimination techniques, it will become clearer when one studies modules - which, informally, generalize vector spaces by allowing coefficients from rings vs. fields. In particular, these results appear in the study of normal forms for finitely generated modules over a PID, e.g. when one studies linear systems of equations with coefficients in the (non-field!) polynomial ring $\rm F[x],$ for $\rm F$ a field, as above.