As already discussed in comments, your algorithm is correct and doesn't require knowledge of $p$. There is an improved algorithm, mentioned in a comment by user21820, that also doesn't require knowledge of $p$ and is optimal in the sense that it converts a stream of biased flips into a stream of unbiased flips at the maximal rate determined by the entropy of the biased distribution.
However, as you only want to go to the theatre or the cinema once (by the way: why not go to a movie theatre?), you're probably not interested in the long-term conversion rate of an infinite stream but in the expected number of flips required to make this one-off decision.
In the algorithm you propose, you essentially perform trials consisting of two flips each, with a success probability of $2p(1-p)$. Thus the expected number of flips is
$$
2\cdot\frac1{2p(1-p)}=\frac1{p(1-p)}\;,
$$
which is anywhere from $4$ for $p=\frac12$ to arbitrarily large numbers as $p\to1$.
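In code, this is the familiar two-flip scheme: flip twice, and if the two results differ, the first flip is the unbiased answer; otherwise start over. A minimal Python sketch (the function names are illustrative):

```python
import random

def biased_flip(p):
    """One flip of the biased coin: 1 (heads) with probability p, else 0."""
    return 1 if random.random() < p else 0

def von_neumann_flip(p):
    """Flip in pairs until the two results differ; the first flip of the
    unequal pair is unbiased.  Returns (bit, number of flips used)."""
    flips = 0
    while True:
        a, b = biased_flip(p), biased_flip(p)
        flips += 2
        if a != b:          # happens with probability 2p(1-p) per pair
            return a, flips
```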
The notes on the improved algorithm linked to above derive the expected time it requires to produce a bit. (The analysis is carried out for an algorithm that's not quite as sophisticated as the one that achieves the optimal flip rate, but the two algorithms coincide until the first unbiased flip is produced.) The result is
$$
2\prod_{k=1}^\infty\left(1+p^{2^k}+(1-p)^{2^k}\right)\;.
$$
This is a bit of an improvement, with values ranging from about $3.4$ for $p=\frac12$ to arbitrarily large numbers as $p\to1$.
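One way to get these numbers is to evaluate the product numerically by repeated squaring, truncating once the factors are indistinguishable from $1$; a small Python sketch (the tolerance is an arbitrary choice, and the product is only finite for $0\lt p\lt1$):

```python
def expected_flips(p, tol=1e-15):
    """Evaluate 2 * prod_{k>=1} (1 + p^(2^k) + (1-p)^(2^k)) for 0 < p < 1,
    truncating once a factor is within tol of 1."""
    result = 2.0
    a, b = p, 1.0 - p
    while True:
        a, b = a * a, b * b           # p^(2^k) and (1-p)^(2^k) by squaring
        factor = 1.0 + a + b
        result *= factor
        if factor - 1.0 < tol:
            return result

print(expected_flips(0.5))   # ~3.40
print(expected_flips(0.99))  # grows without bound as p -> 1
```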
But there's actually a simpler approach that does considerably better, in the spirit of arithmetic coding. Let's generalise the problem to using the biased coin with heads probability $p$ to simulate another possibly biased coin with heads probability $q$. We can then apply this with $q=\frac12$ to simulate a fair coin.
The basic idea is that for all $p$ and $q$ there is always one result $A$ of the real coin that is at most as probable as one result $B$ of the simulated coin, so we can stop with result $B$ when $A$ occurs and continue with a suitable new value of $q$ if it doesn't. The result can be defined recursively (with $0$ representing tails, $1$ representing heads, $X(q)$ a simulated result with heads probability $q$ and $Y(p)$ an actual result with heads probability $p$):
$$
X(q)=\begin{cases}
0&q\lt p\land Y(p)=0\;,\\
1&q\ge p\land Y(p)=1\;,\\
X\left(\frac qp\right)&q\lt p\land Y(p)=1\;,\\
1-X\left(\frac{1-q}{1-p}\right)&q\ge p\land Y(p)=0\;
\end{cases}
$$
for $q\ge\frac12$, and $X(q)=1-X(1-q)$ otherwise. (This is to be read more like pseudocode for a recursive program than a mathematical definition, as the variables $X$ and $Y$ stand for different quantities in each iteration.) As a sanity check on the last case: given $Y(p)=0$, the simulated coin must come up heads with conditional probability $\frac{q-p}{1-p}=1-\frac{1-q}{1-p}$, so that $p\cdot1+(1-p)\cdot\frac{q-p}{1-p}=q$ overall.
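Here's how this recursion might look in Python, a minimal sketch written iteratively so the flip count is explicit; an `invert` flag absorbs the $1-X(\cdot)$ steps, and all names are illustrative:

```python
import random

def biased_flip(p):
    """One flip of the real coin: 1 (heads) with probability p, else 0."""
    return 1 if random.random() < p else 0

def simulated_flip(q, p):
    """Simulate one flip of a coin with heads probability q using the
    p-coin.  Returns (bit, number of biased flips used)."""
    flips = 0
    invert = False                      # tracks the 1 - X(...) inversions
    while True:
        if q < 0.5:                     # X(q) = 1 - X(1 - q)
            q, invert = 1.0 - q, not invert
        y = biased_flip(p)
        flips += 1
        if q < p:
            if y == 0:                  # stop: result is tails
                return int(invert), flips
            q = q / p                   # Y = 1: continue with X(q/p)
        else:
            if y == 1:                  # stop: result is heads
                return int(not invert), flips
            q = (1.0 - q) / (1.0 - p)   # Y = 0: continue, inverted
            invert = not invert

# A fair coin is the special case q = 1/2:
bit, n = simulated_flip(0.5, 0.9)
```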
I haven't been able to figure out the expected number of flips this algorithm needs to produce the first unbiased bit, but here are the results of simulations (in red), graphed together with your simple algorithm (in purple), the more elaborate algorithm linked to above (in green) and the entropy lower bound $-\log2/(p\log p+(1-p)\log(1-p))$ (in blue):
[Figure: expected number of flips as a function of $p$ for the four approaches]
The improvement is substantial; you need roughly half as many flips on average. Here's a minimal Python sketch of the simulation code, reusing the helpers from the sketches above (the trial count and the grid of $p$ values are arbitrary choices):
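```python
from math import log

# Assumes von_neumann_flip and simulated_flip from the sketches above.

def average_flips(method, p, trials=100_000):
    """Monte-Carlo estimate of the expected number of biased flips per bit."""
    return sum(method(p)[1] for _ in range(trials)) / trials

def entropy_bound(p):
    """The lower bound -log 2 / (p log p + (1-p) log(1-p)) from above."""
    return -log(2) / (p * log(p) + (1 - p) * log(1 - p))

for p in (0.5, 0.6, 0.7, 0.8, 0.9, 0.95):
    simple = average_flips(von_neumann_flip, p)
    coding = average_flips(lambda pp: simulated_flip(0.5, pp), p)
    print(f"p={p}: simple ~ {simple:.2f}, "
          f"arithmetic coding ~ {coding:.2f}, bound {entropy_bound(p):.2f}")
```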