If you actually understood the explanation and the definition of the $p$-value, you would not be asking your question, and you would not be making nonsensical statements such as
> the small $p$-value corresponds to the higher probability of accepting $H_0$ when it is true.
This single statement reveals a fundamental misconception about hypothesis testing, one persistently held by many students of statistics, as well as by actual statisticians who should know better!
Let me be absolutely clear.
The frequentist model of hypothesis testing does not regard the conclusion of a test as a decision between the null and alternative hypotheses. The inference we draw from the observed data is never "$H_0$ is accepted."
The real decision to be contemplated whenever conducting a hypothesis test always pertains to the question, "does the data furnish sufficient evidence to suggest that the null hypothesis is not true?" Note that the question is not simply "is the null hypothesis not true?" This distinction is important, because the answer to the first question, "yes" or "no," doesn't actually mean the same thing as the answer to the second. Specifically, if the answer to the first question is "no," that does not mean we have found evidence to support that the null hypothesis is true; it only means that we lack evidence to show it is not. To repeat a familiar phrase:
> Absence of evidence is not evidence of absence.
Think of it a bit like the concept of "innocent until proven guilty." The presumption of innocence is like the null hypothesis. If, even under that presumption, the evidence brought to trial suggests otherwise, then guilt can be established; a high standard of proof must be met. But if not, then guilt cannot be established. The defendant may well still be guilty of the crime, but the burden of proof was not met to convict the defendant of it.
Now you should be able to understand why your original statement is nonsensical. The $p$-value is a conditional probability: the assumed condition is the truth of $H_0$, and the event whose probability is calculated is a property of the observed data. It therefore makes no sense to read it as $$\Pr[\text{accept } H_0 \mid H_0 \text{ is true}],$$ for two reasons: "accept $H_0$" is not a decision the test can ever produce, and the $p$-value is a statement about the data under an assumption about $H_0$, not a statement about $H_0$ or about any decision concerning it.
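To make the conditioning explicit, here is the generic one-sided form (the symbols $T$ for the test statistic and $t_{\text{obs}}$ for its observed value are names I am introducing purely for illustration): $$p = \Pr[\,T \ge t_{\text{obs}} \mid H_0 \text{ is true}\,].$$ The event to the left of the bar concerns the data; the assumption to the right concerns $H_0$. Nothing in this expression is a probability that $H_0$ is true.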
The only decisions available to us are "Reject $H_0$" or "Fail to reject $H_0$." The second inference simply means we lack evidence to reject the null hypothesis, just as a prosecutor may lack the evidence to convict a defendant; it doesn't mean the defendant is truly innocent of the crime.
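What *is* well-defined, by contrast, is how the reject/fail-to-reject decision behaves when $H_0$ happens to be true. Here is a minimal simulation sketch of that behavior (an illustration of my own, assuming numpy and scipy are available; the test is an exact two-sided binomial test of coin fairness):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 100, 100_000

# Flip a genuinely fair coin n times, repeated reps times: H0 is true here.
heads = rng.binomial(n, 0.5, size=reps)

# Exact two-sided p-value: 2 * min(lower tail, upper tail), capped at 1.
lower = binom.cdf(heads, n, 0.5)        # P(X <= k)
upper = binom.sf(heads - 1, n, 0.5)     # P(X >= k)
pvals = np.minimum(1.0, 2.0 * np.minimum(lower, upper))

# A true H0 is rejected at most a fraction alpha of the time (somewhat less
# here, because the binomial distribution is discrete); every other run
# "fails to reject" a null that is, by construction, true.
print("rejection rate under a true H0:", (pvals <= alpha).mean())
```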
The $p$-value is the chance of having observed a result at least as extreme as your data, computed under the assumption that the null is true. For example, if I give you a fair coin (we know it is fair) and you flip it ten times and it comes up heads every time, the chance of getting this exact result is $(1/2)^{10} = 1/1024$. Not likely, but not impossible either. That small but nonzero probability reflects the notion that even a fair coin can produce such a result.
The two-sided $p$-value is $2 \times 1/1024 = 1/512$, because you would be just as "surprised" if the fair coin had given you ten tails out of ten tries.
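As a quick numerical check of both figures (a sketch using scipy's binomial distribution; nothing in the argument depends on this code):

```python
from scipy.stats import binom

n, k = 10, 10  # ten flips, ten heads observed

# One-sided p-value: P(at least 10 heads | fair coin) = (1/2)**10.
p_one = binom.sf(k - 1, n, 0.5)
print(p_one)  # 0.0009765625, i.e. 1/1024

# Two-sided p-value: ten tails would be equally "surprising", so double it.
p_two = 2 * p_one
print(p_two)  # 0.001953125, i.e. 1/512
```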
If your significance level for the test that the coin is not fair is set very small, say $\alpha = 0.001$, then there is no way to reject the fairness of the coin with only $10$ trials: even the most extreme possible outcomes, ten heads or ten tails, yield a two-sided $p$-value of $1/512 \approx 0.00195 > 0.001$, so there are no outcomes for which you could conclude with such a high degree of certainty that the coin is not fair. The significance level is the maximum tolerance you have for concluding the coin is biased when in fact it is not.
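You can check that the rejection region is indeed empty by enumerating all eleven possible outcomes (again a sketch, under the same scipy assumption as above):

```python
from scipy.stats import binom

n, alpha = 10, 0.001

for k in range(n + 1):
    # Exact two-sided p-value for observing k heads in n fair flips.
    p = min(1.0, 2 * min(binom.cdf(k, n, 0.5), binom.sf(k - 1, n, 0.5)))
    decision = "reject" if p <= alpha else "fail to reject"
    print(f"{k:2d} heads: p = {p:.6f} -> {decision}")

# The smallest p-value over all outcomes is 2/2**10 = 1/512 (about 0.00195),
# which exceeds alpha = 0.001, so no outcome can ever lead to rejection.
```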
If that seems unreasonable (the possibility of no outcomes being in the rejection region), suppose you flipped the coin only once. How, then, could you draw any inference at all about the coin's fairness? With a single flip, even the most extreme outcome has a two-sided $p$-value of $1$. There is data, but it is not sufficient to say anything about the coin's fairness.