Let's review some basics you may be confused about.
In a p-value test we have a hypothesis, called the null hypothesis, from which probabilities are computable, and we use a p-value to quantify how well that hypothesis "fits" some data. The p-value is the probability, conditional on the null hypothesis, that the data would be at least as surprising, relative to the expectations of said hypothesis, as it in fact was. (In saying that, I'm glossing over the difference between 1- and 2-tailed tests; in a 1-tailed test, the p-value is the probability that the data would be at least this surprising in the direction in which it is surprising.)
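To make that concrete, here is a minimal sketch with a made-up coin-flipping example (not your data): the null hypothesis is "the coin is fair", and both flavours of p-value are estimated by simulating data sets the null could produce and asking how often they are at least as surprising as what was observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example: null hypothesis is a fair coin; we observed 60 heads
# in 100 flips.
n, observed_heads = 100, 60

# Simulate data sets the null hypothesis could produce.
sims = rng.binomial(n, 0.5, size=100_000)
observed_surprise = abs(observed_heads - n / 2)

# 2-tailed: at least this far from the expected count, in either direction.
two_tailed_p = np.mean(np.abs(sims - n / 2) >= observed_surprise)
# 1-tailed: at least this surprising, in the direction the data was surprising.
one_tailed_p = np.mean(sims >= observed_heads)

print(two_tailed_p, one_tailed_p)
```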
In this example, the null hypothesis is that the distributions are the same, so $p$ is already the p-value for that hypothesis. The only event we know has probability $1-p$ is that the data would be less "surprising", again conditional on the null hypothesis. We certainly can't run another test in which the null hypothesis is swapped for its opposite: "the distributions differ" doesn't pin down any particular distribution, so it gives us no way to calculate p-values.
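A permutation test makes this asymmetry visible in code. The sketch below uses made-up samples and a made-up statistic (difference of means), since I don't know which test you actually ran; the point is that the reshuffling step is justified *only under the null* "the distributions are the same", which is exactly why no analogous calculation exists for "the distributions differ".

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up samples purely for illustration; substitute your own data.
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.3, 1.0, size=30)

# Under the null, the group labels are exchangeable, so the distribution of
# the test statistic can be computed by reshuffling them.
observed = abs(x.mean() - y.mean())
pooled = np.concatenate([x, y])

n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    count += abs(perm[:30].mean() - perm[30:].mean()) >= observed

p_value = count / n_perm   # P(statistic at least this surprising | null)
print(p_value)
```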
I think that answers your second question. As for the first, the reason we talk about "failing to reject" the null hypothesis is because you can't prove it, only disprove it or be impressed it survived the effort. As for what you can do in this example, I suggest you double-check a p-value of 1. Such a p-value means the data is as consistent with the distributions being the same as it could possibly get. With data drawn from a continuous distribution, this is suspicious.
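Here is one way to see why, again only a sketch with a stand-in test (a two-sample t-test on simulated continuous data) rather than whatever you actually used: when the null is true and the data are continuous, the p-value is uniform on $(0,1)$, and hitting exactly 1 would require the test statistic to land exactly on its least-surprising value, which has probability zero. A reported p-value of exactly 1 therefore usually points to ties, heavy rounding, or a coding slip.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulate many data sets where the null ("same distribution") is true and
# record the p-value each time.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=50), rng.normal(size=50)).pvalue
    for _ in range(2_000)
])

print(p_values.max())              # close to, but not exactly, 1
print(np.mean(p_values == 1.0))    # almost surely 0.0
```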