I am currently a PhD student focusing on Functional Analysis, but I have to take a course in Graph Theory this semester.
So, the problem I have with theorems and proofs in Graph Theory is that they all just seem so arbitrary and random. They read like a randomly found algorithm that just happens to work. In functional analysis you don't have proofs like this - there, your proofs are a logical consequence of applying prior knowledge (for example, the proof of the Hahn-Banach Theorem).
To name an example from graph theory, take the proof of the theorem that if $G$ is a graph of order $n$ with $|E(G)| \le n-2$, then $G$ packs with itself via a cyclic permutation (i.e., there is a cyclic permutation $\sigma$ of $V(G)$ that maps every edge of $G$ to a non-edge).
The proof - like many proofs in Graph Theory - relies on induction. We first verify the claim for $n = 3, 4$ and then proceed to the inductive step. There, we analyze two cases: $G$ has two isolated vertices, or $G$ has no isolated vertices at all. In each case we remove certain vertices, obtain a permutation $\sigma'$ from the inductive hypothesis, and extend it by adding a transposition involving the removed vertices. The problem I have is that in each case we extend $\sigma'$ to $\sigma$ in a slightly different way, which seems very arbitrary and random to me. But hey, "it works".
And I feel that way about nearly all proofs and theorems in Graph Theory. I simply don't see why we choose this specific construction to prove a theorem, or what would happen if we chose another one. This is especially hard to explore, because you would have to spend a lot of time drawing graphs and working through all the different examples to see why an alternative might fail.
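One thing that has helped me avoid endless hand-drawing is brute-force experimentation on small cases. Here is a minimal sketch in Python (the function names and the exhaustive-search approach are my own, not from any textbook) that searches all cyclic permutations of a small graph's vertices for one that maps every edge to a non-edge:

```python
import itertools

def is_cyclic_packing(perm, edges):
    """Check that perm maps every edge of G to a non-edge of G."""
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((perm[u], perm[v])) not in edge_set
               for u, v in edges)

def find_cyclic_packing(n, edges):
    """Exhaustively search the cyclic permutations of {0, ..., n-1}
    for one that packs G with itself; return one as a dict, or None."""
    # Fixing 0 at the front enumerates each n-cycle exactly once.
    for rest in itertools.permutations(range(1, n)):
        cycle = (0,) + rest
        perm = {cycle[i]: cycle[(i + 1) % n] for i in range(n)}
        if is_cyclic_packing(perm, edges):
            return perm
    return None

# A path on vertices 0-1-2-3 plus the isolated vertex 4:
# n = 5 vertices and 3 <= n - 2 edges, so the theorem applies.
print(find_cyclic_packing(5, [(0, 1), (1, 2), (2, 3)]))

# The star K_{1,3} has n - 1 = 3 edges, exceeding the bound,
# and indeed admits no packing at all, so None is printed.
print(find_cyclic_packing(4, [(0, 1), (0, 2), (0, 3)]))
```

This only scales to small $n$, but it lets me quickly test what happens when a hypothesis is dropped or a construction is varied, which is exactly the kind of checking that felt too slow on paper.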
Is there another way to learn proofs in Graph Theory, especially coming from a purely functional analysis background? Right now, it really feels like rote learning to me.