As you are probably well aware, it is provably impossible to reach consensus if a majority of the peers are faulty, malicious, or dishonest, if we assume they are all fully connected (as is typically the assumption for distributed systems).
If you are imagining that for some reason honest peers are better-connected than faulty/malicious peers, well, there are two problems with that model:
The first problem is that the model is not realistic. In practice, faulty/malicious peers are typically just as well-connected as honest ones. In the case of unintended software faults, there is no reason to expect connectivity to be correlated with software failure rates at all. In the case of malice (e.g., peers that are compromised or dishonest), attackers can typically arrange to be as well-connected as the honest peers, and can often arrange to be even better-connected.
The second problem is that if we really did have such a separation, the problem would become trivial. For instance, we could just pick a threshold and ignore all peers whose connectivity is below that threshold. If this ensures that a 2/3 majority of the remaining peers are honest and non-faulty, then this reduces to the standard Byzantine consensus problem.
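To make the triviality concrete, here is a minimal sketch of that reduction. It assumes a hypothetical world where connectivity really does separate honest from faulty peers; the peer names and degree numbers are invented for illustration:

```python
# Sketch of the trivial reduction: if connectivity cleanly separated honest
# peers from faulty ones, we could filter by a degree threshold and then run
# standard Byzantine consensus on the survivors. All names/degrees here are
# hypothetical.

def filter_by_connectivity(degrees, threshold):
    """Keep only peers whose connectivity meets the threshold."""
    return {p for p, d in degrees.items() if d >= threshold}

# Hypothetical setting where honest peers happen to be better-connected.
degrees = {"a": 9, "b": 8, "c": 8, "d": 7, "bad1": 2, "bad2": 1}
honest = {"a", "b", "c", "d"}

survivors = filter_by_connectivity(degrees, threshold=5)

# Strictly more than 2/3 of the survivors are honest, so an ordinary
# Byzantine consensus protocol (tolerating f < n/3 faults) applies to them.
assert 3 * len(survivors & honest) > 2 * len(survivors)
```

Of course, this only works because the separation was assumed up front, which is exactly the unrealistic part.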
But ultimately I think it is the first problem that is the big one. It would only make sense to even think about such a setting if we had some sample application where there is evidence that your assumption is valid. Lacking that, there is not a clear reason to study such a model.
The web of trust doesn't actually seem to meet your assumptions, unless you are very careful about how you define them. An attacker can create one dishonest node $x$ (that is connected to a few honest peers), and then create lots of additional dishonest nodes $y_1,y_2,\dots,y_n$ and have $x$ endorse all of the $y_i$'s. The $y_i$'s can also endorse lots of honest nodes. Now, at least by some measures, the $y_i$'s are well-connected. There is also the risk of Sybil attacks, where the attacker creates many fake identities ("Sybils") and gets them connected to many honest nodes.
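The attack above is easy to demonstrate with a toy endorsement graph. In this sketch (all node names are illustrative), a naive degree-based measure of connectedness rates the Sybil nodes $y_i$ as better-connected than typical honest nodes:

```python
# Toy endorsement graph for the attack described above: one dishonest node x
# is endorsed by a couple of honest peers, x endorses Sybils y_1..y_n, and
# each y_i endorses many honest nodes. Under a naive degree measure, the
# y_i's then look well-connected. All node names are illustrative.
from collections import defaultdict

honest = [f"h{i}" for i in range(10)]
sybils = [f"y{i}" for i in range(5)]

edges = []
edges += [("h0", "x"), ("h1", "x")]                 # a few honest peers endorse x
edges += [("x", y) for y in sybils]                 # x endorses every Sybil
edges += [(y, h) for y in sybils for h in honest]   # Sybils endorse honest nodes

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Each Sybil has degree 11 (one endorsement from x plus ten outgoing
# endorsements), higher than an honest node like h5 (degree 5).
assert all(degree[y] > degree["h5"] for y in sybils)
```

The point is that raw connectivity is cheap for an attacker to manufacture, which is why any useful definition has to be much more careful than "well-connected nodes are trustworthy."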
In the cryptographic world, you could look at the research directions under the catch-phrases "reputation systems" and "trust metrics". Some of them are designed to estimate the amount of trust one can place in a node, based upon its connectivity and the endorsements that nodes make of each other. However, this academic research hasn't had a ton of impact in practice, because such schemes are hard to make work well, and because Sybil attacks are hard to stop.