I've been thinking about ways of measuring code, and, quite frankly, I can't find any truly objective, semi-universal way of evaluating code quality or "strength" that lets you say, "Yes, this is better than that based on X, Y, and Z metrics." I've looked at SonarQube, and although I think its metrics are important, I feel like there are more fundamental concepts at work than what it currently measures.
Of course, it's often said that certain languages are better than others because they offer stronger guarantees or abstractions: C++ over C because of abstraction, and Haskell over C++ because of its type system, purity, and a whole slew of other features. I was wondering whether it would be possible to actually quantify how much better one is than the other.
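To make concrete the kind of guarantee I mean, here is a toy Haskell sketch (my own hypothetical example, not taken from any paper): the type system forces the caller to handle the failure case explicitly, something a C-style API typically leaves to documentation and discipline.

```haskell
-- Hypothetical illustration: failure is part of the return type,
-- so the compiler refuses code that ignores the error case.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing            -- division by zero is an explicit outcome
safeDiv x y = Just (x `div` y)

main :: IO ()
main =
  -- We cannot treat the result as a plain Int; we must say
  -- what happens when the divisor is zero.
  case safeDiv 10 0 of
    Nothing -> putStrLn "division by zero handled"
    Just q  -> print q
```

Whether that kind of static guarantee can be turned into a number you could compare across languages is exactly what I'm asking about.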
Are there existing papers that discuss objectively comparing programming paradigms or languages based on measurable metrics?
A few of the resources I've found while searching deal with more business-oriented metrics, like cost of quality or unit-test coverage over development time. Measuring by number of lines also seems fishy and not really fundamental (imagine if we judged the quality of a math proof by how many pages it takes to print!); see the small sketch after these links:
- http://www.infosys.com/engineering-services/white-papers/Documents/comprehensive-metrics-model.pdf
- http://www.compaid.com/caiinternet/ezine/Gack-Effectiveness.pdf
- http://www.compaid.com/caiinternet/ezine/Gack-Efficiency.pdf
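To illustrate the line-count objection, here are two behaviorally identical Haskell definitions (a made-up example of mine, not drawn from the papers above) that a raw lines-of-code metric would score very differently even though they compute the same thing:

```haskell
-- Concise, point-free version: two lines including the signature.
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map (^ 2)

-- Explicit recursive version: same behavior, more lines.
sumOfSquares' :: [Int] -> Int
sumOfSquares' []       = 0
sumOfSquares' (x : xs) = x * x + sumOfSquares' xs

main :: IO ()
main = print (sumOfSquares [1 .. 5], sumOfSquares' [1 .. 5])  -- (55, 55)
```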
Apparently the list of top 10 papers is also not about measuring the efficiency of design in code, but is more focused on specific areas: