In one of my courses, Big-O notation was used to define what a sparse matrix is, in the context of deciding whether a matrix is suitable for a particular set of linear algebra algorithms. I looked around on the net and only found more uses of the same kind that I take issue with.
To say that a matrix is sparse if it has O(n) non-zero elements, for example, is illogical; the sentence doesn't even parse, does it? When we talk about Big-O, we are talking about functions. So it would make sense to use it to describe a property of a set of matrices, indexed by their dimension, perhaps produced by some construction that builds or bounds particular types of matrices. Even then, it would say nothing about any particular matrix in the set in terms of sparsity. For example, in such an O(n) family, every matrix smaller than 1 trillion by 1 trillion could be completely dense (every entry non-zero) and the asymptotic bound could still hold.
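To make my complaint concrete: the only reading that parses for me is a statement about a family of matrices rather than a single one. In my own notation, writing $A_n$ for the $n \times n$ member of the family and $\operatorname{nnz}(A)$ for the number of non-zero entries of $A$, the claim would be

$$\operatorname{nnz}(A_n) = O(n) \quad\Longleftrightarrow\quad \exists\, C > 0,\ \exists\, n_0 \ \text{such that}\ \operatorname{nnz}(A_n) \le C\,n \ \text{for all } n \ge n_0,$$

which is a statement about the tail of the family as $n \to \infty$, not about any one fixed matrix in it.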
I have the same logical problem with the way graph sparsity is qualified (e.g., "a graph is sparse if it has O(|V|) edges").
But I cannot seem to find any precise explanation of why Big-O is used in these ways, or of whether this is a correct usage or an abuse of notation.