I recently began studying *Introductory Statistics* by Sheldon Ross. In the book, he defines a percentile as follows:
> The sample $100p$ percentile is that data value having the property that at least $100p$ percent of the data are less than or equal to it and at least $100(1-p)$ percent of the data values are greater than or equal to it. If two data values satisfy this condition, then the sample $100p$ percentile is the arithmetic average of these values.
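To make sure I am reading this correctly, here is a short Python sketch of my interpretation (the function name `ross_percentile` and the brute-force scan are mine, not from the book):

```python
def ross_percentile(data, p):
    """My reading of Ross's definition: collect every data value v such that
    at least 100p% of the data are <= v AND at least 100(1-p)% are >= v,
    then average the (at most two) distinct values that qualify."""
    n = len(data)
    candidates = sorted({
        v for v in data
        if sum(x <= v for x in data) >= p * n
        and sum(x >= v for x in data) >= (1 - p) * n
    })
    # If exactly one value qualifies, the average is that value itself.
    return sum(candidates) / len(candidates)
```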
This definition appears different from the one on Wikipedia, which defines the $k$th percentile as the smallest data value that is greater than or equal to $k\%$ of the values in the data set, and which I understand intuitively.
I suspect the textbook is giving the more general definition, but I cannot see how. In particular, once we impose the "smallest value" criterion, the $100p$ percentile is already uniquely determined by the values less than or equal to it, so why does Ross's definition also need to refer to the values greater than or equal to the $100p$ percentile?
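For concreteness, here is the smallest case I could find where the two readings disagree, assuming I have stated both correctly: for the data set $\{1, 2, 3, 4\}$ and $p = 0.5$, the "smallest value" reading picks $2$, while Ross's rule averages $2$ and $3$:

```python
data = [1, 2, 3, 4]
p, n = 0.5, len(data)

# Wikipedia-style reading: smallest value with at least 100p% of the data <= it.
smallest = min(v for v in data if sum(x <= v for x in data) >= p * n)

# Ross's reading: every value meeting both conditions, averaged if two qualify.
both = sorted({v for v in data
               if sum(x <= v for x in data) >= p * n
               and sum(x >= v for x in data) >= (1 - p) * n})

print(smallest)               # 2
print(sum(both) / len(both))  # 2.5 (the average of 2 and 3)
```

So the second condition clearly does change the answer here, which is exactly the part I am trying to understand.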