Quanta Magazine's April 13, 2023 article "A New Approach to Computation Reimagines Artificial Intelligence" opens with:
By imbuing enormous vectors with semantic meaning, we can get machines to reason more abstractly — and efficiently — than before.
Later on, the explanation includes these paragraphs:
The vectors must be distinct. This distinctness can be quantified by a property called orthogonality, which means to be at right angles. In 3D space, there are three vectors that are orthogonal to each other: One in the x direction, another in the y and a third in the z. In 10,000-dimensional space, there are 10,000 such mutually orthogonal vectors.
But if we allow vectors to be nearly orthogonal, the number of such distinct vectors in a high-dimensional space explodes. In a 10,000-dimensional space there are millions of nearly orthogonal vectors.
I remember reading previous questions here where high dimensions and dot products were discussed, and seeing comments about how easy it is to get very small or even zero dot products in high dimensions, but I've never worked outside of one-, two- and three-dimensional problems.
Question: What definition of "nearly orthogonal" would result in "In a 10,000-dimensional space there are millions of nearly orthogonal vectors"? Would it be, for example, a dot product¹ smaller than some number like 0.1?
¹ of the presumably normalized vectors
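For what it's worth, here is a quick numerical sketch I tried, using my placeholder definition (|dot product| < 0.1 between normalized vectors). The vector count, dimension, and threshold are my own choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10_000, 200  # dimension and number of random vectors (my choices)

# Draw n random Gaussian vectors in d dimensions and normalize to unit length.
vecs = rng.standard_normal((n, d))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# Gram matrix of pairwise dot products; since the vectors are unit length,
# these are the cosines of the angles between them.
gram = vecs @ vecs.T
off_diag = gram[~np.eye(n, dtype=bool)]

print(f"largest |dot product| among {n} random vectors: {np.abs(off_diag).max():.3f}")
```

Even with 200 random vectors, every pairwise |dot product| comes out well under 0.1 for me, which seems consistent with the article's claim, but I don't know if that is the intended definition.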