We are always looking for representations that can capture the meaning of information. However, most representations that compress information for retrieval are lossy. For example, embeddings are a form of lossy compression. Similar to the no-free-lunch theorem, no lossy compression method is universally better than another, since downstream tasks may depend on the specific information that gets lost. Therefore, the question is not which representation is perfect, but which representation is better aligned with an AI system. Because AI evolves rapidly, it is difficult to predict the limitations of the next generation of LLMs. For this reason, a good representation for information retrieval in future LLM systems should be closer to how humans represent knowledge.
When a human tries to retrieve information in a library, they first locate a book by category or by using a metadata keyword search. Then, they open the table of contents (ToC) to find the relevant section, and repeat this process as needed. Therefore, I believe the future of AI retrieval systems should mimic this process. The recently popular PageIndex approach (see this discussion: https://news.ycombinator.com/item?id=45036944) also belongs to this category, as it generates a table-of-contents–like tree for LLMs to reason over. Again, it is a form of lossy compression, so it has limitations worth discussing. However, this approach is the closest to how humans perform retrieval.
A follow-up question is: what is the lossless way to represent knowledge? That would mean reading all the knowledge at once, which is the most accurate but also the least efficient method. Therefore, for different applications, we need to find an appropriate trade-off between accuracy and efficiency. In systems like real-time recommendation, we prefer efficiency over accuracy, so vector-based search is suitable. In domain-specific QA, we prefer accuracy over efficiency, so a table-of-contents–based search may be the better choice.
It is also worth mentioning that compression and generative AI are two sides of the same coin. I highly recommend the book "Information Theory, Inference, and Learning Algorithms" by David MacKay, which explores these deep connections.
Looks like people use a hybrid approach: all these ToC, metadata, etc. are essentially semantic (executed on a neural network), but only the recognition of text and characters is neural.
Yeah, I guess another point is that with a ToC and metadata, the representation is transparent: people know what information is lost. A vector-based representation, on the other hand, is a black box. Explainability and transparency are also important considerations in production-level AI systems.
Humans only retrieve information in a library that way due to past limitations on retrieval and processing. Inventions like the table of contents, or even the Dewey Decimal Classification, were strongly constrained by fundamental technologies like ... the alphabet! And remember, not all languages are alphabetic. And embeddings aren't alphabetic and don't share the same constraints.
I recommend Judith Flanders' "A Place for Everything" as both a history and a survey of the constraints in sorting and organising information in an alphabetic language. It's also a fun read!
tl;dr why would we want an LLM to do something as inefficiently as a human?
> Similar to the no-free-lunch theorem, no lossy compression method is universally better than another,
The no-free-lunch theorem only works because it assumes you care about every single value of noise. Nobody does. There's a free lunch to be had, and it's order: you don't care about a single-pixel difference between two cat pictures, but NFL does.
Lossy compression is precisely where NFL does not apply.
I only meant it was similar in style to the theorem: I'm trying to emphasise that no lossy representation is universally (i.e., for all downstream tasks) better than another.
Researchers have discussed the limitations of vector-based retrieval from a rank perspective, in various forms, for a few years. It's further been shown that better alternatives exist: some low-rank approaches can theoretically approximate arbitrary high-rank distributions while permitting MIPS-level efficient inference (see e.g. Retrieval with Learned Similarities, https://arxiv.org/abs/2407.15462). Such solutions are already being used in production at Meta and at LinkedIn.
I don't think Mixture of Logits from the paper you link circumvents the theoretical limitations pointed out here, since their dataset size mostly stays well below the limit.
In the end they still rely on Maximum Inner Product Search, just with several lookups over smaller partitions of the full embedding. And the largest dataset is Books, where this paper suggests you'd need more than 512 embedding dimensions; MoL with 256-dimensional embeddings split into 8 parts of 32 each has an abysmal hit rate there.
So that's hardly a demonstration that arbitrary high-rank distributions can be approximated well. MoL seems to approximate it better than other approaches, but all of them are clearly hampered by the small embedding size.
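For readers who haven't opened the paper, the scoring shape under discussion can be sketched in a few lines. This is a toy numpy sketch, not the paper's implementation: the real gating weights are learned from the query and item, while here they are just a softmax over the per-part logits, and the 256-dim / 8-part split mirrors the setup mentioned above:

```python
import numpy as np

# Toy sketch of a Mixture-of-Logits-style similarity: split query and item
# embeddings into P parts, take one inner product per part, and combine the
# per-part logits with softmax gating weights. In the actual model the gates
# are learned from (q, x); here they are a softmax over the logits themselves,
# purely to illustrate the shape of the computation.
def mol_similarity(q: np.ndarray, x: np.ndarray, parts: int) -> float:
    logits = np.array([a @ b for a, b in zip(np.split(q, parts),
                                             np.split(x, parts))])
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()          # softmax gating
    return float(gates @ logits)

rng = np.random.default_rng(0)
q, x = rng.normal(size=256), rng.normal(size=256)
s = mol_similarity(q, x, parts=8)  # 8 partitions of 32 dims each
```

Because each partition contributes its own logit, the combined score is no longer a single inner product, which is where the extra rank comes from.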
Could somebody suggest a good introduction to simulating complex behavior with neural networks?
I mean, I've heard about experiments running Turing machine simulations on NNs, or even simulating some physics on NNs, but I have not seen any good survey on these topics, and they would be very interesting subjects.
In the theoretical section, they extrapolate assuming a polynomial from 40 to thousands of dimensions. Why do they trust a polynomial fit to extrapolate two orders of magnitude? Why do we even think it's polynomial instead of exponential in the first place? Most things like this increase exponentially with dimension.
In fact, I think we can do it in d=2k dimensions, if we're willing to have arbitrarily precise query vectors.
Embed our points as (sin(theta), cos(theta), sin(2 x theta), cos(2 x theta)..., sin(k x theta), cos(k x theta)), with theta uniformly spaced around the circle, and we should be able to select any k of them.
Using a few more dimensions we can then ease the precision requirements on the query.
In practice you're actually hitting further problems, because you don't have those synthetic top-k tasks but rather open-domain documents and queries to support. And if you hope to get better than "just" having the top-k correct and instead get some sort of inclusion/exclusion boundary between what should be matched and what should not be matched, you'll hit the same bounds as apply to context-length limitations for kq dimensionality in a transformer's attention layers, as I mentioned about 6 weeks ago: https://news.ycombinator.com/item?id=44570650
I'm not following your construction. In the k=2 case, how do you construct your 4-dimensional query vector so that the dot product is maximized for two different angles theta and phi, but lower for any other arbitrary angle?
Viewing our space as two complex points instead of four real:
Let H(z) = (z - exp(i theta)) (z - exp(i phi))
H(z) is zero only for our two points. We'll now take the norm^2, to get something that's minimized on only our two chosen points.
|H(exp(i x))|^2 = H(exp(i x)) x conj(H(exp(i x))) = sum from -2 to 2 of c_j x exp(i j x)
For some c_j (just multiply it out). Note that this is just a Fourier series with highest harmonic 2 (or k in general), so after negating (to turn the minimum into a maximum for the dot product) and dropping the constant term, the c_j give us the four sin/cos coefficients to use.
For a simpler, but numerically nastier construction, instead use (t, t^2, t^3, ...), the real moment curve. Then given k points, take the degree k poly with those zeros. Square it, negate, and we get a degree 2k polynomial that's maximized only on our selected k points. The coefficients of this poly are the coefficients of our query vector, as once expanded onto the moment curve we can evaluate polynomials by dot product with the coefficients.
The complex version is exactly the same idea but with z^k and z ranging on the unit circle instead.
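The moment-curve version is easy to sanity-check numerically. A minimal sketch (the grid of points, k=2, and the chosen indices are arbitrary illustration values):

```python
import numpy as np

# Moment-curve construction: embed point t as (t, t^2, ..., t^(2k)), so a
# degree-2k polynomial can be evaluated, up to its constant term, as a dot
# product with its coefficients.
n, k = 50, 2
ts = np.linspace(0.1, 1.0, n)                            # one scalar per "document"
emb = np.vander(ts, 2 * k + 1, increasing=True)[:, 1:]   # drop t^0 column -> d = 2k

chosen = [7, 31]              # indices of the k points we want to select
p = np.poly(ts[chosen])       # monic polynomial with zeros at the chosen t's
q = -np.polymul(p, p)         # -p(t)^2: maximized (= 0) exactly at those zeros
coeffs = q[::-1]              # ascending order: c_0, c_1, ..., c_{2k}
query = coeffs[1:]            # drop c_0; a constant shift can't change the argmax

scores = emb @ query
topk = set(np.argsort(scores)[-k:])
print(topk == set(chosen))    # True: the k chosen points win the MIPS
```

With only d = 2k = 4 dimensions this selects any 2 of the 50 points, at the cost of query coefficients that get numerically nasty as k grows, as noted above.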
Their point is that the capacity of even 4096-dimensional vectors limits their performance.
Sparse models like BM25 have a huge dimension and thus don’t suffer from this limit, but they don’t capture semantics and can’t follow instructions.
It seems like the holy grail is a sparse semantic model. I wonder how SPLADE would do?
Wouldn't the holy grail then be parallel channels for candidate generation, followed by weighted scoring, expansion (graph) & rerank (LLM)?
That is pretty much exactly what we do for our company-internal knowledge retrieval: it might indeed achieve the best of all worlds.
We already have "sparse" embeddings. Google's Matryoshka embedding scheme can scale embeddings from ~150 dimensions to >3k, and it's the same embedding with layers of representational meaning. Imagine decomposing an embedding along principal components, then streaming the embedding vectors in order of their eigenvalue; that's roughly the idea.
Matryoshka embeddings are not sparse. And SPLADE can scale to tens or hundreds of thousands of dimensions.
If you consider the actual latent space the full higher-dimensional representation, and you take the first principal component, the other vectors are zero. Pretty sparse. No, it's not a linked-list sparse matrix. Don't be a pedant.
When you truncate Matryoshka embeddings, you get the storage benefits of low-dimensional vectors with the limited expressiveness of low-dimensional vectors. Usually, what people look for in sparse vectors is to combine the storage benefits of low-dimensional vectors with the expressiveness of high-dimensional vectors. For that, you need the non-zero dimensions to be different for different vectors.
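A toy illustration of that last point, with made-up 8-dimensional vectors:

```python
import numpy as np

# Made-up toy vectors. Truncated Matryoshka embeddings keep the SAME leading
# dimensions for every item; sparse vectors keep a DIFFERENT small set of
# non-zero dimensions per item, which is how they stay cheap to store while
# behaving like high-dimensional vectors.
matryoshka = np.array([[0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
                       [0.2, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
sparse = np.array([[0.9, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.7, 0.0, 0.0, 0.0, 0.0, 0.2]])

def supports(m):
    # set of non-zero dimensions for each row
    return [set(np.flatnonzero(row)) for row in m]

print(supports(matryoshka))  # every row uses the same dimensions {0, 1}
print(supports(sparse))      # each row uses different dimensions
```

With identical supports, the truncated vectors span at most as many directions as dimensions kept; differing supports are what let a sparse scheme cover many more directions at the same per-vector storage cost.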
No one means Matryoshka embeddings when they talk about sparse embeddings. This is not pedantic.
No one means wolves when they talk about dogs, obviously wolves and dogs are TOTALLY different things.
Doesn't PCA compress the embeddings in this case, i.e. reduce the accuracy? It's similar to quantization.
Component analysis doesn't fundamentally reduce information; it just rotates the data into a more informative basis. People usually drop the components with small eigenvalues to do dimensionality reduction, but you don't have to do that.
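A quick numerical check of that claim, on synthetic data:

```python
import numpy as np

# Full-rank PCA is just a rotation: projecting onto ALL principal components
# and rotating back reconstructs the (centered) data exactly. Information is
# only lost when low-eigenvalue components are dropped.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
Xc = X - X.mean(axis=0)

eigvals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))  # columns of V: principal axes
Z = Xc @ V                # coordinates in the PCA basis
X_back = Z @ V.T          # rotate back

print(np.allclose(X_back, Xc))  # True
```

Quantization, by contrast, rounds coordinates and is irreversible, which is why the analogy above only holds once components are discarded.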
We used multi-vector models at Morphik, and I can confirm their real-world effectiveness, especially compared with dense-vector retrieval.
Can you share more about the multi-vector approach? Is it open source?
Curious: are those ColBERT-like ones?
How does it look with ColBERT-style late-interaction embeddings?
The dashed brown line in figures 3, 4, and 6 is labeled GTE-ModernColBERT. So better than all single-vector models, but worse than BM25.