@pikesley @CedC @deathkitten LLMs are, in a sense, essentialization engines: they learn the characteristics of what they must reproduce. Those "summarized" characteristics are embodied in the embeddings. To a certain extent, that can be seen as what the LLM "knows".
Once you have trained your model, the embeddings alone can be valuable as "knowledge".
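A minimal sketch of that idea: with trained embeddings, geometric closeness between vectors already encodes relatedness, so the vectors alone carry usable "knowledge". The words and vector values below are made up for illustration, not taken from any real model.

```python
import math

# Hypothetical toy embeddings; in practice these would be rows of a
# trained model's embedding matrix (real vectors have hundreds of dims).
emb = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    # Cosine similarity: how aligned two embedding vectors are.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Related concepts sit closer in embedding space than unrelated ones,
# even with no model forward pass: the "knowledge" is in the vectors.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["apple"]))
```

This is also why people reuse frozen embeddings downstream (search, clustering, classification) without the rest of the network.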