@n_dimension I don't get what you mean. Running LLMs isn't the issue, especially lightweight ones, as you and I both know. Of course you can run a lightweight model on cheap, low-resource hardware. It's the *training* of LLMs that is resource-hungry as hell. That is what a large part of datacentre resources is being used for.