@abe I found your site via Linux Unplugged (https://linuxunplugged.com/652) - what you've done is very interesting.
Have you shared any of the code / setup behind this article? https://aindoria.com/posts/letting_llms_loose_in_vm-1/ - I think it would be interesting for others (me) to experiment in the same way.
-
@roo@mstdn.ca Hey Roo, thanks for the kind words. I haven't shared that code, but I still have it somewhere. Let me see if I can put a repo up over the weekend -- just a heads up though, the volition/abiverse thing was a successor to this, so the first experiment would be very, very rough. You'd have to manually watch the logs etc.
-
@abe You don't need to make it a turn-key solution. I was just hoping for a sketch of what you did - like a pile of parts to build something similar.
If you simply took whatever you had - and redacted your tokens and any personal stuff - even just deleted it and replaced it with a comment saying "this code did basically X" -- it'd be a starting point to replicate your experiment.
I honestly probably wouldn't try to replicate, but mimic the intent. I'm curious if a similar experiment can be done 100% locally - with much smaller models. What behaviour(s) will emerge?
Could I get 2 different small models to collaborate and build something neither could on their own?
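(For anyone wondering what that two-model collaboration might look like, here's a minimal sketch of the turn-taking loop. Everything in it is hypothetical -- `call_model_a` and `call_model_b` are stand-ins for whatever local inference backend you'd actually use, e.g. an Ollama or llama.cpp HTTP call.)

```python
# Minimal sketch: two "models" take alternating turns on a shared history.
# The model functions are stubs; swap them for real local inference calls.

def call_model_a(history):
    # Stand-in for model A (hypothetical local LLM call).
    return f"A's take after {len(history)} messages"

def call_model_b(history):
    # Stand-in for model B (hypothetical local LLM call).
    return f"B's take after {len(history)} messages"

def collaborate(task, turns=4):
    """Alternate between two models, each seeing the full transcript so far."""
    history = [("user", task)]
    models = [("A", call_model_a), ("B", call_model_b)]
    for i in range(turns):
        name, model = models[i % 2]
        reply = model(history)
        history.append((name, reply))
    return history

transcript = collaborate("Build a tiny web server together.")
for speaker, text in transcript:
    print(f"{speaker}: {text}")
```

Whether anything interesting emerges presumably depends on the prompts and on giving each model a distinct role, but the plumbing really is just this loop.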
-
@roo@mstdn.ca Just letting you know that I've not forgotten this. I had to go into old hard drives to figure out where it was. I finally found it this morning. I'll strip off some hardcoded config lines and publish it. Like an idiot I never version controlled it.
-
@roo@mstdn.ca I didn't clean it up much beyond stripping secrets. https://github.com/AIndoria/llm-experiment
-
@abe Thanks!