RE: https://mstdn.social/@mattwilcox/116056989955523921
i bet you can already picture my 'why knowing basic networking and self hosting is important' tirade
@Viss I just had a deeply cursed thought. AI manufacturers block queries covering specialized support knowledge (COBOL, FORTRAN, etc.) and roll it out under a "Legacy Systems Support" license that costs slightly less than what the consultants in this space bill for.
-
@nerdpr0f 100% - it's coming. and when that shit lands, there's gonna be a massive, massive run on gpus, because if you have any of the X090 cards (i have a 3090ti for example) they have 24gig of vram, which is enough to run the larger 120b models, and if you can wrap say, qwen or deepseek or gpt-oss:120b with a decent enough harness, you can get 80% of frontier model functionality at home
-
@Viss Yeah. My institution just rolled out an in-house developed platform that, more or less, does this. Playing around with this is on my summer to-do list.
-
@nerdpr0f if you want a really interesting experience, clone down codex, light it up inside a container of some kind, attach it to the llm, and then ask the llm to review its own code and start implementing memory management type features. that's the current big push - figuring out how to get these harnesses to remember like a person does, and not go all Memento on you every time you stop/restart the harness
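[At its simplest, the "Memento" problem described above - a harness forgetting everything between restarts - comes down to persisting conversation turns to disk and reloading them on startup. A minimal illustrative sketch in Python; the file path and message format here are assumptions, not any particular harness's actual implementation:]

```python
import json
from pathlib import Path

MEMORY_FILE = Path("harness_memory.json")  # hypothetical persistence location

def load_memory() -> list:
    """Restore prior conversation turns so a restarted harness isn't amnesiac."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(history: list, role: str, content: str) -> None:
    """Append a turn and persist immediately, so it survives a stop/restart."""
    history.append({"role": role, "content": content})
    MEMORY_FILE.write_text(json.dumps(history, indent=2))

# on startup the harness reloads what it knew before:
history = load_memory()
remember(history, "user", "where did we leave off?")
```

[Real memory-management work goes well beyond this - summarization, retrieval, deciding what to forget - but the persist-and-reload loop is the core of it.]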
-
@Viss That sounds interesting, but my main focus - as much as I hate it - is going to be around making use of this platform in some of my existing courses (exploit dev, reversing, web sec, mobile sec) in line with actual industry use cases.
-
@nerdpr0f you may find that my proposal is 100% congruent with exactly what you're trying to do, because no matter what you use an llm tui for, the problems of prompt engineering, harness engineering, and the next one coming (i'm calling it memory management) affect absolutely anything you could possibly try to do with a tui
-
@Viss Admittedly, I have quite a lot of work to do on this. I don't really have any LLMs inline in my workflows at the moment. Since I'm not research faculty, most of the development I do is oriented around classes, and LLMs are... just overkill for that. I can write a malware sample from scratch, say, for my reversing class in substantially less time than it would take to set up that kind of pipeline - even if the pipeline is more efficient long term.
So, I need to figure out these workflows from (more or less) scratch at the moment.
-
@nerdpr0f a good place to start may be to just install ollama somewhere and start with one-liner commands. cuz you can literally just shoot a single sentence into ollama at the command line, it'll go into a model, and output will happen. no conversation, no system prompt, no harness - nothing. just input and output.
-
@Viss That's about where I am right now. I've got a few models running locally on an older gaming box, but they're not inline with any workflows.