i bet you can already picture my 'why knowing basic networking and self hosting is important' tirade

12 Posts 3 Posters 0 Views

• viss@mastodon.social
  #1

RE: https://mstdn.social/@mattwilcox/116056989955523921

i bet you can already picture my 'why knowing basic networking and self hosting is important' tirade

• nerdpr0f@infosec.exchange
  #2

@Viss I just had a deeply cursed thought. AI manufacturers block queries covering specialized support knowledge (COBOL, FORTRAN, etc.) and roll it out under a "Legacy Systems Support" license that costs slightly less than what the consultants in this space bill for.

• viss@mastodon.social
  #3

@nerdpr0f 100% - it's coming. and when that shit lands, there's gonna be a massive run on gpus, because if you have any of the X090 cards (i have a 3090ti, for example) they have 24GB of vram, which is enough to run the larger 120b models, and if you can wrap, say, qwen or deepseek or gpt-oss:120b with a decent enough harness, you can get 80% of frontier model functionality at home
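for a rough sense of what actually fits on a given card, a back-of-envelope rule of thumb is parameter count times bytes per weight, plus some headroom for KV cache and activations. this is a sketch, not a benchmark - whether a specific model fits in 24GB depends entirely on quantization and offloading:

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate: weights * bytes-per-weight,
    plus ~20% headroom for KV cache and activations."""
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * (bits/8) bytes ~= GB
    return weight_gb * overhead

# a 120B model at 4-bit quantization:
print(f"{vram_estimate_gb(120, 4):.0f} GB")  # 72 GB - needs offload on a 24 GB card
# a 24B model at 4-bit fits comfortably:
print(f"{vram_estimate_gb(24, 4):.0f} GB")   # 14 GB
```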

• h2onolan@infosec.exchange
  #4

@nerdpr0f @Viss shut up you

• viss@mastodon.social
  #5

@h2onolan @nerdpr0f https://www.youtube.com/watch?v=pJG5q0CLGys

• nerdpr0f@infosec.exchange
  #6

@Viss Yeah. My institution just rolled out an in-house developed platform that, more or less, does this. Playing around with it is on my summer to-do list.

• viss@mastodon.social
  #7

@nerdpr0f if you want a real interesting experience, clone down codex, light it up inside a container of some kind, attach it to the llm, and then ask the llm to review its own code and start implementing memory-management-type features. that's the current big push - figuring out how to get these harnesses to remember like a person does, and not go all Memento on you every time you stop/restart the harness
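the "remember across restarts" idea can be sketched in a few lines. this is purely illustrative (a file-backed note store prepended to the next session's prompt; `HarnessMemory` is a made-up name, and real harnesses lean on summarization and embedding search rather than a flat JSON file):

```python
import json
from pathlib import Path

class HarnessMemory:
    """Toy cross-restart memory for an LLM harness: notes persist to disk
    and get prepended to the prompt of the next session."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, note: str) -> None:
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes))  # survives a stop/restart

    def as_preamble(self) -> str:
        return "Notes from earlier sessions:\n" + "\n".join(f"- {n}" for n in self.notes)

# first session records a fact; a "restarted" instance still has it
Path("demo_memory.json").unlink(missing_ok=True)
m = HarnessMemory("demo_memory.json")
m.remember("user prefers terse answers")
m2 = HarnessMemory("demo_memory.json")
print(m2.as_preamble())
```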

• nerdpr0f@infosec.exchange
  #8

@Viss That sounds interesting, but my main focus - as much as I hate it - is going to be making use of this platform in some of my existing courses (exploit dev, reversing, web sec, mobile sec), in line with actual industry use cases.

• viss@mastodon.social
  #9

@nerdpr0f you may find that my proposal is 100% congruent with exactly what you are trying to do: no matter what you use an llm tui for, the problems of prompt engineering, harness engineering, and the next one coming - i'm calling it memory management - affect absolutely anything you could possibly try to do with a tui

• nerdpr0f@infosec.exchange
  #10

@Viss Admittedly, I have quite a lot of work to do on this. I don't really have any LLMs inline in any of my workflows at the moment. Since I'm not research faculty, most of the development I do is oriented around classes, and LLMs are... just overkill for that. I can write a malware sample from scratch, say, for my reversing class in substantially less time than it would take to set up that kind of pipeline... even if the pipeline is more efficient long term.

So I need to figure out these workflows from (more or less) scratch at the moment.

• viss@mastodon.social
  #11

@nerdpr0f a good place to start may be to just install ollama somewhere and start with single one-liner commands. cuz you can literally just shoot a single sentence into ollama at the command line, and it'll go into a model, and output will happen. no conversation, no system prompt, no harness - nothing. just input and output.
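that single-shot pattern maps directly onto ollama's local HTTP API too. a minimal sketch, assuming a daemon on the default port - the endpoint and fields are ollama's `/api/generate` interface, and the model name is whatever you've pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_request(prompt: str, model: str) -> bytes:
    # stream=False asks for one complete JSON reply instead of a chunk stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def one_shot(prompt: str, model: str = "llama3") -> str:
    """Single prompt in, single completion out: no conversation, no system
    prompt, no harness. Requires a running local ollama daemon."""
    req = urllib.request.Request(OLLAMA_URL, data=build_request(prompt, model),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g.: one_shot("summarize the TCP three-way handshake in one sentence")
```

the command-line equivalent is just `ollama run <model> "<prompt>"`.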

• nerdpr0f@infosec.exchange
  #12

@Viss That's about where I am right now. I've got a few models running locally on an older gaming box, but they're not inline with any workflows.
