The #OpenClaw and #Ollama local #AI #agent combo is working fairly well.

Uncategorized
Tags: openclaw, ollama, agent, gpt
13 Posts · 5 Posters · 35 Views
lydie@tech.lgbt
#1

    The #OpenClaw and #Ollama local #AI #agent combo is working fairly well. The setup is an absolute nightmare, but I won after many hours of tweaking. Running the #GPT-OSS:20B model with a 32k context window on a 7900 XTX. OpenClaw is installed in a VirtualBox VM running Linux, on a Windows 10 host with a 7950X and 128 GB of DDR5; Ollama runs on the bare metal.

    Responses take about a minute, give or take.
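A quick way to sanity-check this split setup is to hit the bare-metal Ollama server from inside the guest. A minimal sketch, assuming VirtualBox's default NAT networking (where the guest reaches the host at 10.0.2.2), Ollama's default port 11434, and that Ollama was started with `OLLAMA_HOST=0.0.0.0` so it listens beyond localhost:

```python
# Smoke test: from inside the VirtualBox guest, confirm the
# bare-metal Ollama server on the host is reachable.
import json
import urllib.request

# 10.0.2.2 is the host under VirtualBox's default NAT networking;
# 11434 is Ollama's default port.
OLLAMA_URL = "http://10.0.2.2:11434"

def list_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return the names of models the Ollama server has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    print(list_models())  # expect something like ['gpt-oss:20b']
```

If this fails, the usual culprits are Ollama binding only to 127.0.0.1 on the host or the Windows firewall blocking port 11434.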

davep@infosec.exchange
#2

      @Lydie What are you using it for?

bitzero@corteximplant.net
#3
@Lydie Why Win10? Everything on Linux seems more straightforward.
sheepfreak@pixelfed.social
#4
          "The setup is an absolute nightmare"... true! I gave up on it 😕
maikm@chaos.social
#5

            @Lydie Did you try other models, like gemma3:32b?

lydie@tech.lgbt
#6

@maikm I tried Qwen3.5 and it struggled: very slow, and it seemed to overflow into system RAM. 20B seems a good model size to still fit the context window.

lydie@tech.lgbt
#7

                @bitzero See my profile for a note on that...

lydie@tech.lgbt
#8

                  @sheepfreak I almost did. Needless to say, I made some solid backups!

bitzero@corteximplant.net
#9
@Lydie Ah ok. Got it.
maikm@chaos.social
#10

                      @Lydie I've used those before too (and also deepseek-r1:70b) and settled on gemma3:27b (sorry, not 20b) for its nice balance of speed and quality.

                      I run these on a MacBook Pro M1 Max with 64 GB, which supports models up to 48 GB; I don't know if the 27b will fit in your GPU.
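Whether a 27B model fits in a 24 GB card comes down to simple arithmetic on the quantization level. A back-of-envelope sketch, assuming 4-bit weights (Ollama's usual default quantization) and ignoring the KV cache, which grows with context length, and runtime overhead:

```python
# Rough VRAM estimate for model weights alone (ignores KV cache
# and runtime overhead, which also compete for GPU memory).
def weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory needed for model weights, in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# gemma3:27b at 4-bit quantization:
print(round(weight_gib(27, 4), 1))  # ~12.6 GiB of weights
# That leaves headroom in a 24 GB 7900 XTX for a sizable KV cache,
# whereas the same model at 8-bit (~25.1 GiB) would spill to system RAM.
```

This matches the symptom upthread: a model whose weights plus KV cache exceed VRAM overflows to system RAM and slows to a crawl.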

lydie@tech.lgbt
#11

                        @maikm I have a Strix Halo tablet that can do a similar trick, I should give it a go. The thought of using a tablet as a remote LLM host is 🤣

lydie@tech.lgbt
#12

                          @davep Eventually, to automate monotonous daily tasks for work. E.g. collecting the latest daily NWS weather forecasts and summarizing them hyper-locally to deliver to my field colleagues.
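The forecast-digest task described here can be sketched end to end: the free NWS API at api.weather.gov resolves a lat/lon to a forecast URL, and the result is fed to the local Ollama server. The coordinates, model name, contact e-mail, and prompt wording below are placeholders, not anything from the original post:

```python
# Sketch: fetch an NWS point forecast and summarize it with a
# local Ollama model. Coordinates and prompt are illustrative.
import json
import urllib.request

def nws_forecast(lat: float, lon: float) -> str:
    """Fetch the next few forecast periods for a point from api.weather.gov."""
    headers = {"User-Agent": "forecast-digest (me@example.com)"}  # NWS requires a UA
    req = urllib.request.Request(
        f"https://api.weather.gov/points/{lat},{lon}", headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        forecast_url = json.load(resp)["properties"]["forecast"]
    req = urllib.request.Request(forecast_url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        periods = json.load(resp)["properties"]["periods"]
    return "\n".join(f"{p['name']}: {p['detailedForecast']}" for p in periods[:4])

def build_prompt(forecast: str) -> str:
    """Wrap the raw forecast text in a summarization instruction."""
    return ("Summarize this forecast in two sentences for field crews, "
            "highlighting anything that affects outdoor work:\n" + forecast)

def summarize(prompt: str, model: str = "gpt-oss:20b") -> str:
    """Send the prompt to a local Ollama server via its generate API."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(summarize(build_prompt(nws_forecast(39.75, -104.99))))
```

Wired to a scheduler (cron, Task Scheduler), this is the shape of the "daily digest for field colleagues" use case.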

davep@infosec.exchange
#13

                            @Lydie 😎

relay@relay.infosec.exchange shared this topic.