
Ugh... this AI world is moving too fast... No wonder so many smart folks have #aianxiety

Uncategorized
Tags: aianxiety, antigravity, vibecoders, compute, claudecode
6 Posts, 2 Posters, 5 Views
  • n_dimension@infosec.exchange
    #1

    Ugh... this AI world is moving too fast... No wonder so many smart folks have #aianxiety

    #Antigravity is a thing and apparently it's pretty schmick. But right now I have no time to absorb another platform.

    What do the smart #Vibecoders do to mitigate rate limits? (The #Compute ceiling is something you hit very hard once you start doing serious projects.)

    1. Run #Claudecode inside Antigravity for #Vibecode
    2. Add an automated testing agent to clean up bugs without wasting tokens. WHAT?
    3. Use #Gemini3 for design and architecture
    4. #Testsprite MCP for testing (not using tokens)

    I have no idea how to do this; I would have to pick up three new platforms, and even with #Ai this is beyond me...

    ... I understand why #techbros are hitting #ketamine... That, or retire to write poetry.

  • zer0unplanned@friendica.rogueproject.org
    #2

    @n_dimension it is like this, from llama.cpp compiled with cmake.
    The threshold RAM is 12 GB for the whole system, so it must be a quantized model for the RAM's sake. I gave it access to the full CPU available on this image.
    ~/
    ├── llama.cpp/build/bin/llama-server (0.0.0.0)
    ├── qwen2.5-7b-q4.gguf
    ├── RAG/
    │   ├── scraper.py
    │   └── data/ -> /var/home/plan/
    └── scripts/
        ├── multi_hallucination_check.py
        └── code_validator.py
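Out of interest, here is a minimal sketch of what a one-page-per-request HTML-to-.txt scraper like the `scraper.py` above *could* look like. This is my own stdlib-only guess, not the actual script; the function names are hypothetical:

```python
import urllib.request
from html.parser import HTMLParser
from pathlib import Path

class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> content."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    """Strip markup, keeping only the visible text, one chunk per line."""
    p = _TextExtractor()
    p.feed(html)
    return "\n".join(p.chunks)

def scrape_page_to_txt(url: str, out_dir: str) -> Path:
    """One polite HTTP request per page; saves the text for the RAG data dir."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    out = Path(out_dir) / (url.rstrip("/").rsplit("/", 1)[-1] + ".txt")
    out.write_text(html_to_text(html))
    return out
```

The saved `.txt` files would then live under something like `RAG/data/` for the model to draw on.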

    The checkers are what you must ask about. The scraper does only one page-friendly HTTP request per page and saves the text from the HTTP response as .txt. The keyword checker and the code checker are two other metrics that can be used anywhere; they are not hooked to the VM, so they run separately in the same Pod environment.
    Examples:

    The keyword checker searches for comma-separated terms you enter in the input field. For example:
    - If you type hello,world,foo in the keyword box
    - And your code contains "Hello world!" and foo = 42
    - It finds 3 matches: "hello" (case-insensitive), "world", "foo"
    - Shows: Keywords: 3
    It counts occurrences case-insensitively using substring matching. The search terms are split by commas, and each term is searched for throughout your entire input text. If the keyword field is empty, it shows Keywords: 0.
    Cyclomatic
    Measures code complexity by counting decision points (if/for/while/switch/case/?)
    - Formula: (50 - complexity count) * 2 = percentage
    - Your code had only 1 decision point → (50 - 1) * 2 = 98%
    - Higher % = simpler code (less branching logic)
    Entropy
    Measures character randomness/predictability in your text
    - Formula: Shannon entropy * 12.5 (to normalize to 0-100%)
    - Your text has moderate character variety (a mix of letters/symbols/spaces)
    - Higher % = more unpredictable/random character patterns
    - Lower % = more repetitive/predictable text (like long strings of the same letter)
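As a rough illustration, the three metrics described above could be computed like this. This is my own minimal Python sketch following the stated formulas (function names are hypothetical), not the actual `multi_hallucination_check.py`:

```python
import math
import re

def keyword_score(keywords: str, text: str) -> int:
    """Case-insensitive substring counts for comma-separated terms."""
    if not keywords.strip():
        return 0  # empty keyword field -> Keywords: 0
    hay = text.lower()
    return sum(hay.count(term.strip().lower())
               for term in keywords.split(",") if term.strip())

# Decision points: if / for / while / switch / case keywords, plus '?'
DECISION_POINTS = re.compile(r"\b(?:if|for|while|switch|case)\b|\?")

def cyclomatic_score(code: str) -> int:
    """(50 - decision-point count) * 2, clamped to the 0..100 range."""
    count = len(DECISION_POINTS.findall(code))
    return max(0, min(100, (50 - count) * 2))

def entropy_score(text: str) -> float:
    """Shannon entropy in bits/char, scaled by 12.5 toward 0..100."""
    if not text:
        return 0.0
    n = len(text)
    freqs = (text.count(c) for c in set(text))
    h = -sum((f / n) * math.log2(f / n) for f in freqs)
    return h * 12.5
```

For example, one decision point gives `cyclomatic_score` = (50 - 1) * 2 = 98, matching the worked example above.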

    The code checker is self-explanatory, but this whole part runs outside the LLM and can be applied to any pasted text (a way to detect AI, btw).

    So I use that last part, the checkers, to check the answers of internet AIs, as my RAG makes this unnecessary for my own model.

    No API that connects me to their cloud. Everything is in the manuals and official docs as text files in the RAG data; otherwise it gives me answers full of hallucinations, and from 2023. Now it is 100% clear of hallucinations, but restricted to the fed sources. One also has to be wary of what to feed it: content from a human on Khan Academy went wrong for me, and Reddit-etc. crap from humans does not count here for me.

  • n_dimension@infosec.exchange
    #3

    @zer0unplanned

    That's impressive.
    Thank you for showing me your stack.

    Impressive that you're using it for AI-text checking... I actually had that thought literally yesterday and pencil-sketched a system. But it turns out that if you can spot a bot... then a bot can use the same algorithms to fake being human 🙄
    Consequently, the only bots you will spot are the shitty ones, not the state-actor ones.

    I'm not using RAG, as I mainly #vibecode, and for that, once the box spits out the code, it becomes deterministic and its operation can be verified functionally.

  • zer0unplanned@friendica.rogueproject.org
    #4
    @n_dimension Thank you, friend.
    Then the multi-hallucination checker is something for you, so you can check whatever AI you use and be certain, since even humans can be wrong or not the right solution for the use case, in my opinion. The difference is that I feed it manuals and it cooks me the code (VibeCoding as well), with the advantage of being able to troubleshoot a network failure or problem offline, as it works from official RFCs and man pages. The trick is in fact simple, since you are developing a RAG-like stack as well: create a feed file in var/home/etc and just set the system message to explicitly look there, and it works, but with slow output.
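A minimal sketch of how that system-message trick could look against llama-server's OpenAI-compatible /v1/chat/completions endpoint. The function names, prompt wording, and truncation limit are my own assumptions, not the actual setup:

```python
from pathlib import Path

def build_system_message(feed_dir: str, limit: int = 8000) -> str:
    """Concatenate the .txt feed files and pin the model to them."""
    parts = []
    for f in sorted(Path(feed_dir).glob("*.txt")):
        parts.append(f"### {f.name}\n{f.read_text()}")
    context = "\n\n".join(parts)[:limit]  # crude cap to fit the context window
    return ("Answer ONLY from the reference material below. "
            "If the answer is not there, say so.\n\n" + context)

def chat_payload(question: str, feed_dir: str) -> dict:
    """Request body to POST to the local llama-server chat endpoint."""
    return {
        "messages": [
            {"role": "system", "content": build_system_message(feed_dir)},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # keep answers close to the fed sources
    }
```

Stuffing the whole feed into the system message on every request also explains the slow output: the model re-reads all that context each time.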
  • zer0unplanned@friendica.rogueproject.org
    #5
    @n_dimension Don't want to annoy you by digging for pictures, as I've posted a lot, but that is the scraper that feeds into the opt/llmfeed/etc
  • zer0unplanned@friendica.rogueproject.org
    #6

    @n_dimension OK, last one. To my horror I found out that I had somehow accidentally deleted the toolbox pod, so I had to remake an llm-dev environment and reinstall packages, dependencies, etc. to make the apps work; the server was kept OK on the host.
    The test is that man page on man7.org about URLs, URIs, etc. (and I only slept 5h 20min last night).
    Notice the response time in the AI's response vs. the text (what I meant: it is slow, but accurate as far as I can see).

    Have a good day. I think you live down under and are sleeping now.