
This is really super thorough.

paco@infosec.exchange
#1

    RE: https://mstdn.social/@hkrn/116234269715436761

    This is really super thorough. Such a powerful summary of the AI agent attack surface.

What I think people miss is cost/benefit. Consider two things:

(1) they name a shit-ton of defensive activities like blocking outbound connections, signing artefacts, etc. They even say “None of these controls are free. Approval gates reduce autonomy. Outbound restrictions frustrate users who expect agents to browse freely. Memory cleanup can reduce recall if thresholds are too strict.” What is the value prop for AI once it is wrapped in these necessary safeguards? How do the costs of these mitigations change the cost-benefit calculation? (See the sketch below for what just two of these controls look like in practice.)

(2) this “solution to LLM errors is more LLMs” approach stacks layer upon layer of LLM. No AI company has ever made a profit. Not even close. When they finally try to reverse the tide, they need to turn a profit not just on a COGS basis, month-by-month or Q-on-Q; they need to make back some of the catastrophic losses of the first few years of giving it away. So we layer in all the security features we need for (1), and then the price of LLMs jumps 100x or more. Will this design of layers upon layers of LLM agents make financial sense? Will the damage to the climate be worth whatever the results are?
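To make (1) concrete, here is a rough sketch of just two of those controls, an outbound allowlist plus a human approval gate, wrapped around a single agent fetch. The names and policy are my own illustration, not from the article; every line of it is friction and cost the vendor has to price in.

```python
# Hypothetical sketch: an outbound allowlist and a human approval gate
# around one agent tool call. Both checks are pure overhead relative to
# an unrestricted agent: the allowlist breaks free browsing, and the
# gate trades autonomy for a human decision on every request.
from urllib.parse import urlparse
from urllib.request import urlopen

# Assumption: the operator maintains a curated allowlist of hosts.
ALLOWED_HOSTS = {"docs.example.com", "api.internal.example"}

def gated_fetch(url: str, approve) -> bytes:
    """Fetch a URL on the agent's behalf, but only if the host is
    allowlisted AND an approver callback signs off."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"outbound blocked: {host!r} not allowlisted")
    if not approve(f"agent wants GET {url}"):
        raise PermissionError("request rejected at the approval gate")
    with urlopen(url, timeout=10) as resp:  # reached only after both checks
        return resp.read()

# Usage: every single fetch now costs a human interruption.
# gated_fetch("https://docs.example.com/page",
#             approve=lambda msg: input(msg + " [y/N] ") == "y")
```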
    1/2

paco@infosec.exchange
#2

LLMs fixing LLMs means that tons of inference results are thrown away. The bottom agent infers something that the next agent says is no good. So that is waste: it is discarded, and the bottom agent tries again with new context added by the second agent. And so on. Each layer’s mistakes cost just as much to generate as the right answer, and the stack discards many attempts before arriving at an answer that passes the next agent’s evaluation.
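The loop I mean is roughly the one below. generate and evaluate are stand-ins for whatever paid LLM calls the stack actually makes; the point is that every rejected draft is a full inference you pay for and then throw away.

```python
# Sketch of the layered generate/evaluate loop described above.
# Both callbacks are paid LLM inferences; rejected drafts are waste.

def layered_answer(task: str, generate, evaluate, max_attempts: int = 5):
    context = task
    discarded = 0
    for _ in range(max_attempts):
        draft = generate(context)       # bottom agent: one paid inference
        ok, critique = evaluate(draft)  # next layer up: another paid inference
        if ok:
            return draft, discarded
        discarded += 1                  # the draft is discarded outright
        context = f"{task}\n\nPrevious attempt rejected because: {critique}"
    raise RuntimeError(f"no draft passed review; {discarded} attempts wasted")
```

If the evaluator rejects, say, two of every three drafts, two thirds of the bottom layer’s inference spend is discarded before a third layer even enters the picture.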

Cryptocurrency used these GPUs and generated 99.99999999% waste. Only super-rare calculations were kept; practically all of the calculated hashes were discarded: pure waste. LLMs use the same GPUs with much less waste, since we actually accept and use far more of the results. But the more layers we wrap around agents, and the more times we remand one agent’s answer back to be redone, the more waste we create.
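For scale, a back-of-the-envelope on the mining comparison, using the standard expected-work formula; the difficulty figure is my assumption for the rough 2024-era order of magnitude.

```python
# Expected hashes per Bitcoin block is difficulty * 2**32; exactly one
# winning hash per block is kept. Difficulty ~1e14 is an assumed figure.
difficulty = 1e14
expected_hashes = difficulty * 2**32   # ~4.29e23 attempts per block
kept_fraction = 1 / expected_hashes    # ~2.33e-24 of attempts survive
print(f"hashes per block ~ {expected_hashes:.2e}; kept ~ {kept_fraction:.2e}")
# Waste is 1 - 2.33e-24: far past the ten nines quoted above.
```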

It will matter when AI companies start charging customers more than it costs to provide the service (i.e., a profitable service charge).
      2/2
