Skip to content
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups
Skins
  • Light
  • Brite
  • Cerulean
  • Cosmo
  • Flatly
  • Journal
  • Litera
  • Lumen
  • Lux
  • Materia
  • Minty
  • Morph
  • Pulse
  • Sandstone
  • Simplex
  • Sketchy
  • Spacelab
  • United
  • Yeti
  • Zephyr
  • Dark
  • Cyborg
  • Darkly
  • Quartz
  • Slate
  • Solar
  • Superhero
  • Vapor

  • Default (Cyborg)
  • No Skin
Collapse
Brand Logo

CIRCLE WITH A DOT

brian_greenberg@infosec.exchangeB

brian_greenberg@infosec.exchange

@brian_greenberg@infosec.exchange
About
Posts
8
Topics
8
Shares
0
Groups
0
Followers
0
Following
0

View Original

Posts

Recent Best Controversial

  • Grateful to be a part of the Gartner Chicago CIO Community Executive Summit at the Sears (Willis) Tower.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    Grateful to be a part of the Gartner Chicago CIO Community Executive Summit at the Sears (Willis) Tower. The most sobering thing I heard came from CIOs across all sorts of companies, who openly admitted that nobody has solved AI or Agentic AI operationalization. In a room full of people who are supposed to have the answers, that's the right starting point. Every conversation seemed to be all about #agenticAI.

    A few things that stuck:

    ・ The "Death of the ERP?" conversation wasn't hyperbole. Agentic AI is genuinely unbundling what monolithic ERP systems do, and CIOs who aren't asking that question now will be answering it under pressure in two years.

    ・ Most organizations are still stuck between proof of concept and production. The gap is real and larger than most teams are willing to admit publicly.

    ・ Governance has to come before you scale adoption, not after. IDC projects AI identities will hit 1.3 billion within two years. The organizations that haven't started thinking about identity and access controls for AI agents are already behind.

    ・ Know what you're trying to accomplish before you start buying tools. The orgs getting value from AI defined the outcome first.

    The CIO role is shifting. The value is in guiding the organization through the change, not just managing the infrastructure underneath it.

    Shoutout to Zander Petersen and the Gartner team for a well-run day.

    Link Preview Image
    Chicago CIO Community Executive Summit

    C-level executives gain new connections and actionable insights through peer-driven content and discussions at the Chicago CIO Executive Summit.

    favicon

    Evanta_Inc (www.evanta.com)

    #CIO #AI #Leadership #Cybersecurity @RHR_International

    Uncategorized agenticai cio leadership cybersecurity

  • An AI coding agent wiped out a company's entire production database and every backup in just 9 seconds.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    An AI coding agent wiped out a company's entire production database and every backup in just 9 seconds. The AI agent later confessed, in its own words, that it guessed a destructive action would be scoped to the staging environment, didn't verify, didn't read the docs, and just did it anyway. 🤦🏻‍♂️ Everyone's blaming the AI. I'm looking at the humans who handed it the keys. This wasn't a rogue model. It was a predictable outcome of predictable choices:

    - A CLI token with blanket permissions across all environments
    - Backups stored on the same volume as the data they're meant to protect
    - A cloud provider whose API executes destructive commands with zero confirmation step
    - An agent given access to production while the team thought it was safely contained in staging

    The founder is now manually reconstructing customer bookings from Stripe logs and calendar integrations. Every one of his customers is doing the same because of a 9-second API call. AI agents don't have judgment. They have instructions and permissions. Whatever permissions you grant, assume they will eventually be used in the worst possible sequence at the worst possible moment. That's not pessimism, it's how you architect resilient systems. Separate your environments. Scope your tokens. Store backups offline and off-volume. Require confirmation before any destructive operation. These aren't AI-era lessons. They're 30-year-old lessons that people keep skipping because the tooling makes it easy to skip them. The speed AI can act is new. The failure modes underneath it are not.
    https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue
    #AI #Cybersecurity #RiskManagement

    Uncategorized cybersecurity riskmanagement

  • A lower court decided Apple, Google, and Facebook lose Section 230 immunity because they ran credit card transactions inside social casino apps.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    A lower court decided Apple, Google, and Facebook lose Section 230 immunity because they ran credit card transactions inside social casino apps. Not because they built the apps. Not because they designed the gambling mechanics. Because they processed the payments.

    Follow that logic downstream and Etsy is liable for a seller's counterfeit goods the moment a buyer checks out. Patreon is exposed the second a creator's content draws a lawsuit. Section 230 has kept smaller platforms alive since 1996 by separating the pipe from the content flowing through it. Courts inventing a payment-processing carve-out don't hurt Apple. Apple has lawyers. The platforms that get hurt are the ones that can't afford to fight.

    EFF filed an amicus brief arguing the 9th Circuit should reverse the lower court, and they're right. Congress never drew a line between hosting content and processing payments for it. Judges shouldn't draw one now just because the content happens to be digital slot machines.

    https://www.eff.org/deeplinks/2026/04/eff-9th-circuit-again-app-stores-shouldnt-be-liable-processing-payments-user
    #Tech #Law #Leadership

    Uncategorized tech law leadership

  • Anthropic recorded over 16 million interactions with Claude from about 24,000 fake accounts, which are reportedly linked to Chinese companies trying to cheaply copy the model.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    Anthropic recorded over 16 million interactions with Claude from about 24,000 fake accounts, which are reportedly linked to Chinese companies trying to cheaply copy the model. Google faced more than 100,000 attempts to copy Gemini. OpenAI reports that most distillation attacks they find come from China. This is not an isolated event. It is a repeatable and scalable strategy.

    Breaking the terms of service isn't enough to stop people when the reward is closing a years-long gap in AI technology. The House Select Committee on China wants to label 'adversarial distillation' as industrial espionage under the Economic Espionage Act, which makes sense. At the moment, getting caught just means losing an account. That is hardly a real punishment.

    The Trump-Xi summit is approaching, and the White House is reportedly considering sanctions. However, Trump has previously traded away export controls for other deals. If that happens again, AI companies may have to protect their intellectual property by themselves.

    When laws fail to keep pace with new types of attacks, attackers automatically have the advantage.

    If your company is developing anything unique using advanced AI models, your API access logs are now part of your security risks.

    https://arstechnica.com/tech-policy/2026/04/us-accuses-china-of-industrial-scale-ai-theft-china-says-its-slander/

    #AI #Cybersecurity #NationalSecurity #IntellectualProperty #Geopolitics #security #privacy #cloud #infosec #Espionage

    Uncategorized cybersecurity nationalsecurit intellectualpro geopolitics

  • An ex-Azure engineer published six essays arguing Microsoft's cloud has been on life support since 2008, and the cause isn't bad code.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    An ex-Azure engineer published six essays arguing Microsoft's cloud has been on life support since 2008, and the cause isn't bad code. It's bad people decisions. Rushed launch, post-launch talent exodus, no testing discipline, no architectural vision. Sound familiar to anyone who's worked in a place that ships first and staffs later?

    Now layer 2026 on top. Microsoft cut roughly 15,000 jobs in mid-2025. Coding agents are pumping out 4x more commits in 90 days. GitHub's unofficial uptime has slipped under 90% and the proposed fix is, wait for it, moving more of GitHub onto Azure. The same Azure the engineer says is held together with rushed decisions and wishful thinking.

    🧠 The phrase that stuck with me is "knowledge dilution from high attrition." When the senior people who knew why a system was built that way leave, no LLM in the world can recover that context
    🤖 More AI-written code does not mean less work. It means more code to review, test, deploy, and run, which means more compute and more humans needed downstream
    📉 OpenAI signing an $11.9B compute deal with CoreWeave in March 2025 was the loudest "we don't trust your capacity" signal Microsoft has ever received from its closest partner
    🪑 The bet that AI lets you cut headcount keeps colliding with the reality that AI generates work for humans faster than it removes it

    Every CIO I talk to is being pitched the same dream: fewer engineers, more agents, lower run rate. The Azure story is what happens when that math doesn't pencil out and the bill comes due in incidents instead of dollars.

    https://www.theregister.com/2026/04/04/azure_talent_exodus/
    #Azure #AI #Leadership #security #privacy #cloud #infosec #cybersecurity #software #devops

    Uncategorized azure leadership security privacy

  • Four grand.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    Four grand. That's what it costs a random kid with a laptop to run a voice phishing operation that used to require a call center, a phisher, and a developer. ATHR packages all of it into one dashboard, tosses in AI voice agents that can ad-lib when a victim gets suspicious, and ships with ready-made lures for Google, Microsoft, Coinbase, Binance, and a few more.

    CyberCrime has a SaaS model now, complete with commission splits (10% of profits back to the vendor). The barrier to running a convincing vishing campaign just collapsed, and your awareness training still says "watch for typos in the email."

    🎙️ AI agents handle objections live, so the "support rep" sounds real because they are, functionally, reasoning
    📧 Lure emails are customized per target with accurate IPs, dates, locations, and pass authentication checks
    🏦 Eight brands supported out of the box, crypto exchanges heavily represented for obvious reasons
    🛡️ Stop looking at email indicators, start modeling normal communication patterns and flag the anomalies

    If your vishing defense is a 20-minute annual training video and a phish-report button, you're bringing a knife to a drone fight. The humans on the other end of the phone aren't humans anymore, and they don't get tired, rattled, or bored on calls.

    Link Preview Image
    New ATHR vishing platform uses AI voice agents for automated attacks

    A new cybercrime platform called ATHR can harvest credentials via fully automated voice phishing attacks that use both human operators and AI agents for the social engineering phase.

    favicon

    BleepingComputer (www.bleepingcomputer.com)

    #Cybersecurity #Vishing #AI #security #privacy #cloud #infosec #cybersecurity

    Uncategorized cybersecurity vishing security privacy

  • 🚨 I'm hiring right now.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    🚨 I'm hiring right now. And I'm deleting a huge chunk of applications inside the first 10 seconds. Not because the candidates are bad. Because their profiles look fake.

    📌 TLDR In 2026, bots, scammers, and nation-state actors are flooding every job posting. If your LinkedIn profile looks like one of theirs, you get swept into the same trash pile, no matter how qualified you are. Here's how to clear the 10-second test.

    🔑 THE NON-NEGOTIABLE MINIMUMS

    ✅ A real photo of your actual face. Not an avatar. Not an AI portrait. Not a blank silhouette.
    ✅ LinkedIn identity verification — free, 5 minutes, instant signal you're human: https://www.linkedin.com/help/linkedin/answer/a1359065
    ✅ Your city, or at minimum your state. "United States" alone reads as a scam. Not every company is set up to hire in every state; payroll, tax, and legal nexus all matter.

    🚫 INSTANT TURN-OFFS

    ❌ "Dear Hiring Manager" with zero customization
    ❌ Typos in the first sentence of your outreach
    ❌ Résumé claims that don't match your LinkedIn dates
    ❌ "Can we move this to WhatsApp?" — textbook scammer, blocked and done
    ❌ Bashing your last employer

    The bar hasn't gotten higher. The noise floor has. Standing out in 2026 doesn't require a gimmick. It requires proving, in 10 seconds, that you're not one of the fakes.

    #Hiring #JobSearch #LinkedInTips #CareerAdvice #Recruiting #Cybersecurity

    Uncategorized hiring jobsearch linkedintips careeradvice recruiting

  • 😳 Someone hid a prompt injection inside invisible markdown comments in a pull request.
    brian_greenberg@infosec.exchangeB brian_greenberg@infosec.exchange

    😳 Someone hid a prompt injection inside invisible markdown comments in a pull request. A developer asked Copilot to review the PR. Copilot read the hidden instructions, searched the codebase for AWS keys, encoded them in base16, and smuggled them out through GitHub's own image proxy as 1x1 transparent pixels. The CSP didn't flag it because the traffic was routed through GitHub's trusted infrastructure. CVSS 9.6. No malicious code ever executed.

    The attacker weaponized the AI assistant's own access permissions. Copilot could see everything the developer could see, and it can't distinguish a legitimate instruction from a hidden one buried in a PR description.

    🔍 The attack, dubbed "CamoLeak," was patched by GitHub in August 2025 and publicly disclosed in October
    🔑 Copilot was directed to find secrets like API keys and cloud credentials, then exfiltrate them character by character
    🖼️ Data was hidden inside pre-signed image URLs, making it look like normal browser activity
    ⚠️ Any AI assistant with deep system access, Microsoft 365 Copilot, Google Gemini, all of them, is a potential exfiltration channel if untrusted content can reach its instruction stream

    We've spent years teaching developers not to trust user input. Now we're handing AI tools full repo access and letting them ingest unvalidated text from pull requests.

    https://cybersecuritynews.com/hackers-exploit-github-copilot-flaw/
    #CyberSecurity #AI #GitHubCopilot #security #privacy #cloud #infosec #software

    Uncategorized cybersecurity githubcopilot security privacy
  • Login

  • Login or register to search.
  • First post
    Last post
0
  • Categories
  • Recent
  • Tags
  • Popular
  • World
  • Users
  • Groups