in which bsky admits the outage 10 days ago *was* their fucking code after all. they tried to blame AWS at the time and loudly insisted it wasn't their vibe-rotted code
https://pckt.blog/b/jcalabro/april-2026-outage-post-mortem-219ebg2
archive: https://archive.is/AKfKP
-
@davidgerard
> Second, if you find this work interesting, we're hiring!
Lmao sure, let me just grab a steerage ticket for the Titanic at 2:19 AM on April 15th, 1912.
-
@davidgerard
> o11y
It's time to stop
-
@davidgerard Also, hilariously, I can't open any Bluesky links from that article because a request for app.bsky.ageassurance.getConfig is hanging!
-
@davidgerard bluesky is vibe coded????
-
@klikini @davidgerard they're claiming this is a ddos, not more vibe code
-
@Laukidh @davidgerard I wonder why anyone would DDoS that
/s
-
@davidgerard They put synchronous loggers in high-volume transactions for memcache errors.
For context: that's a textbook scenario for using non-blocking loggers.
- It's not a financial transaction that requires an airtight audit trail.
- High throughput expected
- Production environment
- Prone to surges in traffic
The general pattern would be for the async logger to push entries to a buffer and let a sidecar (like Filebeat) scoop them up for the monitoring pipeline.
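A minimal sketch of that pattern in Go, assuming it's acceptable to drop log lines under extreme pressure; the type name, buffer size, and file path are illustrative, not taken from Bluesky's post-mortem:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// asyncLogger buffers log lines in a channel so the hot path never blocks
// on disk I/O; a background goroutine drains the buffer to a file that a
// sidecar (e.g. Filebeat) can tail into the monitoring pipeline.
type asyncLogger struct {
	buf chan string
}

func newAsyncLogger(path string, size int) (*asyncLogger, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return nil, err
	}
	l := &asyncLogger{buf: make(chan string, size)}
	go func() {
		for line := range l.buf {
			fmt.Fprintln(f, line) // slow I/O happens off the request path
		}
	}()
	return l, nil
}

// Log enqueues without blocking; if the buffer is full (e.g. during a
// traffic surge), the line is dropped rather than stalling the caller.
func (l *asyncLogger) Log(line string) {
	select {
	case l.buf <- line:
	default: // buffer full: shed the line, since this isn't an audit trail
	}
}

func main() {
	logger, err := newAsyncLogger("errors.log", 10000)
	if err != nil {
		panic(err)
	}
	// Simulate a memcache error on a hot path: the call returns immediately.
	logger.Log(time.Now().Format(time.RFC3339) + " memcache: connection refused")
	time.Sleep(100 * time.Millisecond) // give the drainer time in this demo
}
```

The non-blocking channel send is the key design choice: during a surge the hot path keeps returning immediately and excess log lines are shed, rather than every request stalling on disk writes.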