After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia.
“Text generated by large language models (LLMs) often violates several of Wikipedia's core content policies,” Wikipedia’s new policy states. “For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below.”
The new policy, which was accepted in an overwhelming 40 to 2 vote among editors, allows editors to use LLMs to suggest basic copyedits to their own writing, which can be incorporated into the article or rewritten after human review if the LLM doesn’t generate entirely new content on its own.
“Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited,” the policy states. “The use of LLMs to translate articles from another language's Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation.”
I previously reported about editors using LLMs to translate Wikipedia articles and introducing errors to those articles in the process.
Wikipedia editor Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia and who proposed the guideline, said it had seemed unlikely the policy would pass because the editor community had previously been divided on the issue. However, Lebleu said, “The mood was shifting, with holdouts of cautious optimism turning to genuine worry.”
“A few months ago, a much more bare-bones guideline had passed, only banning the creation of brand new articles with LLMs,” Lebleu told me in an email. “A follow-up proposal to reword it into something more substantial failed to pass, but was noted to have ‘consensus for better guidelines along the lines of and/or in the spirit of this draft.’ In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed.”
The policy was written with the help of WikiProject AI Cleanup, a group of Wikipedia editors dedicated to finding and removing AI-generated errors on the site. Editors have been dealing with an increasing number of AI-generated articles or edits lately, and have made some minor adjustments to their guidelines as a result, like streamlining the process for removing AI-generated articles. Editors’ position, as well as the position of the Wikimedia Foundation, has been to not make blanket rules against AI because Wikipedia already uses some forms of automation, and because AI tools could assist editors in the future.
The new policy doesn’t ban the use of other automated tools that are already in use or future implementations, but it does show the Wikipedia community is less optimistic about the benefit of AI-generated content, and taking a stand against it.
“In context, this has implications far beyond Wikipedia,” Lebleu said. “The same flood of AI-generated content has been seen from social media to open-source projects, where agents submit pull requests much faster than human reviewers can keep up with. StackOverflow and the German Wikipedia paved the way in recent months with similar policies, and, as anxiety over the AI bubble grows, I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome. On their own terms.”
This week we start with Emanuel’s crazy story about WebinarTV, a company that is secretly recording Zoom meetings and turning them into AI-powered podcasts. It’s nuts. After the break, Joseph tells us about the eccentric billionaire who tried to become a cocaine kingpin. In the subscribers-only section, we lament the loss of the metaverse.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
WebinarTV, a company that bills itself as “a search engine for the best webinars,” is secretly scanning the internet for Zoom meeting links, recording the calls, and turning them into AI-generated podcasts for profit. In some cases, people only found out that their Zoom calls were recorded once WebinarTV reached out to them directly to say their call was turned into a podcast in an attempt to promote WebinarTV’s services.
WebinarTV claims to host more than 200,000 webinars. It’s not clear how it’s recording so many Zoom calls without permission, but in some cases the stolen videos posted to WebinarTV can put call participants at risk.
North Oaks, Minnesota is the only city in the United States that is not on Google Maps Street View. YouTube documentarian Chris Parr, who grew up not too far from North Oaks, set out to change that earlier this year. For a brief few days, he literally put North Oaks on the map. And then it was gone again.
“It’s known by Minnesotans as a place where executives and CEOs live,” Parr told 404 Media. “Famously Walter Mondale is from North Oaks, but also like United Healthcare executives and Target executives.”
North Oaks has managed to largely stay unmapped on Street View because of the way the city handles its streets. In almost every city and town in the United States, property owners give an easement to their local government for the roads in front of their homes (or don’t have any claim to the roads at all). In North Oaks, homeowners’ property extends into the middle of the street, meaning there is literally no “public” property in the city, and the roads are maintained by the North Oaks Homeowners’ Association (NOHOA): “the City owns no roads, land, or buildings. The 50-60 miles of roads in the city are owned by the NOHOA members whose property extends to the center of the road subject to easements in favor of NOHOA,” reads the homeowners association’s website, which has very little information on it and notes that it is “unable to share most private documents with the public.” The roads entering North Oaks are posted with no-trespassing signs and monitored by automated license plate readers.
In the early days of Google Maps, North Oaks was on Street View. But in May 2008, the city threatened Google with a lawsuit because its Street View cars had trespassed. Google deleted its Street View images and North Oaks hasn’t been on Street View since.
"It's not the hoity-toity folks trying to figure out how to keep the world away," then-Mayor Thomas Watson told the Star Tribune in 2008. "They [Google] really didn't have any authorization to go on private property."
Google Maps allows people to upload their own images, however. And Parr set out to find a way to map North Oaks without actually going there. So he began mapping it with a drone.
“It’s a geographic oddity,” Parr said. “I realized the airspace above North Oaks operates differently than the property on the ground. I thought you could effectively map the city with a drone.”
Parr is right. The national airspace is technically managed by the Federal Aviation Administration, and “airspace” starts directly above the ground, which is something I covered over and over in the early days of consumer drones as towns sought to ban drones in certain areas.
“Technically, if you launch your drone from public property, which anyone can do if you’re a registered drone pilot, you can fly it straight up and above private property,” Parr said. And so Parr stood at “six or seven different spots” directly outside the boundary of North Oaks and flew his drone around. “I just pulled my car over onto the shoulder and popped my drone up and flew it over,” he added.
There were parts of North Oaks that he couldn’t reach by drone from outside the boundaries of the city, so eventually he decided he needed an invite into the city to go to a park within its boundaries to keep flying his drone.
“According to North Oaks’ ordinances, you can go like, visit a friend, or if you’re a contractor working on a house, you can go into the city, but you have to be an invited guest,” Parr said. “I made a Craigslist post asking for somebody to invite me and I got an absolute ton of responses. I started texting with this woman named Maggie and she invited me, so technically I had the invite to go to the park.”
Parr then took his drone footage and uploaded it to Google Maps. For a few glorious days, North Oaks was mapped. And then it was gone.
“I’ve since been in a battle with the people who flag the images,” he said. He also got a letter from a law firm representing the North Oaks Homeowners Association. “It’s not asking me to take any of the videos down or anything, but basically they say, ‘Don’t come back.’”
Parr’s experiment and documentary raise questions, of course, about who gets to have privacy in America. A wealthy enclave has set up the legal and surveillance infrastructure to prevent being mapped. The rest of us, meanwhile, are subject to all sorts of surveillance by our neighbors and law enforcement. “The only reason it’s set up this way is because it’s such a wealthy community,” Parr said. “I know that I was able to do this, but I don’t know if I should be able to do this, and that’s kind of the question that I wanted to tackle. The YouTube comments are pretty crazy man. They’re all over the place. They’re very split 50/50 on that question.”
North Oaks did not respond to a request for comment.
The Executive Office of the President registered the domain aliens.gov on Wednesday a little after 6:30 AM according to a bot that monitors federal domains. There’s no associated website just yet, but the registration comes a month after Trump said he would direct the government to release files related to aliens and UFOs to the public.
Aliens and UFOs—now often called unidentified aerial phenomena (UAP)—have been a hot news topic over the last few years. Senator Chuck Schumer has pushed for declassification of government reports about strange lights in the sky; Blink-182 frontman Tom DeLonge’s To The Stars initiative released Pentagon footage of strange objects seen by Navy pilots; and Congress has held repeated hearings in an attempt to get to the bottom of the phenomenon. Interest died down somewhat last year when The Wall Street Journal reported that much of the information we now have is connected to a disinformation campaign and an elaborate Pentagon hazing ritual.
But humans will always look into the sky, and the phenomenon got new attention in February when former President Barack Obama discussed aliens during an interview with Brian Tyler Cohen. Cohen asked Obama if aliens were real. “They’re real but I haven’t seen them and they’re not being kept […] in Area 51. There’s no underground facility. Unless there’s this enormous conspiracy and they hid it from the President of the United States,” Obama said. The clip went viral.
Obama walked this back days later in a post on Instagram: “I was trying to stick with the spirit of the speed round, but since it’s gotten attention let me clarify. Statistically, the universe is so vast that the odds are good there’s life out there. But the distances between solar systems are so great that the chances we’ve been visited by aliens is low, and I saw no evidence during my presidency that extraterrestrials have made contact with us. Really!”
Four days later, a reporter asked Trump about the incident during a press conference on Air Force One.
“Well he gave classified information, he’s not supposed to be doing that,” Trump said.
“So aliens are real?” the reporter asked.
“Well I don’t know if they’re real or not, I can tell you he gave classified information, he’s not supposed to be doing that. He made a big mistake, he took it out of classified information. No, I don’t have an opinion on it. I never talk about it. A lot of people do. A lot of people believe it. Do you believe it, Peter?” Trump said.

“The President can declassify anything that he wants to,” the reporter said.
“Well maybe I’ll get him out of trouble. I may get him out of trouble by declassifying.”
In a post on Truth Social later that day, Trump promised to do just that: “Based on the tremendous interest shown, I will be directing the Secretary of War, and other relevant Departments and Agencies, to begin the process of identifying and releasing Government files related to alien and extraterrestrial life, unidentified aerial phenomena (UAP), and unidentified flying objects (UFOs), and any and all other information connected to these highly complex, but extremely interesting and important, matters. GOD BLESS AMERICA!”
The promised declassification of government reports related to aliens follows a now-familiar Trump administration pattern. Trump ordered the declassification and publication of files related to JFK and Jeffrey Epstein, two long-running subjects of conspiracy obsession. Those disclosures, and especially the Epstein files, have had knock-on effects, including the release of nude images and naming of previously unknown victims.
An insolvency judge in England tossed out testimony after discovering a witness was being coached on what to say in real time through a pair of smart glasses. When the voice of the coach started coming through the cellphone after it was disconnected from the glasses, the witness blamed the whole thing on ChatGPT.
Insolvency and Companies Court (ICC) Judge Agnello KC in Britain wrote up the incident after it happened in January and the UK-based legal research blog Legal Futures was first to report it. The case considered the liquidation of a Lithuanian company co-owned by a man named Laimonas Jakštys. Jakštys was in court to get his business off an insolvency list and to put himself back in charge of it. It didn’t go well.
“Right at the start of his cross examination, he seemed to pause quite a bit before replying to the questions being asked,” Judge Agnello wrote. “These questions were interpreted and then there was a pause before there was a reply. After several questions, [defense lawyer Sarah Walker] then informed me that she could hear an interference coming from around Mr. Jakštys and asked if Mr. Jakštys could take his glasses off for a period as she was aware smart glasses existed.”
There was a Lithuanian interpreter on hand to help Jakštys talk to the court and she, too, said she could hear voices from Jakštys’s glasses. The judge pointed out they were smart glasses and asked him to take them off. “After a few further questions, when the interpreter was in the process of translating a question, Mr Jakštys’ mobile phone started broadcasting out loud with the voice of someone talking,” Judge Agnello wrote. “There was clearly someone on the mobile phone talking to Mr. Jakštys. He then removed his mobile phone from his inner jacket pocket. At my direction, the smart glasses and his mobile were placed into the hands of his solicitor.”
Jakštys showed up the next day in the glasses again and the judge told him to turn them off. “Jakštys denied that he was using the smart glasses to receive the answers that he was to give in court to the questions being asked,” the judgment said. “He also denied that his smart glasses were linked to his mobile phone at the time that he was giving evidence before me.”
During the court appearance, Jakštys claimed his mobile phone had been stolen but couldn’t provide a police report for the incident. He also repeatedly received calls, on the phone connected to his smart glasses, from a number listed as “abra kadabra.” The call log showed that many of the calls occurred when he was on the witness stand. The judge asked him about the identity of “abra kadabra” and Jakštys said it was a taxi driver.
“When he was pressed as to why all these calls were made…Mr. Jakštys stated that he was not able to remember. This was a reply which he also gave frequently during his evidence,” Judge Agnello said.
In the end, the judge tossed out all of Jakštys’ testimony. “He was untruthful in relation to his use about the smart glasses and in being coached through the smart glasses,” the judgment said. “In my judgment, from what occurred in court, it is clear that call was made, connected to his smart glasses and continued during his evidence until his mobile phone was removed from him. When asked about this, his explanation was that he thought it was ChatGPT which caused the voice to be heard from his mobile phone once his smart glasses had been removed. That lacks any credibility.”
This incident in the London court is just another in a long line of bad behavior from people wearing smart glasses. CBP agents have been spotted wearing them during immigration raids and Harvard students have loaded them with facial recognition tech to instantly dox strangers.
The DOGE deposition videos a judge ordered removed from YouTube on Friday after they had gone massively viral have since been backed up across the internet, including as a torrent and to the Internet Archive. The videos included DOGE members unable or unwilling to define DEI; discussing how they used ChatGPT and terms such as “black” and “homosexual” to flag grants for termination but not “white” or “caucasian”; and acknowledging that despite their aggressive cuts they failed to achieve the stated goal of lowering the government deficit.
The news shows the difficulty in trying to remove material from the internet, especially that which has a high public interest and has already been viewed likely millions of times. It’s also an example of the “Streisand Effect,” a phenomenon where trying to suppress information often results in the information spreading further.
Do you know anything else about this case? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
Welcome back to the Abstract! These are the studies this week that searched for life in the dark, stood up for hedgehogs, dropped some wisdom, and died in an inexplicably epic explosion.
First, aliens might be riding around interstellar space on exomoons, just in case that’s of interest to you. Then: an ultrasonic solution to roadkill, the limits of metrification, and an answer to a cosmic mystery.
Living on a planet with a boring old Sun is for normies. In a new study, astronomers suggest that alien life could potentially emerge in a much more unexpected place—”exomoons” that orbit free-floating planets in interstellar space.
There are likely trillions of rogue planets wandering through the Milky Way, untethered to any star, raising the tantalizing mystery of whether any of them could be habitable. Now, researchers led by David Dahlbüdding of the Max Planck Institute for Extraterrestrial Physics (MPE) extend this question to exomoons that were dragged out into interstellar space with their planets.
“The search for exomoons within conventional stellar systems continues with no confirmed detection to date,” the team said. “Thus, free-floating planets might offer an alternative pathway for the first discovery of an exomoon.”
In other words, astronomers have never clearly seen an exomoon. But new techniques for spying free-floating worlds—such as microlensing, which reveals objects through the warped light of their gravity—could provide the sensitivity that is required for this long-sought detection.
With regard to potential habitability, Dahlbüdding and his colleagues focused specifically on exomoons that orbit planets with thick hydrogen atmospheres. If such a pair were to be kicked out of a star system, the exomoon’s orbit could become stretched out into a far more elliptical shape. This shift would cause the planet to exert more intense tidal forces onto its satellite, generating heat that could keep liquid water flowing on the moon over vast timescales.
“Close encounters before the final ejection even increase the ellipticity of the moon’s orbit, boosting tidal heating over millions to billions of years, depending on the moon’s and free-floating planet’s properties,” the team said. The tidal forces and atmospheric components could also “create favourable conditions for RNA polymerisation and thus support the emergence of life.”
“These potentially habitable moons could be detected through a variety of techniques,” including microlensing, the researchers added, though they noted that actually analyzing their atmospheres “may not be feasible with any instruments currently in operation.”
While we may not be able to spot signs of life on these worlds anytime soon, it would be exciting just to discover a planet and a moon bound together, but unbound from any star, which is a genuine near-term possibility.
Hedgehogs have long been ubiquitous in Europe, but cars now kill up to one-third of their population each year. Even more nightmarish, the advent of robotic lawn mowers has led to an uptick in hedgehog deaths.
To help protect these iconic critters, scientists suggest testing out acoustic repellents. A series of experiments with 20 hedgehogs from a wildlife rescue established that “hedgehogs can perceive a broad ultrasonic range,” with peak sensitivity around 40 kHz.
Rasmussen, who goes by Dr. Hedgehog, with a hedgehog. Image: Joan Ostenfeldt
The results “show a potential for the development of targeted ultrasonic sound repellents to deter hedgehogs temporarily from potential dangers such as the particular models of robotic lawn mowers found to be hazardous to hedgehog survival, and more importantly, cars,” said researchers led by Sophie Lund Rasmussen of the University of Oxford.
“Designing sound repellents for cars to reduce the high number of road-killed hedgehogs enhances animal welfare and supports conservation of this declining flagship species,” the team concluded.
To channel the old joke, why did the hedgehog cross the road? Answer: Ideally it didn’t, due to scientific intervention. (I’ll be here all night).
The metric system has been adopted by every country except Liberia, Myanmar, and the United States. But even as metrication was rapidly embraced in the 18th and 19th centuries, a far more imprecise system—the drop—refused to drop out.
People have measured liquids in drop form for thousands of years, and still do in many contexts today. Researchers led by Armel Cornu of Uppsala University have now explored how such “non-standard units survive lengthy waves of standardization.” The paper is worth a read for its many interesting asides, like how acids were tested “by counting the number of drops…that could be placed on the skin before one witnessed the effects.” Gnarly.
It also gets into the political dimensions of metrication, including this proto-populist justification for standardizing units: “Numerous complaints about the diversity of measurements and their lack of cross-readability” were directed with “a special ire at powerful lords who abused standards in order to extort the population,” Cornu’s team said. The metric system was one response to "the discontent of peasants and the little people against the powerful.”
Anyway, a little bit of drop-related science history never hurt anyone—unless you volunteered to be an acid tester.
Astronomers have discovered the mysterious power source of rare and radiant stellar explosions called “Type I superluminous supernovae,” which are ten times brighter than regular supernovae.
The secret superluminous sauce, as it turns out, is the birth of a magnetar, a highly magnetized stellar remnant, according to a supernova first observed in December 2024. The light from this stellar explosion contained imprints of the Lense–Thirring effect, in which spacetime is dragged around by massive and rapidly rotating objects, a key sign of a magnetar origin.
Artist’s conception of a magnetar surrounded by an accretion disk exhibiting Lense-Thirring precession. Image: Joseph Farah and Curtis McCully
“Our observations are consistent with a magnetar centrally located within the expanding supernova ejecta,” said researchers led by Joseph Farah of Las Cumbres Observatory. “These results provide the first observational evidence of the Lense–Thirring effect in the environment of a magnetar and confirm the magnetar spin-down model as an explanation for the extreme luminosity observed in Type I superluminous supernovae.”
“We anticipate that this discovery will create avenues for testing general relativity in a new regime—the violent centres of young supernovae,” the team concluded.
Forget “stellar” as slang for great; we have graduated to “superluminous.”
Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.
“It required a lot of creativity and fast thinking. Because if I’m talking to a man, I’m supposed to act like a woman. If I’m talking to a woman, I need to act like a man. If I’m talking to a gay person, I need to act like a gay person,” he told me at a coworking space in Nairobi. After doing this for months, he, like other data labelers, developed insomnia and PTSD, and had trouble having sex.
“It got to a point where my body couldn’t function. Where I saw someone naked, I don’t even feel it. And I have a wife, who expects a lot from you, a young family, she expects a lot from you intimately. But you can’t, like, do it,” Asia said. “It fractured a lot of things for me. My body is like, not functioning at all.”
Asia eventually hit a breaking point and stopped working for AI companies. He is now the secretary general of a Kenyan organization called the Data Labelers Association (DLA) and the author of “The Emotional Labor Behind AI Intimacy,” a testimony of his time working as the real human labor behind AI sex bots. As part of the DLA, Asia has been working to organize workers to fight for better pay, better mental health services, an end to draconian non-disclosure agreements, and better benefits for a workforce that often earns just a few dollars a day. Data labelers train, refine, and moderate the outputs of AI tools made by the largest companies in the world, yet they are wildly underpaid and haven’t benefitted from the runaway valuations of AI companies.
Last month, the DLA held one of its largest events at the Nairobi Arboretum to sign up new members and to help them tell their stories.
These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world.
“You can’t understand where you’re positioned if you don’t understand your history,” Angela, one of the day’s speakers, told the workers who had assembled there (many of the speakers at the event did not give their full names). “When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”
“These multinationals are coming to rule and dominate here,” she added. “It’s a very unfortunate supply chain, and my call today as data labelers is to build up on this—as we are fighting for labor rights, we are also fighting for the environment […] we are fighting big companies. We are fighting the British imperialist companies of today. It’s Apple, it’s Meta, it’s Gemini. Those are the ones we’re still fighting. It’s a call for solidarity and expanding our thinking beyond what we are doing, beyond our labor.”
In my few days in Kenya earlier this year, where I was traveling to speak at a conference about AI and journalism, it was immediately clear that data labelers make up a significant portion of the country’s tech workforce. Nearly everyone I spoke to there had either been a data labeler (or a content moderator) themselves or knows someone who has. Leaving the airport in Nairobi, you immediately drive by Sameer Business Park, an office complex that houses Sama, a San Francisco-headquartered “data annotation and labeling company” that has contracted with Meta, OpenAI, and many other tech giants. Sama has been sued repeatedly for its low pay and the fact that many of its workers suffer PTSD from repetitively looking at graphic content. For years, a giant sign outside its office read: “Samasource THE SOUL OF AI.” My Uber driver asked why I was going to a random office building in Nairobi’s Central Business District—I told her I was going to interview a data labeler. “Oh, I do data labeling too,” she said.
Michael Geoffrey Asia. Image: Jason Koebler
Asia studied air cargo management in university. He expected to find a job planning out cargo and baggage routes after graduating, but couldn’t, because he graduated into an industry ravaged by COVID. Around this time, his child was diagnosed with lymphatic cancer, and he took out a loan of about $17,000 USD to pay for his treatments. He needed work, and found data labeling.
“It wasn’t offering good pay, to be honest,” Asia told me. “It was around $240 US dollars per month. But I felt like I didn’t have an option, I had a financial crisis, a sick child.”
Asia took a job at Sama, where he worked on various Meta projects. “You’re given a video and then told to describe the video, or you’re given pictures of people and told to identify faces. You’re supposed to draw bounding boxes around the faces and label that.” Last week, Sweden’s Svenska Dagbladet reported that Kenyan data labelers for Sama have been viewing and annotating uncensored footage from Meta’s AI camera glasses, which has included highly sensitive and violent footage.
Asia, through a group of colleagues and friends who called themselves “the Brotherhood,” eventually found another data labeling job that let him work from home. “We were a group of six friends, and everyone had to bring three job opportunities on a weekly basis,” he said. “I came across another gig that ended up not being a good one, where I had to annotate pornography.”
At this job, Asia went frame-by-frame in porn videos to annotate what was happening and what type of porn category it could possibly be. “You’re supposed to put yourself in the minds of the 8 billion people on Earth, every second of that video. So I may have someone searching for this pornography in Cuba and think ‘these are the tags they can use,’ if you’re searching ‘doggy,’ you know, that kind of thing,” he said. “So I worked on pornography for eight hours a day, and I did that project for eight months.” His ‘boss’ at the time was essentially a no-reply email with a link sent each day that gave him his work.
At the same time, Asia picked up a second job that started immediately after his shift tagging porn ended, where he was “training” AI companion bots, though he had no way of knowing which company he was actually working for. He quickly surmised that he was simply taking on the persona of different AI sex bots and was sexting with real people in real time.
“I could feel the human aspect in the conversations. Most of the people on the other side were lonely people,” he said. “I would have several profiles and the profiles are switching constantly depending on the needs of the person who pops up on your dashboard. I’d be sitting here talking to an old woman who needs love, but if she goes offline, another conversation pops up and then I’m responding to a gay person.”
The two jobs, done back to back, caused him to have insomnia, PTSD, and trouble having sex. Some data labelers, he said, work 18 hours a day. When I met him, he said he had essentially gone three full days without sleep because his body still hadn't readjusted from his messed-up schedule.
Asia said he eventually was able to get mental health counseling through his child’s cancer center, which started because he was the caregiver of a child with cancer but quickly turned into therapy for PTSD related to his job. “It was of immense help to me as a person, it was one of the best services I’ve ever gotten, because they stood with me, and I said ‘I need a solution to this.’”
“We need technology, but it shouldn’t come at a human cost. What is so hard with offering mental support to the people working on graphic content? If this job was done in the U.S., would they do what they are doing in Kenya? Would they still give the pay they’re giving here? Here we are paid $.01 per task—it doesn’t make sense. Why this discrimination? If they can pay people in the U.S., well that means they can pay people in Kenya,” Asia said.
Image: Data Labelers Association
The message of many data labelers and of the lawyers who have been helping them is that artificial intelligence is not a magical tool built by people in San Francisco making millions of dollars a year and pushing their companies to insane valuations. Artificial intelligence is an extractive technology that relies on the brutal labor of underpaid workers around the world. For years, the work of African data labelers has been more or less “ghost work,” the unseen, hidden labor that lets American tech companies build their products.
“AI can never be AI without humans. It is not artificial intelligence. It’s African intelligence,” Asia said. “Most of these are dirty jobs and most of these jobs have been done here in Africa. And then once you’re done, once a tool is functional, all the communication stops. You get locked out. We are training our own death. We train ChatGPT and it’s killing us slowly.”
Draconian nondisclosure agreements and terms of services that workers can’t opt out from have created a culture of fear, and one of DLA’s goals is to make it easier for workers to speak out. At the time I met Asia in January, the DLA had 870 members, but its ranks have been growing quickly.
“I’m doing this from a point of experience, not assumption. I have been through this. I know what I’m talking about,” Asia said. “We have this monster called the NDA. The NDA is a slave tool used to enslave people to not speak about what they’re going through. I’m very much ready for any legal battle [associated with NDAs] because we’re not going to keep quiet. This is us suffering, and we can’t suffer in silence. This is not the colonial period. I have the right to speak against any violation [of my rights] and that’s what I’m doing.”
Mercy Mutemi, a workers’ rights lawyer who has sued several big tech companies including Meta for how they treat content moderators and data labelers, told me that when something happens in the United States—when a new gadget or product or feature or policy is launched, there’s a corresponding reaction in Africa.
“When something happens in the U.S., there’s an African cost to that,” she said. “Kenya has been pushing for trade deals with the U.S., right? And the direction that conversation is taking is about immunity and protection for big tech. It’s like, ‘You want any business with us at all? Well, you’ve got to get Meta out of these cases.’”
Mutemi has been working on the Meta lawsuit, and on pushing back against NDAs so that workers can more freely talk about their experiences. Tech companies “get people in a mental jail where they feel like they can’t talk about this. But NDAs are nonsensical—our laws don’t recognize these types of NDAs,” she said. “There’s a way to go about this where it’s not exploitative.”
Back at the arboretum in Nairobi, the message to DLA’s members is largely that their work is important, that it’s human, and that they deserve better.
“Africa is at the bottom of the supply chain of AI. But right now, the fact that we are all here and most of you are data labelers—you are the people who supply the labor. When we think of the whole AI ecosystem, who’s an engineer, and maybe that’s the image of AI that the majority of the world has,” Angela said. “And that’s actually very intentional. To make [your labor] invisible, to make AI look like this shiny object that no one understands, it’s very automatic and beautiful and tech. That’s the intentionality of hiding the labor and the behind the scenes of AI.”
It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.
Or rather, it was an incredible tool and the basis for countless investigations, my own and others’. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.
“FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together,” Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.
Do you work at GSA? Or are you a contractor impacted by this change? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.
FPDS was very basic, in a very good way. You could type in something like “Clearview AI” for example, and it would show all the government contracts that mentioned the facial recognition company. That included both contracts with Clearview AI, but also ones with larger government contractors that were reselling the technology and included “Clearview AI” in the item description. Often when digging through government purchasing data you’ll find some surveillance technology is not sold to agencies by the company directly, but by firms that have ongoing relationships with the government.
A search result from FPDS.gov.
Then when FPDS displayed the results, it was incredibly easy to get the information you wanted at a glance. Each result was a single rectangle which showed the company that the contract was with, the agency buying the product, and, importantly for me, the broad category of product. This often included things like computer-related services, letting me very quickly figure out whether, as a technology journalist, that is something I should look into. FPDS also displayed new contracts before they appeared in SAM.gov.
The General Services Administration ran FPDS. The idea was to bring FPDS into SAM.gov, so there aren’t a bunch of different sites but a single platform for contractors or the public to explore.
I do use SAM.gov a lot too. But for a singular purpose: to find what agencies might buy in the future. On that site, agencies often post Requests for Information in which they signal the sort of spy tech they are interested in. It’s not a contract or sale, but an indication of what they want to get their hands on.
The thing is, SAM.gov is awful for finding what agencies have actually bought. Searches that would return clear results in FPDS are not available immediately in SAM.gov. You may have to tweak some obscure setting to get them to display. You might need to be logged in for some results (FPDS didn’t require this); for other results, it seems better to actually not be logged in. The results do not immediately show the category of the purchase, such as whether it was technology related or not. You have to filter the results by a specific agency if you don’t want just a bunch of noise, but the filters appear finicky and sometimes don’t work. And all of that is only if the data you’re searching for is surfaceable at all through SAM.gov.
As one site that connects agencies and contractors wrote recently, FPDS “has long been the master repository of federal contract activity, containing millions of contract actions that NEVER hit SAM.gov.” Now, maybe they will, but that doesn’t solve SAM’s search issues.
Also, whenever someone pastes a SAM.gov link into 404 Media’s Slack channel, co-founder and journalist Sam Cole gets a notification. “I get excited… someone wants to talk to me. Then it’s SAM.gov,” she told me on Thursday.
The work of journalists and researchers certainly won’t be impossible with SAM.gov. But it is absolutely a less transparent system than the perfectly good one we had until this week.
Amazon is telling people who use its wishlists feature to switch to post office boxes or non-residential delivery addresses if they want to ensure their home addresses remain private, as part of a change in how it processes gifts bought from third-party sellers. The change is especially concerning to many sex workers, influencers and public figures who use Amazon wishlists to receive gifts from fans and clients.
First spotted by adult content creators raising the alarm on social media, the changes open anyone who uses wishlists publicly to increased privacy risk unless they change how they receive packages.
In an email sent to list holders, Amazon said beginning March 25, it will reveal users’ shipping addresses to third-party sellers. The platform added that gift purchasers might end up seeing your address as part of this process, too.
Before this change, the only information visible to sellers and gift purchasers was the recipient’s city and state.
“We're writing to inform you about an upcoming change to Amazon Lists. Starting March 25, 2026, we will remove the option to restrict purchases from third-party sellers for list items. When this change takes effect, gift purchasers will be able to purchase items sold by third-party sellers from your lists and your delivery address will be shared with the seller for fulfillment. This change will provide gift purchasers with access to a wider selection of items when shopping from your lists,” Amazon said in the email. “Important note: When gifts are purchased from your shared or public lists, Amazon needs to provide your shipping address to sellers and delivery partners to fulfill these orders. During the delivery process, your address may become visible to gift purchasers through delivery updates and tracking information. To help protect your privacy, we recommend using a PO Box or non-residential address for any list you share with public audiences.”
If you have public wishlists, you can manage individual list settings from your Amazon account by selecting “Manage list.” From there you can change your list privacy settings to private or shared to limit who has access, or remove your shipping address entirely by selecting “none” from the dropdown menu.
Most of the popular shipping methods in the US, including UPS, FedEx, and the USPS, don’t show full addresses as part of package tracking. But if a third-party seller shares a gift recipient’s home address with a buyer as part of the tracking process, Amazon is saying that’s out of the platform’s control. And some of those delivery services send photos as part of the tracking process for proof of delivery, which could include more information about one’s home or location than they would want a gift sender to see.
“Those who do a range of work where privacy concerns are top of mind would be left to wonder what problem Amazon is solving with this change,” Krystal Davis, an adult content creator who posted about receiving the email from Amazon, told 404 Media. “Those who use these lists as an opportunity to allow fans to show support and offset expenses will lose that option. The alternatives to Amazon wishlist are significantly lacking.”
Many online sex workers use Amazon wishlists to receive gifts from subscribers and fans. It’s a practice that’s gone on for years. Revealing one’s full address to buyers — especially if they don’t realize this change has gone into effect, or missed the email sent by Amazon with the warning to switch to a P.O. box — puts their safety at serious risk. And like so many privacy and security issues that affect sex workers first, anyone could potentially be affected; lots of people use public wishlists who might want to keep their location private, and should consider checking their settings or switching to a non-residential address if they want to maintain that privacy.
Screenshot via Amazon showing the "Manage List" page, with the option to share shipping address with sellers grayed out and a notice: "This setting will no longer be supported starting February 25, 2026. After this date, third-party sellers will receive your shipping address to fulfill orders. You can review or update your lists' shipping address on this page."
Amazon provides conflicting information on when and how this change will go into effect. The email sent to wishlist holders says it will start on March 25, 2026, but as of writing, a notice on the “Manage List” settings page said starting February 25, third party sellers will see users’ shipping addresses. Amazon confirmed to 404 Media that the option to restrict purchases from third-party sellers for list items is being removed on March 25, one month from today.