The Intercept: https://theintercept.com/author/sambiddle/

Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist
https://theintercept.com/2023/11/21/facebook-ad-israel-palestine-violence/ (Nov. 21, 2023)

After the ad was discovered, digital rights advocates ran an experiment testing the limits of Facebook’s machine-learning moderation.

A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people.”

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through filtering by Facebook’s automated process, based on machine-learning, that allows its global advertising business to operate at a rapid clip.

“Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview. As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.

While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.
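
In outline, the automated gate such a system implies is simple: a model scores each submitted ad for likely policy violations, and only high-scoring ads are blocked or routed to a human. The short Python sketch below is purely illustrative, with an invented classifier and threshold; it is not drawn from Meta's systems.

    # Illustrative sketch of an automated ad-review gate; the scoring
    # function, labels, and threshold are assumptions, not Meta's real system.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AdDecision:
        approved: bool
        reason: str

    def review_ad(text: str,
                  score_violation: Callable[[str], float],
                  threshold: float = 0.8) -> AdDecision:
        """Approve an ad unless a text classifier scores it above the violation threshold."""
        score = score_violation(text)  # probability in [0, 1] that the ad violates policy
        if score >= threshold:
            return AdDecision(False, f"blocked: violation score {score:.2f}")
        return AdDecision(True, f"approved: violation score {score:.2f}")

    # A classifier that performs poorly in a given language returns low scores
    # even for flagrantly violating text, and the ad is approved automatically.
    print(review_ad("example ad text", score_violation=lambda t: 0.12))

The point of the sketch is that every approval rides on the classifier's score: if the model underrates incitement in a particular language, the written rules never get applied.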

Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, the company had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

Incitement to Violence on Facebook

Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians.

Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting violence against the Rohingya, a project along the same lines as 7amleh’s experiment; in that case, all the ads were also approved.)

The quick removal of the Larudee post didn’t explain how the ad was approved in the first place. In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled.

“Meta has a track record of not doing enough to protect marginalized communities.”

Curious whether the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text that deliberately and flagrantly violated company rules. The ads were designed to probe the approval process and see whether Meta’s ability to automatically screen out violent and racist incitement had improved, even when confronted with unambiguous examples.

“We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”

Meta appears to have failed 7amleh’s test.

The company’s Community Standards rulebook — which ads are supposed to comply with to be approved — prohibits not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality. Despite this, confirmation emails shared with The Intercept show Facebook approved every single ad.

Though 7amleh told The Intercept the organization had no intention of actually running these ads and planned to pull them before they were scheduled to appear, it believes their approval demonstrates the social platform remains fundamentally myopic about non-English speech — languages used by the great majority of its more than 4 billion users. (Meta retroactively rejected 7amleh’s Hebrew ads after The Intercept brought them to the company’s attention, but the Arabic versions remain approved within Facebook’s ad system.)

Facebook spokesperson Erin McPike confirmed the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”

Just days after its own experimental ads were approved, 7amleh discovered an Arabic ad run by a group calling itself “Migrate Now” that called on “Arabs in Judea and Samaria” — the name Israelis, particularly settlers, use to refer to the occupied Palestinian West Bank — to relocate to Jordan.

According to Facebook documentation, automated, software-based screening is the “primary method” used to approve or deny ads. But it’s unclear if the “hostile speech” algorithms used to detect violent or racist posts are also used in the ad approval process. In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine. Based on 7amleh’s experiment, however, this classifier either doesn’t work very well or is for some reason not being used to screen advertisements. (McPike did not answer when asked if the approval of 7amleh’s ads reflected an underlying issue with the hostile speech classifier.)

Either way, according to Nashif, the fact that these ads were approved points to an overall problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, while it clearly cannot.

“We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif said in his statement. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”

The approval of the Arabic versions of the ads comes as a particular surprise following a recent report by the Wall Street Journal that Meta had lowered the level of certainty its algorithmic censorship system needed to remove Arabic posts — from 80 percent confidence that the post broke the rules to just 25 percent. In other words, Meta was less sure that the Arabic posts it was suppressing or deleting actually contained policy violations.
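
In practical terms, lowering a classifier’s confidence threshold means acting on content the model is far less sure about. The short Python sketch below illustrates the effect with invented scores; the simple threshold rule is an assumption and nothing here is taken from Meta’s systems.

    # Illustrative only: how lowering a confidence threshold changes what gets suppressed.
    # The scores are invented; Meta's actual models and thresholds are not public.
    posts = {
        "post_a": 0.92,  # model is highly confident this violates policy
        "post_b": 0.55,
        "post_c": 0.30,  # model is largely unsure
        "post_d": 0.10,
    }

    def suppressed(scores, threshold):
        """Return the posts whose violation score meets or exceeds the threshold."""
        return [post for post, score in scores.items() if score >= threshold]

    print(suppressed(posts, threshold=0.80))  # ['post_a']
    print(suppressed(posts, threshold=0.25))  # ['post_a', 'post_b', 'post_c']
    # At 25 percent, posts the model is mostly unsure about are swept up as well.

The lower the threshold, the more content gets removed on weak evidence, which is the trade-off the Journal’s reporting describes.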

Nashif said, “There have been sustained actions resulting in the silencing of Palestinian voices.”

LexisNexis Sold Powerful Spy Tools to U.S. Customs and Border Protection
https://theintercept.com/2023/11/16/lexisnexis-cbp-surveillance-border/ (Nov. 16, 2023)

The data brokerage giant sold face recognition, phone tracking, and other surveillance technology to the border guards, say government documents.

The popular data broker LexisNexis began selling face recognition services and personal location data to U.S. Customs and Border Protection late last year, according to contract documents obtained through a Freedom of Information Act request.

According to the documents, obtained by the advocacy group Just Futures Law and shared with The Intercept, LexisNexis Risk Solutions began selling surveillance tools to the border enforcement agency in December 2022. The $15.9 million contract includes a broad menu of powerful tools for locating individuals throughout the United States using a vast array of personal data, much of it obtained and used without judicial oversight.

“This contract is mass surveillance in hyperdrive.”

Through LexisNexis, CBP investigators gained a convenient place to centralize, analyze, and search various databases containing enormous volumes of intimate personal information, both public and proprietary.

“This contract is mass surveillance in hyperdrive,” Julie Mao, an attorney and co-founder of Just Futures Law, told The Intercept. “It’s frightening that a rogue agency such as CBP has access to so many powerful technologies at the click of the button. Unfortunately, this is what LexisNexis appears now to be selling to thousands of police forces across the country. It’s now become a one-stop shop for accessing a range of invasive surveillance tools.”

A variety of CBP offices would make use of the surveillance tools, according to the documents. Among them is the U.S. Border Patrol, which would use LexisNexis to “help examine individuals and entities to determine their admissibility to the U.S. and their proclivity to violate U.S. laws and regulations.”

Among other tools, the contract shows LexisNexis is providing CBP with social media surveillance, access to jail booking data, face recognition and “geolocation analysis & geographic mapping” of cellphones. All this data can be queried in “large volume online batching,” allowing CBP investigators to target broad groups of people and discern “connections among individuals, incidents, activities, and locations,” handily visualized through Google Maps.

CBP declined to comment for this story, and LexisNexis did not respond to an inquiry. Despite the explicit reference to providing “LexisNexis Facial Recognition” in the contract, a fact sheet published by the company online says, “LexisNexis Risk Solutions does not provide the Department of Homeland Security” — CBP’s parent agency — “or US Immigration and Customs Enforcement with license plate images or facial recognition capabilities.”

The contract includes a variety of means for CBP to exploit the cellphones of those it targets. Accurint, a police and counterterror surveillance tool LexisNexis acquired in 2004, allows the agency to analyze real-time phone call records and phone geolocation through its “TraX” software.

While it’s unclear how exactly TraX pinpoints its targets, LexisNexis marketing materials cite “cellular providers live pings for geolocation tracking.” These materials also note that TraX incorporates both “call detail records obtained through legal process (i.e. search warrant or court order) and third-party device geolocation information.” A 2023 LexisNexis promotional brochure says, “The LexisNexis Risk Solutions Geolocation Investigative Team offers geolocation analysis and investigative case assistance to law enforcement and public safety customers.”

Any CBP use of geolocational data is controversial, given the agency’s recent history. Prior reporting found that, rather than request phone location data through a search warrant, CBP simply purchased such data from unregulated brokers — a practice that critics say allows the government to sidestep Fourth Amendment protections against police searches.

According to a September report by 404 Media, CBP recently told Sen. Ron Wyden, D-Ore., it “will not be utilizing Commercial Telemetry Data (CTD) after the conclusion of FY23 (September 30, 2023),” using a technical term for such commercially purchased location information.

The agency, however, also told Wyden that it could renew its use of commercial location data if there were “a critical mission need” in the future. It’s unclear if this contract provided commercial location data to CBP, or if it was affected by the agency’s commitment to Wyden. (LexisNexis did not respond to a question about whether it provides or provided the type of phone location data that CBP had sworn off.)

The contract also shows how LexisNexis operates as a reseller for surveillance tools created by other vendors. Its social media surveillance is “powered by” Babel X, a controversial firm that CBP and the FBI have previously used.

According to a May 2023 report by Motherboard, Babel X allows users to input one piece of information about a surveillance target, like a Social Security number, and receive large amounts of collated information back. The returned data can include “social media posts, linked IP address, employment history, and unique advertising identifiers associated with their mobile phone. The monitoring can apply to U.S. persons, including citizens and permanent residents, as well as refugees and asylum seekers.”

While LexisNexis is known to provide similar data services to U.S. Immigration and Customs Enforcement, another division of the Department of Homeland Security, details of its surveillance work with CBP were not previously known. Though both agencies enforce immigration law, CBP typically focuses on enforcement along the border, while ICE detains and deports migrants inland.

In recent years, CBP has drawn harsh criticism from civil libertarians and human rights advocates for its activities both at and far from the U.S.-Mexico border. In 2020, CBP was found to have flown a Predator surveillance drone over Minneapolis protests after the murder of George Floyd; two months later, CBP agents in unmarked vehicles seized racial justice protesters off the streets of Portland, Oregon — an act the American Civil Liberties Union condemned as a “blatant demonstration of unconstitutional authoritarianism.”

Just Futures Law is currently suing LexisNexis over claims it illegally obtains and sells personal data.

Google Activists Circulated Internal Petition on Israel Ties. Only the Muslim Got a Call from HR.
https://theintercept.com/2023/11/15/google-israel-gaza-nimbus-protest/ (Nov. 15, 2023)

Employees are internally protesting Google’s Project Nimbus, which they fear is being used by Israel to violate Palestinians’ human rights.

A Google employee protesting the tech giant’s business with the Israeli government was questioned by Google’s human resources department over allegations that he endorsed terrorism, The Intercept has learned. The employee said he was the only Muslim and Middle Easterner who circulated the letter and also the only one who was confronted by HR about it.

The employee was objecting to Project Nimbus, Google’s controversial $1.2 billion contract with the Israeli government and its military to provide state-of-the-art cloud computing and machine learning tools.

Since its announcement two years ago, Project Nimbus has drawn widespread criticism both inside and outside Google, spurring employee-led protests and warnings from human rights groups and surveillance experts that it could bolster state repression of Palestinians.

Mohammad Khatami, a Google software engineer, sent an email to two internal listservs on October 18 saying Project Nimbus was implicated in human rights abuses against Palestinians — abuses that fit a 75-year pattern that had brought the conflict to the October 7 Hamas massacre of some 1,200 Israelis, mostly civilians. The letter, distributed internally by anti-Nimbus Google workers through company email lists, went on to say that Google could become “complicit in what history will remember as a genocide.”

“Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

Twelve days later, Google HR told Khatami they were scheduling a meeting with him, during which he says he was questioned about whether the letter was “justifying the terrorism on October 7th.”

In an interview, Khatami told The Intercept he was not only disturbed by what he considers an attempt by Google to stifle dissent on Nimbus, but also believes he was singled out because of his religion and ethnicity. The letter was drafted and internally circulated by a group of anti-Nimbus Google employees, but none of them other than Khatami was called by HR, according to Khatami and Josh Marxen, another anti-Nimbus organizer at Google who helped spread the letter. Though he declined to comment on the outcome of the HR meeting, Khatami said it left him shaken.

“It was very emotionally taxing,” Khatami said. “I was crying by the end of it.”

“I’m the only Muslim or Middle Eastern organizer who sent out that email,” he told The Intercept. “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

The Intercept reviewed a virtually identical email sent by Marxen, also on October 18. Though there are a few small changes — Marxen’s email refers to “a seige [sic] upon all of Gaza” whereas Khatami’s cites “the complete destitution of Gaza” — both contain verbatim language connecting the October 7 attack to Israel’s past treatment of Palestinians.

Google spokesperson Courtenay Mencini told The Intercept, “We follow up on every concern raised, and in this case, dozens of employees reported this individual’s email – not the sharing of the petition itself – for including language that did not follow our workplace policies.” Mencini declined to say which workplace policies Khatami’s email allegedly violated, whether other organizers had gotten HR calls, or if any other company personnel had been approached by Employee Relations for comments made about the war.

The incident comes just one year after former Google employee Ariel Koren said the company attempted to force her to relocate to Brazil in retaliation for her early anti-Nimbus organizing. Koren later quit in protest and remains active in advocating against the contract. Project Nimbus, despite the dissent, remains in place, in part because of contractual terms put in place by Israel forbidding Google from cutting off service in response to political pressure or boycott campaigns.

Dark Clouds Over Nimbus

Dissent at Google is neither rare nor ineffective. Employee opposition to controversial military contracts has previously pushed the company to drop plans to help with the Pentagon’s drone warfare program and a planned Chinese version of Google Search that would filter out results unwanted by the Chinese government. Nimbus, however, has managed to survive.

In the wake of the October 7 Hamas attacks against Israel and resulting Israeli counteroffensive, now in its second month of airstrikes and a more recent ground invasion, Project Nimbus is again a flashpoint within the company.

With the rank and file disturbed by the company’s role as a defense contractor, Google has attempted to downplay the military nature of the contract.

Mencini, the Google spokesperson, said that anti-Nimbus organizers were “misrepresenting” the contract’s military role.

“This is part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” Mencini said. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

Nimbus training documents published by The Intercept last year, however, show the company was pitching its use for the Ministry of Defense. Moreover, the Israeli government itself is open about the military applications of Project Nimbus: A 2023 press release by the Israeli Ministry of Finance specifically names the Israel Defense Forces as a beneficiary, while an overview written by the country’s National Digital Agency describes the contract as “a comprehensive and in-depth solution to the provision of public cloud services to the Government, the defense establishment and other public organizations.”

“If we do not speak out now, we are complicit in what history will remember as a genocide.”

Against this backdrop, Khatami, in coordination with others in the worker-led anti-Nimbus campaign, sent his October 18 note to internal Arab and Middle Eastern affinity groups laying out their argument against the project and asking like-minded colleagues to sign an employee petition.

“Through Project Nimbus, Google is complicit in the mass surveillance and other human rights abuses which Palestinians have been subject to daily for the past 75 years, and which is the root cause of the violence initiated on October 7th,” the letter said. “If we do not speak out now, we are complicit in what history will remember as a genocide.”

On October 30, Khatami received an email from Google’s Employee Relations division informing him that he would soon be questioned by company representatives regarding “a concern about your conduct that has been brought to our attention.”

According to Khatami, in the ensuing phone call, Google HR pressed him about the portion of his email that made a historical connection between the October 7 Hamas attack and the 75 years of Israeli rights abuses that preceded it, claiming some of his co-workers believed he was endorsing violence. Khatami recalled being asked, “Can you see how people are thinking you’re justifying the terrorism on October 7th?”

Khatami said he and his fellow anti-Nimbus organizers were in no way endorsing the violence against Israeli civilians — just as they now oppose the deaths of more than 10,000 Palestinians, according to the latest figures from Gaza’s Ministry of Health. Rather, the Google employees wanted to provide sociopolitical context for Project Nimbus, part of a broader employee-led effort of “demilitarizing our company that was never meant to be militarized.” To point out the relevant background leading to the October 7 attack, he said, is not to approve it.

“We wrote that the root cause of the violence is the occupation,” Khatami explained. “Analysis is not justification.”

Double Standard

Khatami also objects to what he said is a double standard within Google about what speech about the war is tolerated, a source of ongoing turmoil at the company. The day after his original email, a Google employee responded angrily to the email chain: “Accusing Israel of genocide and Google of being complicit is a grave accusation!” This employee, who works at the company’s cloud computing division, itself at the core of Project Nimbus, continued:

To break it down for you, project nimbus contributes to Israel’s security. Therefore, any calls to drop it are meant to weaken Israel’s security. If Israel’s security is weak, then the prospect of more terrorist attacks, like the one we saw on October 7, is high. Terrorist attacks will result in casualties that will affect YOUR Israeli colleagues and their family. Attacks will be retaliated by Israel which will result in casualties that will affect your Palestinian colleagues and their family (because they are used as shields by the terrorists)…bottom line, a secured Israel means less lives lost! Therefore if you have the good intention to preserve human lives then you MUST support project Nimbus!

While Khatami disagrees strongly with the overall argument in the response email, he objected in particular to the co-worker’s claim that Israel is killing Palestinians “because they are used as shields by the terrorists” — a justification of violence far more explicit than the one he was accused of, he said. Khatami questioned whether widespread references to the inviolability of Israeli self-defense by Google employees have provoked treatment from HR similar to what he received after his email about Nimbus.

Internal employee communications viewed by The Intercept show tensions within Google over the Israeli–Palestinian conflict aren’t limited to debates over Project Nimbus. One screenshot viewed by The Intercept shows an Israeli Google employee repeatedly asking Middle Eastern colleagues if they support Hamas, while another shows a Google engineer suggesting Palestinians worried about the welfare of their children should simply stop having kids. Another lamented “friends and family [who] are slaughtered by the Gaza-grown group of bloodthirsty animals.”

According to a recent New York Times report, which found “at least one” instance of “overtly antisemitic” content posted through internal Google channels, “one worker had been fired after writing in an internal company message board that Israelis living near Gaza ‘deserved to be impacted.’”

Another screenshot reviewed by The Intercept, taken from an email group for Israeli Google staff, shows employees discussing a post by a colleague criticizing the Israeli occupation and encouraging donations to a Gaza relief fund.

“During this time we all need to stay strong as a nation and united,” one Google employee replied in the email group. “As if we are not going through enough suffering, we will unfortunately see many emails, comments either internally or on social media that are pro Hamas and clearly anti semitic. report immediately!” Another added: “People like that make me sick. But she is a lost cause.” A third chimed in to say they had internally reported the colleague soliciting donations. A separate post soliciting donations for the same Gaza relief fund was downvoted 139 times on an internal message board, according to another screenshot, while a post stating only “Killing civilians is indefensible” received 51 downvotes.

While Khatami says he was unnerved and disheartened by the HR grilling, he’s still committed to organizing against Project Nimbus.

“It definitely emotionally affected me, it definitely made me significantly more fearful of organizing in this space,” he said. “But I think knowing that people are dying right now and slaughtered in a genocide that’s aided and abetted by my company, remembering that makes the fear go away.”

Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets
https://theintercept.com/2023/11/06/cruise-self-driving-cars-children/ (Nov. 6, 2023)

According to internal materials reviewed by The Intercept, Cruise cars were also in danger of driving into holes in the road.

In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors, and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.

AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.

Despite the popular skepticism, Cruise insists its robots are profoundly safer than what they’re aiming to replace: cars driven by people. In an interview last month, Cruise CEO Kyle Vogt downplayed safety concerns: “Anything that we do differently than humans is being sensationalized.”

The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. Following a string of highly public malfunctions and accidents, the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.

In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known internally about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and had so much trouble recognizing children in certain scenarios that they risked hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.

“This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

In a statement, a spokesperson for Cruise reiterated the company’s position that a future of autonomous cars will reduce collisions and road deaths. “Our driverless operations have always performed higher than a human benchmark, and we constantly evaluate and mitigate new risks to continuously improve,” said Erik Moser, Cruise’s director of communications. “We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle — human operated or autonomous — will have zero risk of collision.”

“These are not self-driving cars. These are cars driven by their companies.”

Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible. With its parent company General Motors, which purchased Cruise in 2016 for $1.1 billion, hemorrhaging money on the venture, any setback for the company’s robo-safety regimen could threaten its business.

Instead of seeing public accidents and internal concerns as yellow flags, Cruise sped ahead with its business plan. Before its permitting crisis in California, the company was, according to Bloomberg, exploring expansion to 11 new cities.

“These are not self-driving cars,” said Smith. “These are cars driven by their companies.”

Kyle Vogt — co-founder, president, chief executive officer, and chief technology officer of Cruise — holds an articulating radar as he speaks during a reveal event in San Francisco on Jan. 21, 2020.
Photo: David Paul Morris/Bloomberg via Getty Images

“May Not Exercise Additional Care Around Children”

Several months ago, Vogt became choked up when talking about a 4-year-old girl who had recently been killed in San Francisco. A 71-year-old woman had taken what local residents described as a low-visibility right turn, striking a stroller and killing the child. “It barely made the news,” Vogt told the New York Times. “Sorry. I get emotional.” Vogt offered that self-driving cars would make for safer streets.

Behind the scenes, meanwhile, Cruise was grappling with its own safety issues around hitting kids with cars. One of the problems addressed in the internal, previously unreported safety assessment materials is the failure of Cruise’s autonomous vehicles to, under certain conditions, effectively detect children so that they can exercise extra caution. “Cruise AVs may not exercise additional care around children,” reads one internal safety assessment. The company’s robotic cars, it says, still “need the ability to distinguish children from adults so we can display additional caution around children.”

In particular, the materials say, Cruise worried its vehicles might drive too fast at crosswalks or near a child who could move abruptly into the street. The materials also say Cruise lacks data around kid-centric scenarios, like children suddenly separating from their accompanying adult, falling down, riding bicycles, or wearing costumes.

The materials note results from simulated tests in which a Cruise vehicle is in the vicinity of a small child. “Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child,” reads one assessment. In another test drive, a Cruise vehicle successfully detected a toddler-sized dummy but still struck it with its side mirror at 28 miles per hour.

The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.
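
The role such a classifier would play is straightforward to describe in the abstract: detections labeled as probable children trigger more conservative driving behavior than other pedestrians do. The Python sketch below is a hypothetical illustration of that idea, not Cruise’s software; the labels, confidence cutoffs, and speed caps are invented.

    # Hypothetical illustration of "additional care around children"; not Cruise's code.
    # Object labels, confidence cutoffs, and speed caps are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g., "adult_pedestrian", "child_pedestrian", "cyclist"
        confidence: float  # classifier confidence in [0, 1]
        distance_m: float  # distance from the vehicle in meters

    def max_speed_mph(detections, default_mph=25.0):
        """Cap speed more aggressively when a probable child is detected nearby."""
        cap = default_mph
        for d in detections:
            if d.label == "child_pedestrian" and d.confidence > 0.5 and d.distance_m < 30:
                cap = min(cap, 8.0)   # assumed crawl speed near children
            elif "pedestrian" in d.label and d.distance_m < 15:
                cap = min(cap, 15.0)  # generic pedestrian caution
        return cap

    # A low-confidence or misclassified child never triggers the stricter cap,
    # which is why the precision of the "small VRU" classifier matters.
    print(max_speed_mph([Detection("child_pedestrian", 0.35, 12.0)]))  # 15.0, not 8.0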

In its statement, Cruise said, “It is inaccurate to say that our AVs were not detecting or exercising appropriate caution around pedestrian children” — a claim undermined by internal Cruise materials reviewed by The Intercept and the company’s statement itself. In its response to The Intercept’s request for comment, Cruise went on to concede that, this past summer during simulation testing, it discovered that its vehicles sometimes temporarily lost track of children on the side of the road. The statement said the problem was fixed and only encountered during testing, not on public streets, but Cruise did not say how long the issue lasted. Cruise did not specify what changes it had implemented to mitigate the risks.

Despite Cruise’s claim that its cars are designed to identify children to treat them as special hazards, spokesperson Navideh Forghani said that the company’s driving software hadn’t failed to detect children but merely failed to classify them as children.

Moser, the Cruise spokesperson, said the company’s cars treat children as a special category of pedestrians because they can behave unpredictably. “Before we deployed any driverless vehicles on the road, we conducted rigorous testing in a simulated and closed-course environment against available industry benchmarks,” he said. “These tests showed our vehicles exceed the human benchmark with regard to the critical collision avoidance scenarios involving children.”

“Based on our latest assessment this summer,” Moser continued, “we determined from observed performance on-road, the risk of the potential collision with a child could occur once every 300 million miles at fleet driving, which we have since improved upon. There have been no on-road collisions with children.”

Do you have a tip to share about safety issues at Cruise? The Intercept welcomes whistleblowers. Use a personal device to contact Sam Biddle on Signal at +1 (978) 261-7389, by email at sam.biddle@theintercept.com, or by SecureDrop.

Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments claim this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively minuscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and a construction pit with people inside it about every four years. Without fixes to the problems, those rates would presumably increase as more AVs were put on the streets.

It appears this concern wasn’t hypothetical: Video footage captured from a Cruise vehicle reviewed by The Intercept shows one self-driving car, operating in an unnamed city, driving directly up to a construction pit with multiple workers inside. Though the construction site was surrounded by orange cones, the Cruise vehicle drives directly toward it, coming to an abrupt halt. Though it can’t be discerned from the footage whether the car entered the pit or stopped at its edge, the vehicle appears to be only inches away from several workers, one of whom attempted to stop the car by waving a “SLOW” sign across its driverless windshield.

“Enhancing our AV’s ability to detect potential hazards around construction zones has been an area of focus, and over the last several years we have conducted extensive human-supervised testing and simulations resulting in continued improvements,” Moser said. “These include enhanced cone detection, full avoidance of construction zones with digging or other complex operations, and immediate enablement of the AV’s Remote Assistance support/supervision by human observers.”

Known Hazards

Cruise’s undisclosed struggles with perceiving and navigating the outside world illustrate the perils of leaning heavily on machine learning to safely transport humans. “At Cruise, you can’t have a company without AI,” the company’s artificial intelligence chief told Insider in 2021. Cruise regularly touts its AI prowess in the tech media, describing it as central to preempting road hazards. “We take a machine-learning-first approach to prediction,” a Cruise engineer wrote in 2020.

The fact that Cruise is even cataloguing and assessing its safety risks is a positive sign, said Phil Koopman, an engineering professor at Carnegie Mellon, emphasizing that the safety issues that worried Cruise internally have been known to the field of autonomous robotics for decades. Koopman, who has a long career working on AV safety, faulted the data-driven culture of machine learning that leads tech companies to contemplate hazards only after they’ve encountered them, rather than before. The fact that robots have difficulty detecting “negative obstacles” — AV jargon for a hole — is nothing new.

“Safety is about the bad day, not the good day, and it only takes one bad day.”

“They should have had that hazard on their hazard list from day one,” Koopman said. “If you were only training it how to handle things you’ve already seen, there’s an infinite supply of things that you won’t see until it happens to your car. And so machine learning is fundamentally poorly suited to safety for this reason.”

The safety materials from Cruise raise an uncomfortable question for the company about whether robot cars should be on the road if it’s known they might drive into a hole or a child.

“If you can’t see kids, it’s very hard for you to accept that not being high risk — no matter how infrequent you think it’s going to happen,” Koopman explained. “Because history shows us people almost always underestimate the risk of high severity because they’re too optimistic. Safety is about the bad day, not the good day, and it only takes one bad day.”

Koopman said the answer rests largely on what steps, if any, Cruise has taken to mitigate that risk. According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem. In August, Cruise announced the cuts to daytime ride operations in San Francisco but made no mention of its attempt to lower risk to local children. (“Risk mitigation measures incorporate more than AV behavior, and include operational measures like alternative routing and avoidance areas, daytime or nighttime deployment and fleet reductions among other solutions,” said Moser. “Materials viewed by The Intercept may not reflect the full scope of our evaluation and mitigation measures for a specific situation.”)

A quick fix like shifting hours of operation presents an engineering paradox: How can the company be so sure it’s avoiding a thing it concedes it can’t always see? “You kind of can’t,” said Koopman, “and that may be a Catch-22, but they’re the ones who decided to deploy in San Francisco.”

“The reason you remove safety drivers is for publicity and optics and investor confidence.”

Precautions like reduced daytime operations will only lower the chance that a Cruise AV will have a dangerous encounter with a child, not eliminate that possibility. In a large American city, where it’s next to impossible to run a taxi business that will never need to drive anywhere a child might possibly appear, Koopman argues Cruise should have kept safety drivers in place while it knew this flaw persisted. “The reason you remove safety drivers is for publicity and optics and investor confidence,” he told The Intercept.

Koopman also noted that there’s not always linear progress in fixing safety issues. In the course of trying to fine-tune its navigation, Cruise’s simulated tests showed its AV software missed children at an increased rate, despite attempts to fix the issues, according to materials reviewed by The Intercept.

The two larger issues of kids and holes weren’t the only robot flaws potentially imperiling nearby humans. According to other internal materials, some vehicles in the company’s fleet suddenly began making unprotected left turns at intersections, something Cruise cars are supposed to be forbidden from attempting. The potentially dangerous maneuvers were chalked up to a botched software update.

The Cruise Origin, a self-driving vehicle with no steering wheel or pedals, is displayed at Honda’s booth during the press day of the Japan Mobility Show in Tokyo on Oct. 25, 2023.
Photo: Kazuhiro Nogi/AFP via Getty Images

The Future of Road Safety?

Part of the self-driving industry’s techno-libertarian promise to society — and a large part of how it justifies beta-testing its robots on public roads — is the claim that someday, eventually, streets dominated by robot drivers will be safer than their flesh-based predecessors.

Cruise cited a RAND Corporation study to make its case. “It projected deploying AVs that are on average ten percent safer than the average human driver could prevent 600,000 fatalities in the United States over 35 years,” wrote Vice President for Safety Louise Zhang in a company blog post. “Based on our first million driverless miles of operation, it appears we are on track to far exceed this projected safety benefit.”

During General Motors’ quarterly earnings call — the same day California suspended Cruise’s operating permit — CEO Mary Barra told financial analysts that Cruise “is safer than a human driver and is constantly improving and getting better.”

In the 2022 “Cruise Safety Report,” the company outlines a deeply unflattering comparison of fallible human drivers to hyper-intelligent robot cars. The report pointed out that driver distraction was responsible for more than 3,000 traffic fatalities in 2020, whereas “Cruise AVs cannot be distracted.” Crucially, the report claims, a “Cruise AV only operates in conditions that it is designed to handle.”

“It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver.”

When it comes to hitting kids, however, internal materials indicate the company’s machines were struggling to match the safety performance of even an average human: Cruise’s goal at the time was merely for its robots to drive around children as safely as an average Uber driver — a goal the internal materials note it was failing to meet.

“It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver,” said Smith, the University of South Carolina law professor. “It’s pretty striking that there’s a memo that says we could hit more kids than an average rideshare driver, and the apparent response of management is, keep going.”

In a statement to The Intercept, Cruise confirmed its goal of performing better than ride-hail drivers. “Cruise always strives to go beyond existing safety benchmarks, continuing to raise our own internal standards while we collaborate with regulators to define industry standards,” said Moser. “Our safety approach combines a focus on better-than-human behavior in collision imminent situations, and expands to predictions and behaviors to proactively avoid scenarios with risk of collision.”

Cruise and its competitors have worked hard to keep going despite safety concerns, public and nonpublic. Before the California Public Utilities Commission voted to allow Cruise to offer driverless rides in San Francisco, where Cruise is headquartered, the city’s public safety and traffic agencies lobbied for a slower, more cautious approach to AVs. The commission didn’t agree with the agencies’ worries. “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway,” said commissioner John Reynolds, who previously worked as a lawyer for Cruise.

Had there always been human safety drivers accompanying all robot rides — which California regulators let Cruise ditch in 2021 — Smith said there would be less cause for alarm. A human behind the wheel could, for example, intervene to quickly steer a Cruise AV out of the path of a child or construction crew that the robot failed to detect. Though the company has put them back in place for now, dispensing entirely with human backups is ultimately crucial to Cruise’s long-term business, part of its pitch to the public that steering wheels will become a relic. With the wheel still there and a human behind it, Cruise would struggle to tout its technology as groundbreaking.

“We’re not in a world of testing with in-vehicle safety drivers, we’re in a world of testing through deployment without this level of backup and with a whole lot of public decisions and claims that are in pretty stark contrast to this,” Smith explained. “Any time that you’re faced with imposing a risk that is greater than would otherwise exist and you’re opting not to provide a human safety driver, that strikes me as pretty indefensible.”

Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis.
https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/ (Oct. 28, 2023)

Meta acknowledged that Instagram was burying some flag emoji comments in “offensive” contexts.

As Israel imposed an internet blackout in Gaza on Friday, social media users posting about the grim conditions have contended with erratic and often unexplained censorship of content related to Palestine on Instagram and Facebook.

Since Israel launched retaliatory airstrikes in Gaza after the October 7 Hamas attack, Facebook and Instagram users have reported widespread deletions of their content, translations inserting the word “terrorist” into Palestinian Instagram profiles, and suppressed hashtags. Instagram comments containing the Palestinian flag emoji have also been hidden, according to 7amleh, a Palestinian digital rights group that formally collaborates with Meta, which owns Instagram and Facebook, on regional speech issues.

Numerous users have reported to 7amleh that their comments were moved to the bottom of the comments section and require a click to display. Many of the remarks have something in common: “It often seemed to coincide with having a Palestinian flag in the comment,” 7amleh’s U.S. national organizer Eric Sype told The Intercept.

Users report that Instagram had flagged and hidden comments containing the emoji as “potentially offensive,” TechCrunch first reported last week. Meta has routinely attributed similar instances of alleged censorship to technical glitches. Meta spokesperson Andy Stone confirmed to The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain “offensive” contexts that violate the company’s rules. He added that Meta has not created any new policies specific to flag emojis.

“The notion of finding a flag offensive is deeply distressing for Palestinians,” Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy who follows Meta’s policymaking on speech, told The Intercept.

“The notion of finding a flag offensive is deeply distressing for Palestinians.”

Asked about the contexts in which Meta hides the flag, Stone pointed to the Dangerous Organizations and Individuals policy, which designates Hamas as a terrorist organization, and cited a section of the community standards rulebook that prohibits any content “praising, celebrating or mocking anyone’s death.” He said Meta does not have different standards for enforcing its rules for the Palestinian flag emoji.

It remains unclear, however, precisely how Meta determines whether the use of the flag emoji is offensive enough to suppress. The Intercept reviewed several hidden comments containing the Palestinian flag emoji that had no reference to Hamas or any other banned group. The Palestinian flag itself has no formal association with Hamas and predates the militant group by decades.

Some of the hidden comments reviewed by The Intercept only contained emojis and no other text. In one, a user commented on an Instagram video of a pro-Palestinian demonstration in Jordan with green, white, and black heart emojis corresponding to the colors of the Palestinian flag, along with emojis of the Moroccan and Palestinian flags. In another, a user posted just three Palestinian flag emojis. Another screenshot seen by The Intercept showed two hidden comments consisting only of the hashtags #Gaza, #gazaunderattack, #freepalestine, and #ceasefirenow.

“Throughout our long history, we’ve endured moments where our right to display the Palestinian flag has been denied by Israeli authorities. Decades ago, Palestinian artists Nabil Anani and Suleiman Mansour ingeniously used a watermelon as a symbol of our flag,” Shtaya said. “When Meta engages in such practices, it echoes the oppressive measures imposed on Palestinians.”

Faulty Content Moderation

Instagram and Facebook users have taken to other social media platforms to report other instances of censorship. On X, formerly known as Twitter, one user posted that Facebook blocked a screenshot of a popular Palestinian Instagram account he tried to share with a friend via private message. The message was flagged as containing nonconsensual sexual images, and his account was suspended.

On Bluesky, Facebook and Instagram users reported that attempts to share national security reporter Spencer Ackerman’s recent article criticizing President Joe Biden’s support of Israel were blocked and flagged as cybersecurity risks.

On Friday, the news site Mondoweiss tweeted a screenshot of an Instagram video about Israeli arrests of Palestinians in the West Bank that was removed because it violated the dangerous organizations policy.

Meta’s increasing reliance on automated, software-based content moderation may prevent people from having to sort through extremely disturbing and potentially traumatizing images. The technology, however, relies on opaque, unaccountable algorithms that introduce the potential to misfire, censoring content without explanation. The issue appears to extend to posts related to the Israel–Palestine conflict.

An independent audit commissioned by Meta last year determined that the company’s moderation practices amounted to a violation of Palestinian users’ human rights. The audit also concluded that the Dangerous Organizations and Individuals policy — which speech advocates have criticized for its opacity and overrepresentation of Middle Easterners, Muslims, and South Asians — was “more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

Last week, the Wall Street Journal reported that Meta recently dialed down the level of confidence its automated systems require before suppressing “hostile speech” to 25 percent for the Palestinian market, a significant decrease from the standard threshold of 80 percent.
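To make concrete what such a threshold change means in practice, here is a minimal sketch of how a confidence cutoff can gate automated enforcement. The only figures taken from the Journal’s reporting are the 80 percent standard and the 25 percent Palestinian-market thresholds; the function and everything else in the sketch are illustrative assumptions, not Meta’s actual systems.

```python
# Minimal sketch: a confidence threshold gating automated suppression.
# Only the 0.80 and 0.25 figures come from the reporting above; the rest
# is hypothetical and not Meta's code.
DEFAULT_THRESHOLD = 0.80             # standard confidence required to act
PALESTINIAN_MARKET_THRESHOLD = 0.25  # reported lowered threshold

def should_suppress(confidence: float, market: str) -> bool:
    """Return True if a 'hostile speech' score is high enough to suppress."""
    threshold = PALESTINIAN_MARKET_THRESHOLD if market == "palestinian" else DEFAULT_THRESHOLD
    return confidence >= threshold

# A post scored at 30 percent confidence stays up under the standard
# threshold but is suppressed under the lowered one.
assert should_suppress(0.30, "other") is False
assert should_suppress(0.30, "palestinian") is True
```

Lowering the cutoff means the classifier needs far less certainty before acting, sweeping up many more borderline posts.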

The audit also faulted Meta for implementing a software scanning tool to detect violent or racist incitement in Arabic, but not for posts in Hebrew. “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects … due to lack of linguistic and cultural competence,” the report found.

“Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew.”

Despite Meta’s claim that the company developed a speech classifier for Hebrew in response to the audit, hostile speech and violent incitement in Hebrew are rampant on Instagram and Facebook, according to 7amleh.

“Based on our monitoring and documentation, it seems to be very ineffective,” 7amleh executive director and co-founder Nadim Nashif said of the Hebrew classifier. “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew, that clearly violate Meta’s policies, but are still on the platforms.”

An Instagram search for a Hebrew-language hashtag roughly meaning “erase Gaza” produced dozens of results at the time of publication. Meta could not be immediately reached for comment on the accuracy of its Hebrew speech classifier.

The Wall Street Journal shed light on why hostile speech in Hebrew still appears on Instagram. “Earlier this month,” the paper reported, “the company internally acknowledged that it hadn’t been using its Hebrew hostile speech classifier on Instagram comments because it didn’t have enough data for the system to function adequately.”

Correction: October 30, 2023
Due to an editing error, Meta’s statement that there are no company policies specific to the Palestinian flag emoji was removed from the story. It has been restored.

The post Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis. appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/28/instagram-palestinian-flag-emoji/feed/ 0
<![CDATA[Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe]]> https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/ https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/#respond Thu, 26 Oct 2023 18:00:00 +0000 https://theintercept.com/?p=448997 By focusing on the potential dangers of Chinese spy tech, we’ve ignored how roaming itself creates massive vulnerabilities, a new Citizen Lab report says.

The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

]]>
The obscure, archaic technologies that make cellphone roaming possible also make it possible to track phone owners across the world, according to a new investigation by the University of Toronto’s Citizen Lab. The roaming tech is riddled with security oversights that make it a ripe target for those who might want to trace the locations of phone users.

As the report explains, the flexibility that made cellphones so popular in the first place is largely to blame for their near-inescapable vulnerability to unwanted location tracking: When you move away from a cellular tower owned by one company to one owned by another, your connection is handed off seamlessly, preventing any interruption to your phone call or streaming video. To accomplish this handoff, the cellular networks involved need to relay messages about who — and, crucially, precisely where — you are.
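The report concerns real SS7 and Diameter signaling, but the gist of what a handoff has to communicate can be pictured with a toy sketch like the one below; the message fields and names are simplified assumptions for illustration, not actual roaming protocol formats.

```python
# Toy illustration of the information roaming signaling must carry between
# carriers. Field names are invented for clarity; real networks use SS7/MAP
# or Diameter messages with very different structure.
from dataclasses import dataclass

@dataclass
class RoamingUpdate:
    subscriber_id: str    # who you are (the SIM's permanent identity)
    visited_network: str  # which carrier's network the phone just joined
    serving_cell: str     # which tower is handling you (roughly where you are)

def notify_home_network(update: RoamingUpdate) -> None:
    # The home carrier needs this to route calls and bill correctly, but any
    # party able to request or inject such messages learns the phone's location.
    print(f"{update.subscriber_id} is now on {update.visited_network}, cell {update.serving_cell}")

notify_home_network(RoamingUpdate("001010123456789", "ExampleTel-DE", "cell-4821"))
```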

“Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information.”

While most of these network-hopping messages are sent to facilitate legitimate customer roaming, the very same system can be easily manipulated to trick a network into divulging your location to governments, fraudsters, or private sector snoops.

“Foreign intelligence and security services, as well as private intelligence firms, often attempt to obtain location information, as do domestic state actors such as law enforcement,” states the report from Citizen Lab, which researches the internet and tech from the Munk School of Global Affairs and Public Policy at the University of Toronto. “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information with high degrees of secrecy.”

The sheer complexity required to allow phones to easily hop from one network to another creates a host of opportunities for intelligence snoops and hackers to poke around for weak spots, Citizen Lab says. Today, there are simply so many companies involved in the cellular ecosystem that openings abound for bad actors.

Citizen Lab highlights the IP Exchange, or IPX, a network that helps cellular companies swap data about their customers. “The IPX is used by over 750 mobile networks spanning 195 countries around the world,” the report explains. “There are a variety of companies with connections to the IPX which may be willing to be explicitly complicit with, or turn a blind eye to, surveillance actors taking advantage of networking vulnerabilities and one-to-many interconnection points to facilitate geolocation tracking.”

This network, however, is even more promiscuous than those numbers suggest, as telecom companies can privately sell and resell access to the IPX — “creating further opportunities for a surveillance actor to use an IPX connection while concealing its identity through a number of leases and subleases.” All of this, of course, remains invisible and inscrutable to the person holding the phone.

Citizen Lab was able to document several efforts to exploit this system for surveillance purposes. In many cases, cellular roaming allows for turnkey spying across vast distances: In Vietnam, researchers identified a seven-month location surveillance campaign using the network of the state-owned GTel Mobile to track the movements of African cellular customers. “Given its ownership by the Ministry of Public Security the targeting was either undertaken with the Ministry’s awareness or permission, or was undertaken in spite of the telecommunications operator being owned by the state,” the report concludes.

African telecoms seem to be a particular hotbed of roaming-based location tracking. Gary Miller, a mobile security researcher with Citizen Lab who co-authored the report, told The Intercept that, so far this year, he’d tracked over 11 million geolocation attacks originating from just two telecoms in Chad and the Democratic Republic of the Congo alone.

In another case, Citizen Lab details a “likely state-sponsored activity intended to identify the mobility patterns of Saudi Arabia users who were traveling in the United States,” wherein Saudi phone owners were geolocated roughly every 11 minutes.

The exploitation of the global cellular system is, indeed, truly global: Citizen Lab cites location surveillance efforts originating in India, Iceland, Sweden, Italy, and beyond.

While the report notes a variety of factors, Citizen Lab places particular blame on the laissez-faire nature of global telecommunications, generally lax security standards, and a lack of legal and regulatory consequences.

As governments throughout the West have been preoccupied for years with the purported surveillance threats of Chinese technologies, the rest of the world appears to have comparatively avoided scrutiny. “While a great deal of attention has been spent on whether or not to include Huawei networking equipment in telecommunications networks,” the report authors add, “comparatively little has been said about ensuring non-Chinese equipment is well secured and not used to facilitate surveillance activities.”

The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/26/cellphone-roaming-location-tracking-surveillance/feed/ 0
<![CDATA[Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual]]> https://theintercept.com/2023/10/18/gaza-hospital-instagram-facebook-censored/ https://theintercept.com/2023/10/18/gaza-hospital-instagram-facebook-censored/#respond Wed, 18 Oct 2023 16:44:01 +0000 https://theintercept.com/?p=448241 In responses to users who tried to post an alleged picture of the Gaza hospital bombing, Instagram and Facebook said it violated guidelines for sexual content or nudity.

The post Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual appeared first on The Intercept.

]]>
Instagram and Facebook users attempting to share scenes of devastation from a crowded hospital in Gaza City claim their posts are being suppressed, despite previous company policies protecting the publication of violent, newsworthy scenes of civilian death.

Late Tuesday, amid a 10-day bombing campaign by Israel, the Gaza Strip’s al-Ahli Hospital was rocked by an explosion that left hundreds of civilians dead or wounded. Footage of the flaming exterior of the hospital, as well as dead and wounded civilians, including children, quickly emerged on social media in the aftermath of the attack.

While the Palestinian Ministry of Health in the Hamas-run Gaza Strip blamed the explosion on an Israeli airstrike, the Israeli military later said the blast was caused by an errant rocket misfired by militants from the Gaza-based group Islamic Jihad.

While widespread electrical outages and Israel’s destruction of Gaza’s telecommunications infrastructure have made getting documentation out of the besieged territory difficult, some purported imagery of the hospital attack making its way to the internet appears to be activating the censorship tripwires of Meta, the social media giant that owns Instagram and Facebook.

Since Hamas’s surprise attack against Israel on October 7 and amid the resulting Israeli bombardment of Gaza, groups monitoring regional social media activity say censorship of Palestinian users is at a level not seen since May 2021, when violence flared between Israel and Gaza following Israeli police incursions into Muslim holy sites in Jerusalem.

Two years ago, Meta blamed the abrupt deletion of Instagram posts about Israeli military violence on a technical glitch. On October 15, Meta spokesperson Andy Stone again attributed claims of wartime censorship to a “bug” affecting Instagram. (Meta could not be immediately reached for comment.)

“It’s censorship mayhem like 2021. But it’s more sinister given the internet shutdown in Gaza.”

Since the latest war began, Instagram and Facebook users inside and outside of the Gaza Strip have complained of deleted posts, locked accounts, blocked searches, and other impediments to sharing timely information about the Israeli bombardment and general conditions on the ground. 7amleh, a Palestinian digital rights group that collaborates directly with Meta on speech issues, has documented hundreds of user complaints of censored posts about the war, according to spokesperson Eric Sype, far outpacing deletion levels seen two years ago.

“It’s censorship mayhem like 2021,” Marwa Fatafta, a policy analyst with the digital rights group Access Now, told The Intercept. “But it’s more sinister given the internet shutdown in Gaza.”

In other cases, users have successfully uploaded graphic imagery from al-Ahli to Instagram, suggesting that takedowns are not due to any formal policy on Meta’s end, but a product of the company’s at times erratic combination of outsourced human moderation and automated image-flagging software.

An Instagram notification shows a story depicting a widely circulated image was removed by the platform on the basis of violating guidelines on nudity or sexual activity.
Screenshot: Obtained by The Intercept

Alleged Photo of Gaza Hospital Bombing

One image rapidly circulating on social media platforms following the blast depicts what appears to be the flaming exterior of the hospital, where a clothed man is lying beside a pool of blood, his torso bloodied.

According to screenshots shared with The Intercept by Fatafta, Meta platform users who shared this image had their posts removed or were prompted to remove them themselves because the picture violated policies forbidding “nudity or sexual activity.” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, confirmed she had also received reports of two instances of this same image being deleted. (The Intercept could not independently verify that the image was of al-Ahli Hospital.)

One screenshot shows a user notified that Instagram had removed their upload of the photo, noting that the platform forbids “showing someone’s genitals or buttocks” or “implying sexual activity.” The underlying photo does not appear to show anything resembling either category of image.

In another screenshot, a Facebook user who shared the same image was told their post had been uploaded, “but it looks similar to other posts that were removed because they don’t follow our standards on nudity or sexual activity.” The user was prompted to delete the post. The language in the notification suggests the image may have triggered one of the company’s automated, software-based content moderation systems, as opposed to a human review.
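One common way platforms flag posts that “look similar” to removed ones is perceptual hashing: the upload’s fingerprint is compared against fingerprints of previously removed images. The sketch below uses the open-source ImageHash library to show the general shape of such a check; it is a generic illustration rather than Meta’s pipeline, and the stored hash and distance cutoff are placeholders.

```python
# Generic sketch of similarity flagging via perceptual hashes -- not Meta's
# actual system. Requires the open-source "Pillow" and "ImageHash" packages.
from PIL import Image
import imagehash

REMOVED_IMAGE_HASHES = {imagehash.hex_to_hash("fa5c1c3a0e871e3c")}  # placeholder
MAX_DISTANCE = 8  # how close two hashes must be to count as "similar"

def looks_similar_to_removed(path: str) -> bool:
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MAX_DISTANCE for known in REMOVED_IMAGE_HASHES)
```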

Meta has previously distributed internal policy language instructing its moderators to not remove gruesome documentation of Russian airstrikes against Ukrainian civilians, though no such carveout is known to have been provided for Palestinians, whether today or in the past. Last year, a third-party audit commissioned by Meta found that systemic, unwarranted censorship of Palestinian users amounted to a violation of their human rights.

The post Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/18/gaza-hospital-instagram-facebook-censored/feed/ 0
<![CDATA[Israel Warns Palestinians on Facebook — but Bombings Decimated Gaza Internet Access]]> https://theintercept.com/2023/10/12/israel-gaza-internet-access/ https://theintercept.com/2023/10/12/israel-gaza-internet-access/#respond Fri, 13 Oct 2023 00:00:08 +0000 https://theintercept.com/?p=447530 During a war, when access to the internet could save lives, Palestinians are struggling to reach the outside world and each other.

The post Israel Warns Palestinians on Facebook — but Bombings Decimated Gaza Internet Access appeared first on The Intercept.

]]>
Amid a heavy retaliatory air and artillery assault by Israel against the Gaza Strip on October 10, Israel Defense Forces spokesperson Avichay Adraee posted a message on Facebook to residents of the al-Daraj neighborhood, urging them to leave their homes in advance of impending airstrikes.

It’s not clear how most people in al-Daraj were supposed to see the warning: Intense fighting and electrical shortages have strangled Palestinian access to the internet, putting besieged civilians at even greater risk.

Following Hamas’s grisly surprise attack across the Gaza border on October 7, the Israeli counterattack — a widespread and indiscriminate bombardment of the besieged Gaza Strip — left the two million Palestinians who call the area home struggling to connect to the internet at a time when access to current information is crucial and potentially lifesaving.

“Shutting down the internet in armed conflict is putting civilians at risk.”

“Shutting down the internet in armed conflict is putting civilians at risk,” Deborah Brown, a senior researcher at Human Rights Watch, told The Intercept. “It could help contribute to injury or death because people communicate around what are safe places and conditions.”

According to companies and research organizations that monitor the global flow of internet traffic, Gazan access to the internet has dramatically dropped since Israeli strikes began, with data service cut entirely for some customers.

“My sense is that very few people in Gaza have internet service,” Doug Madory of the internet monitoring firm Kentik told The Intercept. Madory said he spoke to a contact working with an internet service provider, or ISP, in Gaza who told him that internet access has been reduced by 80 to 90 percent because of a lack of fuel and power, and airstrikes.

As for causes of the outages, Marwa Fatafta, a policy analyst with the digital rights group Access Now, cited Israeli strikes against office buildings housing Gazan telecommunications firms, such as the now-demolished Al-Watan Tower, as a major factor, in addition to damage to the electrical grid.

Fatafta told The Intercept, “There is a near complete information blackout from Gaza.”

Most Gaza ISPs Are Gone

With communications infrastructure left in rubble, Gazans now increasingly find themselves in a digital void at a time when data access is most crucial.

“People in Gaza need access to the internet and telecommunications to check on their family and loved ones, seek life-saving information amidst the ongoing Israeli barrage on the strip; it’s crucial to document the war crimes and human rights abuses committed by Israeli forces at a time when disinformation is going haywire on social media,” Fatafta said.

“There is some slight connectivity,” Alp Toker of the internet outage monitoring firm NetBlocks told The Intercept, but “most of the ISPs based inside of Gaza are gone.”

Though it’s difficult to be certain whether these outages are due to electrical shortages, Israeli ordnance, or both, Toker said that, based on reports he has received from Gazan internet providers, the root cause is the Israeli destruction of fiber optic cables connecting Gaza. The ISPs are generally aware of where their infrastructure is damaged or destroyed, Toker said, but ongoing Israeli airstrikes will make sending a crew to patch them too dangerous to attempt. Still, one popular Gazan internet provider, Fusion, wrote in a Facebook post to its customers that efforts to repair damaged infrastructure were ongoing.

That Gazan internet access remains in place at all, Toker said, is probably due to the use of backup generators that could soon run out of fuel in the face of an intensified Israeli military blockade. (Toker also said that, while it’s unclear if it was due to damage from Hamas rockets or a manual blackout, NetBlocks detected an internet service disruption inside Israel at the start of the attack, but that it quickly subsided.)

Amanda Meng, a research scientist at Georgia Tech who works on the university’s Internet Outage Detection and Analysis project, or IODA, estimated that Gazan internet connectivity has dropped by around 55 percent in recent days, meaning over half the networks inside Gaza have gone dark and no longer respond to the outside internet. Meng compared this level of access disruption to what was previously observed in Ukraine and Sudan during recent warfare in those countries. In Gaza, activity on the Border Gateway Protocol, the obscure system that routes data from one computer to another and undergirds the entire internet, has also been disrupted.

“On the ground, this looks like people not being able to use networked communication devices that rely on the Internet,” Meng explained.
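The sketch below is a toy version of the general idea behind such measurements: probe a sample of networks and report what fraction still answer. It is only an illustration under stated assumptions; IODA’s real methodology combines BGP announcements, active probing, and darknet traffic, and the hosts here are placeholders.

```python
# Toy outage measurement: ping a sample of hosts and compute the share that
# respond. Placeholder hosts; not IODA's methodology.
import subprocess

def is_reachable(host: str) -> bool:
    """One ICMP ping with a 2-second timeout; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def connectivity_share(hosts: list[str]) -> float:
    if not hosts:
        return 0.0
    return sum(is_reachable(h) for h in hosts) / len(hosts)

# A reading that falls from near 1.0 toward 0.45 would correspond to the
# roughly 55 percent drop in responsive networks described above.
```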

Organizations like NetBlocks and IODA use differing techniques to measure internet traffic, and their results tend to vary. It’s also nearly impossible to tell from the other side of the world whether a sudden dip in service is due to an explosion or something else. In addition to methodological differences and the fog of war, however, there is an added wrinkle: Like almost everything else in Gaza, ISPs connect to the broader internet through Israeli infrastructure.

“By law, Gaza internet connectivity must go through Israeli infrastructure to connect to the outside world, so there is a possibility that the Israelis could leave it up because they are able to intercept communications,” said Madory of Kentik.

Fatafta, the policy analyst, also cited Israel’s power to keep Gaza offline — but both in this war and in general. “Israel’s full control of Palestinian telecommunications infrastructure and long-standing ban on technology upgrades” is an immense impediment, she said. With the wider internet blockaded, she said, “people in Gaza can only access slow and unreliable 2G services” — a cellular standard from 1991.

While Israel is reportedly also using analog means to warn Palestinians, their effectiveness is not always clear: “Palestinian residents of the city of Beit Lahiya in the northern region of the Gaza Strip said Thursday that Israeli planes dropped flyers warning them to evacuate their homes,” according to the Associated Press. “The area had already been heavily struck by the time the flyers were dropped.”

The post Israel Warns Palestinians on Facebook — but Bombings Decimated Gaza Internet Access appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/12/israel-gaza-internet-access/feed/ 0
<![CDATA[TikTok, Instagram Target Outlet Covering Israel–Palestine Amid Siege on Gaza]]> https://theintercept.com/2023/10/11/tiktok-instagram-israel-palestine/ https://theintercept.com/2023/10/11/tiktok-instagram-israel-palestine/#respond Wed, 11 Oct 2023 17:44:06 +0000 https://theintercept.com/?p=447300 Periods of Israeli–Palestinian violence have regularly resulted in the corporate suppression of Palestinian social media users.

The post TikTok, Instagram Target Outlet Covering Israel–Palestine Amid Siege on Gaza appeared first on The Intercept.

]]>
As Israel escalates its bombardment of the Gaza Strip in retaliation for a surprise attack from Hamas, TikTok and Instagram have come after a news site dedicated to providing coverage on Palestine and Israel.

On Tuesday, a Mondoweiss West Bank correspondent’s Instagram account was suspended, while the news outlet’s TikTok account was temporarily taken down on Monday. Other Instagram users have reported restrictions on their accounts after posting about Palestine, including an inability to livestream or to comment on others’ posts. And on Instagram and Facebook (both owned by the same company, Meta), hashtags relating to Hamas and “Al-Aqsa Flood,” the group’s name for its attack on Israel, are being hidden from search. The death toll from the attack continues to rise, with Israeli officials reporting 1,200 deaths as of Wednesday afternoon.

The platforms’ targeting of accounts reporting on Palestine comes as information from people in Gaza is harder to come by amid Israel’s total siege on its 2 million residents and as Israel keeps foreign media out of the coastal enclave. Israel’s indiscriminate bombing campaign has killed more than 1,100 people and injured thousands more, Gaza’s Health Ministry said Wednesday.

Periods of Israeli–Palestinian violence have regularly resulted in the corporate suppression of Palestinian social media users. In 2021, for instance, Instagram temporarily censored posts that mentioned Jerusalem’s Al-Aqsa Mosque, one of Islam’s most revered sites. Social media policy observers have criticized Meta’s censorship policies on the grounds that they unduly affect Palestinian users while granting leeway to civilian populations in other conflict zones.

“The censorship of Palestinian voices, those who support Palestine, and alternative news media who report on the crimes of Israel’s occupation, by social media networks and giants like Meta and TikTok is well documented,” said Yumna Patel, Palestine news director of Mondoweiss, noting that it includes account bans, content removal, and even limiting the reach of posts. “We often see these violations become more frequent during times like this, where there is an uptick in violence and international attention on Palestine. We saw it with the censorship of Palestinian accounts on Instagram during the Sheikh Jarrah protests in 2021, the Israeli army’s deadly raids on Jenin in the West Bank in 2023, and now once again as Israel declares war on Gaza.”

Instagram and TikTok did not respond to requests for comment. 

Mondoweiss correspondent Leila Warah, who is based in the West Bank, reported on Tuesday that Instagram suspended her account and gave her 180 days to appeal, with the possibility of permanent suspension. After Mondoweiss publicized the suspension, her account was quickly reinstated. Later in the day, however, Mondoweiss reported that Warah’s account was suspended once again, only to be reinstated on Wednesday. 

The news outlet tweeted that the first suspension came “after several Israeli soldiers shared Leila’s account on Facebook pages, asking others to submit fraudulent reports of guideline violations.” 

A day earlier, the outlet tweeted that its TikTok account was “permanently banned” amid its “ongoing coverage of the events in Palestine.” Since the outbreak of war on Saturday, the outlet had posted a viral video about Hamas’s attack on Israel and another about Hamas’s abduction of Israeli civilians. Again, within a couple of hours, and after Mondoweiss publicized the ban, the outlet’s account was back up. 

“We have consistently reviewed all communication from TikTok regarding the content we publish there and made adjustments if necessary,” the outlet wrote. The magazine’s staff did not believe they violated any TikTok guidelines in their coverage in recent days. “This can only be seen as censorship of news coverage that is critical of the prevailing narratives around the events unfolding in Palestine.”

Even though the account has been reinstated, Mondoweiss’s first viral TikTok about the eruption of violence cannot be viewed in the West Bank and some parts of Europe, according to the outlet. Other West Bank residents independently confirmed to The Intercept that they could not access the video, in which Warah describes Hamas’s attack and Israel’s bombing of Gaza as a result, connecting the assault to Israel’s ongoing 16-year siege of Gaza. TikTok did not respond to The Intercept’s questions about access to the video. 

On Instagram, meanwhile, Palestinian creator Adnan Barq reported that the platform blocked him from livestreaming, removed his content, and even prevented his account from being shown to users who don’t follow him. Also on Instagram, hashtags including #alaqsaflood and #hamas are being suppressed; Facebook is suppressing Arabic-language hashtags of the operation’s name too. On paper, Meta’s rules prohibit glorifying Hamas’s violence, but they do not bar users from discussing the group in the context of the news, though the distinction is often collapsed in the real world.

Last year, following a spate of Israeli airstrikes against the Gaza Strip, Palestinian users who photographed the destruction on Instagram complained that their posts were being removed for violating Meta’s “community standards,” while Ukrainian users had received a special carve-out to post similar imagery on the grounds it was “newsworthy.” 

A September 2022 external audit commissioned by Meta found the company’s rulebook “had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” Similarly, Meta’s Dangerous Organizations and Individuals policy, which maintains a secret blacklist of banned organizations and people, is disproportionately made up of Muslim, Middle Eastern, and South Asian entities, a factor that contributed to over-enforcement against Palestinians.

Big Tech’s content moderation during conflict is increasingly significant as unverified information runs rampant on X, Elon Musk’s free-for-all, diluted version of Twitter, once a crucial source during breaking news events. Musk himself has led his 160 million followers astray, encouraging users on Sunday to follow @WarMonitors and @sentdefender to learn about the war “in real-time.” The former account had posted things like “mind your own business, jew,” while the latter mocked Palestinian civilians trapped by Israel’s siege, writing, “Better find a Boat or get to Swimming lol.” And both have previously circulated fake news, such as false reports of an explosion at the Pentagon in May.

Musk later deleted his post endorsing the accounts.

For now, Musk’s innovative Community Notes fact-checking operation is leaving lies unchallenged for days during a time when decisions and snap judgments are made by the minute. And that says nothing of inflammatory content on X and elsewhere. “In the past few days we have seen open calls for genocide and mass violence against [Palestinians] and Arabs made by official Israeli social media accounts, and parroted by Zionist accounts and pro-Israel bots on platforms like X with absolutely no consequence,” Mondoweiss’s Patel said. “Meanwhile Palestinian journalists & news outlets have had their accounts outright suspended on Instagram and Tiktok simply for reporting the news.”

The post TikTok, Instagram Target Outlet Covering Israel–Palestine Amid Siege on Gaza appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/11/tiktok-instagram-israel-palestine/feed/ 0
<![CDATA[New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network]]> https://theintercept.com/2023/10/01/apple-encryption-iphone-heat-initiative/ https://theintercept.com/2023/10/01/apple-encryption-iphone-heat-initiative/#respond Sun, 01 Oct 2023 10:00:00 +0000 https://theintercept.com/?p=446051 A new, well-funded pressure group is fighting to get Apple to weaken iPhone privacy protections in the name of catching child predators.

The post New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network appeared first on The Intercept.

]]>
The Heat Initiative, a nonprofit child safety advocacy group, was formed earlier this year to campaign against some of the strong privacy protections Apple provides customers. The group says these protections help enable child exploitation, objecting to the fact that pedophiles can encrypt their personal data just like everyone else.

When Apple launched its new iPhone this September, the Heat Initiative seized on the occasion, taking out a full-page New York Times ad, using digital billboard trucks, and even hiring a plane to fly over Apple headquarters with a banner message. The message on the banner appeared simple: “Dear Apple, Detect Child Sexual Abuse in iCloud” — Apple’s cloud storage system, which today employs a range of powerful encryption technologies aimed at preventing hackers, spies, and Tim Cook from knowing anything about your private files.

Something the Heat Initiative has not placed on giant airborne banners is who’s behind it: a controversial billionaire philanthropy network whose influence and tactics have drawn unfavorable comparisons to the right-wing Koch network. Though it does not publicize this fact, the Heat Initiative is a project of the Hopewell Fund, an organization that helps privately and often secretly direct the largesse — and political will — of billionaires. Hopewell is part of a giant, tightly connected web of largely anonymous, Democratic Party-aligned dark-money groups that is now, in an ironic turn, campaigning to undermine the privacy of ordinary people.

“None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing.”

For experts on transparency about money in politics, the Hopewell Fund’s place in the wider network of Democratic dark money raises questions that groups in the network are disinclined to answer.

“None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing,” said Robert Maguire, of Citizens for Responsibility and Ethics in Washington, or CREW. Maguire said the way the network operated called to mind perhaps the most famous right-wing philanthropy and dark-money political network: the constellation of groups run and supported by the billionaire owners of Koch Industries. Of the Hopewell network, Maguire said, “They also take on some of the structural calling cards of the Koch network; it is a convoluted group, sometimes even intentionally so.”

The decadeslong political and technological campaign to diminish encryption for the sake of public safety — known as the “Crypto Wars” — has in recent years pivoted from stoking fears of terrorists chatting in secret to child predators evading police scrutiny. No matter the subject area, the battle is being waged between those who think privacy is an absolute right and those who believe it ought to be limited to allow expanded oversight by law enforcement and intelligence agencies. The ideological lines pit privacy advocates, computer scientists, and cryptographers against the FBI, the U.S. Congress, the European Union, and other governmental bodies around the world. Apple’s complex 2021 proposal to scan cloud-bound images before they ever left your phone became divisive even within the field of cryptography itself.

While the motives on both sides tend to be clear — there’s little mystery as to why the FBI doesn’t like encryption — the Heat Initiative, as opaque as it is new, introduces the obscured interests of billionaires to a dispute over the rights of ordinary individuals. 

“I’m uncomfortable with anonymous rich people with unknown agendas pushing these massive invasions of our privacy,” Matthew Green, a cryptographer at Johns Hopkins University and a critic of the plan to have Apple scan private files on its devices, told The Intercept. “There are huge implications for national security as well as consumer privacy against corporations. Plenty of unsavory reasons for people to push this technology that have nothing to do with protecting children.”

Apple’s Aborted Scanning Scheme

Last month, Wired reported the previously unknown Heat Initiative was pressing Apple to reconsider its highly controversial 2021 proposal to have iPhones constantly scan their owners’ photos as they were uploaded to iCloud, checking to see if they were in possession of child sexual abuse material, known as CSAM. If a scan turned up CSAM, police would be alerted. While most large internet companies check files their users upload and share against a centralized database of known CSAM, Apple’s plan went a step further, proposing to check for illegal files not just on the company’s servers, but directly on its customers’ phones.
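Most large platforms do this matching server-side, comparing an upload’s fingerprint against a centralized database of known CSAM hashes; Apple’s 2021 proposal moved that comparison onto the device itself. The sketch below shows only the general server-side shape, with an ordinary cryptographic hash standing in for the perceptual hashing real systems use; the database contents are placeholders and nothing here reflects Apple’s actual design.

```python
# Generic server-side check of an upload against a database of known-image
# hashes. SHA-256 stands in for perceptual hashing; this is an illustration,
# not Apple's proposed on-device system.
import hashlib

KNOWN_HASH_DATABASE: set[str] = set()  # placeholder; populated from a central source

def matches_known_database(file_bytes: bytes) -> bool:
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASH_DATABASE
```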

“In the hierarchy of human privacy, your private files and photos should be your most important confidential possessions,” Green said. “We even wrote this into the U.S. Constitution.”

The backlash was swift and effective. Computer scientists, cryptographers, digital rights advocates, and civil libertarians immediately protested, claiming the scanning would create a deeply dangerous precedent. The ability to scan users’ devices could open up iPhones around the world to snooping by authoritarian governments, hackers, corporations, and security agencies. A year later, Apple reversed course and said it was shelving the idea.

Green said that efforts to push Apple to monitor the private files of iPhone owners are part of a broader effort against encryption, whether used to safeguard your photographs or speak privately with others — rights that were taken for granted before the digital revolution. “We have to have some principles about what we’ll give up to fight even heinous crime,” he said. “And these proposals give up everything.”

“We have to have some principles about what we’ll give up to fight even heinous crime. And these proposals give up everything.”

In an unusual move justifying its position, Apple provided Wired with a copy of the letter it sent to the Heat Initiative in reply to its demands. “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit,” the letter read. “It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”

The strong encryption built into iPhones, which shields sensitive data like your photos and iMessage conversations even from Apple itself, is frequently criticized by police agencies and national security hawks as providing shelter to dangerous criminals. In a 2014 speech, then-FBI Director James Comey singled out Apple’s encryption specifically, warning that “encryption threatens to lead all of us to a very dark place.”

Some cryptographers respond that it’s impossible to filter possible criminal use of encryption without defeating the whole point of encryption in the first place: keeping out prying eyes.

Similarly, any attempt to craft special access for police to use to view encrypted conversations when they claim they need to — a “backdoor” mechanism for law enforcement access — would be impossible to safeguard against abuse, a stance Apple now says it shares.

Sarah Gardner, head of the Heat Initiative, on Sept. 1, 2023, in Los Angeles.
Photo: Jessica Pons for the New York Times

Dark-Money Network

For an organization demanding that Apple scour the private information of its customers, the Heat Initiative discloses extremely little about itself. According to a report in the New York Times, the Heat Initiative is armed with $2 million from donors including the Children’s Investment Fund Foundation, an organization founded by British billionaire hedge fund manager and Google activist investor Chris Hohn, and the Oak Foundation, also founded by a British billionaire. The Oak Foundation previously provided $250,000 to a group attempting to weaken end-to-end encryption protections in EU legislation, according to a 2020 annual report.



The Heat Initiative is helmed by Sarah Gardner, who joined from Thorn, an anti-child trafficking organization founded by actor Ashton Kutcher. (Earlier this month, Kutcher stepped down from Thorn following reports that he’d asked a California court for leniency in the sentencing of convicted rapist Danny Masterson.) Thorn has drawn scrutiny for its partnership with Palantir and efforts to provide police with advanced facial recognition software and other sophisticated surveillance tools. Critics say these technologies aren’t just uncovering trafficked children, but ensnaring adults engaging in consensual sex work.

In an interview, Gardner declined to name the Heat Initiative’s funders, but she said the group hadn’t received any money from governmental or law enforcement organizations. “My goal is for child sexual abuse images to not be freely shared on the internet, and I’m here to advocate for the children who cannot make the case for themselves,” Gardner added.

She said she disagreed with “privacy absolutists” — a group now apparently including Apple — who say CSAM-scanning iPhones would have imperiled user safety. “I think data privacy is vital,” she said. “I think there’s a conflation between user privacy and known illegal content.”

Heat Initiative spokesperson Kevin Liao told The Intercept that, while the group does want Apple to re-implement its 2021 plan, it would be open to other approaches to screening everyone’s iCloud storage for CSAM. Since Apple began allowing iCloud users to protect their photos with end-to-end encryption last December, however, this objective is far trickier now than it was back in 2021; to scan iCloud images today would still require the mass-scrutinizing of personal data in some manner. As Apple put it in its response letter, “Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users.”

Both the Oak Foundation and Thorn were cited in a recent report revealing the extent to which law enforcement and private corporate interests have influenced European efforts to weaken encryption in the name of child safety.

Beyond those groups and a handful of names, however, there is vanishingly little information available about what the Heat Initiative is, where it came from, or who exactly is paying its bills and why. Its website, which describes the group only as a “collective effort of concerned child safety experts and advocates” — who go unnamed — contains no information about funding, staff, or leadership.

One crucial detail, however, can be found buried in the “terms of use” section of the Heat Initiative’s website: “THIS WEBSITE IS OWNED AND OPERATED BY Hopewell Fund AND ITS AFFILIATES.” Other than a similarly brief citation in the site’s privacy policy, there is no other mention of the Hopewell Fund or explanation of its role. The omission is significant, given Hopewell’s widely reported role as part of a shadowy cluster of Democratic dark-money groups that funnel billions from anonymous sources into American politics.

Hopewell is part of a labyrinthine billionaire-backed network that receives and distributes philanthropic cash while largely obscuring its origin. The groups in this network include New Venture Fund (which has previously paid salaries at Hopewell), the Sixteen Thirty Fund, and Arabella Advisors, a for-profit company that helps administer these and other Democratic-leaning nonprofits and philanthropies. The groups have poured money into a wide variety of causes ranging from abortion access to opposing Republican tax policy, along the way spending big on elections — about $1.2 billion total in 2020 alone, according to a New York Times investigation.

The deep pockets of this network and mystery surrounding the ultimate source of its donations have drawn comparisons — by Maguire, the Times, and others — to the Koch brothers’ network, whose influence over electoral politics from the right long outraged Democrats. When asked by The Atlantic in 2021 whether she felt good “that you’re the left’s equivalent of the Koch brothers,” Sampriti Ganguli, at the time the CEO of Arabella Advisors, replied in the affirmative.

“Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country. We’re talking here about hundreds of millions of dollars.”

“Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country,” Maguire of CREW told The Intercept. “We’re talking here about hundreds of millions of dollars.”

Liao told The Intercept that Hopewell serves as the organization’s “fiscal sponsor,” an arrangement that allows tax-deductible donations to pass through a registered nonprofit on its way to an organization without tax-exempt status. Liao declined to provide a list of the Heat Initiative’s funders beyond the two mentioned by the New York Times. Owing to this fiscal sponsorship, Liao continued, “the Hopewell Fund’s board is Heat Initiative’s board.” Hopewell’s board includes New Venture Fund President Lee Bodner and Michael Slaby, a veteran of Barack Obama’s 2008 and 2012 campaigns and former chief technology strategist at an investment fund operated by ex-Google chair Eric Schmidt.

When asked who exactly was leading the Heat Initiative, Liao told The Intercept that “it’s just the CEO Sarah Gardner.” According to LinkedIn, however, Lily Rhodes, also previously with Thorn, now works as Heat Initiative’s director of strategic operations. Liao later said Rhodes and Gardner are the Heat Initiative’s only two employees. When asked to name the “concerned child safety experts and advocates” referred to on the Heat Initiative’s website, Liao declined.

“When you take on a big corporation like Apple,” he said, “you probably don’t want your name out front.”

Hopewell’s Hopes

Given the stakes — nothing less than the question of whether people have an absolute right to communicate in private — the murkiness surrounding a monied pressure campaign against Apple is likely to concern privacy advocates. The Heat Initiative’s efforts also give heart to those aligned with law enforcement interests. Following the campaign’s debut, former Georgia Bureau of Investigations Special Agent in Charge Debbie Garner, who has also previously worked for iPhone-hacking tech firm Grayshift, hailed the Heat Initiative’s launch in a LinkedIn group for Homeland Security alumni, encouraging them to learn more.

The larger Hopewell network’s efforts to influence political discourse have attracted criticism and controversy in the past. In 2021, OpenSecrets, a group that tracks money in politics, reported that New Venture Fund and the Sixteen Thirty Fund were behind a nationwide Facebook ad campaign pushing political messaging from Courier News, a network of websites designed to look like legitimate, independent political news outlets.

Despite its work with ostensibly progressive causes, Hopewell has taken on conservative campaigns: In 2017, Deadspin reported with bemusement on an NFL proposal in which the league would donate money into a pool administered by the Hopewell Fund as part of an incentive to get players to stop protesting during the national anthem.

Past campaigns connected to Hopewell and its close affiliates have been suffused with Big Tech money. Hopewell is also the fiscal sponsor of the Economic Security Project, an organization that promotes universal basic income founded by Facebook co-founder Chris Hughes. In 2016, SiliconBeat reported that New Venture Fund, which is bankrolled in part by major donations from the Bill and Melinda Gates Foundation and William and Flora Hewlett Foundation, was behind the Google Transparency Project, an organization that publishes unflattering research relating to Google. Arabella has also helped Microsoft channel money to its causes of choice, the report noted. Billionaire eBay founder Pierre Omidyar has also provided large cash gifts to both Hopewell and New Venture Fund, according to the New York Times (Omidyar is a major funder of The Intercept).

According to Riana Pfefferkorn, a research scholar at Stanford University’s Internet Observatory program, the existence of the Heat Initiative is ultimately the result of an “unforced error” by Apple in 2021, when it announced it was exploring using CSAM scanning for its cloud service.

“And now they’re seeing that they can’t put the genie back in the bottle,” Pfefferkorn said. “Whatever measures they take to combat the cloud storage of CSAM, child safety orgs — and repressive governments — will remember that they’d built a tool that snoops on the user at the device level, and they’ll never be satisfied with anything less.”

The post New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network appeared first on The Intercept.

]]>
https://theintercept.com/2023/10/01/apple-encryption-iphone-heat-initiative/feed/ 0
<![CDATA[Top Biden Cyber Official Accused of Workplace Misconduct at NSA in 2014 — and Again at White House Last Year]]> https://theintercept.com/2023/09/06/anne-neuberger-nsa-cybersecurity/ https://theintercept.com/2023/09/06/anne-neuberger-nsa-cybersecurity/#respond Wed, 06 Sep 2023 15:23:43 +0000 https://theintercept.com/?p=443250 A previously unreported NSA inspector general report about Anne Neuberger reveals disarray and dysfunction at the top of the cybersecurity hierarchy.

The post Top Biden Cyber Official Accused of Workplace Misconduct at NSA in 2014 — and Again at White House Last Year appeared first on The Intercept.

]]>
Anne Neuberger’s ascent to national security eminence has been a steady, impressive climb. Her eight-year tour through the National Security Agency has culminated in a powerful position in President Joe Biden’s National Security Council, where she helps guide national cybersecurity policy.

Since 2007, Neuberger’s rapid rise through some of the most secretive and consequential components of the U.S. global surveillance machinery has won her a reputation as a hyper-capable operator where the government most needs one. While her work has earned public plaudits, The Intercept learned Neuberger’s tenure at the NSA triggered a 2014 internal investigation by the agency’s inspector general following allegations that she created a hostile workplace by inappropriately berating, undermining, and alienating her colleagues. In 2015, the inspector general’s report found that there was not enough evidence to sustain allegations that Neuberger fostered a hostile work environment, but it did conclude that she violated NSA policy by disrespecting colleagues.

In the first of a series of letters to the inspector general in advance of the 2015 report, Neuberger denied the allegations against her. “I strongly disagree with the tentative conclusions of the OIG inquiry (that I sometimes failed to exercise courtesy and respect in dealing with fellow workers),” she wrote. “I firmly believe that I treated everyone with the respect and courtesy they deserved.” Neuberger argued the complaints and the investigation reflected gender bias in a department with employees resentful of being led by a woman — especially one, agency officials pointed out in the report, tasked with curbing politically risky programs in the wake of scandals sparked by NSA whistleblower Edward Snowden.

Almost a decade later, a new allegation of misconduct against Neuberger emerged from the White House, The Intercept’s investigation found. The allegation fit a pattern of behavior established in the inspector general’s findings, this time involving an incident that took place in full view of a visiting delegation from a foreign ally.

The 2015 NSA inspector general’s report and details of the recent complaint — neither of which have been previously reported — not only complicate Neuberger’s public persona as a national security star, but also offer further evidence of serious discord at the top of American cybersecurity policy. Beyond revealing Neuberger’s alleged interpersonal and managerial shortcomings, the inspector general’s report provides a rare, unflattering self-examination of the post-Snowden NSA as an HR nightmare filled with competing egos, long-standing rivalries, mutual distrust, and ample pettiness.

“We need an absolutely efficient, agile, and well integrated leadership team at the White House and in the major federal agencies, and we don’t have that.”

Attempts to form a cohesive cyberdefense policy at a national scale in the U.S. have long been undermined by turf wars, with multiple agencies, offices, and even branches of government laying claim to overlapping responsibilities. With the National Security Council’s privileged proximity to the president himself, discord within the NSC could particularly jeopardize the country’s ability to nimbly recognize and counter emerging and existing digital threats — a concern echoed by multiple sources with whom The Intercept spoke.

“We recognize that we’re extremely vulnerable; our adversaries are increasing their capabilities month over month,” a former senior U.S. cybersecurity official told The Intercept, speaking on the condition of anonymity to discuss the matter. The former official cited the intertwined work of offices like the national cyber director and agencies such as the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency. “We need an absolutely efficient, agile, and well integrated leadership team at the White House and in the major federal agencies, and we don’t have that. NSC, NCD, NSA, and CISA need to operate in a well-integrated manner, and this kind of friction introduces risk and consequences for national security of our critical infrastructure systems. This matters.”

The allegations uncovered by The Intercept dovetail with a recent Bloomberg article indicating Neuberger’s management style was largely to blame for the February resignation of Chris Inglis, the first U.S. national cyber director and a former NSA deputy director broadly liked by his peers. According to Bloomberg, Inglis said Neuberger withheld information and undermined him as he tried to set the direction of the country’s cybersecurity strategy.

“Chris is deeply thoughtful and smart. He and I disagreed on encryption and surveillance issues, but he always argued with integrity,” Tufts University professor Susan Landau, a scholar of cybersecurity policy, told The Intercept. “I was really sorry to see him leave the national cybersecurity director position.”

Almost eight years after the NSA investigation into Neuberger, in the autumn of 2022, a senior official with CISA filed a complaint about Neuberger, according to three sources familiar with the matter who spoke on the condition of anonymity. The employee alleged Neuberger, by then on detail to the National Security Council, pointed at the door and ordered her out like a child during a meeting with U.S. cybersecurity colleagues and a delegation of visiting Indian government officials. The sources conveyed dismay about the encounter, particularly because of the strategic partnership between the U.S. and India on cybersecurity issues. (CISA declined to comment on the record for this story. Neuberger and the White House did not respond to inquiries.)

National Cyber Director Chris Inglis is sworn in before testifying at a House Oversight and Reform Committee hearing on Nov. 16, 2021.
Photo: Bill Clark/AP

The Inspector General Report

Before Neuberger became a Biden-era staple of the think tank and media conference circuit, she was a senior official at the NSA, where she ran an office collaborating with the American private sector. Several years into her career, in 2014, the NSA investigated Neuberger, by then its chief risk officer, to determine whether she had fostered a hostile work environment.

The allegations are detailed in a 54-page report, released internally in June 2015 by the agency’s Office of the Inspector General. The report outlines numerous complaints that Neuberger verbally abused and undermined her colleagues, according to a partially redacted copy provided to The Intercept through a Freedom of Information Act request. The report had previously been released by the NSA following a FOIA lawsuit by the journalist Jason Leopold. Complainants made repeated allegations ranging from Neuberger berating co-workers to blocking colleagues from accessing important information. Though her name is redacted throughout, a source familiar with the matter who spoke on the condition of anonymity confirmed Neuberger was the subject of the report. (The NSA declined to comment.)

The NSA inspector general’s office did not find a “preponderance of evidence” to support the hostile workplace claims, but the report noted that Neuberger violated NSA policy because she “failed to exercise courtesy and respect in dealings with fellow workers.” The report said her “conduct had a negative impact on the work environment and individuals (e.g. people were sometimes left feeling ‘savaged’ and ‘practically in tears,’ shaking and afraid, skittish and scared).”

Many of the testimonies in the report describe the post-Snowden NSA of 2014 in a state of disarray. In 2013, after Snowden blew the whistle on the reach and power of the NSA’s secret surveillance, the agency was embarrassed by outrage from foreign allies and Americans alike; calls for reforms grew in Washington. In the report the following year, Neuberger is criticized for “risk aversion” — what her superiors told the inspector general were moves to protect the NSA from “political risk.”

Testimony from Richard Ledgett, NSA deputy director at the time, suggests that Neuberger’s caution arose from his and other top officials’ orders. “NSA must ensure that anything that is questioned by the public is able to be fully explained,” the inspector general’s report on Ledgett’s testimony says. There were “cowboys” at the agency, Ledgett said, and the orders would have rankled some NSA veterans. (Ledgett did not respond to a request for comment.)

Whatever Neuberger’s contribution to the dysfunction, the report sheds light on painfully low morale and general aimlessness among agency staff in the wake of Snowden’s disclosures. “I don’t know what our mission is anymore to be honest,” one employee complained in the report. For Neuberger’s defenders cited in the report, this generally dismal post-Snowden mood was exculpatory evidence concerning her conduct. One NSA employee’s sworn testimony described a redacted office within the agency as a “cesspool of misery and losers, a dead weight environment,” and argued those who accused Neuberger of abusive behavior “lack marketable skills and would have a hard time being gainfully employed elsewhere.”

Far from being a managerial menace, Neuberger’s defenders argue, she was the victim of a gendered “mutiny” by a cadre of bitter NSA men who resented her meteoric rise and efforts to balance the agency’s risk. According to one anonymous account reported by the inspector general, Neuberger was told by a co-worker that “there was a ‘cabal,’ a group of white men that were resistant to [Neuberger] and did not like the changes she was making.”

A separate high-ranking official who also used the word “cabal” described it as a “‘secret society’ that went to the [deputy director] to get [Neuberger] fired.” The cabal’s efforts culminated in what would come to be known inside the NSA as the “mutiny letter.” The emailed catalog of grievances against Neuberger was sent to Teresa Shea, who at the time ran the agency’s much-vaunted Signals Intelligence Directorate, the office that oversees the agency’s global spying efforts, and later forwarded to Ledgett, then NSA deputy director.

In her letter responding to the inspector general’s findings, Neuberger defended her conduct by claiming she’d been warned in disparaging terms about her office and told to whip them into shape. “Prior to taking my job as the chief of [redacted],” Neuberger wrote, “I was told by multiple people that [redacted] was a ‘pit of snakes’ where ‘seniors who can’t get along with anyone else go to spend the rest of their careers.’” Shea and her deputy had criticized Neuberger’s new team as being of “little value” and “useless to mission,” Neuberger added: “They told me they wanted to see change and significant change.” (Shea did not respond to a request for comment.)

The National Security Agency building at Fort Meade, Md., on Sept. 19, 2007.
Photo: Charles Dharapak/AP

“Some People Didn’t Like That”

After serving for three years as a special assistant to Gen. Keith Alexander, who ran the NSA from 2005 to 2014, Neuberger worked at the Commercial Solutions Center, a highly sensitive office that overtly works with and covertly sabotages private-sector technology companies. Following that stint, Neuberger was named the NSA’s first chief risk officer: essentially a post-Snowden damage-control position manned by a loyal lieutenant to Alexander. The NSA needed its corporate partners, but those corporations had been embarrassed when their hand-in-hand work with the cyberspooks was made public in Snowden’s disclosures. Neuberger, who had worked directly in the private sector and had dealt with outside companies from inside the NSA’s Commercial Solutions Center, seemed on paper a perfect person to repair those relationships.

The relationships that seem never to have been mended were Neuberger’s with her own colleagues. Following her flat-out denial of the inspector general’s findings, Neuberger seemed to have moved on — and eventually upward, to the White House. In her letter to the inspector general, Neuberger had said that her work ethic rubbed colleagues the wrong way.

“I worked at all times to be respectful and to listen to folks’ views,” she wrote. “However, I also held folks accountable. Some people didn’t like that.”

“When [Neuberger] was announced as [redacted] Chief there was immediate angst due to her ‘horrible reputation.’”

Neuberger’s formal response to the findings, the letters included in the report itself, argued the allegations about her management were caused by a mix of garden-variety sexism and resistance to her attempts to change workplace culture: “I believe the complaints on style were reflective to a great extent on both that change in approach and, to some extent, perhaps, a gender bias, where a woman (and younger one to boot) who holds people accountable and is direct may be viewed as a challenge.”

Though Neuberger may have butted heads with a contingent of stubborn, ossified men at the agency, women were among her fiercest critics in the report.

“She is not surprised by concerns about the work environment and morale in [redacted],” the inspector general reported of an anonymous woman’s testimony. “When [Neuberger] was announced as [redacted] Chief there was immediate angst due to her ‘horrible reputation.’” 

This female employee added that Neuberger “alienated people,” “lacks understanding of how government and the Agency work,” and that “her delivery can be off putting, as she tends to say ‘me, me, me’ rather than ‘us.’” The CISA official who leveled the 2022 allegation of misconduct against Neuberger is also a woman.

Anne Neuberger, deputy national security adviser for cyber and emerging technology, center, speaks with reporters at the White House on Feb. 18, 2022.
Photo: Alex Brandon/AP

“Please God, Just Get Another Leader in Here”

The role of inspectors general is to audit and investigate federal agencies to ensure their smooth functioning and to prevent fraud and abuse. While the findings of inspectors general at other federal agencies are typically freely accessible to the public, the NSA, like the rest of the intelligence community, eschews such routine transparency. Though the Neuberger report was never classified, it was originally marked “For Official Use Only.”

“At NSA, OIG investigations rarely see the light of day because so much of what the agency does is secret,” said James Bamford, a journalist and bestselling author of multiple histories of the agency. “So it’s good that the agency may be opening up a bit to show they are actually taking action against bad senior officials like Neuberger.”

The NSA investigation into Neuberger’s conduct was initiated by an August 5, 2014, complaint filed to the Office of the Inspector General alleging she “created and perpetuated an atmosphere of workplace intimidation within the [redacted],” according to the report. Neuberger at the time led the agency’s Commercial Solutions Center.

“The complainant relayed concerns about allegedly unprofessional behavior, including screaming at work, harassing phone calls to employees at home, and an inability to lead effectively,” according to the report. “The employee further alleged that there was widespread fear of retribution among the [redacted] workforce for speaking out about these concerns.”

“At NSA, OIG investigations rarely see the light of day because so much of what the agency does is secret.”

The ensuing probe produced sworn testimony from 21 NSA employees, some of whom corroborated the allegations, some of whom defended Neuberger’s conduct, and others who offered mixed appraisals. The Office of the Inspector General was able to confirm one of the more incendiary allegations: that Neuberger yelled at an employee at an “extraordinarily high volume” and called the employee “fucking crazy,” according to witness testimony — a phrase she later told the inspector general she had used about a project she considered too risky, not a person. “She admitted to the OIG that, in this instance, she crossed a professional line when she yelled and that she later apologized to the employee,” the report said.

In her first letter to the inspector general in advance of the report, Neuberger admitted she crossed a professional line. In a subsequent letter, she denied ever yelling. “I categorically disagree with the characterization of ‘extraordinarily high volume,’” she wrote. “I did not yell at a high volume. As a rule, I don’t yell. I was raised with parents who yelled and I, as a matter of practice, don’t yell.”

While the allegations generally pertain to her post running the Commercial Solutions Center, some complaints point back to her time assisting Alexander as a contributing factor.

“At times, her expectations of the workforce were simply too lofty,” one employee testified. “She was used to seeing NSA at its best, sitting on the 8th floor with the DIRNSA” — a reference to the director, Alexander. “We did not accomplish all we could have. … It was a miserable time,” the employee said, noting a “‘well-attended’ happy hour when her departure was announced.” 

One senior program manager, who said group meetings with Neuberger were so tense that participants avoided making eye contact with her, told the inspector general: “please God, just get another leader in here. … it’s an uncomfortable place to work.”

Some of the allegations are of mere rudeness: snapping her fingers at underlings, pounding on tables, and the like. (In her letters to the inspector general, Neuberger denied the table-pounding incident: “I didn’t ‘bang the table.’”) Other co-workers, however, alleged Neuberger also deliberately shut them out from important information, thwarted their ability to work, and created a workplace climate of fear and distrust.

Neuberger “told [redacted] she learned not to trust anyone with information, because people would undercut her,” claimed one NSA employee. “At some point, [Neuberger] started compartmenting information excluding certain individuals from leadership team emails.” Neuberger was “very secretive and compartmented,” alleged another. “She would not even let her [redacted] leadership team see the overview of their mission that she sent to the DIRNSA.” Some claimed Neuberger’s distrust of her colleagues was mutual: “People avoid informing her of certain things because they are afraid of what might happen.”

The charges in the inspector general’s report jibe with Bloomberg’s story about Inglis, the former NSA deputy director who recently resigned as the first national cyber director: Inglis, according to Bloomberg, had also alleged that Neuberger withheld important information.

Some at the NSA attributed this behavior and certain incidents to Neuberger’s many years of mentorship under Alexander, the inspector general’s report said. “People are afraid to confront [Neuberger] because she is ‘connected,’” one colleague alleged. “She was tightly tied to former DIRNSA, General Keith Alexander, who hired her. … The perception is she has been moved along too quickly.” 

Neuberger leaned on this apparent favoritism, a former high-ranking NSA official alleged.

“She is very prone to say, even to this day, that she has the support of some named senior person,” according to a former NSA official who spoke to The Intercept on the condition of anonymity. “It’s often her excuse for doing something that people find surprising or difficult. … Keith gave her that sponsorship.”

The post Top Biden Cyber Official Accused of Workplace Misconduct at NSA in 2014 — and Again at White House Last Year appeared first on The Intercept.

]]>
https://theintercept.com/2023/09/06/anne-neuberger-nsa-cybersecurity/feed/ 0
<![CDATA[Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy]]> https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/ https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/#respond Wed, 30 Aug 2023 16:32:15 +0000 https://theintercept.com/?p=442999 In an internal update obtained by The Intercept, Facebook and Instagram’s parent company admits its rules stifled legitimate political speech.

The post Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy appeared first on The Intercept.

]]>
The social media giant Meta recently updated the rulebook it uses to censor online discussion of people and groups it deems “dangerous,” according to internal materials obtained by The Intercept. The policy had come under fire in the past for casting an overly wide net that ended up removing legitimate, nonviolent content.

The goal of the change is to remove less of this material. In updating the policy, Meta, the parent company of Facebook and Instagram, also made an internal admission that the policy has censored speech beyond what the company intended.

Meta’s “Dangerous Organizations and Individuals,” or DOI, policy is based around a secret blacklist of thousands of people and groups, spanning everything from terrorists and drug cartels to rebel armies and musical acts. For years, the policy prohibited the more than one billion people using Facebook and Instagram from engaging in “praise, support or representation” of anyone on the list.

Now, Meta will provide a greater allowance for discussion of these banned people and groups — so long as it takes place in the context of “social and political discourse,” according to the updated policy, which also replaces the blanket prohibition against “praise” of blacklisted entities with a new ban on “glorification” of them.

The updated policy language has been distributed internally, but Meta has yet to disclose it publicly beyond a mention of the “social and political discourse” exception on the community standards page. Blacklisted people and organizations are still banned from having an official presence on Meta’s platforms.

The revision follows years of criticism of the policy. Last year, a third-party audit commissioned by Meta found the company’s censorship rules systematically violated the human rights of Palestinians by stifling political speech, and singled out the DOI policy. The new changes, however, leave major problems unresolved, experts told The Intercept. The “glorification” adjustment, for instance, is well intentioned but likely to suffer from the same ambiguity that created issues with the “praise” standard.

“Changing the DOI policy is a step in the right direction, one that digital rights defenders and civil society globally have been requesting for a long time,” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, told The Intercept.

Observers like Shtaya have long objected to how the DOI policy has tended to disproportionately censor political discourse in places like Palestine — where discussing a Meta-banned organization like Hamas is unavoidable — in contrast to how Meta rapidly adjusted its rules to allow praise of the Ukrainian Azov Battalion despite its neo-Nazi sympathies.

“The recent edits illustrate that Meta acknowledges the participation of certain DOI members in elections,” Shtaya said. “However, it still bars them from its platforms, which can significantly impact political discourse in these countries and potentially hinder citizens’ equal and free interaction with various political campaigns.”

Acknowledged Failings

Meta has long maintained that the original DOI policy was intended to curtail the ability of terrorists and other violent extremists to cause real-world harm. Content moderation scholars and free expression advocates, however, maintain that the way the policy operates in practice creates a tendency to indiscriminately swallow up and delete entirely nonviolent speech. (Meta declined to comment for this story.)

In the new internal language, Meta acknowledged the failings of its rigid approach and said the company is attempting to improve the rule. “A catch-all policy approach helped us remove any praise of designated entities and individuals on the platform,” read an internal memo announcing the change. “However, this approach also removes social and political discourse and causes enforcement challenges.”

Meta’s proposed solution is “recategorizing the definition of ‘Praise’ into two areas: ‘References to a DOI,’ and ‘Glorification of DOIs.’ These fundamentally different types of content should be treated differently.” Mere “references” to a terrorist group or cartel kingpin will be permitted so long as they fall into one of 11 new categories of discourse Meta deems acceptable:

Elections, Parliamentary and executive functions, Peace and Conflict Resolution (truce/ceasefire/peace agreements), International agreements or treaties, Disaster response and humanitarian relief, Human Rights and humanitarian discourse, Local community services, Neutral and informative descriptions of DOI activity or behavior, News reporting, Condemnation and criticism, Satire and humor.

Posters will still face strict requirements to avoid running afoul of the policy, even if they’re attempting to participate in one of the above categories. To stay online, any Facebook or Instagram posts mentioning banned groups and people must “explicitly mention” one of the permissible contexts or face deletion. The memo says “the onus is on the user to prove” that they’re fitting into one of the 11 acceptable categories.

According to Shtaya, the Tahrir Institute fellow, the revised approach continues to put Meta’s users at the mercy of a deeply flawed system. She said, “Meta’s approach places the burden of content moderation on its users, who are neither language experts nor historians.”

Unclear Guidance

Instagram and Facebook users will still have to hope their words aren’t interpreted by Meta’s outsourced legion of overworked, poorly paid moderators as “glorification.” The term is defined internally in almost exactly the same language as its predecessor, “praise”: “Legitimizing or defending violent or hateful acts by claiming that those acts or any type of harm resulting from them have a moral, political, logical, or other justification that makes them appear acceptable or reasonable.” Another section defines glorification as any content that “justifies or amplifies” the “hateful or violent” beliefs or actions of a banned entity, or describes them as “effective, legitimate or defensible.”

Though Meta intends this language to be universal, equitably and accurately applying labels as subjective as “legitimate” or “hateful” to the entirety of global online discourse has proven impossible to date.

“Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term,” according to Ángel Díaz, a professor at University of Southern California’s Gould School of Law and a scholar of social media content policy. “The policy still overburdens legitimate discourse.”

“Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term. The policy still overburdens legitimate discourse.”

The notions of “legitimization” or “justification” are deeply complex, philosophical matters that would be difficult to address by anyone, let alone a contractor responsible for making hundreds of judgments each day.

The revision does little to address the heavily racialized way in which Meta assesses and attempts to thwart dangerous groups, Díaz added. While the company still refuses to disclose the blacklist or how entries are added to it, The Intercept published a full copy in 2021. The document revealed that the overwhelming majority of the “Tier 1” dangerous people and groups — who are still subject to the harshest speech restrictions under the new policy — are Muslim, Arab, or South Asian. White, American militant groups, meanwhile, are overrepresented in the far more lenient “Tier 3” category.

Díaz said, “Tier 3 groups, which appear to be largely made up of right-wing militia groups or conspiracy networks like QAnon, are not subject to bans on glorification.”

Meta’s own internal rulebook seems unclear about how enforcement is supposed to work, still dogged by the same inconsistencies and self-contradictions that have muddled its implementation for years.

For instance, the rule permits “analysis and commentary” about a banned group, but a hypothetical post arguing that the September 11 attacks would not have happened absent U.S. aggression abroad is considered a form of glorification, presumably of Al Qaeda, and should be deleted, according to one example provided in the policy materials. Though one might vehemently disagree with that premise, it’s difficult to claim it’s not a form of analysis and commentary.

Another hypothetical post in the internal language says, in response to Taliban territorial gains in the Afghanistan war, “I think it’s time the U.S. government started reassessing their strategy in Afghanistan.” The post, the rule says, should be labeled as nonviolating, despite what appears to be a clear-cut characterization of the banned group’s actions as “effective.”

David Greene, civil liberties director at the Electronic Frontier Foundation, told The Intercept these examples illustrate how difficult it will be to consistently enforce the new policy. “They run through a ton of scenarios,” Greene said, “but for me it’s hard to see a through-line in them that indicates generally applicable principles.”

The post Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy appeared first on The Intercept.

]]>
https://theintercept.com/2023/08/30/meta-censorship-policy-dangerous-organizations/feed/ 0
<![CDATA[The Online Christian Counterinsurgency Against Sex Workers]]> https://theintercept.com/2023/07/29/skull-games-surveillance-sex-workers/ https://theintercept.com/2023/07/29/skull-games-surveillance-sex-workers/#respond Sat, 29 Jul 2023 10:00:00 +0000 https://theintercept.com/?p=439801 Evangelical military vets are using “counterterror” internet surveillance techniques to help police get search warrants against sex workers.

The post The Online Christian Counterinsurgency Against Sex Workers appeared first on The Intercept.

]]>
The most popular video on Victor Marx’s YouTube now has more than 15 million views. Standing solemnly in a dark blue karate gi while his son Shiloh Vaughn Marx smiles and points a gun at his face, Marx uses his expertise as a seventh-degree black belt in “Cajun Karate Keichu-Do” to perform what he claims was the world’s fastest gun disarm. Over a period of just 80 milliseconds — according to Marx’s measurement — he snatches the gun from his son and effortlessly ejects the magazine. It’s a striking display, one that unequivocally shouts: I am here to stop bad guys.

Marx is more than just a competitive gun-disarmer and martial artist. He is also a former Marine, a self-proclaimed exorcist, and an author and filmmaker. And he helped launch the Skull Games, a privatized intelligence outfit that purports to hunt pedophiles, sex traffickers, and other “demonic activity” using a blend of sock-puppet social media accounts and commercial surveillance tools — including face recognition software.

The Skull Games events have attracted notable corporate allies. Recent games have been “powered” by the internet surveillance firm Cobwebs, and an upcoming competition is partnered with cellphone-tracking data broker Anomaly Six.

The moral simplicity of Skull Games’s mission is emblazoned across its website in fierce, all-caps type: “We hunt predators.” And Marx has savvily ridden recent popular attention to the independent film “Sound of Freedom,” a dramatization of the life of fellow anti-trafficking crusader Tim Ballard. In the era of QAnon and conservative “groomer” panic, vowing to take down shadowy — and frequently exaggerated — networks of “traffickers” under the aegis of Christ is an exercise in shrewd branding.

Although its name is a reference to the mind games played by pimps and traffickers, Skull Games, which Marx’s church is no longer officially involved in, is itself a form of sport for its participants: a sort of hackathon for would-be Christian saviors, complete with competition. Those who play are awarded points based on their sleuthing. Finding a target’s high school diploma or sonogram imagery nets 15 points, while finding the same tattoo on multiple women would earn a whopping 300. On at least one occasion, according to materials reviewed by The Intercept and Tech Inquiry, participants competed for a chance at prizes, including paid work for Marx’s California church and one of its surveillance firm partners.

While commercially purchased surveillance exists largely outside the purview of the law, Skull Games was founded to answer to a higher power. The event started under the auspices of All Things Possible Ministries, the Murrieta, California, evangelical church Marx founded in 2003.

Marx has attributed his conversion to Christianity to becoming reunited with his biological father — according to Marx, formerly a “practicing warlock” — toward the end of his three years in the Marine Corps. Marx’s tendency to blame demons and warlocks would become the central cause of controversy of his own ministry, largely as a result of his focus on exorcisms as the solutions to issues ranging from pornography to veteran suicides. As Marx recently told “The Spillover” podcast, “I hunt pedophiles, but I also hunt demons.”

Skull Games also ends up being a hunt for sex workers, conflating them with trafficking victims as participants prepare intelligence dossiers on women and turn them over to police.

Groups seeking to rescue sex workers — whether through religion, prosecution, or both — are nothing new, said Kristen DiAngelo, executive director of the advocacy group Sex Workers Outreach Project Sacramento. What Skull Games represents — the technological outsourcing of police work to civilian volunteers — presents a new risk to sex workers, she argued.

“I think it’s dangerous because you set up people to have that vigilante mentality.”

“I think it’s dangerous because you set up people to have that vigilante mentality — that idea that, we’re going to go out and we’re going to catch somebody — and they probably really believe that they are going to ‘save someone,’” DiAngelo told The Intercept and Tech Inquiry. “And that’s that savior complex. We don’t need saving; we need support and resources.”

The eighth Skull Games, which took place over the weekend of July 21, operated out of a private investigation firm headquartered in a former church in Wanaque, New Jersey. A photo of the event shared by the director of intelligence of Skull Games showed 57 attendees — almost all wearing matching black T-shirts — standing in front of corporate due diligence firm Hetherington Group’s office with a Skull Games banner unfurled across its front doors. Hetherington Group’s address is simple to locate online, but their office signage doesn’t mention the firm’s name, only saying “593 Ringwood LLC” above the words “In God We Trust.” (Cynthia Hetherington, the CEO of Hetherington Group and a board member of Skull Games, distanced her firm from the surveillance programs normally used at the events. “Cobwebs brought the bagels, which I’m still trying to digest,” she said. “I didn’t see their software anywhere in the event.”)

The attempt to merge computerized counterinsurgency techniques with right-wing evangelism has left some Skull Games participants uncomfortable. One experienced attendee of the January 2023 Skull Games was taken aback by an abundance of prayer circles and paucity of formal training. “Within the first 10 minutes,” the participant recalled of a training webinar, “I was like, ‘What the fuck is this?’”

Jeff Tiegs blesses U.S. Army Soldiers and explains to them the religious origins of a popular hand gesture on Joint Base Elmendorf-Richardson, Alaska, on April 20, 2022.
Photo: Alamy

Delta Force OSINT

The number of nongovernmental surveillance practitioners has risen in tandem with the post-9/11 boom in commercial tools for surveilling social media, analyzing private chat rooms, and tracking cellphone pings.

Drawing on this abundance of civilian expertise, Skull Games brings together current and former military and law enforcement personnel, along with former sex workers and even employees of surveillance firms themselves. Both Skull Games and the high-profile, MAGA-beloved Operation Underground Railroad have worked with Cobwebs, but Skull Games roots its branding in counterinsurgency and special operations rather than homeland security.

“I fought the worst of the worst: ISIS, Al Qaeda, the Taliban,” Skull Games president and former Delta Force soldier Jeff Tiegs has said. “But the adversary I despise the most are human traffickers.” Tiegs has told interviewers that he takes “counterterrorism / counterinsurgency principles” and applies them to these targets.

“I fought the worst of the worst: ISIS, Al Qaeda, the Taliban. But the adversary I despise the most are human traffickers.”

The plan broadly mimicked a widely praised Pentagon effort to catch traffickers that was ultimately shut down this May due to a lack of funding. In a training session earlier this month, Tiegs noted that active-duty military service members take part in the hunts; veterans like Tiegs himself are everywhere. The attendee list for a recent training event shows participants with day jobs at the Department of Defense, Portland Police Bureau, and Air Force, as well as a lead contracting officer from U.S. Citizenship and Immigration Services.

Skull Games employs U.S. Special Forces jargon, which dominates the pamphlets handed out to volunteers. Each volunteer is assigned the initial informal rank of private and works out of a “Special Operations Coordination Center.” Government acronyms abound: Participants are asked to keep in mind CCIRs — Commander’s Critical Information Requirements — while preventing EEFIs — Essential Elements of Friendly Information — from falling into the hands of the enemy.

Tiegs’s transition from counterinsurgency to counter-human-trafficking impresario came after he met Jeff Keith, the founder of the anti-trafficking nonprofit Guardian Group, where Tiegs was an executive for nearly five years. While Tiegs was developing Guardian Group’s tradecraft for identifying victims, he was also beginning to work more closely with Marx, whom he met on a trip to Iraq in 2017. By the end of 2018, Marx and Tiegs had joined each other’s boards.

Beyond the Special Forces acumen of its leadership, what sets Skull Games apart from other amateur predator-hunting efforts is its reliance on “open-source intelligence.” OSINT, as it’s known, is a military euphemism popular among its practitioners that refers to a broad amalgam of intelligence-gathering techniques, most relying on surveilling the public internet and purchasing sensitive information from commercial data brokers.

Sensitive personal information is today bought and sold so widely, including by law enforcement and spy agencies, that the Office of the Director of National Intelligence recently warned that data “that could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety” is available on “nearly everyone.”

Skull Games’s efforts to tap this unregulated sprawl of digital personal data function as a sort of vice squad auxiliary. Participants scour the U.S. for digital evidence of sex work before handing their findings over to police — officers the participants often describe as friends and collaborators.

After publicly promoting 2020 as the year Guardian Group would “scale” its tradecraft up to tackling many more cases, Tiegs abruptly jumped from his role as chief operating officer of the organization into the same title at All Things Possible — Marx’s church. By December 2021, Tiegs had launched the first Skull Games under the umbrella of All Things Possible. The event was put together in close partnership with Echo Analytics, which had been acquired earlier that year by Quiet Professionals, a surveillance contractor led by a former Delta Force sergeant major. The first Skull Games took place in the Tampa offices of Echo Analytics, just 13 miles from the headquarters of U.S. Special Operations Command.

As of May 2023, Tiegs has separated from All Things Possible and leads the Skull Games as a newly independent, tax-exempt nonprofit. “Skull Games is separate and distinct from ATP,” he said in an emailed statement. “There is no role for ATP or Marx in Skull Games.”

The Hunt

Reached by phone, Tiegs downplayed the role of powerful surveillance tools in Skull Games’s work while also conceding he wasn’t always aware of what technologies were being used in the hunt for predators — or how.

Despite its public emphasis on taking down traffickers, much of Skull Games’s work boils down to scrolling through sex worker ad listings and attempting to identify the women. Central to the sleuthing, according to Tiegs and training materials reviewed by The Intercept and Tech Inquiry, is the search for visual indicators in escort ads and social media posts that would point to a woman being trafficked. An October 2022 report funded by the research and development arm of the U.S. Department of Justice, however, concluded that the appearance of many such indicators — mostly emojis and acronyms — was statistically insignificant.

Tiegs spoke candidly about the centrality of face recognition to Skull Games. “So here’s a girl, she’s being exploited, we don’t know who she is,” he said. “All we have is a picture and a fake name, but, using some of these tools, you’re able to identify her mugshot. Now you know everything about her, and you’re able to start really putting a case together.”

According to notes viewed by The Intercept and Tech Inquiry, the competition recommended that volunteers use FaceCheck.id and PimEyes, programs that allow users to conduct reverse image searches for an uploaded picture of a face. In a July Skull Games webinar, one participant noted that they had been able to use PimEyes to find a sex worker’s driver’s license posted to the web.

In January, Cobwebs Technologies, an Israeli firm, announced it would provide Skull Games with access to its Tangles surveillance platform. According to Tiegs, the company is “one of our biggest supporters.” Previous reporting from Motherboard detailed the IRS Criminal Investigation unit’s usage of Cobwebs for undercover investigations.

Skull Games training materials provided to The Intercept and Tech Inquiry provide detailed instructions on the creation of “sock puppet” social media accounts: fake identities for covert research and other uses. Tiegs denied recommending the creation of such pseudonymous accounts, but on the eve of the eighth Skull Games, team leader Joe Labrozzi told fellow volunteers, “We absolutely recommend sock puppets,” according to a training seminar transcript reviewed by The Intercept and Tech Inquiry. Other volunteers shared tips on creating fake social media accounts, including the use of ChatGPT and machine learning-based face-generation tools to build convincing social media personas.

Tiegs also denied a participant’s assertion that Clearview AI’s face recognition software was heavily used in the January 2023 Skull Games. Training materials obtained by Tech Inquiry and The Intercept, however, suggest otherwise. At one point in a July training webinar, a Virginia law enforcement volunteer who didn’t give their name asked what rules were in place for using their official access to face recognition and other law enforcement databases. “It’s easier to ask for forgiveness than permission,” replied another participant, adding that some police Skull Games volunteers had permission to tap their departmental access to Clearview AI and Spotlight, an investigative tool that uses Amazon’s Rekognition technology to identify faces.

Cobwebs — which became part of the American wiretapping company PenLink earlier this month — provides a broad array of surveillance capabilities, according to a government procurement document obtained through a Freedom of Information Act request. The platform gives investigators the ability to continuously monitor the web for certain key phrases. Tangles can also provide face recognition; fuse OSINT with personal account data collected from search warrants; and pinpoint individuals through the locations of their phones — granting the ability to track a person’s movements going back as many as three years without judicial oversight.

When reached for comment, Cobwebs said, “Only through collaboration between all sectors of society — government, law enforcement, academia — and the proper tools, can we combat human trafficking.” The company did not respond to detailed questions about how its platform is used by Skull Games.

According to a source who previously attended a Skull Games event, and who asked for anonymity because of their ongoing role in counter-trafficking, only one member of the “task force” of participants had access to the Tangles platform: a representative from Cobwebs itself who could run queries from other task force analysts when requested. The rest of the group was equipped with whatever OSINT-gathering tools they already had access to outside of Skull Games, creating a lopsided exercise in which some participants were equipped with little more than their keyboards and Google searches, while others tapped tools like Clearview or Thomson Reuters CLEAR, an analytics tool used by U.S. Immigration and Customs Enforcement.

Tiegs acknowledged that most Skull Games participants likely have some professional OSINT expertise. By his account, they operate on a sort of BYO-intelligence-gathering-tool basis; owing to Skull Games’s ad hoc use of technology, he said he couldn’t confirm how exactly Cobwebs may have been used in the past. Despite Skull Games widely advertising its partnership with another source of cellphone location-tracking data — the commercial surveillance company Anomaly Six — Tiegs said, “We’re not pinpointing the location of somebody.” He claimed Skull Games uses less sophisticated techniques to generate leads for police who may later obtain a court order for, say, geolocational data. (Anomaly Six said that it is not providing its software or data to Skull Games.)

Tiegs also expressed frustration with the notion that deploying surveillance tools to crack down on sex work would be seen as impermissible. “We allow Big Data to monitor everything you’re doing to sell you iPods or sunglasses or new socks,” he said, “but if you need to leverage some of the same technology to protect women and children, all of the sudden everybody’s up in arms.”

Tiegs added, “I’m really conflicted how people rationalize that.”

People march in support of sex workers and decriminalizing sex work on June 2, 2019, in Las Vegas.
Photo: John Locher/AP

“Pure Evil”

A potent strain of anti-sex work sentiment — not just opposition to trafficking — has pervaded Skull Games since its founding. Although the events are no longer affiliated with a church, Tiegs and his lieutenants’ devout Christianity suggests the digital hunt for pedophiles and pimps remains a form of spiritual warfare.

Michele Block, a Canadian military intelligence veteran who has worked as Skull Games’s director of intelligence since its founding at All Things Possible, is open about her belief that their surveillance efforts are part of a battle against Satan. In a December 2022 interview at America Fest, a four-day conference organized by the right-wing group Turning Point USA, Block described her work as a fight against “pure evil,” claiming that many traffickers are specifically targeting Christian households.

Tiegs argued that “100 percent” of sex work is human trafficking and that “to legalize the purchasing of women is a huge mistake.”

The combination of digital surveillance and Christian moralizing could have serious consequences not only for “predators,” but also their prey: The America Fest interview showed that Skull Games hopes to take down alleged traffickers by first going after the allegedly trafficked.

“So basically, 24/7, our intelligence department identifies victims of sex trafficking.”

“So basically, 24/7,” Block explained, “our intelligence department identifies victims of sex trafficking.” All of this information — both the alleged trafficker and alleged victim — is then handed over to police. Although Tiegs says Skull Games has provided police with “a couple hundred” such OSINT leads since its founding, he conceded the group has no information about how many have resulted in prosecutions or indictments of actual traffickers.

When asked about Skull Games’s position on arresting victims, Tiegs emphasized that “arresting is different from prosecuting” and argued, “Sometimes they do need to make the arrest, because of the health and welfare of that person. She needs to get clean, maybe she’s high. … Very rarely, in my opinion, is it right to charge and prosecute a girl.”

Sex worker advocates, however, say any punitive approach is not only ungrounded in the reality of the trade, but also hurts the very people it purports to help. Although exploitation and coercion are dire realities for many sex workers, most women choose to go into sex work either out of personal preference or financial necessity, according to DiAngelo, of Sex Workers Outreach Project Sacramento. (The Chicago branch of SWOP was a plaintiff in the American Civil Liberties Union’s successful 2020 lawsuit against Clearview AI in Illinois.)

Referring to research she had conducted with the University of California, Davis, DiAngelo explained that socioeconomic desperation is the most common cause of trafficking, a factor only worsened by a brush with the law. “The majority of the people we interview, even if we removed the person who was exploiting them from their life, they still wanted to be in the sex trade,” DiAngelo explained.

Both DiAngelo and Savannah Sly of the nonprofit New Moon Network, an advocacy group for sex workers, pointed to flaws in the techniques that police claim detect trafficking from coded language in escort ads. “You can’t tell just by looking at a picture whether someone’s trafficked or not,” Sly said. The “dragnet” surveillance of sex workers performed by groups like Skull Games, she claimed, imperils their human rights. “If I become aware I’m being surveilled, that’s not helping my situation,” Sly said. “Sex workers live with a high degree of paranoia.”

Rather than “rescuing” women from trafficking, DiAngelo argued Skull Games’s collaboration with police risks driving women into the company of people seeking to take advantage of them — particularly if they’ve been arrested and face diminished job prospects outside of sex work. DiAngelo said, “They’re going to lock them into sex work, because once you get the scarlet letter, nobody wants you anymore.”

The post The Online Christian Counterinsurgency Against Sex Workers appeared first on The Intercept.

]]>
https://theintercept.com/2023/07/29/skull-games-surveillance-sex-workers/feed/ 0
<![CDATA[Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency”]]> https://theintercept.com/2023/07/26/texas-phone-tracking-border-surveillance/ https://theintercept.com/2023/07/26/texas-phone-tracking-border-surveillance/#respond Wed, 26 Jul 2023 19:03:26 +0000 https://theintercept.com/?p=436563 The software was licensed as part of Gov. Greg Abbott’s troubled Operation Lone Star border crackdown.

The post Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency” appeared first on The Intercept.

]]>
The Texas Department of Public Safety purchased access to powerful software capable of locating and following people through their phones as part of Republican Gov. Greg Abbott’s “border security disaster” efforts, according to documents reviewed by The Intercept.

In 2021, Abbott proclaimed that the “surge of individuals unlawfully crossing the Texas-Mexico border posed an ongoing and imminent threat of disaster” to the state and its residents. Among other effects, the disaster declaration opened a spigot of government money to a variety of private firms ostensibly paid to help patrol and blockade the state’s border with Mexico.

One of the private companies that got in on the cash disbursements was Cobwebs Technologies, a little-known Israeli surveillance contractor. Cobwebs’s marquee product, the surveillance platform Tangles, offers its users a bounty of different tools for tracking people as they navigate both the internet and the real world, synthesizing social media posts, app activity, facial recognition, and phone tracking.

“As long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

News of the purchase comes as Abbott’s border crackdown escalates to new heights, following a Department of Public Safety whistleblower’s report of severe mistreatment of migrants by state law enforcement and a Justice Department lawsuit over the governor’s deployment of floating barriers on the Rio Grande. The Cobwebs documents show that Abbott’s efforts to usurp the federal government’s constitutional authority to conduct immigration enforcement have extended into the electronic realm as well. The implications could reach far beyond the geographic bounds of the border and into the private lives of citizens and noncitizens alike.

“Government agencies systematically buying data that has been originally collected to provide consumer services or digital advertising represents the worst possible kind of decontextualized misuse of personal information,” Wolfie Christl, a privacy researcher who tracks data brokerages, told The Intercept. “But as long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

Like its competitors in the world of software tracking tools, Cobwebs — which sells its services to the Department of Homeland Security, the IRS, and a variety of undisclosed corporate customers — lets its clients track the movements of private individuals without a court order. Instead of needing a judge’s sign-off, these tracking services rely on bulk-purchasing location pings pulled from smartphones, often through unscrupulous mobile apps or in-app advertisers, an unregulated and increasingly pervasive form of location tracking.

In August 2021, the Texas Department of Public Safety’s Intelligence and Counterterrorism division purchased a year of Tangles access for $198,000, according to contract documents, obtained through a public records request by Tech Inquiry, a watchdog and research organization, and shared with The Intercept. The state has renewed its Tangles subscription twice since then, though the discovery that Cobwebs failed to pay taxes owed in Texas briefly derailed the renewal last April, according to an email included in the records request. (Cobwebs declined to comment for this story.)

A second 2021 contract document shared with The Intercept shows DPS purchased “unlimited” access to Clearview AI, a controversial face recognition platform that matches individuals to tens of billions of photos scraped from the internet. The purchase, according to the document, was made “in accordance/governed by the Texas Governor’s Disaster Declaration for the Texas-Mexico border for ongoing and imminent threats.” (Clearview did not respond to a request for comment.)

Each of the three yearlong subscriptions notes Tangles was purchased “in accordance to the provisions outlined in the Texas Governor-Proclaimed Border Disaster Declaration signed May 22, 2022, per Section 418.011 of the Texas Government Code.”

The disaster declaration, which spans more than 50 counties, is part of an ongoing campaign by Abbott that has pushed the bounds of civil liberties in Texas, chiefly through the governor’s use of the Department of Public Safety.

Under Operation Lone Star, Abbott has spent $4.5 billion surging 10,000 Department of Public Safety troopers and National Guard personnel to the border as part of a stated effort to beat back a migrant “invasion,” which he claims is aided and abetted by President Joe Biden. The resulting project has been riddled with scandal, including migrants languishing for months in state jails without charges and several suicides among personnel deployed on the mission. Just this week, the Houston Chronicle obtained an internal Department of Public Safety email revealing that troopers had been “ordered to push small children and nursing babies back into the Rio Grande” and “told not to give water to asylum seekers even in extreme heat.”

On Monday, the U.S. Justice Department sued Texas over Abbott’s deployment of floating barricades on the Rio Grande. Abbott, having spent more than two years angling for a states’ rights border showdown with the Biden administration, responded last week to news of the impending lawsuit by tweeting: “I’ll see you in court, Mr. President.”

Despite Abbott’s repeated claims that Operation Lone Star is a targeted effort focused specifically on crimes at the border, a joint investigation by the Texas Tribune, ProPublica, and the Marshall Project last year found that the state was counting arrests and drug charges far from the U.S-Mexico divide and unrelated to the Operation Lone Star mandate. Records obtained by the news organizations last summer showed that the Justice Department opened a civil rights investigation into Abbott’s operation. The status of the investigation has not been made public.

Where the Department of Public Safety’s access to Tangles’s powerful cellphone tracking software will fit into Abbott’s controversial border enforcement regime remains uncertain. (The Texas Department of Public Safety did not respond to a request for comment.)

Although Tangles provides an array of options for keeping tabs on a given target, the most powerful capability obtained by the Department of Public Safety is Tangles’s “WebLoc” feature: “a cutting-edge location solution which automatically monitors and analyzes location-based data in any specified geographic location,” according to company marketing materials. While Cobwebs claims it draws device location data from multiple sources, the Texas Department of Public Safety contract specifically mentions “ad ID,” a reference to the unique strings of text used to identify and track a mobile phone in the online advertising ecosystem.

“Every second, hundreds of consumer data brokers most people never heard of collect and sell huge amounts of personal information on everyone,” explained Christl, the privacy researcher. “Most of these shady and opaque data practices are systematically enabled by today’s digital marketing and advertising industry, which has gotten completely out of control.”

While advertisers defend this practice on the grounds that the device ID itself doesn’t contain a person’s name, Christl added that “several data companies sell information that helps to link mobile device identifiers to email addresses, phone numbers, names and postal addresses.” Even without extra context, tying a real name to an “anonymized” advertising identifier’s location ping is often trivial, as a person’s daily movement patterns typically quickly reveal both where they live and work.
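That last step is simple enough to sketch in code. The snippet below is a hypothetical illustration, not code drawn from Tangles, WebLoc, or any other product named in this story; the ping format, field names, and hour thresholds are all assumptions. It shows how a buyer of commodity ad-ID location data could guess a device owner’s likely home and workplace from nothing more than timestamped pings.

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical broker feed: (ad_id, unix_timestamp, latitude, longitude).
# Real feeds differ in format, but carry roughly this information.
pings = [
    ("a1b2-c3d4", 1690261200, 30.2672, -97.7431),
    ("a1b2-c3d4", 1690290000, 30.2850, -97.7335),
    # ... millions more rows in a commercial feed
]

def grid_cell(lat, lon, precision=3):
    """Bucket coordinates into roughly 100-meter grid cells."""
    return (round(lat, precision), round(lon, precision))

def infer_home_and_work(pings, ad_id):
    """Guess home (overnight cluster) and work (midday cluster) for one device."""
    night, day = Counter(), Counter()
    for device, ts, lat, lon in pings:
        if device != ad_id:
            continue
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour  # a real analysis would localize
        cell = grid_cell(lat, lon)
        if hour >= 22 or hour < 6:
            night[cell] += 1
        elif 9 <= hour < 17:
            day[cell] += 1
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

print(infer_home_and_work(pings, "a1b2-c3d4"))
```

A real feed involves millions of devices, but the per-device logic is no more complicated than this; the hard part is acquiring the data, and that is exactly what brokers sell.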

Cobwebs advertises that WebLoc draws on “huge sums of location-based data,” and it means huge: According to a WebLoc promotional brochure, it affords customers “worldwide coverage” of smartphone pings based on “billions of data points to ensure maximum location based data coverage.” WebLoc not only provides the exact locations of smartphones, but also personal information associated with their owners, including age, gender, languages spoken, and interests — “e.g., music, luxury goods, basketball” — according to a contract document from the Office of Naval Intelligence, another Cobwebs customer.

The ability to track a person wherever they go based on an indispensable object they keep on or near them every hour of every day is of obvious appeal to law enforcement officials, particularly given that no judicial oversight is required to use a tool like Tangles. Critics of the technology have argued that a legislative vacuum allows phone-tracking tools, fed by the unregulated global data broker market, to give law enforcement agencies a way around Fourth Amendment protections.

The power to track people through Tangles, however, is valuable even in countries without an ostensible legal prohibition against unreasonable searches. In 2021, Facebook announced it had removed 200 accounts used by Cobwebs to track its users in Bangladesh, Saudi Arabia, Poland, and several other countries.

“In addition to targeting related to law enforcement activities,” the company explained, “we also observed frequent targeting of activists, opposition politicians and government officials in Hong Kong and Mexico.”

Beryl Lipton, an investigative researcher with the Electronic Frontier Foundation, told The Intercept that bolstering surveillance powers under the aegis of an emergency declaration adds further risk to an already fraught technology. “We need to be very skeptical of any expansion of surveillance that occurs under disaster declarations, particularly open-ended claims of emergency,” Lipton said. “They can undermine legislative checks on the executive branch and obviate bounds on state behavior that exist for good reason.”

The post Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency” appeared first on The Intercept.

]]>
https://theintercept.com/2023/07/26/texas-phone-tracking-border-surveillance/feed/ 0
<![CDATA[Pentagon Joins Elon Musk’s War Against Plane Tracking]]> https://theintercept.com/2023/07/18/military-plane-flight-tracking/ https://theintercept.com/2023/07/18/military-plane-flight-tracking/#respond Tue, 18 Jul 2023 14:49:46 +0000 https://theintercept.com/?p=436252 The U.S. military’s elite special operations command doesn’t want its planes tracked, according to a procurement document.

The post Pentagon Joins Elon Musk’s War Against Plane Tracking appeared first on The Intercept.

]]>
A technology wish list circulated by the U.S. military’s elite Joint Special Operations Command suggests the country’s most secretive war-fighting component shares an anxiety with the world’s richest man: Too many people can see where they’re flying their planes.

The Joint Special Operations Air Component, responsible for ferrying commandos and their gear around the world, is seeking help keeping these flights out of the public eye through a “‘Big Data’ Analysis & Feedback Tool,” according to a procurement document obtained by The Intercept. The document is one of a series of periodic releases of lists of technologies that special operations units would like to see created by the private sector.

The listing specifically calls out the risk of social media “tail watchers” and other online observers who might identify a mystery plane as a military flight. According to the document, the Joint Special Operations Air Component needs software to “leverage historical and real-time data, such as the travel histories and details of specific aircraft with correlation to open-source information, social media, and flight reporting.”

Armed with this data, the tool would help special operations units gauge how much scrutiny a given plane has received in the past and how likely it is to be connected to them by prying eyes online.

“It just gives them better information on how to blend in. It’s like the police deciding to use the most common make of local car as an undercover car.”

Rather than providing the ability to fake or anonymize flight data, the tool seems to be aimed at letting sensitive military flights hide in plain sight. “It just gives them better information on how to blend in,” Scott Lowe, a longtime tail watcher and aviation photographer, told The Intercept. “It’s like the police deciding to use the most common make of local car as an undercover car.”

While plane tracking has long been a niche hobby among aviation enthusiasts who enjoy cataloging the comings and goings of aircraft, the public availability of midair transponder data also affords journalists, researchers, and other observers an effective means of tracking the movements and activities of the world’s richest and most powerful. The aggregation and analysis of public flight data has shed light on CIA torture flights, movements of Russian oligarchs, and Google’s chummy relationship with NASA.

More recently, these sleuthing techniques gained international attention after they drew the ire of Elon Musk, the world’s richest man. After he purchased the social media giant Twitter, Musk banned an account that shared the movements of his private jet. Despite repeated promises to protect free speech on the platform — and a specific pledge not to ban the @ElonJet account — Musk proceeded to censor anyone sharing his plane’s whereabouts, claiming the entirely legally obtained, fully public data amounted to “assassination coordinates.”

The Joint Special Operations Air Component’s desire for more discreet air travel, published six months after Musk’s jet data meltdown, is likely more firmly grounded in reality.

The Joint Special Operations Air Component offers a hypothetical scenario in which special forces needing to travel with a “reduced profile” — that is to say, quietly — would use the tool.

“When determining if the planned movement is suitable and appropriate,” the procurement document says, “the ‘Aircraft Flight Profile Management Database Tool’ reveals that the aircraft is primarily associated with a distinctly different geographic area” — a frequent tip-off to civilian plane trackers that something interesting is afoot. “Additionally, ‘tail watchers’ have posted on social media pictures of the aircraft at various airfields. Based on the information available, the commander decides to utilize a different airframe for the mission. With the aircraft in flight, the tool is monitored for any indication of increased scrutiny or mission compromise.”

The request is part of a broad-ranging list of technologies sought by the Joint Special Operations Command, from advanced radios and portable blood pumps to drones that can fly months at a time. The 85-page list essentially advertises these technologies for private-sector contractors, who may be able to sell them to the Pentagon in the near future.

“What will be interesting is seeing how they change their operations after having this information.”

The document — marked unclassified but for “Further dissemination only as directed by the Office of the Secretary of Defense (OSD) Joint Capability and Technology Expo (JCTE) Team” — is part of an annual effort by Joint Special Operations Command to “inform and influence industry’s internal investment decisions in areas that address SOF’s most sensitive and urgent interest areas.”

The anti-plane-tracking tool fits into a broader pattern of the military attempting to minimize the visibility of its flights, according to Ian Servin, a pilot and plane-tracking enthusiast. In March, the military removed tail numbers and other identifying marks from its planes.

“What will be interesting is seeing how they change their operations after having this information,” Servin said. From a transparency standpoint, he added, “Those changes could be problematic or concerning.”

The post Pentagon Joins Elon Musk’s War Against Plane Tracking appeared first on The Intercept.

]]>
https://theintercept.com/2023/07/18/military-plane-flight-tracking/feed/ 0
<![CDATA[LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes]]> https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates/ https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates/#respond Tue, 20 Jun 2023 20:33:27 +0000 https://theintercept.com/?p=431690 ICE uses LexisNexis to track people's cars, gather information on people, and make arrests for its deportation machine, according to a contract.

The post LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes appeared first on The Intercept.

]]>
The legal research and public records data broker LexisNexis is providing U.S. Immigration and Customs Enforcement with tools to target people who may potentially commit a crime — before any actual crime takes place, according to a contract document obtained by The Intercept. LexisNexis data then helps ICE to track the purported pre-criminals’ movements.

The unredacted contract overview provides a rare look at the controversial $16.8 million agreement between LexisNexis and ICE, a federal law enforcement agency whose surveillance of and raids against migrant communities are widely criticized as brutal, unconstitutional, and inhumane.

“The purpose of this program is mass surveillance at its core.”

“The purpose of this program is mass surveillance at its core,” said Julie Mao, an attorney and co-founder of Just Futures Law, which is suing LexisNexis over allegations it illegally buys and sells personal data. Mao told The Intercept the ICE contract document, which she reviewed for The Intercept, is “an admission and indication that ICE aims to surveil individuals where no crime has been committed and no criminal warrant or evidence of probable cause.”

While the company has previously refused to answer any questions about precisely what data it’s selling to ICE or to what end, the contract overview describes LexisNexis software as not simply a giant bucket of personal data, but also a sophisticated analytical machine that purports to detect suspicious activity and scrutinize migrants — including their locations.

“This is really concerning,” Emily Tucker, the executive director of Georgetown Law School’s Center on Privacy and Technology, told The Intercept. Tucker compared the contract to controversial and frequently biased predictive policing software, a comparison made all the more alarming by ICE’s use of license plate databases. “Imagine if whenever a cop used PredPol to generate a ‘hot list’ the software also generated a map of the most recent movements of any vehicle associated with each person on the hot list.”

The document, a “performance of work statement” made as part of the contract with ICE, was obtained by journalist Asher Stockler through a public records request and shared with The Intercept. LexisNexis Risk Solutions, a subsidiary of LexisNexis’s parent company, inked the contract with ICE, a part of the Department of Homeland Security, in 2021.

“LexisNexis Risk Solutions prides itself on the responsible use of data, and the contract with the Department of Homeland Security encompasses only data allowed for such uses,” said LexisNexis spokesperson Jennifer Richman. She told The Intercept the company’s work with ICE doesn’t violate the law or federal policy, but did not respond to specific questions.

The document reveals that over 11,000 ICE officials, including within the explicitly deportation-oriented Enforcement and Removal Operations branch, were using LexisNexis as of 2021. “This includes supporting all aspects of ICE screening and vetting, lead development, and criminal analysis activities,” the document says.

In practice, this means ICE is using software to “automate” the hunt for suspicious-looking blips in the data, or links between people, places, and property. It is unclear how such blips in the data can be linked to immigration infractions or criminal activity, but the contract’s use of the term “automate” indicates that ICE is to some extent letting computers draw consequential conclusions about human activity. The contract further notes that the LexisNexis analysis includes “identifying potentially criminal and fraudulent behavior before crime and fraud can materialize.” (ICE did not respond to a request for comment.)

LexisNexis supports ICE’s activities through a widely used data system named the Law Enforcement Investigative Database Subscription. The contract document provides the most comprehensive window yet into what data tools might be offered to LEIDS clients. Other federal, state, and local authorities who pay a hefty subscription fee for the LexisNexis program could have access to the same powerful surveillance tools used by ICE.

The LEIDS program is used by ICE for “the full spectrum of its immigration enforcement,” according to the contract document. LexisNexis’s tools allow ICE to monitor the personal lives and mundane movements of migrants in the U.S., in search of incriminating “patterns” and to help “strategize arrests.”

The ICE contract makes clear the extent to which LexisNexis isn’t simply a resource to be queried but a major power source for the American deportation machine.

LexisNexis is known for its vast trove of public records and commercial data, a constantly updating archive that includes information ranging from boating licenses and DMV filings to voter registrations and cellphone subscriber rolls. In the aggregate, these data points create a vivid mosaic of a person’s entire life, interests, professional activities, criminal run-ins no matter how minor, and far more.

While some of the data is valuable for the likes of researchers, journalists, and law students, LexisNexis has turned the mammoth pool of personal data into a lucrative revenue stream by selling it to law enforcement clients like ICE, who use the company’s many data points on over 280 million different people not only to determine whether someone constitutes a “risk,” but also to locate and apprehend them.

LexisNexis has long deflected questions about its relationship with ICE by citing the agency’s “national security” and “public safety” mission; the agency is responsible for both criminal and civil immigration violations, including smuggling, other trafficking, and customs violations. The contract’s language, however, indicates LexisNexis is empowering ICE to sift through a vast sea of personal data to do exactly what advocates have warned against: busting migrants for civil immigration violations, a far cry from thwarting terrorists and transnational drug cartels.

ICE has a documented history of rounding up and deporting nonviolent immigrants without any criminal history, whose only offense may be something on the magnitude of a traffic violation or civil immigration violation. The contract document further suggests LexisNexis is facilitating ICE’s workplace raids, one of the agency’s most frequently criticized practices, by helping immigration officials detect fraud through bulk searches of Social Security and phone numbers.

ICE investigators can use LexisNexis tools, the document says, to pull a large quantity of records about a specified individual’s life and visually map their relationships to other people and property. The practice stands as an exemplar of the digital surveillance sprawl that immigrant advocates have warned unduly broadens the gaze of federal suspicion onto masses of people.
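As a rough illustration of that kind of link analysis (not the actual LexisNexis implementation, whose internals are not public), the sketch below builds a small graph from invented records and walks it to find everyone who shares an address or phone number with a target.

```python
from collections import defaultdict

# Invented records tying people to addresses, vehicles, and phone numbers.
records = [
    {"person": "A. Example", "address": "12 Oak St", "plate": "ABC1234"},
    {"person": "B. Example", "address": "12 Oak St", "phone": "555-0100"},
    {"person": "C. Sample", "phone": "555-0100", "plate": "XYZ9876"},
]

# Build an undirected graph linking each person to every identifier
# (address, plate, phone) that appears in one of their records.
graph = defaultdict(set)
for rec in records:
    person = rec["person"]
    for field, value in rec.items():
        if field == "person":
            continue
        node = f"{field}:{value}"
        graph[person].add(node)
        graph[node].add(person)

# People who share any identifier with the target, i.e. the "patterns of
# relationships between entities" the contract language points at.
target = "A. Example"
related = {p for node in graph[target] for p in graph[node] if p != target}
print(related)  # {'B. Example'}
```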

Citing language from the contract, Mao, the lawyer on the lawsuit, said, “‘Patterns of relationships between entities’ likely means family members, one of the fears for immigrants and mixed status families is that LexisNexis and other data broker platforms can map out family relationships to identify, locate, and arrest undocumented individuals.”

The contract shows ICE can combine LexisNexis data with databases from other outside firms, namely PenLink, a controversial company that helps police nationwide request private user data from social media companies.

A license plate reader, center, and surveillance camera, top right, are seen at an intersection in West Baltimore, Md., on April 29, 2020.
Photo: Julio Cortez/AP

The contract’s “performance of work statement” mostly avoids delving into the numerous categories of data LEIDS makes available to ICE, but it does make clear the importance of one: scanned license plates.

The automatic scanning of license plates has created a feast for data-hungry government agencies, providing an effective means of tracking people. Many people are unaware that their license plates are continuously scanned as they drive throughout their communities and beyond — thanks to automated systems affixed to traffic lights, cop cars, and anywhere else a small camera might fit. These automated license plate reader systems, or ALPRs, are employed by an increasingly diverse range of surveillance-seekers, from toll booths to homeowners associations.

Police are a major consumer of the ALPR spigot. For them, the humble license plate provides a relatively cheap means of covertly tracking a person’s movements while — as with all the data offered by LexisNexis — potentially bypassing Fourth Amendment considerations. The trade in bulk license plate data is generally unregulated, and information about scanned plates is indiscriminately aggregated, stored, shared, and eventually sold through companies like LexisNexis and Thomson Reuters.

Though LexisNexis explored selling ICE its license plate scanner data, according to the FOIA materials, federal procurement records show Thomson Reuters Special Services, a top LexisNexis Risk Solutions competitor, was awarded a contract in 2021 to provide license plate data. (Thomson Reuters did not immediately respond to a request for comment.)

A major portion of the LEIDS overview document details ICE’s access to and myriad uses of license plate reader data to geolocate its targets, providing the agency with 30 million new plate records monthly. The document says ICE can access data on any license plate query going back years; while the time frames for different kinds of investigations aren’t specified, the contract document says immigration investigations can query location and other data on a license plate going back five years.

“This begins to look a lot like indiscriminate, warrantless real-time surveillance capabilities for ICE with respect to any vehicle.”

The LEIDS license plate bounty provides ICE investigators with a variety of location-tracking surveillance techniques, including the ability to learn which license plates — presumably including those of people under no suspicion of any wrongdoing — have appeared in a location of interest. Users subscribing to LEIDS can also plug a plate into the system and automatically get updates on the car as they come in, including maps and vehicle images. ICE investigators are allowed to place up to 2,500 different license plates onto their own watchlist simultaneously, the contract notes.

ICE agents can also bring the car-tracking tech on the road through a dedicated smartphone app that allows them, with only a few taps, to snap a picture of someone’s plate and automatically place the vehicle on the watchlist. Once a plate of interest is snapped and uploaded, ICE agents need only wait for a convenient push notification informing them that new activity involving the car has been detected.
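Mechanically, what the contract describes is a watchlist with automatic alerting. The sketch below is a minimal, invented version of that workflow: plate reads streaming in, a per-user watchlist, and a push-style notification on a match. None of the names or data structures come from LEIDS itself.

```python
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str
    seen_at: str    # ISO timestamp of the read
    location: str   # camera ID or coordinates

# Plates an investigator has placed on a watchlist. The contract caps a user
# at 2,500 plates; here the watchlist is just a set of strings.
watchlist = {"ABC1234", "XYZ9876"}

def send_push_notification(message: str) -> None:
    # Stand-in for a real mobile push service.
    print("ALERT:", message)

def on_new_read(read: PlateRead) -> None:
    """Called for each incoming ALPR read; notify the user on a watchlist hit."""
    if read.plate in watchlist:
        send_push_notification(
            f"Watchlisted plate {read.plate} seen at {read.location} ({read.seen_at})"
        )

# Simulated feed of incoming plate reads.
for read in [
    PlateRead("QRS5555", "2023-06-20T10:02:00", "camera-17"),
    PlateRead("ABC1234", "2023-06-20T10:05:00", "camera-02"),
]:
    on_new_read(read)
```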

Combining the staggering number of plates with the ability to search them from anywhere provides a potent tool with little oversight, according to Tucker, of Georgetown Law.

Tucker told The Intercept, “This begins to look a lot like indiscriminate, warrantless real-time surveillance capabilities for ICE with respect to any vehicle encountered by any agent in any context.”

In conjunction with Thomson Reuters plate-reader data, the information provided by LexisNexis creates a potential for powerful tracking. Vehicle ownership and registration information from motor vehicle departments, for instance, can tie specific people to plate numbers. In addition, LexisNexis sells many other forms of personal information that can be used to chart a person’s general location and movements over time: Data on jail bookings, home utilities, and other detailed property and financial records tie people to both places and others in a way that’s difficult if not impossible to opt out of.

LexisNexis’s LEIDS program is, crucially, not an outlier in the United States. For-profit data brokers are increasingly tapped by law enforcement and intelligence agencies for both the vastness of the personal information they collect and the fact that this data can simply be purchased, rather than obtained through legal process requiring a judge’s approval.

“Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, CAI includes information on nearly everyone,” warned a recently declassified report from the Office of the Director of National Intelligence on so-called commercially available information. Specifically citing LexisNexis, the report said the breadth of the information “could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety.”

While the ICE contract document is replete with mentions of how these tools will be used to thwart criminality — obscuring the extent to which this ends up deporting noncriminal migrants guilty of breaking only civil immigration rules — Tucker said the public should take seriously the inflated ambitions of ICE’s parent agency, the Department of Homeland Security.

“What has happened in the last several years is that DHS’s ‘immigration enforcement’ activities have been subordinated to its mass surveillance activities,” Tucker said, “which produce opportunities for immigration enforcement but no longer have the primary purpose of immigration enforcement.”

“What has happened in the last several years is that DHS’s ‘immigration enforcement’ activities have been subordinated to its mass surveillance activities.”

The federal government allows the general Homeland Security apparatus so much legal latitude, Tucker explained, that an agency like ICE is the perfect vehicle for indiscriminate surveillance of the general public, regardless of immigration status.

“That’s not to say that DHS isn’t still detaining and deporting hundreds of thousands of people every year. Of course they are, and it’s horrific,” Tucker said. “But the main goal of DHS’s surveillance infrastructure is not immigration enforcement, it’s … surveillance.

“Use the agency that operates with the fewest legal and political restraints to put everyone inside a digital panopticon, and then figure out who to target for what kind of enforcement later, depending on the needs of the moment.”

Update: June 21, 2023
This story has been updated to clarify that Thomson Reuters Special Services was contracted in 2021 to provide license plate scanner data for the LEIDS program used by ICE.

Update: June 23, 2023
This story has been updated to include specifics on the types of data LexisNexis makes available to ICE that could allow the agency to geolocate and track people.

The post LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes appeared first on The Intercept.

]]>
https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates/feed/ 0
<![CDATA[Pentagon’s Secret Service Trawls Social Media for Mean Tweets About Generals]]> https://theintercept.com/2023/06/17/army-surveillance-social-media/ https://theintercept.com/2023/06/17/army-surveillance-social-media/#respond Sat, 17 Jun 2023 10:00:00 +0000 https://theintercept.com/?p=430968 A document shows the Protective Services Battalion uses sophisticated surveillance tools that can pinpoint anyone’s location.

The post Pentagon’s Secret Service Trawls Social Media for Mean Tweets About Generals appeared first on The Intercept.

]]>
When Gen. Mark Milley, chair of the Joint Chiefs of Staff, enters his scheduled retirement later this year, one of the perks will be a personal security detail to protect him from threats — including “embarrassment.”

The U.S. Army Protective Services Battalion, the Pentagon’s little-known Secret Service equivalent, is tasked with safeguarding top military brass. The unit protects current as well as former high-ranking military officers from “assassination, kidnapping, injury or embarrassment,” according to Army records.

The Protective Services Battalion’s mandate has expanded to include monitoring social media for “direct, indirect, and veiled” threats and identifying “negative sentiment” regarding its wards, according to an Army procurement document dated September 1, 2022, and reviewed by The Intercept. The expansion of the battalion’s purview has not been previously reported.

The country’s national security machinery has become increasingly focused on social media — particularly as it relates to disinformation. Various national security agencies have spent recent years standing up offices all over the federal government to counter the purported threat.

“The ability to express opinions, criticize, make assumptions, or form value judgments — especially regarding public officials — is a quintessential part of democratic society.”

“There may be legally valid reasons to intrude on someone’s privacy by searching for, collecting, and analyzing publicly available information, particularly when it pertains to serious crimes and terrorist threats,” Ilia Siatitsa, program director at Privacy International, told The Intercept. “However, expressing ‘positive or negative sentiment towards a senior high-risk individual’ cannot be deemed sufficient grounds for government agencies to conduct surveillance operations, even going as far as ‘pinpointing exact locations’ of individuals. The ability to express opinions, criticize, make assumptions, or form value judgments — especially regarding public officials — is a quintessential part of democratic society.”

Protective details have in the past generated controversy over questions about their cost and necessity. During the Trump administration, Education Secretary Betsy DeVos’s around-the-clock security detail racked up over $24 million in costs. Trump’s Environmental Protection Agency Administrator Scott Pruitt ran up over $3.5 million in bills for his protective detail — costs that were determined unjustified by the EPA’s inspector general. The watchdog also found that the EPA had not bothered to “assess the potential dangers posed by any of these threats” to Pruitt. 

Frances Seybold, a spokesperson for the Army Criminal Investigation Division, pointed The Intercept to a webpage about the office, which has been renamed the Executive Protection and Special Investigations Field Office. Seybold did not respond to substantive questions about social media monitoring by the protective unit.

The procurement document — published in redacted form on an online clearinghouse for government contracts but reviewed without redactions by The Intercept — begins by describing the Army’s need to “mitigate online threats” as well as identify “positive or negative sentiment” about senior Pentagon officials.

“This is an ongoing PSIFO/PIB” — Protective Services Field Office/Protective Intelligence Branch — “requirement to provide global protective services for senior Department of Defense (DoD) officials, adequate security in order to mitigate online threats (direct, indirect, and veiled), the identification of fraudulent accounts and positive or negative sentiment relating specifically to our senior high-risk personnel,” the document says.

The document goes on to describe the software the Army would acquire to deliver “a reliable social media threat mitigation service.” The document says, “The PSIFO/PIB needs an Open-Source Web based tool-kit with advanced capabilities to collect publicly available information.” The toolkit would “provide the anonymity and security needed to conduct publicly accessible information research through misattribution by curating user agent strings and using various egress points globally to mask their identity.”

The Army planned to use these tools not just to detect online “threats,” but also pinpoint their exact location by combining various surveillance techniques and data sources. 

The document cites access to Twitter’s “firehose,” which would grant the Army the ability to search public tweets and Twitter users without restriction, as well as analysis of 4Chan, Reddit, YouTube, and Vkontakte, a Facebook knockoff popular in Russia. Internet chat platforms like Discord and Telegram will also be scoured for the purpose of “identifying counterterrorism and counter-extremism and radicalization,” though it’s unclear what exactly those terms mean here.

The Army’s new toolkit goes far beyond social media surveillance of the type offered by private contractors like Dataminr, which helps police and military agencies detect perceived threats by scraping social media timelines and chatrooms for various keywords. Instead, Army Protective Services Battalion investigators would seemingly combine social media data with a broad variety of public and nonpublic information, all accessible through a “universal search selector.” 

These sources of information include “signal-rich discussions from illicit threat-actor communities and access to around-the-clock conversations within threat-actor channels,” public research, CCTV feeds, radio stations, news outlets, personal records, hacked information, webcams, and — perhaps most invasive — cellular location data. 

The document mentions the use of “geo-fenced” data as well, a controversial practice wherein an investigator draws a shape on a digital map to focus their surveillance of a specific area. While app-based smartphone tracking is a potent surveillance technique, it remains unclear how exactly this data might actually be used to unmask threatening social media posts, or what relevance other data categories like radio stations or academic research could possibly have.
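Conceptually, a geofenced query just keeps the location pings that fall inside a shape drawn on a map. The sketch below uses a standard ray-casting point-in-polygon test over invented coordinates; it is meant to show how little is involved, not to reproduce whatever geofencing tooling the Army contracted for.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    x, y = lon, lat
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        crosses = (yi > y) != (yj > y)
        if crosses and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A hypothetical fence "drawn" around a few city blocks.
fence = [
    (38.899, -77.040), (38.899, -77.030),
    (38.894, -77.030), (38.894, -77.040),
]

# Invented device pings: (device ID, lat, lon).
pings = [
    ("device-1", 38.897, -77.036),  # falls inside the fence
    ("device-2", 38.880, -77.050),  # falls outside the fence
]

inside_fence = [d for d, lat, lon in pings if point_in_polygon(lat, lon, fence)]
print(inside_fence)  # ['device-1']
```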

The Army wasn’t just looking for surveillance software, but also tools to disguise the Army’s internet presence as it monitors the web.

The Army procurement document shows it wasn’t just looking for surveillance software, but also tools to disguise the Army’s internet presence as it monitors the web. The contract says the Army would use “misattribution”: deceiving others about who is actually behind the keyboard. The document says the Army would accomplish this through falsifying web browser information and by relaying Army internet traffic through servers located in foreign cities, obscuring its stateside origin. 

According to the document, “SEWP Solutions, LLC is the only vendor that allows USACID the ability to tunnel into specific countries/cities like Moscow, Russia or Beijing, China and come out on a host nation internet domain.”
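For a sense of what misattribution looks like in practice, the sketch below rotates a curated browser fingerprint and routes traffic through a foreign egress proxy, the two techniques the document names. The user-agent strings and proxy addresses are placeholders, and the code illustrates the general approach rather than the actual toolkit the Army purchased.

```python
import random
import requests

# Curated browser fingerprints; a real toolkit would rotate many more fields.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]

# Placeholder egress proxies. In the scheme described, these would be servers
# in other countries that traffic is tunneled through, so requests appear to
# originate from a host-nation internet domain.
EGRESS_PROXIES = [
    "http://proxy-moscow.example.net:8080",
    "http://proxy-beijing.example.net:8080",
]

def fetch_with_misattribution(url: str) -> requests.Response:
    """Fetch a page while disguising both the browser and the network origin."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    proxy = random.choice(EGRESS_PROXIES)
    return requests.get(
        url, headers=headers, proxies={"http": proxy, "https": proxy}, timeout=30
    )

# Example call (only works with real, reachable proxies):
# resp = fetch_with_misattribution("https://example.com/some/forum/thread")
```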

The data used by the toolkit all falls under the rubric of “PAI,” or publicly available information, a misnomer that often describes not only what is freely available to the public, but also commercially purchased private information bought and sold by a wide constellation of shadowy surveillance firms and data brokers. Location data gleaned from smartphone apps and resold by the unregulated mobile ad industry provides nearly anyone — including the Army, it appears — with an effortless, unaccountable means of tracking the phone-owning public’s movements with pinpoint accuracy, both in the U.S. and abroad.

A recently declassified report from the Office of the Director of National Intelligence outlines the dramatic and invasive surveillance efforts conducted by the U.S. government through the purchase of data collected in the private sector. Through contracts with private entities, the government has skirted laws enshrining due process, allowing federal agencies to collect cellular data on millions of Americans without warrants or judicial oversight.

While the procurement document doesn’t name a specific product, it does show that the contract was awarded to SEWP Solutions, LLC. SEWP is a federal software vendor that has repeatedly sold the Department of Defense a suite of surveillance tools that closely matches what’s described in the Army project. This suite, marketed under the oddly named Berber Hunter Tool Kit, is a collection of surveillance tools by different firms bundled together by ECS Federal, a major federal software vendor. ECS and three other federal contractors jointly own SEWP, which resells Berber Hunter.

ECS also sells a PAI toolkit under the brand name Argos, whose three main features listed on the ECS website all feature prominently in the Army contracting document. It is unclear if Argos is a rebrand of the Berber Hunter suite, or a new offering. (Neither ECS nor SEWP responded to a request for comment.)

Job listings and contracting documents provide a rough sketch of what’s included in Berber Hunter. According to one job post, the suite includes software made by Babel Street, a controversial broker of personal information and location data, along with so-called open-source intelligence tools sold by Echosec and Zignal Labs. Last year, Echosec was purchased by Flashpoint Intel, an intelligence contractor that reportedly boasted of work to thwart protests and infiltrate private chat rooms. 

A 2022 FBI procurement memo obtained by the researcher Jack Poulson and reviewed by The Intercept mentions the bureau’s use of Flashpoint tools, with descriptions that resemble what the Army says in the procurement document about the monitoring of “extremist” chat rooms.

“In relation to extremist forums, Flashpoint has maintained misattributable personas for years on these platforms,” the FBI memo says. “Through these personas, Flashpoint has captured and scraped the contents of these forums.” The memo noted that the FBI “does not want to advertise they are seeking this type of data collection.”

According to the Protective Services Battalion document, the Army also does not want to advertise its interest in broad data collection. The redacted copy of the contract document, while public, is marked as CUI, for “Controlled Unclassified Information,” and FEDCON, meant for federal employees and contractors only.

“Left unregulated, open-source intelligence could lead to the kind of abuses observed in other forms of covert surveillance operations,” said Siatitsa, of Privacy International. “The systematic collection, storage, and analysis of information posted online by law enforcement and governmental agencies constitutes a serious interference with the right to respect for private life.”

The post Pentagon’s Secret Service Trawls Social Media for Mean Tweets About Generals appeared first on The Intercept.

]]>
https://theintercept.com/2023/06/17/army-surveillance-social-media/feed/ 0
<![CDATA[Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest]]> https://theintercept.com/2023/06/13/jordan-world-bank-poverty-algorithm/ https://theintercept.com/2023/06/13/jordan-world-bank-poverty-algorithm/#respond Tue, 13 Jun 2023 15:42:46 +0000 https://theintercept.com/?p=431206 The algorithm used for the cash relief program is broken, a Human Rights Watch report found.

The post Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest appeared first on The Intercept.

]]>
A program spearheaded by the World Bank that uses algorithmic decision-making to means-test poverty relief money is failing the very people it’s intended to protect, according to a new report by Human Rights Watch. The anti-poverty program in question, known as the Unified Cash Transfer Program, was put in place by the Jordanian government.

Having software systems make important choices is often billed as a means of making those choices more rational, fair, and effective. In the case of the poverty relief program, however, the Human Rights Watch investigation found the algorithm relies on stereotypes and faulty assumptions about poverty.

“Its formula also flattens the economic complexity of people’s lives into a crude ranking.”

“The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” the report found. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

The program, known in Jordan as Takaful, is meant to solve a real problem: The World Bank provided the Jordanian state with a multibillion-dollar poverty relief loan, but it’s impossible for the loan to cover all of Jordan’s needs.  

Without enough cash to cut every needy Jordanian a check, Takaful works by analyzing the household income and expenses of every applicant, along with nearly 60 socioeconomic factors like electricity use, car ownership, business licenses, employment history, illness, and gender. These responses are then ranked — using a secret algorithm — to automatically determine which applicants are poorest and most deserving of relief. The idea is that such a sorting algorithm would direct cash to the Jordanians in the most dire need of it. According to Human Rights Watch, the algorithm is broken.
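The ranking step itself is, in principle, simple arithmetic: score each household against a set of weighted indicators and sort. The sketch below uses invented weights and households, since the real formula and its weights are secret, but it shows how easily a crude weighting can produce the distortions the report describes.

```python
# Invented indicator weights; the real Takaful weights are not public.
WEIGHTS = {
    "monthly_income": -1.0,    # higher income scores as less needy
    "electricity_kwh": -0.5,   # higher usage scores as less needy
    "owns_car": -50.0,         # car ownership is heavily penalized
    "household_size": 5.0,     # larger households score as needier
}

# Invented applicant households.
households = [
    {"id": "H1", "monthly_income": 220, "electricity_kwh": 310, "owns_car": 1, "household_size": 6},
    {"id": "H2", "monthly_income": 260, "electricity_kwh": 90, "owns_car": 0, "household_size": 3},
    {"id": "H3", "monthly_income": 150, "electricity_kwh": 400, "owns_car": 0, "household_size": 7},
]

def need_score(household):
    """Higher score means the household is ranked as needier under this toy formula."""
    return sum(WEIGHTS[k] * household[k] for k in WEIGHTS)

ranked = sorted(households, key=need_score, reverse=True)
for h in ranked:
    print(h["id"], round(need_score(h), 1))
# H2 -290.0
# H3 -315.0
# H1 -395.0
```

In this toy example, the lowest-income household (H3) ranks below a better-off neighbor purely because its electricity use is higher, the same kind of penalty Human Rights Watch found in Takaful.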

The rights group’s investigation found that car ownership seems to be a disqualifying factor for many Takaful applicants, even if they are too poor to buy gas to drive the car.

Similarly, applicants are penalized for using electricity and water based on the presumption that their ability to afford utility payments is evidence that they are not as destitute as those who can’t. The Human Rights Watch report, however, explains that sometimes electricity usage is high precisely for poverty-related reasons. “For example, a 2020 study of housing sustainability in Amman found that almost 75 percent of low-to-middle income households surveyed lived in apartments with poor thermal insulation, making them more expensive to heat.”

In other cases, one Jordanian household may be using more electricity than their neighbors because they are stuck with old, energy-inefficient home appliances.

Beyond the technical problems with Takaful itself are the knock-on effects of digital means-testing. The report notes that many people in dire need of relief money lack the internet access to even apply for it, requiring them to find, or pay for, a ride to an internet café, where they are subject to further fees and charges to get online.

“Who needs money?” asked one 29-year-old Jordanian Takaful recipient who spoke to Human Rights Watch. “The people who really don’t know how [to apply] or don’t have internet or computer access.”

Human Rights Watch also faulted Takaful’s insistence that applicants’ self-reported income match up exactly with their self-reported household expenses, which “fails to recognize how people struggle to make ends meet, or their reliance on credit, support from family, and other ad hoc measures to bridge the gap.”

The report found that the rigidity of this step forced people to simply fudge the numbers so that their applications would even be processed, undermining the algorithm’s illusion of objectivity. “Forcing people to mold their hardships to fit the algorithm’s calculus of need,” the report said, “undermines Takaful’s targeting accuracy, and claims by the government and the World Bank that this is the most effective way to maximize limited resources.”

The report, based on 70 interviews with Takaful applicants, Jordanian government workers, and World Bank personnel, emphasizes that the system is part of a broader trend by the World Bank to popularize algorithmically means-tested social benefits over universal programs throughout the developing economies in the so-called Global South.

Compounding the dysfunction of an algorithmic program like Takaful is the increasingly common, naïve assumption that automated decision-making software is so sophisticated that its results are less likely to be faulty. Just as dazzled ChatGPT users often accept nonsense outputs from the chatbot because the concept of a convincing chatbot is so inherently impressive, artificial intelligence ethicists warn that the veneer of automated intelligence surrounding automated welfare distribution leads to a similar myopia.

The Jordanian government’s official statement to Human Rights Watch defending Takaful’s underlying technology provides a perfect example: “The methodology categorizes poor households to 10 layers, starting from the poorest to the least poor, then each layer includes 100 sub-layers, using statistical analysis. Thus, resulting in 1,000 readings that differentiate amongst households’ unique welfare status and needs.”

“These are technical words that don’t make any sense together.”

When Human Rights Watch asked the Distributed AI Research Institute to review these remarks, Alex Hanna, the group’s director of research, concluded, “These are technical words that don’t make any sense together.” DAIR senior researcher Nyalleng Moorosi added, “I think they are using this language as technical obfuscation.”

As is the case with virtually all automated decision-making systems, while the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though it’s known Takaful uses 57 different criteria to rank poorness, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

While fantastical visions of “Terminator”-like artificial intelligences have come to dominate public fears around automated decision-making, other technologists argue civil society ought to focus on real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction.

So long as the functionality of Takaful and its ilk remain government and corporate secrets, the extent of those risks will remain unknown.

The post Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest appeared first on The Intercept.

]]>
https://theintercept.com/2023/06/13/jordan-world-bank-poverty-algorithm/feed/ 0
<![CDATA[U.S. Marshals Spied on Abortion Protesters Using Dataminr]]> https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/ https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/#respond Mon, 15 May 2023 10:00:39 +0000 https://theintercept.com/?p=427574 Twitter’s “official partner” monitored the precise time and location of post-Roe demonstrations, internal emails show.

The post U.S. Marshals Spied on Abortion Protesters Using Dataminr appeared first on The Intercept.

]]>
Dataminr, an “official partner” of Twitter, alerted a federal law enforcement agency to pro-abortion protests and rallies in the wake of the reversal of Roe v. Wade, according to documents obtained by The Intercept through a Freedom of Information Act request.

Internal emails show that the U.S. Marshals Service received regular alerts from Dataminr, a company that persistently monitors social media for corporate and government clients, about the precise time and location of both ongoing and planned abortion rights demonstrations. The emails show that Dataminr flagged the social media posts of protest organizers, participants, and bystanders, and leveraged Dataminr’s privileged access to the so-called firehose of unrestricted Twitter data to monitor constitutionally protected speech.

“This is a technique that’s ripe for abuse, but it’s not subject to either legislative or judicial oversight,” said Jennifer Granick, an attorney with the American Civil Liberties Union’s Speech, Privacy, and Technology Project.

The data collection alone, however, can have a deleterious effect on free speech. Mary Pat Dwyer, the academic program director of the Institute for Technology Law and Policy at Georgetown University, told The Intercept, “The more it’s made public that law enforcement is gathering up this info broadly about U.S. residents and citizens, it has a chilling effect on whether people are willing to express themselves and attend protests and plan protests.”

The documents obtained by The Intercept are from April to July 2022, during a period of seismic news from the Supreme Court. Following the leak of a draft decision that the court would overturn Roe v. Wade, the cornerstone of reproductive rights in the U.S., pro-abortion advocates staged massive protests and rallies across the country. This was not the first time Dataminr helped law enforcement agencies monitor mass demonstrations in the wake of political outcry: In 2020, The Intercept reported that the company had surveilled Black Lives Matter protests for the Minneapolis Police Department following the murder of George Floyd.

The Marshals Service’s social media surveillance ingested Roe-related posts nearly as soon as they began to appear. In a typical alert, a Dataminr analyst wrote a caption summarizing the social media data in question, with a link to the original post. On May 3, 2022, the day after Politico’s explosive report on the draft decision, New York-based artist Alex Remnick tweeted about a protest planned later that day in Foley Square, a small park in downtown Manhattan surrounded by local and federal government buildings. Dataminr quickly forwarded their tweet to the Marshals. That evening, Dataminr continued to relay information about the Foley Square rally, now in full swing, with alerts like “protestors block nearby streets near Foley Square,” as well as photos of demonstrators, all gleaned from Twitter.

The following week, Dataminr alerted the Marshals when pro-abortion demonstrators assembled at the Basilica of St. Patrick’s Old Cathedral in Manhattan, coinciding with a regular anti-abortion event held by the church. Between 9:06 and 9:53 that morning, the Marshals received five separate updates on the St. Patrick’s protest, including an estimated number of attendees, again based on the posts of unwitting Twitter users.

In the weeks and months that followed, the emails show that Dataminr tipped off the Marshals to dozens of protests, including many pro-abortion gatherings, from Maine to Wisconsin to Virginia, both before and during the demonstrations. Untold other protests, rallies, and exercises of the First Amendment may have been monitored by the company; in response to The Intercept’s public records request, the Marshals Service identified nearly 5,000 pages of relevant documents but only shared about 800 pages. The U.S. Marshals Service did not respond to a request for comment.

The documents obtained by The Intercept are email digests of social media activity that triggered alerts based on requested search terms, which appear at the bottom of the reports. The subscribed topics have ambiguous names like “SCOTUS Mentions,” “Federal Courthouses and Personnel Hazards_V2,” “Public Safety Critical Events,” “Attorneys,” and “Officials.” The lists suggest that the Marshals were not specifically seeking information on abortion rallies; rather, the agency had cast such a broad surveillance net that large volumes of innocuous First Amendment-protected activity regularly got swept up as potential security threats. What the Marshals did with the information Dataminr collected remains unknown.
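Mechanically, alerting on broad topic lists is little more than keyword matching against a stream of posts, which is part of why so much protected speech gets swept in. The sketch below is an invented approximation: the topic names echo categories from the emails, but the trigger terms and matching logic are not Dataminr’s.

```python
# Topic lists with broad trigger terms, loosely modeled on the category names
# in the emails; the actual terms behind Dataminr's topics are not public.
TOPICS = {
    "SCOTUS Mentions": ["scotus", "supreme court"],
    "Federal Courthouses and Personnel Hazards_V2": ["courthouse", "foley square"],
}

# Invented posts standing in for the firehose of public social media activity.
posts = [
    "Rally tonight at Foley Square to protest the draft decision",
    "The Supreme Court ruling is a disgrace",
    "Great burrito truck parked outside the courthouse today",
]

def matching_topics(post: str) -> list[str]:
    """Return every subscribed topic whose trigger terms appear in the post."""
    text = post.lower()
    return [topic for topic, terms in TOPICS.items() if any(t in text for t in terms)]

for post in posts:
    hits = matching_topics(post)
    if hits:
        # Every hit becomes an "alert," even though none of these posts
        # describes a threat; the breadth of the terms does the sweeping.
        print(f"ALERT {hits}: {post}")
```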

“The breadth of these search categories and terms is definitely going to loop in political speech. It’s a certainty,” Granick told The Intercept. “It’s a reckless indifference to the fact that you’re going to end up spying on core constitutionally protected political activity.”

Pro-abortion and anti-abortion supporters confronted each other on Mott Street between the Basilica of St. Patrick’s Old Cathedral and Planned Parenthood in New York City on June 4, 2022.
Photo: Lev Radin/Sipa via AP

The oldest law enforcement agency in the U.S., the Marshals are a niche holdover of early American policing, immortalized in cowboy movies and tales of the Wild West. Today, the Marshals Service retains a unique mission among federal agencies, consisting largely of transporting prisoners, hunting fugitives, and ensuring the safety of federal courts and judicial staff.

While some of the Dataminr alerts aligned with this mission, such as informing the Marshals of protests near courthouses or judges’ homes, others monitored protests in locations without any ostensible relation to the judiciary. The Basilica of St. Patrick’s Old Cathedral is well over a mile from the nearest courthouse and surrounded by trendy cafes and boutiques. Brooklyn’s Barclays Center, a sports and performance venue where a protest organized on Facebook was flagged by Dataminr on May 3, 2022, is nearly a mile from the closest courthouse.

The Marshals’ broad use of social media surveillance is not the first instance of its apparent mission creep in recent years: In 2021, The Intercept reported that a drone operated by the Marshals had spied on Black Lives Matter protests in Washington, D.C.

As an attorney who frequents courthouses, including during protests, Granick rejected the notion that a political rally is a security threat by dint of its proximity to a judiciary building.

“I would say that a tiny, tiny, tiny fraction of protests at courthouses pose any kind of risk of either property damage or personal injury,” she said. “And there’s really no reason to gather information on who is going to that protest, or what their other political views are, or how they’re communicating with other people who also believe in that cause.”

Dataminr sent a regular volley of alerts about planned and ongoing protests at or near the homes of conservative Supreme Court Justices Clarence Thomas, Brett Kavanaugh, and Amy Coney Barrett. On June 24, 2022, Dataminr sent the Marshals an alert that read, “Protest planned for 18:30 at CVS on 5700 Burke Centre Parkway in Burke, VA to travel to residence of US Supreme Court Justice Thomas.” Follow-up alerts noted the protesters were “at entrance to subdivision of neighborhood where US Supreme Court Justice Thomas lives.” A third alert included that the Marshals were already at the protest; it’s unclear why the agency would need to monitor discussion of an event where its marshals were already present.

Only a small fraction of the alerts reviewed by The Intercept include content that could plausibly be construed as threatening, and even those seem to lack any specificity that would make them useful to a federal agency. On May 3, 2022, Dataminr flagged a tweet that read “WE’RE COMING FOR YOU PLANNED PARENTHOOD.” A week later, another tweet exhorted followers to “[b]urn down anti abortion orgs, kick in extremist churches and smash the homes of the oppressors.”

“There’s an assumption underlying this that someone who complains on Twitter is more dangerous than someone who doesn’t complain on Twitter.”

The following month, Dataminr reported two tweets to the Marshals that appeared to be more hyperbolic fantasies than credible threats. One user tweeted that they would pay to watch the Supreme Court justices who overturned Roe burn alive, while another cited an individual who tweeted, “I’m not not advocating for burning down buildings. But trauma and destruction is kind of the thing that I love.”

At other times, Dataminr seemed incapable of distinguishing between slang and violence. Among several tweets about the 2022 Met Gala inexplicably flagged by Dataminr, the Marshals Service was alerted to a fan account of the actor Timothée Chalamet that tweeted, “i would destroy the met gala” — an online colloquialism for something akin to stealing the show.

These alerts show that despite the claims in its marketing materials, Dataminr isn’t necessarily in the business of public safety, but rather bulk, automated scrutiny. Given the generally incendiary, keyed-up nature of social media speech, a vast number of people might potentially be treated with suspicion by police in the total absence of a criminal act.

“There’s an assumption underlying this that someone who complains on Twitter is more dangerous than someone who doesn’t complain on Twitter,” Granick said. “Inevitably, you have people making decisions about what anger is legitimate and what anger is not.”

A U.S. Marshal patrols outside the home of Supreme Court Justice Brett Kavanaugh in Chevy Chase, Md., on June 8, 2022.
Photo: Jacquelyn Martin/AP

Aside from alerts about protests near judges’ homes or courthouses, many of the Dataminr notices appear to have no relevance to American law enforcement. Emails reviewed by The Intercept show that Dataminr alerted the Marshals to social media chatter about Saudi airstrikes in Yemen, attacks in Syria using improvised explosive devices, and political protests in Argentina.

Dataminr represents itself as a “real-time AI platform,” but company sources have previously told The Intercept that this is largely a marketing feint and that human analysts conduct the bulk of platform surveillance, scouring the web for posts they think their clients want to see.

Nonetheless, Dataminr is armed with one technological advantage: the Twitter firehose. For companies willing to pay for it, Twitter’s firehose program provides unfettered access to the entirety of the social network and the ability to automatically comb every tweet, topic, and photo in real time.

The Marshals Service emails also show the extent to which Dataminr is drinking from far more than the Twitter firehose. The emails indicate that the agency is notified when internet users merely mention certain political figures, namely judges and state attorneys general, on Telegram channels or in the comments of news articles.

Although most of the Dataminr alerts don’t include the text of the original posts, those that do often flag innocuous content across the political spectrum, including hundreds of mundane comments from blogs and news websites. In July, for instance, Dataminr reported to the Marshals web comments calling New York Attorney General Letitia James a “racist;” a user saying, “God Bless Gov. Youngkin,” referring to the Virginia governor; and another comment arguing that “Trump wants to hide out in the Oval Office from the responsibility and any accountability for what he did on January 6th and before.” When Ohio Attorney General Dave Yost made national headlines after suggesting that reports of a 10-year-old rape victim denied an abortion may have been fabricated, the Marshals received dozens of alerts about blog comments debating his words.

In some cases, Dataminr appeared incapable of differentiating between people with the same name. On May 18, the Marshals received an alert that “New Jersey District Court Magistrate Judge Jessica S. Allen” was mentioned in a Telegram channel used to organize an anti-Covid lockdown rally in Australia. The text in question appears to be automated, semicoherent spam: “I’ve been a victim of scam, was scared of getting scammed again, but somehow I managed to squeeze out some couple of dollars and I invested with Jessica Allen, damn to my surprise I got my profit within 2 hours.”

Even those sharing links to articles without any added commentary on Telegram fell under Dataminr scrutiny. When one Telegram user shared a July 4, 2022, story from The Hill about Kentucky Attorney General Daniel Cameron’s request that the Supreme Court put the state’s abortion ban back in place, it was flagged to the U.S. Marshals within an hour.

“Discussions of how people view political officials governing them, discussions of constitutional rights, planning protests — that’s supposed to be the most protected speech,” Georgetown’s Dwyer said. “And here you have it being swept up and provided to law enforcement.”

At the time the Marshals received the alerts obtained by The Intercept, Dataminr was listed as an “official partner” on Twitter’s website. Since Elon Musk acquired Twitter in October 2022, the company’s partnership with the social media site has continued. Despite his fury against people who might track the location of his private jet, Musk does not appear to have similar misgivings about furnishing federal police with the precise real-time locations of peaceful protesters.

Twitter’s longtime policy forbids third parties from “conducting or providing surveillance or gathering intelligence” or “monitoring sensitive events (including but not limited to protests, rallies, or community organizing meetings).” When asked how Dataminr’s surveillance of protests using Twitter could be compatible with the policy banning the surveillance of protests, Dataminr spokesperson Georgia Walker said in a statement:

Dataminr supports all public sector clients with a product called First Alert which was specifically developed with input from Twitter, and fully complies with Twitter’s policies and the policies of all our data providers. First Alert delivers breaking news alerts enabling first responders to respond more quickly to public safety emergencies. First Alert is not permitted to be used for surveillance of any kind by First Alert users. First Alert provides a public good while ensuring maximum protections for privacy and civil liberties.

Both Twitter, which no longer has a communications team in the Musk era, and Dataminr have denied that the persistent real-time monitoring of the platform on behalf of police constitutes “surveillance” because the posts are public. Civil libertarians and scholars of state surveillance generally reject their argument, noting that other forms of surveillance routinely occur in public spaces — security cameras pointed at the sidewalk, for instance — and that Dataminr is surfacing posts that would likely be hard for police to find through a manual search.

“There is a world of difference between reading through some public tweets and having a service which indexes, stores, aggregates, and makes that information searchable.”

“There is a world of difference between reading through some public tweets and having a service which indexes, stores, aggregates, and makes that information searchable,” Granick said. As is typical with surveillance tools, police are inclined to use Dataminr not necessarily because it’s effective in thwarting or solving crimes, she said, but because it’s easy and relatively cheap. Receiving a constant flow of alerts from Dataminr creates the appearance of intelligence-gathering without any clear objective or actual intelligence.

In the absence of automated tools like Dataminr, police would have to make choices about how to use their finite time to sift through the vastness of social media platforms, which would likely result in more focus on actual criminality instead of harmless political chatter.

“What this technology does is it liberates law enforcement from having to make that economic calculation and enables them to do both,” Granick explained. “And then once the technology does that, in the absence of any kind of regulation, there’s insufficient disincentive to stop them from doing it.”

Following January 6, 2021, lawmakers questioned why police were blindsided by the storming of the U.S. Capitol even though it was openly planned online. There were calls to bolster the government’s ability to monitor social media, calls sounded again in the wake of the recent leak of classified intelligence documents on Discord. Those calls, however, ignore the vast scale of social media surveillance already taking place, surveillance that failed to stop either of those apparent blows to state security.

While Dataminr and its many competitors stand to profit immensely from more government agencies buying these tools, they have little to say about how they’ll avoid generating even more noise in search of signal.

“Collecting more hay,” Granick said, “doesn’t help you find the needle.”

Correction: May 16, 2023
This story has been updated to use Alex Remnick’s correct pronoun.

The post U.S. Marshals Spied on Abortion Protesters Using Dataminr appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/15/abortion-surveillance-dataminr/feed/
Photo captions: Pro-choice and pro-life supporters confronted each other on Mott Street between St. Patrick’s Old Cathedral and Planned Parenthood in New York on June 4, 2022. A U.S. Marshal patrols outside the home of Supreme Court Justice Brett Kavanaugh in Chevy Chase, Md., on June 8, 2022.
<![CDATA[Can the Pentagon Use ChatGPT? OpenAI Won’t Answer.]]> https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/ https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/#respond Mon, 08 May 2023 10:00:56 +0000 https://theintercept.com/?p=427162 The AI company is silent on ChatGPT’s use by a military intelligence agency despite an explicit ban in its ethics policy.

The post Can the Pentagon Use ChatGPT? OpenAI Won’t Answer. appeared first on The Intercept.

]]>
As automated text generators have rapidly, dazzlingly advanced from fantasy to novelty to genuine tool, they are starting to reach the inevitable next phase: weapon. The Pentagon and intelligence agencies are openly planning to use tools like ChatGPT to advance their mission — but the company behind the mega-popular chatbot is silent.

OpenAI, the nearly $30 billion R&D titan behind ChatGPT, provides a public list of ethical lines it will not cross, business it will not pursue no matter how lucrative, on the grounds that it could harm humanity. Among many forbidden use cases, OpenAI says it has preemptively ruled out military and other “high risk” government applications. Like its rivals, Google and Microsoft, OpenAI is eager to declare its lofty values but unwilling to earnestly discuss what these purported values mean in practice, or how — or even if — they’d be enforced.

“If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves.”

AI policy experts who spoke to The Intercept say the company’s silence reveals the inherent weakness of self-regulation, allowing firms like OpenAI to appear principled to an AI-nervous public as they develop a powerful technology, the magnitude of which is still unclear. “If there’s one thing to take away from what you’re looking at here, it’s the weakness of leaving it to companies to police themselves,” said Sarah Myers West, managing director of the AI Now Institute and former AI adviser to the Federal Trade Commission.

The question of whether OpenAI will allow the militarization of its tech is not an academic one. On March 8, the Intelligence and National Security Alliance gathered in northern Virginia for its annual conference on emerging technologies. The confab brought together attendees from both the private sector and government — namely the Pentagon and neighboring spy agencies — eager to hear how the U.S. security apparatus might join corporations around the world in quickly adopting machine-learning techniques. During a Q&A session, the National Geospatial-Intelligence Agency’s associate director for capabilities, Phillip Chudoba, was asked how his office might leverage AI. He responded at length:

We’re all looking at ChatGPT and, and how that’s kind of maturing as a useful and scary technology. … Our expectation is that … we’re going to evolve into a place where we kind of have a collision of you know, GEOINT, AI, ML and analytic AI/ML and some of that ChatGPT sort of stuff that will really be able to predict things that a human analyst, you know, perhaps hasn’t thought of, perhaps due to experience, or exposure, and so forth.

Stripping away the jargon, Chudoba’s vision is clear: using the predictive text capabilities of ChatGPT (or something like it) to aid human analysts in interpreting the world. The National Geospatial-Intelligence Agency, or NGA, a relatively obscure outfit compared to its three-letter siblings, is the nation’s premier handler of geospatial intelligence, often referred to as GEOINT. This practice involves crunching a great multitude of geographic information — maps, satellite photos, weather data, and the like — to give the military and spy agencies an accurate picture of what’s happening on Earth. “Anyone who sails a U.S. ship, flies a U.S. aircraft, makes national policy decisions, fights wars, locates targets, responds to natural disasters, or even navigates with a cellphone relies on NGA,” the agency boasts on its site. On April 14, the Washington Post reported the findings of NGA documents that detailed the surveillance capabilities of Chinese high-altitude balloons that had caused an international incident earlier this year.

Forbidden Uses

But Chudoba’s AI-augmented GEOINT ambitions are complicated by the fact that the creator of the technology in question has seemingly already banned exactly this application: Both “Military and warfare” and “high risk government decision-making” applications are explicitly forbidden, according to OpenAI’s “Usage policies” page. “If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes,” the policy reads. “Repeated or serious violations may result in further action, including suspending or terminating your account.”

By industry standards, it’s a remarkably strong, clear document, one that appears to swear off the bottomless pit of defense money available to less scrupulous contractors, and would appear to be a pretty cut-and-dry prohibition against exactly what Chudoba is imagining for the intelligence community. It’s difficult to imagine how an agency that keeps tabs on North Korean missile capabilities and served as a “silent partner” in the invasion of Iraq, according to the Department of Defense, is not the very definition of high-risk military decision-making.

While the NGA and fellow intel agencies seeking to join the AI craze may ultimately pursue contracts with other firms, for the time being few OpenAI competitors have the resources required to build something like GPT-4, the large language model that underpins ChatGPT. Chudoba’s namecheck of ChatGPT raises a vital question: Would the company take the money? As clear-cut as OpenAI’s prohibition against using ChatGPT for crunching foreign intelligence may seem, the company refuses to say so. OpenAI CEO Sam Altman referred The Intercept to company spokesperson Alex Beck, who would not comment on Chudoba’s remarks or answer any questions. When asked about how OpenAI would enforce its use policy in this case, Beck responded with a link to the policy itself and declined to comment further.

“I think their unwillingness to even engage on the question should be deeply concerning,” Myers West of the AI Now Institute told The Intercept. “I think it certainly runs counter to everything that they’ve told the public about the ways that they’re concerned about these risks, as though they are really acting in the public interest. If when you get into the details, if they’re not willing to be forthcoming about these kinds of potential harms, then it shows sort of the flimsiness of that stance.”

Public Relations

Even the tech sector’s clearest-stated ethics principles have routinely proven to be an exercise in public relations and little else: Twitter simultaneously forbids using its platform for surveillance while directly enabling it, and Google sells AI services to the Israeli Ministry of Defense while its official “AI principles” prohibit applications “that cause or are likely to cause overall harm” and “whose purpose contravenes widely accepted principles of international law and human rights.” Microsoft’s public ethics policies note a “commitment to mitigating climate change” while the company helps Exxon Mobil analyze oil field data, and similarly professes a “commitment to vulnerable groups” while selling surveillance tools to American police.

It’s an issue OpenAI won’t be able to dodge forever: The data-laden Pentagon is increasingly enamored with machine learning, so ChatGPT and its ilk are obviously desirable. The day before Chudoba was talking AI in Arlington, Kimberly Sablon, principal director for trusted AI and autonomy in the Office of the Undersecretary of Defense for Research and Engineering, told a conference in Hawaii, “There’s a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department,” National Defense Magazine reported last month. In February, CIA Director of Artificial Intelligence Lakshmi Raman told the Potomac Officers Club, “Honestly, we’ve seen the excitement in the public space around ChatGPT. It’s certainly an inflection point in this technology, and we definitely need to [be exploring] ways in which we can leverage new and upcoming technologies.”

Steven Aftergood, a scholar of government secrecy and longtime intelligence community observer with the Federation of American Scientists, explained why Chudoba’s plan makes sense for the agency. “NGA is swamped with worldwide geospatial information on a daily basis that is more than an army of human analysts could deal with,” he told The Intercept. “To the extent that the initial data evaluation process can be automated or assigned to quasi-intelligent machines, humans could be freed up to deal with matters of particular urgency. But what is suggested here is that AI could do more than that and that it could identify issues that human analysts would miss.” Aftergood said he doubted an interest in ChatGPT had anything to do with its highly popular chatbot abilities, but in the underlying machine learning model’s potential to sift through massive datasets and draw inferences. “It will be interesting, and a little scary, to see how that works out,” he added.

The Pentagon seen from above in Washington, D.C., on May 25, 2016.
Photo: U.S. Army

Persuasive Nonsense

One reason it’s scary is because while tools like ChatGPT can near-instantly mimic the writing of a human, the underlying technology has earned a reputation for stumbling over basic facts and generating plausible-seeming but entirely bogus responses. This tendency to confidently and persuasively churn out nonsense — a chatbot phenomenon known as “hallucinating” — could pose a problem for hard-nosed intelligence analysts. It’s one thing for ChatGPT to fib about the best places to get lunch in Cincinnati, and another matter to fabricate meaningful patterns from satellite images over Iran. On top of that, text-generating tools like ChatGPT generally lack the ability to explain exactly how and why they produced their outputs; even the most clueless human analyst can attempt to explain how they reached their conclusion.

Lucy Suchman, a professor emerita of anthropology and militarized technology at Lancaster University, told The Intercept that feeding a ChatGPT-like system brand new information about the world represents a further obstacle. “Current [large language models] like those that power ChatGPT are effectively closed worlds of already digitized data; famously the data scraped for ChatGPT ends in 2021,” Suchman explained. “And we know that rapid retraining of models is an unsolved problem. So the question of how LLMs would incorporate continually updated real time data, particularly in the rapidly changing and always chaotic conditions of war fighting, seems like a big one. That’s not even to get into all of the problems of stereotyping, profiling, and ill-informed targeting that plague current data-driven military intelligence.”
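To make that limitation concrete, here is a minimal sketch, assuming the openai Python package’s chat-completions interface as it existed in early 2023; the API key and the prompt are placeholders, not anything the NGA or OpenAI is known to have deployed. A model whose training data stops at a fixed cutoff and that has no access to live feeds can only answer such a question by refusing or by guessing fluently:

```python
# Minimal sketch, assuming the openai package's early-2023 chat-completions
# interface. The key and prompt are hypothetical placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical credential

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Summarize enemy troop movements observed over the past 24 hours.",
    }],
)

# The model cannot see satellite imagery or live sensor data, and its training
# data ends at a fixed cutoff, so the reply will be either a refusal or a
# fluent, unverifiable guess: the "hallucination" problem in an intelligence setting.
print(response["choices"][0]["message"]["content"])
```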

OpenAI’s unwillingness to rule out the NGA as a future customer makes good business sense, at least. Government work, particularly of the national security flavor, is exceedingly lucrative for tech firms: In 2020, Amazon Web Services, Google, Microsoft, IBM, and Oracle landed a CIA contract reportedly worth tens of billions of dollars over its lifetime. Microsoft, which has invested a reported $13 billion into OpenAI and is quickly integrating the smaller company’s machine-learning capabilities into its own products, has earned tens of billions in defense and intelligence work on its own. Microsoft declined to comment.

But OpenAI knows this work is highly controversial, potentially both with its staff and the broader public. OpenAI is currently enjoying a global reputation for its dazzling machine-learning tools and toys, a gleaming public image that could be quickly soiled by partnering with the Pentagon. “OpenAI’s righteous presentations of itself are consistent with recent waves of ethics-washing in relation to AI,” Suchman noted. “Ethics guidelines set up what my UK friends call ‘hostages to fortune,’ or things you say that may come back to bite you.” Suchman added, “Their inability even to deal with press queries like yours suggests that they’re ill-prepared to be accountable for their own policy.”

The post Can the Pentagon Use ChatGPT? OpenAI Won’t Answer. appeared first on The Intercept.

]]>
https://theintercept.com/2023/05/08/chatgpt-ai-pentagon-military/feed/