Weekly Roundup: AI-Generated CSAM Seems Unsolvable
And emerging techno-competition before and after kinetic conflict
Child Sexual Abuse Material (CSAM) is one of the more disturbing problems exacerbated by AI. Criminals are using AI tools to generate CSAM and bespoke applications to “nudify” photos of real children. These “nudify” apps have been advertised on Instagram and offered in the Apple App Store (although they have since been removed).
The National Center for Missing & Exploited Children (NCMEC), the nation’s main non-profit tackling the problem, is overwhelmed with investigations. Aside from funding shortfalls, staffing shortages, and technical limitations, legal restrictions complicate its job.
NCMEC is the only entity outside of law enforcement authorized to work with child exploitation content. It isn’t a law-enforcement agency, though the majority of its funding comes from the Justice Department. It isn’t a simple nonprofit, either, as federal courts have held that NCMEC can be considered an arm of the government if it goes beyond passively receiving reports, most of which come from internet platforms.
The rigid American legal infrastructure around online child-exploitation material has led many companies to limit their efforts to detect harmful imagery for fear of breaking the law themselves.
NCMEC’s efforts to improve report quality are also complicated by concerns that asking companies to provide more data might violate the Fourth Amendment.
Policy solutions are not easy. There are laws for criminal accountability (the UK just banned one convicted sex offender from using AI at all), but punishment after the fact is hardly a preventative measure given the amount of CSAM already on the internet. In a great explanatory piece, The Guardian breaks down the different points in the AI-CSAM chain where the problem can be addressed:
Training Data. Preventing CSAM from entering training data is the obvious solution, but with such a vast number of images, it is simply impossible to scan every single one. Additionally, using AI to scan for CSAM means the scanning model itself must be trained on CSAM, or it might not recognize the material. (See the first sketch after this list.)
AI-in-the-loop. Larger generative AI platforms use their proprietary models to prevent generation in the first place, “limiting which requests can be sent and filtering generated images before they are sent to the end user. AI safety experts say this is a less fragile way of approaching the problem than solely relying on a system that has been trained never to create such images.” (See the second sketch after this list.)
Banning Apps. “In the short term, the focus of the proposed bans is largely on purpose-built tools. A policy paper co-authored by Innes suggested taking action only against the creators and hosts of single-purpose ‘nudification’ tools.”
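On the training-data point: the alternative to scanning every image with a trained model is the hash-matching approach platforms already use for known material, checking files against lists of previously identified images. A minimal sketch of that filter, assuming a hypothetical hash-list file and directory layout; real systems use perceptual hashes such as PhotoDNA that survive resizing and re-encoding, where the exact digest shown here only matches byte-identical files:

```python
# Sketch: filter a training corpus against a list of known-bad image hashes.
# "hash_list.txt" and the directory layout are hypothetical stand-ins for the
# hash lists that clearinghouses distribute to platforms.
import hashlib
from pathlib import Path

def load_hash_list(path: str) -> set[str]:
    """Load one lowercase hex digest per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def filter_training_images(image_dir: str, known_bad: set[str]) -> list[Path]:
    """Keep only images whose digests are absent from the known-bad list."""
    kept = []
    for path in Path(image_dir).rglob("*.jpg"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in known_bad:  # drop known material rather than train on it
            kept.append(path)
    return kept

known_bad = load_hash_list("hash_list.txt")
training_set = filter_training_images("images/", known_bad)
```

The limitation is visible in the code: nothing here recognizes novel or AI-generated imagery that has never been hashed, which is exactly the gap the article describes.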
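On the AI-in-the-loop point: the quoted approach wraps two independent checks around the generator, one on the request and one on the output, rather than trusting the model’s own training. A sketch of that control flow, with the generator and both classifiers left as hypothetical placeholders for a platform’s proprietary systems:

```python
# Sketch: the two-checkpoint guardrail pattern described in the quote above.
# The generate, prompt_is_allowed, and image_is_allowed callables are
# hypothetical stand-ins for a platform's proprietary models.
from typing import Callable, Optional

def generate_with_guardrails(
    prompt: str,
    generate: Callable[[str], bytes],
    prompt_is_allowed: Callable[[str], bool],
    image_is_allowed: Callable[[bytes], bool],
) -> Optional[bytes]:
    if not prompt_is_allowed(prompt):  # checkpoint 1: refuse the request
        return None
    image = generate(prompt)
    if not image_is_allowed(image):    # checkpoint 2: filter the output
        return None
    return image
```

Because the two checks are independent of the generative model itself, a request that slips past the first filter still has to survive the second, which is presumably why safety experts call this “less fragile” than relying on the model’s training alone.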
Tech-wars—after the “boom”
AI in warfare, a topic I have written about over the last few years, is officially a reality. A well-sourced and succinct War on the Rocks piece breaks down the many legal, moral, strategic, and tactical concerns of using AI on the battlefield, and this piece by Peter Singer briefly documents the varied success of AI in the Ukraine and Israel-Hamas wars.
“Ukraine’s front lines have become saturated with thousands of drones, including Kyiv’s new Saker Scout quadcopters that “can find, identify and attack 64 types of Russian ‘military objects’ on their own.” They are designed to operate without human oversight, unleashed to hunt in areas where Russian jamming prevents other drones from working.”
Israel uses a few systems. “‘The Gospel’ is an AI system that considers millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by air strikes and artillery. Another system, named Lavender, does the same for people, ingesting everything from cellphone use to WhatsApp group membership to set a ranking between 1 and 100 of likely Hamas membership. The top-ranked individuals are tracked by a system called ‘Where’s Daddy?’, which sends a signal when they return to their homes, where they can be bombed.”
Emerging technology is called emerging for a reason. Many have correctly pointed out that every level of command is adapting its doctrine through trial and error, and no one really understands how strategy and tactics will evolve in the wake of this new technology.
Technology supplied to Ukraine, Taiwan, and Israel that is a bit more on the traditional side, like kinetic air defenses, has seen more success, while other systems, such as Project Maven and Starlink, have shown limitations. Still others, like the Israeli tools mentioned above, have flagged so many targets (infrastructure and humans) for destruction that it raises the question of how much value they add in the first place.
and before the “boom”
When the shooting began in Ukraine in 2022, many U.S. companies left Russia and joined the effort to defend Ukraine. Two CSET researchers penned an op-ed summarizing their report, arguing that defense tech will have a much harder time doing the same with China.
As one of us argues in a recent report, several of the companies that came to Ukraine’s defense have much deeper economic ties to China than they did to Russia, and those connections could expose them to Chinese coercion. For example, Tesla reportedly manufactures over 50 percent of its electric vehicles in China, while Apple produces 95 percent of its hardware in the country. Both companies earn around 20 percent of their revenue there. Microsoft and Amazon conduct AI- and computer science-related research in China. These companies, along with Cloudflare, Google, Cisco, and Oracle, all of which supported Ukraine to some degree, have other business interests in China that could affect their decisions about supporting Taiwan in a conflict.
The U.S. has implemented policies to change these deep economic ties through various carrot-and-stick programs. One of the “carrots” is the CHIPS Act, which Chris Miller, author of “Chip War,” has called largely successful. Similar incentives and subsidies have brought green-energy production back to the U.S., a field China has dominated almost exclusively.
For the sticks, the U.S. has pressured allies to cut more technology exports to China and has worked with Mexico to stop incentivizing Chinese automakers’ production there. Export controls have forced the PRC to buy refurbished servers and cannibalize them for semiconductors. The Department of Justice (DOJ) proposed a new rule last week to prevent “data espionage,” something the PRC has conducted against Americans for years: covertly, criminally, and through the legal data practices of various applications. From ChinaTalk:
The two [DOJ] prohibitions are on data brokerage. In other words, the sale or leasing of access to data on a US person to either a country of concern or a covered person, or transfers of genomic data, which we see is one of the most sensitive categories of data. Both of those will be prohibited in the initial regulations.
What would the consequences of the so-called tech-wars be? The Economist warns:
The biggest costs of the tech wars could be the bifurcation of the world’s information and energy-technology industries, leading to sagging economic growth and slower decarbonisation. They will probably accelerate firms’ secretive efforts to develop offerings for the Chinese market over which the American government has little or no control. That could inadvertently give China more power to set technological standards in parts of the world that use its equipment.
In other news
“Cyber attackers are experimenting with their latest ransomware on businesses in Africa, Asia and South America before targeting richer countries that have more sophisticated security methods.”
Schools are having trouble with internet website-blockers, which block pornography websites along with “analyses of the Greek classic ‘The Odyssey’ for language arts class.”
“The Department of Homeland Security brought on 22 new representatives from various areas of the larger artificial intelligence sector to advise the agency on safety recommendations when developing and deploying AI systems in U.S. critical infrastructure.”
MIT Technology Review published a long-form article on the history of space stations and the current commercial takeover of space flight.
The National Highway Traffic Safety Administration (NHTSA) released a report on a three-year investigation into Tesla’s Autopilot system, finding 13 fatal crashes in which “foreseeable driver misuse of the system played an apparent role.” It also found evidence that “Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities,” which resulted in a “critical safety gap.”