Weekly Roundup: Courts, Copyright, and the Crumbling Case for AI Detectors
Plus some new primers on quantum computing and cognitive warfare
The Supreme Court’s 2024-2025 term ended with some statistics that would surprise all except the most ardent court followers. For example, 42% of decisions were unanimous, and another 24% were “lopsided” decisions, with 7 or 8 justices in the majority. “Only 9% of cases overall, six total, resulted in ideologically split 6-3 outcomes, with the liberal Justices dissenting as a bloc. That compares with 6% of cases, or four, that had 6-3 decisions except with Justices Thomas, Alito and Gorsuch in the minority.” (WSJ)
Those 9% were doozies, no doubt, but fall mostly outside the scope of this newsletter. Here’s an update on some of the major cases and legal issues in the tech world:
Copyright cases
Anthropic and Meta both scored victories last week in their copyright infringement cases; however, these are just two of the many infringement cases still working their way through the courts.
In both cases, a group of authors…set out to prove that a technology company had violated their copyright by using their books to train large language models. And in both cases, the companies argued that this training process counted as fair use, a legal provision that permits the use of copyrighted works for certain purposes.
There the similarities end. Ruling in Anthropic’s favor, senior district judge William Alsup argued on June 23 that the firm’s use of the books was legal because what it did with them was transformative, meaning that it did not replace the original works but made something new from them. “The technology at issue was among the most transformative many of us will see in our lifetimes,” Alsup wrote in his judgment.
In Meta’s case, district judge Vince Chhabria made a different argument. He also sided with the technology company, but he focused his ruling instead on the issue of whether or not Meta had harmed the market for the authors’ work. Chhabria said that he thought Alsup had brushed aside the importance of market harm. “The key question […] is whether allowing people to engage in that sort of conduct would substantially diminish the market for the original,” he wrote on June 25.
It is hard to read too much into these rulings, but they matter as indicators of how future cases will be tried. Perhaps plaintiffs will need to marshal additional kinds of evidence, or advance a novel theory of “market harm,” maybe something along the lines of unjust enrichment. Different jurisdictions with different laws can be tried, and of course, there is always an appeal.
Fuzzy suppositions
The U.S. legal system increasingly resembles the legal systems of the emerging markets I study. Legal interpretation has become a realist’s utopia: defendants placate the powerholders rather than arguing their cases on the merits before a free and open judiciary.
Paramount officially bent the knee and kissed the ring by paying President Trump $16 million to settle a “defamation” suit:
Donald Trump just achieved a major legal victory over a media organization he sued for reporting he did not like. Paramount, owner of CBS, has agreed to settle Trump’s claim that “60 Minutes” violated Texas state deceptive trade practices law by selectively editing an interview with Kamala Harris to help her, and hurt him, politically. At stake for Paramount is a merger with Skydance media which, it reportedly fears, will founder on administration opposition if it does not settle the CBS claim. The importance of the deal to Paramount, together with the wish to avoid other retaliatory action by the administration, apparently outweighed all other considerations.
Trump now prevails on a more-than-dubious legal claim—characterized by one expert as “ridiculous junk” worthy only of being “mocked”—and demonstrates that he can use a combination of a personal lawsuit and implicit threats of federal government power to pressure a news organization into political submission. It also highlights the central role of civil society, within the media industry and elsewhere, in taking seriously—or not—an ethical responsibility for the defense of democratic norms and institutions.
Equally frightening is the argument upholding the President’s continued refusal to enforce the TikTok ban, a law passed by Congress, signed by the President, and upheld by the Supreme Court UNANIMOUSLY. Evidently, because the law touches national security, the President has the prerogative to enforce it or not. The fate of a social media app may seem trivial, but viewed from the perspective of “do we have a functioning democracy or not,” this is probably the most damning issue on the books.
International news
Australia, India, Japan, and the United States have launched a critical mineral initiative, coupled with tariff talks on technology. This may be especially helpful to India, as its promised lithium mine in Kashmir has proven to be a bust. India has also struggled to stay competitive in the AI race:
Despite its status as a global tech hub, the country lags far behind the likes of the US and China when it comes to homegrown AI. That gap has opened largely because India has chronically underinvested in R&D, institutions, and invention. Meanwhile, since no one native language is spoken by the majority of the population, training language models is far more complicated than it is elsewhere.
U.S. authorities have cracked down on a North Korean cybercriminal network that funds the country’s nuclear weapons program. “A group under sanctions and linked to North Korea allegedly stole about $620mn in a cryptocurrency hack in 2022, US prosecutors intend to show in an upcoming trial, illustrating its reach in digital currency.”
From Academia
Two new reports are out. I haven’t read them yet, but they come from two highly respected sources I frequently turn to for research:
A Primer on Russian Cognitive Warfare
Cognitive warfare is a form of warfare that focuses on influencing the opponent's reasoning, decisions, and ultimately, actions to secure strategic objectives without fighting or with less military effort than would otherwise be required. China, Russia, Iran, and North Korea increasingly use cognitive warfare against the United States in order to shape US decision-making.…[It] is much more than misinformation or disinformation.…Cognitive warfare is distinguished by its focus on achieving its aims by influencing the opponent’s perceptions of the world and decision-making rather than by the direct use of force.
Military and Security Dimensions of Quantum Technologies: A Primer
Quantum technologies are advancing rapidly from experimental research into strategic defence and security applications, fundamentally altering how information is sensed, shared and secured. Ground- and satellite-based quantum key distribution networks are already being deployed by China and the European Union, promising virtually unbreakable communication. Quantum sensing systems, capable of precise navigation without a global navigation satellite system as well as subterranean and underwater detection, are nearing operational use…This report therefore provides important background and recommendations aimed at supporting the creation of international ethical, legal and security frameworks that ensure quantum bolsters, rather than undermines, global stability.
In other news
Kids are making deepfakes of each other, and laws aren’t keeping up.
Laws are having a hard time adapting to the rise in minors making sexually explicit deepfakes of one another. Some states lack any framework to address this behavior, while others impose zero-tolerance criminal penalties, even on minors. I tend to skew toward an “actions have consequences” approach to crime, especially in this arena, but sending kids to jail for this behavior is akin to handing a 14-year-old a bottle of schnapps and the keys to a car, then jailing him when he drives drunk. Instead of criminal liability, we should be preventing minors from getting access to these tools in the first place, something that will require a strong government-interventionist approach, much like laws against serving alcohol to minors.
AI detection software used at universities is as ineffective as snake oil.
The tools are littered with false positives (particularly for international students) and are easy to get around. Chinese universities are screening essays with their own version of the software, much as American universities use the ubiquitous Turnitin. The Markup wrote a phenomenal history of the software, highlighting a new concern: the company retains every essay professors submit as training data. There have been a few pushbacks against this practice, with little success:
In 2007, high schoolers in Virginia and Arizona sued iParadigms, Turnitin’s parent company at the time, arguing that its database violated their copyright over their own writing. The courts disagreed. The students also lost on appeal, but some colleges still warn their faculty against using free online plagiarism checkers because of privacy concerns inherent in handing student work over to third-party companies.
The story also highlights a generational gap. One student interviewed said professors “tell students not to even use spell-check tools because they’re bolstered by AI. Microsoft Word, Google Docs and Grammarly now all rely on the same algorithms that create ChatGPT’s human-sounding responses to suggest improvements to users’ writing.”
As anyone who is in, or has recently completed, a university program can attest, the quote above exemplifies most academic dishonesty policies. AI in education isn’t a technical problem to solve but a pedagogical one to adapt to. Instead of spending thousands of dollars on false deterrence, professors should revisit the goals of higher education in the first place and explore how to achieve those goals with the aid of AI. If they are that concerned about cheating, they can give in-person, handwritten exams or oral boards. That, however, would require a bit more effort on the professor’s part.
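The false-positive pattern is easy to reproduce. Below is a deliberately toy scoring heuristic, written for this newsletter; it is not any vendor’s actual algorithm, and the sample texts are invented. Like many real detectors, it treats uniform sentence lengths and repetitive vocabulary as “AI-like,” which is exactly why formulaic but perfectly human prose, common among students writing in a second language, gets flagged:

```python
import statistics


def naive_ai_score(text: str) -> float:
    """Toy 'AI-likeness' heuristic: uniform sentence lengths and a low
    type/token ratio push the score toward 1.0 ('AI-like').
    Illustrative only; not any real product's algorithm."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()

    # Uniform sentence lengths (low spread) -> higher 'AI-like' score.
    spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    uniformity = 1.0 / (1.0 + spread)

    # Repetitive vocabulary (low type/token ratio) -> higher score.
    diversity = len(set(words)) / len(words) if words else 0.0

    return round(0.5 * uniformity + 0.5 * (1.0 - diversity), 3)


# Formulaic but human-written prose, typical of inexperienced or
# non-native writers, scores as more 'AI-like' than varied prose.
formulaic = ("The essay has three parts. The first part is history. "
             "The second part is theory. The third part is practice.")
varied = ("Historians disagree. Some trace the idea to antiquity, "
          "while others insist it emerged, almost by accident, "
          "from nineteenth-century debates about method.")

print(naive_ai_score(formulaic), naive_ai_score(varied))
```

The point is not that real detectors are this crude, but that any statistical proxy for “machine-like” writing will penalize human writing that happens to be plain and regular, which is precisely the false-positive problem international students report.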
The new workforce is making its debut.
Digital workers (AI agents) now have log-ins and bosses, just like human employees, at a few financial services firms and banks, and Amazon will soon have more robots than humans in its warehouses. Since the proposed ban on state AI regulation failed, upcoming state legislative sessions may bring “human only” hiring laws to protect human jobs.