Weekly Roundup: Is Authenticity Gone Forever?
Tracing authentic facts, failing children (again), and surveillance pricing
The sometimes hairsplitting categories of misinformation and disinformation will soon be left by the wayside as people simply ask, “Is this authentic or not?” A few stories this last week, across a wide range of areas, should give everyone pause.
Viral Audio of JD Vance Badmouthing Elon Musk Is Fake, Just the Tip of the AI Iceberg. “AI-generated audio of Vance saying Musk is ‘cosplaying as a great American leader’ has been played more than 2 million times on TikTok alone.”
Pikachu Spotted Fleeing Police Crackdowns During Turkey Protests. “As a real video of someone in a Pikachu suit at a protest goes viral, an AI-generated ‘photo’ is fooling a lot of people.”
The authenticity problem doesn’t have many solutions. Among the fixes generally proposed for these problems:
Require verification for content creators. This takes many forms, from removing anonymous speech to tying IDs to online handles so that, if required, information can be traced back to its origin. The idea has some promise for combating foreign interference in our election process, but it raises many efficacy and civil-liberties problems.
AI labeling. This is basically a failed solution. AI-recognition software doesn’t work, and mandating a hashing system for AI-generated content only works if the bad guys participate, too (they won’t).
Do nothing. The status quo is always an option on the policy menu, but if the status quo is not working, it is a pretty poor option.
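To see why the hashing idea in the “AI labeling” fix only works with cooperative actors, consider what such a system would actually do: the generator signs a hash of its output, and anyone can later verify the signature. A minimal sketch in Python (the function names and the shared-key scheme are illustrative assumptions; real provenance systems such as C2PA use asymmetric keys and richer manifests):

```python
import hashlib
import hmac

# Hypothetical signing key held by a cooperating AI generator.
SECRET_KEY = b"generator-signing-key"

def label_content(content: bytes) -> str:
    """Generator side: produce a provenance tag (HMAC over the content hash)."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_label(content: bytes, tag: str) -> bool:
    """Verifier side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(label_content(content), tag)

image = b"...ai-generated image bytes..."
tag = label_content(image)

print(verify_label(image, tag))          # untampered, labeled content verifies
print(verify_label(image + b"x", tag))   # any modification breaks the label
```

The failure mode the list item identifies is visible here: a bad actor simply never calls `label_content`, and the absence of a label proves nothing about whether content is AI-generated.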
In a recent op-ed, an internet researcher articulated the real crux of the problem.
One of our [research organization’s] primary conclusions is that it is increasingly difficult, even for professional researchers, to trace claims about facts back to their origins. In the absence of evidentiary legibility, people face the disorientation that comes from being isolated from facts. Hannah Arendt defined this epistemic “loneliness” as a hallmark of totalitarianism.
Whether facts are “true,” or whether they support your opinion, is not the relevant conversation to have. The marketplace of ideas, recommendation algorithms notwithstanding, can sort that out. The problem is knowing where the information came from so that a citizen or consumer can use it to make a decision.
Inauthentic information in the form of deepfakes continues to ravage our adolescents.
A Child Deepfake Exploitation tracker was published last week by Techpolicy.press.
Over 300 million children worldwide have been victims of online sexual exploitation and abuse in the past year alone. Compounding this reality is a rapidly emerging new form of abuse: AI-generated “deepfake nudes.” According to Graphika, a company that analyzes social networks, millions of users visit more than 100 “nudify” sites online each month.
Even legal sites, like OnlyFans, have issues. Last week, they were fined £1mn over inaccurate age-checking information.
Like the authentic-information problem above, solutions are sparse and not always workable. Age verification for clearly adult content like pornography has been put forth in multiple bills at the state and federal levels. There are some relevant concerns about censorship and about whether age-verification platforms work well enough to mandate. Are those concerns sufficient to warrant sinking such bills? The Center for Democracy and Technology, which often opposes such measures, finally released some alternative solutions in a severely lacking 360-word brief. They say:
“First, parents can already set most children’s devices to block adult websites, which depends on sites labeling themselves as adults-only via metadata.”
Alternatively, just as we allow users to request “safe mode” of Google search or YouTube, devices could be configured to request “safe mode” of other sites on the internet.
It pains me to spend time and newsletter real estate to respond to these milquetoast policy alternatives, but I’m going to do it anyway.
As the briefing readily admits, not all adult content sites will label themselves. And even if every adult website on the internet labeled itself, couldn’t a child simply use a different device to access the content?
This is basically the same argument as #1. If the CDT were being honest with themselves, they would admit that teenage boys actively seeking porn sites are just going to deselect “safe mode.”
Lastly, on both points, these policy solutions are akin to saying, “Well, it isn’t the liquor store’s responsibility to check ID. The parents should have been more attentive.” It’s fair to expect dad’s box of Playboys to stay hidden in the attic crawl space, but it should be uncontroversial to require vendors of this material to bear part of the burden of child safety.
Surveilling supply and demand
There are many ways to use AI to set pricing. Most businesses use algorithms to fine-tune the classic supply-and-demand curve: figure out exactly where market equilibrium sits and how to price to it. When this is reduced to the individual level, it is often called “surveillance pricing.” From The Markup:
Ride-sharing apps, travel companies, and retail giants such as Staples, Target, and reportedly Amazon have engaged in the practice, which can set different prices for customers based on factors including internet browsing data or where they live. In one recent example published by SFGATE, a person in the Bay Area was offered a hotel room for $500 more than people in less affluent areas.
“The pricing isn’t based on supply or demand. It’s based on predictions made about your eagerness and desires,” said researcher Justin Kloczko. In one recent instance, he found that Lyft charged his wife $5 more than him for the same ride. Kloczko works at Consumer Watchdog, an advocacy group that cosponsored one of the bills.
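The mechanism Kloczko describes, pricing on predictions about an individual rather than on market-wide supply and demand, can be sketched in a few lines. Everything here is a hypothetical illustration: the feature names, the multipliers, and the base fare are all invented for the example, not drawn from any real company’s system.

```python
BASE_FARE = 12.00  # hypothetical market-clearing price for a given ride

def personalized_price(user: dict) -> float:
    """Adjust the base fare using inferred willingness to pay, not supply/demand."""
    price = BASE_FARE
    if user.get("affluent_zip"):     # inferred from home location
        price *= 1.15
    if user.get("low_battery"):      # urgency signal: fewer alternatives at hand
        price *= 1.10
    if user.get("price_shopped"):    # comparison shoppers get the discount
        price *= 0.95
    return round(price, 2)

# Two riders requesting the exact same ride can see different prices:
print(personalized_price({"affluent_zip": True}))   # 13.8
print(personalized_price({"price_shopped": True}))  # 11.4
```

The point of the sketch is that nothing in the function references scarcity or demand; the price varies only with what the system has inferred about the buyer, which is what distinguishes surveillance pricing from ordinary dynamic pricing.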
In other privacy news
The Vermont Senate passed a data privacy bill containing traditional consumer data privacy protections. However, it stripped out the consumer’s private cause of action (the ability to sue companies for violating these provisions), meaning that unless courts find a private cause of action in common law, it will be up to the government to prosecute every violation.
Meta to stop targeting UK citizen with personalised ads after settling privacy case. This case could serve as a model for the future of personalized ads—Meta said it was considering creating a subscription-based model for users who want an ad-free Facebook. The accompanying profile of the woman who filed and won the lawsuit is also a good read. It also serves as a roadmap for users wanting to do the same.
23andMe’s bankruptcy proceedings mean that someone is going to pay for all the data it accumulated. The linked commentary argues that the judge should make the data sale apply only to users who opt in, which has some precedent in bankruptcy court. This is a legally arcane issue for a service that many, but not all, people use. However, if you are curious about the impact on you, 1) know that someone in your close genetic circle very likely used this service, and 2) go back and read the piece on surveillance pricing.
In other news
Capitalism certainly misses the mark sometimes, but state-run economies also miss it and seldom have the mechanisms to correct quickly. The PRC overbuilt AI data centers, many of which now stand unused.
Anthropic has found a way to “peer inside their large language model,” a first for the AI industry. This could be the first step to solving the “black box” problem, making many legal and ethical concerns moot and opening the way for regulatory solutions.
NIST releases finalized guidelines on protecting AI from attacks.
And lastly,
For anyone doing any tech policy research, I highly recommend the Center for European Policy Analysis (CEPA) Transatlantic Tech Policy Tracker.
“CEPA’s Transatlantic Tech Policy Tracker charts the key tech policy and business developments around the globe. From antitrust to telecommunications and artificial intelligence to European digital regulation, this interactive tool allows users to search and find news items compiled since the beginning of 2020.”