Weekly Roundup: Algorithms and the Marketplace of Ideas
Plus, how AGI could become a legal farce
The death (again) of the internet as we know it
Noahpinion’s Substack is worth subscribing to if you don’t already. He covers a wide range of topics; in this piece, he writes about how changes to the internet haven’t all been beneficial for humanity.
The internet as we know it has already died once. In the 2010s, the rise of smartphones and mass social media (Twitter/Facebook/Instagram) caused what internet veterans refer to as an Eternal September event, for the entire internet. “Eternal September” is an old slang term for when a bunch of normal folks flood into a previously cozy, boutique online space. When the average person got a high-speed pocket computer that linked them 24/7 to the world of social media, the internet ceased to be the domain of weirdos and hobbyists, and became the town square for our entire society.
That wasn’t a great change, in my opinion. The human race didn’t evolve to all be in the same room together. The early internet was a very special time, when the online world served primarily as a temporary escape or a release valve for anyone who didn’t quite fit in in the offline world; mass social media turned the internet into a collective trap, a place where you had to be instead of a place where you could run away to.
He attributes this decline to several causes: business models driven by ad revenue, AI-generated “slop” filling up news feeds, and recommendation algorithms. On the last of these:
With TV and radio you could change channels, and the device would dutifully deliver you the channel you picked. With algorithmic social media, there’s really only one channel, even though it’s personalized — Twitter is just a single feed, TikTok is just a single feed, etc. Sure, you can go to a specific creator’s “channel”, but most content isn’t consumed this way. With TV and radio, a few creators generate huge amounts of content, but social media is more about a large number of creators generating small bits of content. So social media is typically consumed via feeds, and you have only one feed per platform.
I touched on this problem a few weeks ago when I wrote about TikTok’s asymmetric advantage:
The choice of what information gets presented is that of the social media platform alone, but the choice of the user in what to receive does not exist…How can a marketplace of ideas exist if there is only one idea? The solution here isn’t to moderate the ideas or ban platforms from competing in the marketplace but to ensure access to a wide variety of ideas. The choice to consume information, to turn away from it, to seek other opinions, is inseparable from the right to speak. Choosing what to listen to is half of the freedom of speech.
Curing the myopia that recommendation algorithms have created means coming face to face with the problem of multi-stakeholder rights. Does a tech company in a capitalist nation have the right to pick the most efficient business model? Yes. Do people have the right to free speech? Yes. Do I, as a citizen in a democracy, have a right to organize, process, and consume information in the manner of my choosing? Yes. Does one stakeholder have a greater claim? Is the solution to split the difference? In weighing the stakeholders’ claims, the primary test should probably be what is best for democracy.
Choose your own adventure
Ethan Zuckerman is tackling this problem by suing Meta to force the company to let users install add-ons that alter their news feeds. In a guest essay, he makes what seems to be a novel argument about the legality of such tools, one Meta is now challenging.
Such tools are protected under Section 230 of the 1996 Communications Decency Act, which safeguards platforms like Facebook from direct liability for the behavior of their users and has been critical in allowing Facebook and others to build billion dollar businesses. But the remainder of the section often goes ignored. We argue that it establishes the rights of users, families and schools to self-police the content they encounter online, using technical means to block material they find objectionable.
Whether or not a judge buys this legal argument, it is odd how radical the idea of user-regulated algorithms seems, especially when companies have no problem moderating their own content when it suits them. Elon Musk, for example, has fostered relationships with right-wing dictators around the world, conveniently censoring (or “moderating”) content according to his business interests. The right to filter your own content, fine-tune a recommendation algorithm, or install software that does all this for you may be an inconvenient cost for businesses, but it is a small price to pay to keep the internet alive alongside a functioning democracy.
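To make concrete what “software that does all this for you” might look like, here is a minimal sketch of a user-side feed filter in the spirit of the add-ons at issue in the suit. Everything in it is hypothetical: the `Post` shape, the rule names, and the `filter_feed` function are illustrations, not the API of any real platform or of the tool being litigated.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a feed item; real platforms expose nothing this tidy.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    recommended: bool  # True if injected by the platform's algorithm

def filter_feed(feed: list[Post], blocked_terms: set[str],
                hide_recommendations: bool = True) -> list[Post]:
    """Apply user-chosen rules to a feed the platform has already assembled.

    blocked_terms are assumed to be lowercase.
    """
    kept = []
    for post in feed:
        if hide_recommendations and post.recommended:
            continue  # drop algorithmically injected posts
        if any(term in post.text.lower() for term in blocked_terms):
            continue  # drop posts containing terms the user blocked
        kept.append(post)
    # Reorder chronologically, newest first, instead of by engagement.
    return sorted(kept, key=lambda p: p.posted_at, reverse=True)
```

The point of the sketch is who holds the knobs: `blocked_terms` and `hide_recommendations` belong to the user, not the platform, which is exactly the kind of self-policing the guest essay argues Section 230 already protects.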
Conscious intelligence?
Despite promising narrow applications of AI, such as boosting airline efficiency and predicting the movement of molecules, Big Tech is still obsessed with the holy grail of AGI, in some cases counting on that future technology to tell it how to pay its bills.
When this unbridled capital meets scientific and philosophical disciplines that have recently declared consciousness in insects, it becomes easy to anthropomorphize machines, finding them intelligent when, in fact, they are just very good at meeting pre-designated goals and are entirely devoid of intent.
However silly you may think sentient AI is, Ethan Mollick gave the subject a good, grounded look in his latest Substack, finding, as always, a fitting metaphor for AGI:
The right analogy for AI is not humans, but an alien intelligence with a distinct set of capabilities and limitations. Just because it exceeds human ability at one task doesn’t mean it can do all related work at human level. Although AIs and humans can perform some similar tasks, the underlying “cognitive” processes are fundamentally different.
Debates over consciousness and AGI capability aside, attributing a general, independent intelligence to a machine, whether that attribution is philosophical, computational, or otherwise, cannot be far off from having legal consequences. There have already been patent claims for AI-generated content, and human versus machine agency is a legitimate liability question. Regulatory liability regimes are slow work and usually reactionary; the most recent example is the U.S. government finally taking the necessary, but long overdue, first steps toward robust software liability laws.
In other news
Regulators recovered $8 billion lost in the FTX scandal.
A new top-selling book promotes the thesis that social media and smartphones are the cause of children’s increased anxiety over the last decade. The thesis is debated in some scientific circles, where critics claim there is no empirical evidence to support it. The debate recalls the difficulty of “proving” that cigarettes cause lung cancer: many smokers never developed the disease, and some non-smokers did.
Silicon Valley is embracing military public-private partnerships with the United States government even as economic ties with China prove hard to sever. These increasingly cozy relationships between Big Tech and the DoD have generated some pushback; a former Google employee wrote an opinion piece lamenting the partnership.
Wired conducted an odd interview with an AI spokesperson for the Russian cybercrime group responsible for the SolarWinds hack.
Before you go
A short piece titled “Why the Voices of Black Twitter Were Worth Saving” is worth a read, as are all of the links within it. It is a brief exposé of the author’s work documenting Black Twitter and of the underlying threat that it could all disappear overnight. It is also a primer for the three-part Hulu show “Black Twitter: A People’s History.”