Weekly Roundup: Agencies Across the Globe Scrutinize Success
Africa manages the regulatory-innovation tensions
Before kicking off this week’s roundup, I want to highlight a fantastic fundraising event Forward Horizon Group (FHG) put on in New York last Friday. FHG is a frontier-markets and security consulting firm that supports democratic resilience by, among other things, leveraging emerging technology and supporting non-profits. The event supported First Aid of the Soul, a U.S.-based NGO that provides mental health services in Ukraine.
While the event's silent auction is closed, you can still support the joint FAS-FHG mental health project by donating here. You can also support the Folkowisko Foundation, an equally impressive NGO in Poland that provides humanitarian assistance to Ukrainians (and helped make this event possible by providing our team access to vast swaths of Ukraine) through donations here.
FHG prides itself on finding and executing irregular solutions to important problems. Pioneering Oversight’s analysis of emerging technology regulation is in this vein, and I encourage everyone to read a similar Substack, Future Majeure, by my friend, colleague, and FHG co-founder Jesse Nuese. Future Majeure covers security, politics, technology, markets, and culture from macro to micro in a refreshing way, presenting analyses that address the irregular problems the world is facing.
Weekly Roundup
Europe’s AI Act passed last week. Helpful and concise overviews of the regulation can be hard to find, but two decent ones are from the Center for Democracy and Technology and the MIT Technology Review.
Artists and other creative IP owners have thus far been on the losing end of AI’s IP battle. Big AI companies dodge questions on whether artists should be paid when their work is used as training data, and at least one musician has been jailed for using the technology to drive up royalties by auto-generating plays. There is some good news for artists, however: Tennessee passed the first law protecting musicians' likenesses, including their voices, from AI.
Reddit’s much-anticipated IPO last week brought regulatory scrutiny, as can be expected with a high market cap. With much of Reddit’s valuation banking on data for AI platforms, the FTC is asking the company how it intends to license users’ data. Given that the overwhelming majority of Reddit’s labor is volunteer, it is not unreasonable to think the newly public company could face some of the same “platform or employer” labor issues companies like Uber ran into. Here is an excellent long read on the company's history highlighting its volunteer workforce.
The Department of Homeland Security is embracing AI, releasing a strategy that directs how the department will use the technology. “As part of its plan, the agency plans to hire 50 A.I. experts to work on solutions to keep the nation’s critical infrastructure safe from A.I.-generated attacks and to combat the use of the technology to generate child sexual abuse material and create biological weapons.” The U.S. border will be a major recipient of the technology, even though some are raising privacy and human rights concerns.
A bastion of innovation
Europe often gets credit for being the vanguard of regulatory policy, and the United States is seen as the leading innovator in many sectors. Africa is rivaling both on both fronts.
The projected benefit of AI adoption on Africa’s economy is tantalizing. Estimates suggest that four African countries alone—Nigeria, Ghana, Kenya, and South Africa—could rake in up to $136 billion worth of economic benefits by 2030 if businesses there begin using more AI tools.
Now, the African Union—made up of 55 member nations—is preparing an ambitious AI policy that envisions an Africa-centric path for the development and regulation of this emerging technology.
From Tech Safari
Three months after banning Starlink, Ghana is now in talks to grant the ISP a license to operate in the country. This will give Ghana's internet users more options, especially following the recent internet blackouts caused by damage to major undersea internet cables.
TikTok is teaming up with the African Union Commission’s Women, Gender, and Youth Directorate (WGYD) to launch a digital campaign promoting online safety for young people and parents across Africa. This comes as TikTok faces more scrutiny, including bans in Senegal and Somalia.
Unfortunately, the continent faces an infrastructure problem: thirteen countries experienced outages last week because of damaged undersea cables.
In other news
China Talk published “a fascinating analysis of the political consciousness of four Chinese AI chatbots.”
Until legislation is passed, judges will have to decide who will be held responsible for malpractice when AI is involved. The problem closely resembles the regulation of autonomous weapons: when there are no specific liability laws, who (if anyone) is responsible when AI makes a mistake?
CSET published a three-part primer on how LLMs work, which is good reading, especially since it is becoming empirically indisputable that LLMs reflect racist undertones even as humans intervene to prevent it. With advances in chain-of-thought reasoning research, the computational origins of this problem may become more apparent.
Emerging technology regulation
Neuralink, a company developing brain implants, had a successful test last week in which a paralyzed man moved a cursor across a computer screen with his brain. The new technology opens up an almost inconceivable range of privacy concerns.
Children are being targeted by identity thieves, a problem that can take years to uncover.
“Automakers Are Telling Your Insurance Company How You Really Drive.” LexisNexis, a data broker working with General Motors, is being sued by a Florida man whose insurance doubled after the broker forwarded his speed and braking data to his insurer.