The ethics of AI-driven feeds

Over on Every, Nathan Baschez explores this question: what if we had a personal AI that could write the perfect thing just for us?

Imagine waking up in the morning and having your coffee while reading the perfect article. It was written just for you the moment you picked it up. Unlike regular articles, this one knows everything. It can tell you about all the world events you care about, the companies you’re keeping an eye on, and the thinkers whose ideas you admire. It knows your obsessions and helps you explore them more deeply. It’s even got your weird sense of humor nailed!

I encourage you to read his whole piece.

It was refreshing to see him address some of the ethical considerations surrounding algorithmically driven content at the end of the article. I'd like to explore a few more ethical questions here.

Could AI-driven content be a good thing?

Whether AI-driven content will be good for human health and society will depend on how AIs interpret the phrase "the perfect article for you."

What will these artificial intelligence systems optimize for?

For example, if the AI is optimizing for engagement, the "perfect article for me" might appeal to my most undignified desires. It might produce "easy reading" content that doesn't rock the boat or upset me. To ensure I keep reading, the AI might generate paragraphs that continually reinforce my existing beliefs.
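To make the "what does it optimize for?" question concrete, here's a deliberately toy sketch (not anyone's real system; every article, score, and weight is invented for illustration) of how the same ranking code produces very different "perfect articles" depending on the objective it's handed:

```python
# Toy sketch: one ranker, two objectives. Purely illustrative;
# this is not how any real feed is implemented.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_engagement: float   # 0..1, e.g. chance the reader keeps scrolling
    agrees_with_reader: float     # 0..1, how much it confirms existing beliefs
    challenge: float              # 0..1, how much it stretches the reader

def engagement_objective(c: Candidate) -> float:
    # "Perfect" = whatever keeps you reading; confirmation is rewarded
    # because it tends to raise engagement.
    return c.predicted_engagement + 0.5 * c.agrees_with_reader

def healthier_objective(c: Candidate) -> float:
    # "Perfect" = still readable, but deliberately rewards being challenged.
    return 0.5 * c.predicted_engagement + c.challenge

candidates = [
    Candidate("You were right all along", 0.9, 0.95, 0.05),
    Candidate("A Stoic case against your hot take", 0.6, 0.2, 0.9),
    Candidate("What your critics actually believe", 0.5, 0.1, 0.8),
]

for objective in (engagement_objective, healthier_objective):
    best = max(candidates, key=objective)
    print(objective.__name__, "->", best.title)
```

Same reader, same candidate articles; the only thing that changes is the single line defining "perfect."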

The question of "what will AI optimize for?" will largely be answered by large corporations with the computing power to run these systems. Companies like Google, Facebook, and Amazon are investing massive amounts of money in training these machine learning systems. These are capitalist organizations using AI tools for capitalist ends.

One of the biggest ethical quandaries in capitalism is that people have strong desires for things that aren't good for them. McDonald's makes junk food, and we eat it. Philip Morris made machines that produce 20,000 cigarettes a minute, and we smoked them. Twitter made a newsfeed designed to ensnare us, and we doomscrolled. This is the principal point of Bernard Mandeville's notorious The Fable of the Bees: our private vices are good for the economy!

For AI, this quandary is even more acute because you can design a self-learning system optimized to fulfill every unhealthy human desire and habit. TikTok's and Facebook's feeds have already shown how they can send us down infinite rabbit holes that reinforce our worst impulses.

A thoughtful AI, trained to challenge the human mind in healthy ways, could potentially improve human society. For example, it could determine that "Justin needs to read the Stoics" and start sprinkling that into my daily digest. Or it might teach me how to be a supportive parent once my kids are in college. A thoughtful AI could help me better understand the people in my community and create more shared understanding.

But I fear these AIs will optimize for whatever gets us most addicted, so that the companies with the required compute power (Facebook, Google, Amazon) can sell more ads.

Thoughtfully, 
Justin Jackson
twitter.com/mijustin

Published on September 28th, 2022