In November the New Yorker published a great piece by Paul Bloom about AI ethics. He followed up more directly on his Substack with a post called We Don’t Want Moral AI, which he then discussed on EconTalk: Can Artificial Intelligence Be Moral?
He outlined various scenarios of an AI telling you what to do:
“We would never abide by an AI who said, 'Oh, you want me to drive you to the bar? I don't think so, pal, because you promised to help your kid with his math homework.'”
“What are you having for dinner? Well, it's not going to be factory-farmed meat.”
“I don't want my tax preparation software to take the plans of my house and say, ‘Your home office isn't that big.’”
“You want to go to a football game? Football is bad for human beings. It leads to brain injuries. You should be going to an Ultimate Frisbee match. I'll take you there. Or I'll stay in the driveway. Your choice.”
In short, Paul is imagining AI as an electronic Pinocchio’s Jiminy Cricket.
I think one thing he and Brian miss is modes of implementation. The examples they gave of AI-driven morality were very... let's call it authoritarian. I think we need to be more imaginative. As Henry Ford famously said, “If I had asked people what they wanted, they would have said faster horses.” What would the AI version of this look like?
As a first step, imagine that instead of the AI telling you what to do, it persuades you to do the right thing. If we accept as a given that LLMs have access to all the psychology and behavioral economics research, all the sales training books, all the priming experiments and web A/B split tests, then surely they would be smart enough to use better language than "you should do X."
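To make that concrete, here is a minimal sketch of the kind of A/B-style loop such an assistant might run to learn which phrasing of a nudge actually works on you. Everything in it is invented for illustration: the phrasings, the reward signal, the function names. It is a simple epsilon-greedy bandit, not a description of any real assistant.

```python
import random

# Candidate ways of making the same suggestion (all invented for illustration).
PHRASINGS = [
    "You should go to the gym.",
    "Remember you said you wanted to exercise more this year?",
    "Want me to park a few blocks away so you can get some steps in?",
]

# Running tally per phrasing: [times suggested, times the user followed through].
stats = {p: [0, 0] for p in PHRASINGS}

def pick_phrasing(epsilon: float = 0.1) -> str:
    """Mostly reuse the phrasing with the best follow-through rate; sometimes explore."""
    if random.random() < epsilon:
        return random.choice(PHRASINGS)
    return max(PHRASINGS, key=lambda p: stats[p][1] / stats[p][0] if stats[p][0] else 0.0)

def record_outcome(phrasing: str, user_complied: bool) -> None:
    """Update the tally after seeing whether the suggestion actually worked."""
    stats[phrasing][0] += 1
    stats[phrasing][1] += int(user_complied)

if __name__ == "__main__":
    # Toy simulation: pretend the gentler phrasings work better on this user.
    for _ in range(1000):
        p = pick_phrasing()
        complied = random.random() < (0.2 if p == PHRASINGS[0] else 0.5)
        record_outcome(p, complied)
    print(pick_phrasing(epsilon=0.0))  # the most persuasive phrasing found so far
```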
Your AI Friend Who Knows You…
If the AI is a personal assistant that goes everywhere with you, like your smartphone, perhaps it knows your New Year's resolutions. Maybe not directly, but perhaps it overheard you talking about them with friends, or saw you write a comment on Reddit.
In that context, it could use your own words against you. “Remember just last week when you said you wanted to exercise more? How about we park a few blocks away from your destination and walk instead?”
… and Your Wife
The AI could also be in touch with your spouse’s AI. So it could remind you of how happy she would be to know you're exercising. It might even know your spouse is in the middle of a bad day and suggest you pick up some flowers on the way home. “The Safeway has a spring bundle on sale for $7.99. Would you like me to order ahead for curbside pickup?”
Chemistry. Literally.
Go a little further. As a type two diabetic, I wear a glucose sensor that monitors my blood sugar level and alerts my phone when it's too high or too low. It’s a very useful technology that directly affects my behavior every day.
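In case it helps, here is a minimal sketch of the kind of threshold-and-alert loop a continuous monitor like this runs against your phone. The readings, thresholds, and function names are all invented for illustration; this is not any real device's API, and the speculative hormone sensors below would be a far more elaborate version of the same loop.

```python
import random
import time

LOW_MG_DL = 70    # assumed lower alert threshold (mg/dL), for illustration only
HIGH_MG_DL = 180  # assumed upper alert threshold (mg/dL), for illustration only

def read_glucose() -> float:
    """Stand-in for a real sensor read; returns a simulated value."""
    return random.gauss(120, 40)

def notify(message: str) -> None:
    """Stand-in for pushing an alert to the phone."""
    print(f"ALERT: {message}")

def monitor(samples: int = 5, interval_s: float = 1.0) -> None:
    for _ in range(samples):
        level = read_glucose()
        if level < LOW_MG_DL:
            notify(f"Glucose low ({level:.0f} mg/dL) -- consider a snack.")
        elif level > HIGH_MG_DL:
            notify(f"Glucose high ({level:.0f} mg/dL) -- maybe go for a walk.")
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```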
It's not hard to imagine more sophisticated sensors that monitor various hormone levels.
Don't feel like exercising? Maybe the AI surreptitiously gives you a little jolt of adrenalin. The sudden burst of energy motivates you to get up and go to the gym. Did the AI “tell” you what to do, or was it your own free will?
That last one might seem far-fetched, but consider the Ozempic revolution. It’s basically hormone therapy that all but eliminates the feeling of hunger. Josh Barro wrote a good piece about his experience with it.
I just eat less. I haven’t forgotten to eat lunch, as such, but on a couple of days I have chosen not to. One day, I went to a cafeteria where none of the options looked very good, so I just had a cup of tomato soup. Another day, all I had for lunch was a banana. These aren’t choices I would have made previously, and if I had, I would have spent the afternoon miserable and hungry in the aftermath. But I wasn’t.
As someone who’s suffered from serious depression for over 30 years, I know that one little pill every day changes my entire world view profoundly. Profoundly. I go from not being able to get out of bed to someone who (usually) is excited to get to work every day.
One little pill!
(This is why I don’t believe in free will.)
Of course, many will rebel against these kinds of interventions in the name of freedom, but what if it's something kids start doing in first grade? Is your kid having trouble with motivation? Maybe it’s a hormone problem. Just attach a sensor that monitors their cortisol levels and gives them a jolt when they're running low.
Certainly their parents and teachers will want it, lest the kids fall behind their peers. As anyone who listens to sports talk radio can attest, testosterone therapy is becoming mainstream. If I want a raise at work, or just to avoid the afternoon blahs, why wouldn’t I want an AI wired into my endocrine system?
I imagine an AI like Pennyworth to Batman or Jarvis to Tony Stark: they offer suggestions, are loyal and confident, and you are free to take the suggestions or not, without judgement from them. I would really like an AI assistant like that.
Very thought provoking, thanks for sharing!! So many lenses to look through.