This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.
When you hear about artificial intelligence, stop imagining computers that can do everything we can do but better.
My colleague Cade Metz, who has a new book about A.I., wants us to understand that the technology is promising but has its downsides: It’s currently less capable than people, and it is being coded with human bias.
I spoke with Cade about what artificial intelligence is (and isn’t), where he’s hopeful and fearful about its consequences, and where A.I. falls short of optimists’ hopes.
Shira: Let’s start with the basics: What is artificial intelligence?
Cade: It’s a term for a collection of concepts that allow computer systems to vaguely work like the brain. Some of my reporting and my book focus on one of those concepts: a neural network, which is a mathematical system that can analyze data and pinpoint patterns.
If you take thousands of cat photos and feed them into a neural network, for instance, it can learn to recognize the patterns that define what a cat looks like. The first neural networks were built in the 1950s, but for decades they never really fulfilled their promise. That started to change around 2010.
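The pattern-finding idea Cade describes can be made concrete with a toy sketch. The code below is purely illustrative and is not drawn from his book or from any real system he covers: a tiny two-layer neural network, written with NumPy, that learns to separate two clusters of points, a miniature stand-in for learning "cat" versus "not cat" patterns from examples.

```python
import numpy as np

# Toy illustration: a tiny neural network learns to tell apart two
# clusters of 2-D points, a stand-in for "cat" vs. "not cat" patterns.
rng = np.random.default_rng(0)

# Synthetic data: class 0 clustered near (0, 0), class 1 near (2, 2).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

# One hidden layer of 4 units with sigmoid activations.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):                 # plain gradient-descent training loop
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probability of class 1
    g2 = (p - y) / len(X)             # gradient at the output (cross-entropy)
    g1 = g2 @ W2.T * h * (1 - h)      # gradient back-propagated to the hidden layer
    W2 -= h.T @ g2; b2 -= g2.sum(0)   # update weights by stepping
    W1 -= X.T @ g1; b1 -= g1.sum(0)   # against the gradient

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
```

After training on the labeled examples, the network separates the two clusters almost perfectly; the same loop, scaled up enormously in data and parameters, is the core of how real image classifiers learn what a cat looks like.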
What changed?
For decades, neural networks had two significant limitations: not enough data and not enough computer processing power. The internet gave us reams of data, and eventually scientists had enough computing power to crunch through it all.
Where might people see the effects of neural networks?
This one idea changed many technologies over the past 10 years. Digital assistants like Alexa, driverless cars, chat bots, computer systems that can write poetry, surveillance systems and robots that can pick up products in warehouses all rely on neural networks.
Sometimes it feels as though people talk about artificial intelligence as if it’s a magic potion.
Yes. The original sin of the A.I. pioneers was that they called it artificial intelligence. When we hear the term, we imagine a computer that can do anything people can do. That wasn’t the case in the 1950s, and it’s not true now.
People don’t realize how hard it is to duplicate human reasoning and our ability to deal with uncertainty. A self-driving car can recognize what’s around it — in some ways better than people can. But it doesn’t work well enough to drive anywhere at any time or do what you and I do, like react to something surprising on the road.
What downsides are there from neural networks and A.I.?
So many. The machines will be capable of generating misinformation at a massive scale. There won’t be any way to tell what’s real online and what’s fake. Autonomous weapons have the potential to be incredibly dangerous, too.
And the scariest thing is that many companies have promoted algorithms as a utopia that removes all human flaws. It doesn’t. Some neural networks learn from massive amounts of information on the internet — and that information was created by people. That means we are building computer systems that exhibit human bias — against women and people of color, for instance.
Some American technologists, including the former Google chief executive Eric Schmidt, say that the United States isn’t taking A.I. seriously enough, and we risk falling behind China. How real is that concern?
It’s legitimate but complicated. Schmidt and others want to try to make sure that the most important A.I. technology is built inside the Pentagon, not just inside giant technology companies like Google.
But we have to be careful about how we compete with a country like China. In the United States, our best technology talent often comes from abroad, including China. Closing off our borders to experts in this field would hurt us in the long run.
Tip of the Week
How to be an informed online shopper
A reader named Eva emailed On Tech asking about small software programs known as browser extensions, plug-ins or add-ons for Chrome, Safari and Firefox that claim they will save her money.
“I keep seeing ads for these browser add-ons like Honey (from PayPal) and Capital One Shopping,” she wrote. “They claim they will automatically find and apply promo codes to save you money whenever you shop online. This sounds terrific, but I keep wondering, What’s in it for them? They’re not just doing this out of the goodness of their hearts. Before I sign up for these services, I want to know what the trade-off is. Can you help me find out?”
Brian X. Chen, the New York Times personal technology columnist, has this response:
Yes, there is always a trade-off. With free software, your personal data is often part of the transaction.
I’d advise taking a few minutes to research the company’s business model and privacy policy.
More than a year ago, Amazon warned customers to remove the Honey add-on because of privacy concerns. Honey’s privacy policy states: “Honey does not track your search engine history, emails or your browsing on any site that is not a retail website (a site where you can shop and make a purchase).”
Read between the lines: That means Honey can track your browsing on retail websites. (Honey has said that it uses data only in ways that people expect.)
The privacy policy for Capital One Shopping is more explicit: “If you download and use our browser extension, we may collect browsing, product and e-commerce information, including but not limited to product pages viewed, pricing information, location data, purchase history on various merchant websites and services, the price you paid for items, whether a purchase was made, and the coupons that you used.”
That’s a lot of information to hand over for software that automatically applies coupons. Whether or not that’s a fair trade is up to you.
Before we go …
- So. Much. Money. Everywhere: My colleague Erin Griffith connects the dots among digital art selling for $69 million, a mania for cryptocurrency and soaring prices of things like vintage sneakers. Basically, it pays to take financial risks right now, plus our brains are turning to goo in a pandemic. Related: Stripe, which makes the software plumbing for businesses to accept digital payments, is now one of the most valuable start-ups in history.
- Facebook is studying our vaccine views: Facebook is conducting internal research about the spread of ideas on its apps that contribute to vaccine hesitancy, The Washington Post reported. The early findings suggest that messages that aren’t outright false may be “causing harm in certain communities, where it has an echo chamber effect,” The Post said.
- How to keep Americans safe: The failures of U.S. intelligence agencies to detect recent digital attacks by Russia and China are causing American officials to rethink how the nation should protect itself, my colleagues reported. One thorny idea is for tech companies and U.S. intelligence agencies to collaborate on real-time assessments of cyberthreats.
Hugs to this
Go hug a cow. It might help.
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.