
Opinion | We Need Less Talk and More Action From Congress on Tech – The New York Times

Kara Swisher

Opinion Writer
I’ve not been shy about calling out social media companies like Meta — parent of Facebook — for amplifying and worsening a range of societal problems over the past decade or so. Much as the influence of cable news began the coarsening of all kinds of dialogue, particularly around politics, it’s hard to argue that the explosion of online discourse has had a dulcet influence on the world.
That’s been especially troubling when it comes to kids, which is why what feels like the umpteenth hearing (it’s actually just the fifth) on the safety of children online held this week on Capitol Hill was once again so dispiriting. More talk and no action by lawmakers. I am beginning to wonder if our world is so warped by tech’s influence that any fix is but a drop in a lake. And doesn’t it feel like some of the outrage over the revelations in the Facebook Papers has already passed us by?
This latest round of grilling featured the congressional debut of Adam Mosseri, the head of Meta’s Instagram unit, before the Senate Commerce Committee. As is typical with most tech execs, he appeared contrite, while dodging the most challenging questions. He also came armed with some reforms the photo-sharing service has in the works, including a programmed “break” for teenagers. All are fine but mostly just wallpapering over the true damage of social media, as noted by Senators Marsha Blackburn, Republican of Tennessee, and Richard Blumenthal, Democrat of Connecticut.
“The kinds of baby steps that you’ve suggested so far are underwhelming,” Blumenthal said, eliciting nods from other committee members.
Mosseri’s suggestions for a new industry-run standards body to institute reforms around children’s safety online, as well as tinkering with Section 230 immunity protections, didn’t impress Blackburn: “It’s not an industry body that is going to set these standards — it is going to be the U.S. Congress.”
That sure would be nice! But Congress has so far declined to do so. The last real effort was 1998’s now toothless Children’s Online Privacy Protection Act, even as other countries, like Britain with its recent Age Appropriate Design Code, have tried more creative approaches.
And the solutions from Big Tech — and these are problems at YouTube and TikTok too — have been to design new products, not fix the ones they have. In October, Mosseri defended Instagram’s delayed plans to introduce a service aimed at kids when he dropped into a Twitter live audio chat run by The Washington Post.
“I get it’s an easy dunk, to dunk on the idea,” he said. “But I think if you get into the details of this and you look at the actual practical realities, it’s, I think, a much more responsible path than where we are today.”
Face it, it’s easy to dunk on Meta because it lowered the hoop by not building in safety at the start.
Consider age verification. The sites aren’t supposed to be accessible to those under 13, but of course kids lie when they sign up. Yet Meta and others operate like the corner store owner who looks the other way when underage kids try to buy cigarettes or liquor using fake IDs. Now Meta wants to throw A.I. at the problem — which is maybe just creepy.
Yesterday, Mosseri tried to pass off responsibility to Google and Apple with an idea for age verification at the operating system level. Perhaps, but how about a bigger fix? That ought to include a long-overdue and robust national privacy bill. Blackburn put it simply in a tweet: “It’s time to pass a national consumer privacy bill, and kids’ specific legislation to keep minors safe online.” That’s just more talk and no action.
So, while the lawmakers most definitely seemed smarter and asked much better questions in this round, I’d rather see less Mosseri and more law.
Jeff Kosseff answered questions about how best to regulate Big Tech. He is an associate professor of cybersecurity law at the United States Naval Academy’s Center for Cyber Security Studies and the author of “The Twenty-Six Words That Created the Internet,” a history of Section 230 of the Communications Decency Act. His next book, “The United States of Anonymous: How the First Amendment Shaped Online Speech,” will be published in March. His answers have been edited.
You have been vocal in your opposition to a full overhaul of Section 230. Why is it still necessary to keep it largely intact?
Congress passed Section 230 in response to court decisions that created a perverse incentive for online services to avoid moderating user content. Legislators wanted to give platforms the breathing room to develop moderation policies without facing existential litigation costs. Section 230 has also enabled the business models of Meta, Wikipedia, community bulletin boards and so many other platforms that host user content. That said, I have long argued that Congress should evaluate whether it can improve Section 230, including to address niche sites that traffic in defamatory content. Unfortunately, many of the current Section 230 proposals try to regulate speech that the First Amendment protects, or they chill protected speech.
One of the few and powerful levers Congress has is making adjustments to Section 230. One proposal being discussed is to remove tech companies’ legal immunity for algorithmically recommended content. Do you think this is a good or bad direction and why?
Proposals to regulate algorithms are tempting, but many likely would run into the same First Amendment barriers as direct speech regulations. The First Amendment protects hate speech, misinformation and a good deal of other harmful speech. Regulating the algorithm would not avoid these problems. Courts have held that laws chilling the distribution of protected speech raise First Amendment concerns.
Some concerns that people raise about algorithms involve the platforms’ collection and use of personal data to target users with harmful content. Congress could address these issues more directly — and without the same constitutional problems as speech restrictions — via a strong national privacy law.
What is the most dangerous bill that has been proposed, and what is the best idea you have seen?
It’s hard to pick just one dangerous bill. I’m concerned about many proposals at the state and federal levels to restrict platforms’ ability to moderate. Conservatives have argued that platforms have unfairly blocked them from expressing their views. But the First Amendment protects platforms’ ability to exercise this discretion, no matter how unfair it might seem. I’m worried about the Klobuchar-Lujan bill, which, during a public-health emergency, would remove platforms’ Section 230 protection for any “health misinformation” that a platform algorithmically promotes in a nonneutral manner. How does the bill define “health misinformation”? It leaves that to guidance issued by the secretary of health and human services. It shouldn’t be difficult to imagine a scenario in which an H.H.S. secretary abuses this remarkable authority to suppress criticism of the administration. This comes far too close to a Ministry of Truth for my comfort.
I’m intrigued by [Stanford Law professor] Nate Persily’s proposal to require platforms to provide outside researchers with access to data. One of the biggest problems with the current debate is the lack of transparency among the large social media companies. The proposal would help to address this and inform the debate. But any such requirement would need to address the very real privacy concerns of providing access to such data. Relatedly, I’ll give a plug for a nonpartisan, expert fact-finding commission that I’ve been proposing for the past few years.
I also like some elements of the PACT Act. The bill contains many reforms, including an exemption to Section 230 if a platform declines to remove content that has been found defamatory in a lawsuit between the subject and the poster. Section 230’s co-author, former congressman Chris Cox, does not think that Section 230 should cover such cases. I agree.
Will tech companies have to rely on other defenses like the First Amendment to protect themselves if Section 230 gets worn down? What does the post-230 world look like?
Many platforms have relied on other defenses in recent years, particularly as judges have increasingly voiced their distaste for Section 230. These defenses often involve more complex judicial inquiries than Section 230, requiring the platforms to engage in costly depositions, document production and other discovery. A trillion-dollar company like Meta could easily afford such expenses (and gee whiz, Meta is calling for Section 230 reforms). But a start-up that wants to be the next Meta probably couldn’t. We don’t know exactly what level of First Amendment protections courts would provide to online platforms, as Section 230’s passage has made it mostly unnecessary for courts to determine that. But the First Amendment precedent, as applied to bookstores and other pre-internet defendants for decades, suggests that even without Section 230, plaintiffs would have a heavy burden to persuade courts to impose liability on platforms.
Even if Section 230 repeal would not change the ultimate outcome of litigation, it likely would reduce the avenues for user content. Platforms are businesses. Businesses have lawyers. Lawyers are risk averse. For instance, when Congress in 2018 amended Section 230 to exclude certain types of claims regarding sex trafficking and prostitution, Craigslist quickly eliminated its Personals section. Sex workers report that the reduction in online platforms for them has caused substantial harm.
A more general amendment to — or repeal of — Section 230 likely would cause many platforms to rethink whether to allow user content. If I run a local news website, I might not want to assume the risk of allowing user comments in a Section 230-free world.
On Capitol Hill this week were a passel of cryptocurrency execs, who appeared before the House Financial Services Committee to explain the fast-growing market that is now pegged at $2 trillion but lacks any meaningful government regulation. The market is maturing quickly, and so lawmakers are right to get ahead of problems that are sure to develop. Relying on current financial regulations may seem like the answer, but that would be just like thinking the internet was bound by old media or communications laws. And we know how that turned out.
“Because of their nascent stage of development and unique underlying technology, digital assets trade in markets that are fundamentally different from traditional financial markets,” Alesia Haas, chief financial officer of the cryptocurrency exchange Coinbase, told Congress. “As a result, existing regulatory regimes often do not accommodate this new technology.”
She’s right about that, and it’s important for regulators to thread the needle between too much and too little regulation. FTX’s chief executive, Sam Bankman-Fried, pointed to the kludgy financial services industry and was right to highlight the unbanked and those left out of the system: “The industry has the potential to improve a lot of people’s lives.”
Sure, but the industry is also rife with scams and fly-by-night operators. Remember Mt. Gox? Congress needs to keep pressing ahead on this.