In October 2017, as Congress probed Russia’s suspected manipulation of Twitter's platform, the company pledged to establish, within weeks, an “industry-leading transparency center” that would provide visibility into political and issues-based ads. More than two months later, the center is nowhere to be found.
Twitter announced the center as it was preparing to testify before Congress following revelations that Kremlin-linked trolls used its platform in an attempt to sow discord in American politics. The initiative would offer important visibility into what ads run on Twitter, and when, regardless of the ads’ intended targets.
Twitter told BuzzFeed News that the creation of the transparency center is still in progress. But a spokesperson declined to comment on when it might debut and why it’s been delayed.
The delayed transparency center — an opportunity for Twitter to show Congress it can regulate itself — is yet another hiccup in the company’s uneven response to Washington's concerns about foreign manipulation of its platform. Twitter’s September presentation to the Senate Intelligence Committee was so lacking in substance that Senator Mark Warner, the committee’s vice chair, said it "either shows an unwillingness to take this threat seriously or a complete lack of a fulsome effort." On Tuesday, Twitter missed a deadline to respond to questions from the Senate Intel Committee’s November hearing. (Google and Facebook, which are also under congressional scrutiny, submitted their responses on time.) And although Twitter banned Russian television network RT from advertising on its platform in October, it did so after offering the network 15% of its total US elections ad space ahead of the November 2016 vote.
"We are continuing to work closely with committee investigators to provide detailed, thorough answers to their questions,” a Twitter spokesperson told BuzzFeed News in response to the missed deadline. “As our review is ongoing, we want to ensure we are providing Congress with the most complete, accurate answers possible. We look forward to finalizing our responses soon."
Facebook promised similar, though less robust, transparency measures ahead of the November hearings, but unlike Twitter, its effort is proceeding on schedule. Facebook is currently testing the initiative, which lets you see all ads a page is running when you visit it. (Twitter’s transparency center would take this a step further by showing all ads running on Twitter, how long they've been running, and some targeting information in a central place.) Facebook’s test is live in Canada, and it plans to roll it out in the US later this year, ahead of the 2018 midterm elections.
The added transparency from both companies will be critical in exposing so-called “dark ads,” which can be seen only by the people they target. Since these ads are not published publicly inside feeds, they can be used to show divisive messaging to micro-targeted groups. Putting these ads in plain sight could hamper the efforts of bad actors hoping to meddle with upcoming elections.
When Twitter announced its transparency center in October it said it would “make these updates first in the U.S., and then roll them out globally." It also pledged to "share our progress here with all of you along the way.” But so far, Twitter isn’t being very transparent about its still-forthcoming key transparency initiative. It has yet to share any of its promised updates.
Have you noticed posts from accounts you don’t follow in your feed? Here is why it’s happening.
If you are starting to see posts from people and accounts you don't follow woven into your Instagram feed, know that it's a deliberate decision by the company — and one that some speculate is meant to create more ad inventory. Most likely, you'll only see more of this content going forward.
According to Ad Age, the videos and photos you'll start to see on your feed are most likely ones followed and engaged with by people you do follow.
In other words: Your Instagram "explore" tab will slowly be integrated into your personal feed.
It's been speculated that this is all meant to accommodate more advertising demand and increasing ad limits. It's a feature and strategy similar to Facebook's: the more time users spend on the app, and the more content they're exposed to, the more ads they'll see.
Gabe Madway of Instagram said the app will only show you accounts and posts of people you do not follow after you've seen all the posts from those you follow.
"After you've viewed all new posts in your feed, we will suggest some additional posts you might like,” he said.
Madway said the feature will not show any ads. Users can also snooze, or hide, posts from accounts they do not want to see, but they cannot opt out of the feature altogether, he said.
Edward Snowden, the former US National Security Agency contractor and whistleblower, has come out in support of the Indian journalist who exposed a huge breach in India's controversial biometric ID program, Aadhaar, and said the people responsible should be arrested.
Rachna Khaira, of Chandigarh-based Indian newspaper the Tribune, revealed she was able to buy access to the personal information of nearly 1.2 billion people in the Aadhaar database for just $8. She was then named in a criminal complaint by the Unique Identification Authority of India (UIDAI), the government body responsible for the data.
The database contains personal information collected by the Indian government since 2010, including names, ages, addresses, cell phone numbers, and iris scans. Critics have long argued that it compromises privacy and could lead to mass surveillance.
"I am happy that the concerns relating to the Aadhaar program are being highlighted internationally," Khaira told BuzzFeed News. "Mr. Snowden's tweet validates my report, and I thank him for highlighting these concerns."
Last week Snowden quoted a BuzzFeed News story about the database breach in a tweet.
A UIDAI spokesperson did not respond to BuzzFeed News' request for comment.
The legal complaint against Khaira sparked outrage in India, with critics saying the government was attempting to muzzle the press. Local journalists in Jalandhar, Khaira's home city, held a protest march on Monday.
Last year the country slipped three places from the previous year on the World Press Freedom Index, where it now ranks 136th.
The UIDAI reacted to the backlash by saying the agency respected the freedom of the press.
Ravi Shankar Prasad, India’s information and technology minister, doubled down on the government’s commitment to the freedom of the press.
For years, the controversial right-wing activist Charles C. Johnson has threatened to sue Twitter, which banned him in 2015.
Now, following a BuzzFeed News report that revealed the internal debate behind Twitter’s 2015 decision to bar him from its service, Johnson is putting his money where his mouth has long been.
In a lawsuit filed in California Superior Court in Fresno on Monday, Johnson’s attorney Robert E. Barnes claims that the microblogging service banned his client for his political views, violating his right to free speech and breaking its contract with him in the process. In addition, the suit seeks millions of dollars of relief for alleged damage to Johnson’s media businesses. It was the second lawsuit filed today by a conservative activist against a tech superpower, following ex-Googler James Damore's suit against his former employer.
“This is going to be a very serious case over the freedom of the internet,” Johnson told BuzzFeed News. “And whether people have the right to say what they mean and mean what they say.”
Informed of the suit before it was filed, Twitter declined to comment on pending litigation.
The Johnson suit comes at a time when Americans across the political spectrum have become skeptical of the amount of power held by Silicon Valley giants and suspicious of their motives. It joins several other lawsuits by conservative parties against big tech platforms that claim tech companies like Twitter and Facebook discriminate against right-wing users. And while Johnson has a history of unsuccessful legal action, this suit hopes to test whether the various laws that have historically protected internet publishers are strong enough to withstand this new public scrutiny.
Twitter permanently suspended Johnson — a former Breitbart reporter who owns the crowdsourced investigations site WeSearchr — in May 2015 after he asked for donations to help "take out" civil rights activist DeRay Mckesson. While he claimed the tweet was taken out of context, prior to his suspension Johnson had drawn the company’s ire for his incendiary tweets — among them false rumors that President Obama was gay. In 2014 he was temporarily suspended from Twitter for posting photos and the address of an individual he claimed had been exposed to the Ebola virus. (After his suspension — as BuzzFeed News reported in December — Johnson began shorting Twitter's stock and attempting to enlist a range of conservative figures to help him sue the company. He is also partially crowdfunding his legal fees in the current suit.)
While the complaint takes issue with Twitter’s vague rules and inability to “convey a sufficiently definite warning” to Johnson for his behavior, the suit alleges that emails published by BuzzFeed News prove that the ban was “a political hit job on a politically disfavored individual.” In one January 2016 email to executives including current CEO Jack Dorsey, Tina Bhatnagar, Twitter’s VP of user services, suggested that Johnson’s suspension was a judgment call rather than a strict interpretation of company rules. “We perma suspended Chuck Johnson even though it wasn't direct violent threats. It was just a call that the policy team made,” she wrote.
In a subsequent email, Twitter’s general counsel, Vijaya Gadde, referenced a May 25, 2015, email from Costolo, which suggested the decision to make Johnson’s suspension permanent was Costolo’s. "As for Chuck Johnson - [former Twitter CEO] Dick [Costolo] made that decision," Gadde wrote. Johnson’s complaint quotes the 2015 email from Costolo, in which the former CEO warns senior staff, “I don't want to find out we unsuspended this Chuck Johnson troll later on. That account is permanently suspended and nobody for no reason may reactivate it. Period. The press is reporting it as temporarily suspended. It is not temporarily suspended it is permanently suspended. I'm not sure why they're mistakenly reporting it as temporarily suspended but that's not the case here...don't let anybody unsuspend it.”
Costolo’s email, according to the complaint, “confirms that Twitter’s decision to permanently ban Johnson was not based on a perceived rule violation, but bias against Johnson.”
But even if Johnson’s attorneys are able to show that Twitter broke its contract with Johnson by banning him arbitrarily, the suit faces long odds. According to Eric Goldman, director of the Santa Clara University School of Law’s High Tech Law Institute, Twitter possesses a range of legal protections when it decides to ban a user.
“Twitter can choose to terminate anyone’s account at any time without repercussion,” Goldman told BuzzFeed News. “It has a categorical right to block whoever they choose.”
As a publisher, Twitter is protected by the First Amendment. And as an internet service provider, Twitter is protected by Section 230 of the Communications Decency Act — often referred to as the most influential law in the development of the modern internet — which has historically immunized providers’ decisions to terminate accounts.
The protections of Section 230 depend on the “good faith” of the provider, and Johnson’s suit argues that the emails reported by BuzzFeed News demonstrate the lack thereof. And yet Johnson’s own reputation for bad faith may undercut that argument.
“It’s clear Twitter blocked him because they consider him a troll,” Goldman said.
In addition, Johnson’s suit argues that Twitter “performs an exclusively and traditionally public function,” and so it shouldn’t have the right to ban him for speech it doesn’t like. According to Goldman, such arguments have historically been unsuccessful in the courts, in part because judges are loath to set a potentially sweeping new precedent. Still, it’s an area where growing public resentment of big tech’s monopolistic power could have influence over a judge or a jury.
“We can’t ignore that there is such skepticism towards internet companies’ consolidation of power,” Goldman said. “The prevailing environment makes it dangerous for them.”
And for Johnson, who seems to want to embarrass Twitter as much as he wants to make a broader statement about the nature of internet platforms and the way they discriminate against conservatives, simply getting the suit past an initial motion to dismiss — and into discovery — might represent a victory.
“You can lose a lawsuit and still win the argument,” Johnson said.
Facebook's standalone concierge bot M will soon be no more.
On Jan. 19, Facebook is sunsetting the initial version of M, which has been available in closed beta since fall 2015. M's context-based suggestions will live on inside Messenger conversations, but the original concept for a personal, AI-powered assistant that can perform actions on your behalf appears finished.
The company says M — which was able to make restaurant reservations, book plane tickets and, for a short time, draw pictures — has largely served its purpose.
"We launched this project to learn what people needed and expected of an assistant, and we learned a lot," Facebook said in a statement to BuzzFeed News. "We're taking these useful insights to power other AI projects at Facebook. We continue to be very pleased with the performance of M suggestions in Messenger, powered by our learnings from this experiment.”
This is something of a change in course as Facebook clearly hoped to roll M out broadly when the feature first went into beta. “I think we have a good chance [at scaling], otherwise we wouldn’t be doing it,” Facebook Messenger head David Marcus told BuzzFeed News in November 2015.
Now, a few years later, the company seems content with a more dialed-back approach: "M suggestions," an M-trained feature that hops into Messenger conversations and suggests certain actions based on context. It prods you to share your location when someone asks "Where are you?" or offers simple pre-written replies within conversations, among other things. But while M could perform tasks (like arguing with your cable company), M suggestions is simply a contextual recommendation feature.
M suggestions in action.
M's AI system was supposed to learn from interactions with humans. When people interacted with the bot, the system would provide a response which was then reviewed by a contractor. If the AI-generated message made sense, the contractor would pass it along to the person conversing with M, indicating to the AI it was a good response. When the message didn't make sense, the contractor would write a new message and send that one, indicating to the AI that there was a better way to answer the query.
Facebook believed that with enough experience and tweaking, it might be possible to someday roll M out to its broader user base. But ultimately, whatever happened on the backend gave Facebook reason to reconsider. The company invested serious resources in the project — M and the contractors behind it spent more than two years responding to queries 24/7. Facebook says it will offer new roles to those contractors now that the project is winding down and it has plenty of openings; it's in the process of adding 4,000 content moderators to its current staff of 3,500.
So M, the concierge, is dead. But for a moment, it was a delightful, occasionally eerie peek into a future that's perhaps a bit further away than its creators hoped.
When M debuted, it was mind-blowing:
It drew some incredible pictures (or, more accurately, the humans behind it drew them):
It sent parrots to a rival news organization:
And it deftly parried human attempts to break its will:
In April 2016, when Facebook began talking about opening M's technology to developers, rather than continuing it as an internal project, the writing was on the wall: M was on the way out. It lasted just another year and a half.
Looking back, perhaps M was a relic of a more optimistic technological moment. Back in 2015, Facebook could pour resources into artificial intelligence moonshots without a second thought. Indeed, in 2016 Facebook CEO Mark Zuckerberg even made his annual challenge entirely about building his own, personal AI (he succeeded).
But now that Facebook's platform has been undermined by fake news, graphic violent content, and a Kremlin-linked campaign to sow chaos ahead of the US presidential election, there are more pressing issues. In 2018, Zuckerberg's challenge is to fix Facebook's most serious problems, such as abuse, hate, and foreign interference.
A conversation with M from April 2016.
Fire and Fury, the controversial Trump White House tell-all by Michael Wolff, may very well be the first book to achieve best-seller status by virtue of the viral Twitter screenshot.
Since the moment the first quotes from the book leaked online via the Guardian, social media has been flooded by big blocks of Wolff’s prose, excerpted from advance copies of the book and magazine excerpts. For days now, the hunks of text, each one a different incendiary quote or observation from the tome, have been screenshot and breathlessly shared by journalists, pundits, and activists on either side of the aisle.
The result is a political Rorschach test of sorts. For those on the left, Wolff’s observations are vindication: reported proof of any number of long-suspected but unproven theories. Bannon thinks talks with Russia were treasonous! The president’s own staff think he’s mentally unstable! Trump never wanted to be president! His wife hates him! The commander in chief spends his evenings eating cheeseburgers in bed and screaming at the television! Similarly, Trump’s most ardent online defenders have taken to sharing chunks of the book in an effort to discredit its claims. Liberal fanfiction! Of course the president knows who John Boehner is! What about Hillary's health?!
Adding to the drama are questions about the author himself, a controversial media gadfly whose dubious reputation includes doubts about whether his reporting can be trusted. Errors spotted by journalists and pundits of all political persuasions have already cast doubt on what’s true in Fire and Fury and what has been inferred or even imagined by Wolff cobbling together unconfirmed anecdotes and rumored speculation.
All of which makes Wolff’s book the perfect chronicle for 2018’s fractured and toxic media ecosystem. More than that, Fire and Fury is, in many ways, the first real book of the post-truth hyperpartisan social media era: an incendiary piece of factually debatable content that’s perfectly engineered for virality and, depending on your side, a confirmation of every politically motivated suspicion.
The most obvious online comparison for Wolff’s book might be hyperpartisan Facebook pages, which became infamous during and after the 2016 election for, as the New York Times’ John Herrman wrote, “cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture.” Like these pages, which are painstakingly optimized to appeal to partisan emotions (and share widely), Fire and Fury blends honest reporting — real access and real quotes — with gossip, rumor, and, most important, a feeling: a bone-deep suspicion fueled by endless reporting and coverage whose confirmation is often just out of reach. Some of the screenshots are even reminiscent of 2016's more conspiratorial posts (if you're eagerly tweeting screenshots and claiming with certainty that Trump has dementia, are you that different from your uncle sharing fake Facebook news of a Clinton health crisis?). To those who’ve long suspected the Trump White House is even more dysfunctional than has been reported, Wolff’s book does more than just scratch the itch — it’s not just true, it’s truer than true.
You can see this on Twitter, where journalists are grappling publicly with Wolff’s reporting and trying to make sense of what to believe. Earlier this week political columnist Ana Marie Cox mused, “My guess about accuracy of Wolff’s book: It’s based on *something.* I believe with my whole heart Trump is in bed by 6:30, randomly calling people he thinks are his friends and gossiping about other people he thinks are his friends. They are the sources. They are not his friends.” Similarly, in a subsequent thread, Cox and writer Mary H.K. Choi grappled with the central issue of the contested claims in the book: their total plausibility. “The three screens plus cheeburger is SO plausible,” Choi tweeted. To which Cox replied, “I can make myself sick thinking about it, it sounds so true.”
To anyone following — and trusting — the palace intrigue reporting coming from the White House in 2017, the book sounds so true. Like a good post from a hyperpartisan Facebook page or a viral Twitter pundit, Fire and Fury gives just enough credible evidence to support some of its astonishing claims before moving into the territory of wishful thinking; it muddies the waters just enough to make them virtually impossible to debunk or fact-check. As the Times’ Maggie Haberman — whose reporting from inside Trump’s inner circle has helped add plausibility to even the most salacious claims in Wolff’s book — remarked on Twitter, “even if some things are inaccurate/flat-out false, there’s enough notionally accurate that people have difficulty knocking it down.”
Thanks to a deeply fractured media environment in which pro- and anti-Trumpers each live in parallel universes of information, Fire and Fury works on all the same levels for the far right. Just as the book fulfills many a liberal fantasy about the Trump administration, its publication is in many ways a justification of the pro-Trump media’s long-standing criticisms of the mainstream media. While the left got the reporting it craved, the right got what seemed to them like confirmation that mainstream reporting is biased, deceitfully obtained, salacious, and loose with the truth but hidden behind the veneer of rigorous reporting. Previous claims — from mainstream media outlets, no less — that Wolff “acknowledges that conventional reporting isn’t his bag” are bandied about on Twitter as proof that the author has no scruples. Sloppy factual errors are pointed out in support of the argument that none of the book’s claims can be trusted. Trump acolytes mentioned in the book have claimed — in viral tweets of their own — that the book is so much more fake news — I was there; it didn't happen that way. Each denial becomes its own viral piece of evidence of a corrupt and reckless mainstream media.
Since portions of it first began appearing online, Fire and Fury has sucked all the air out of a very mercurial news cycle. In a matter of days, it's prompted extensive discussion across all possible media; it's caused the president to viciously disavow his former chief strategist and call for the book to be banned; it's reignited a new narrative around Trump’s mental health and its effects on his presidency. And yet, despite all the upheaval, nobody seems any closer to knowing what in the book is true and what’s not. But that’s not stopping anyone from sharing its revelations.
Which is why Fire and Fury might be the perfect chronicle for not just the Trump era, but the social media era entire. For Wolff’s book, the truth seems almost a secondary concern to what really matters: engagement. In a hyperpartisan online age, Wolff seems to have understood for years what Facebook’s hyperpartisan page operators found out in 2016. “The point,” Herrman wrote about those pages for the Times, “is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary."
Now, in the post-truth Facebook era, it appears the same can be true for books like Wolff’s as well. On Wednesday — as the leaked excerpts rolled out across the internet — Fire and Fury went from No. 48,449 on Amazon's best-selling books list to No. 1.
Rearz brand diapers for adults.
A company that makes diapers for the adult baby/diaper lover fetish community (known as ABDL) gave up on its attempt to trademark the term “ABDL” on Thursday after message boards for the community exploded in anger last week.
Rearz, a Canada-based supplier of adult diapers with cutesy patterns and other adult baby accessories, like pacifiers, told BuzzFeed News, “we had no malicious or strange intentions in trying to register it, but obviously it struck a nerve with people. This is a community we love and serve, and we don't want to make people feel less valuable.”
Adult babies/diaper lovers are, as their name suggests, adults who enjoy role-playing as babies or simply wearing diapers. For some people, this is sexual; for others, it’s not. There’s a wide spectrum of ABDLs — some people want to role-play as babies; some are only interested in the diapers and not the rest of the age-play. Some want to wear the diapers, some want to just see others wearing them. There are teen ABDLs and older ones, and the community includes people of all gender identities and sexual orientations.
Rearz filed to trademark “ABDL” in October 2017, but it was only this week that someone in the community noticed. After the discovery, the /r/ABDL subreddit filled with angry threads about Rearz’s trademark filings. “This is scummy. Period,” wrote one user. In another thread, angry ABDL redditors planned to ruin Rearz’s standing on Facebook by rating it one star on its business page. On a forum for adult babies called ADISC.org, one adult baby said, “Rearz is now off my shopping list.” People even made memes about the scandal.
The owner of Rearz, a woman named Laurie who asked to use her first name only to protect her family’s privacy, says this is all a misunderstanding. After learning of the community outrage, Rearz wrote a now-deleted blog post on its website explaining that it filed for the trademark to help the company’s online sales:
“Over the last several years we have faced many challenges using the term ABDL in major online marketplaces. We have ads and accounts permanently blocked on Facebook, eBay, Kijiji, Google ads with payment processors and more simply from using the term.”
Laurie said that, starting about two years ago, eBay, which had previously accounted for about 20% of her company’s business, started taking down items because it classified them as “adult content.” Sometimes Rearz’s listings for items like adult diapers and adult-size baby clothes would be allowed to stay up, but certain keywords would get the stuff delisted. eBay does allow adult items to be sold, but its policy isn’t specific about ABDL items.
In the past, Rearz’s credit card processor for its website, as well as PayPal, blacklisted Rearz. Credit card processors have varying policies about whether they will take on clients that sell adult items or pornography. Facebook has also removed Rearz’s ads. Currently, Rearz sells directly from its website, and people can visit its brick-and-mortar location outside Toronto.
Laurie believes that if she trademarked the term “ABDL,” it would help keep her ads and eBay listings online. “In order to be able to push back to some of these larger corporations that are blacklisting it, we can say, ‘hey, this isn’t just a term; this is a trademark term we have,’” Laurie told BuzzFeed News. “Because it becomes your brand name, and they don’t blacklist brand names. If we don’t have it as a brand name, then we have nothing to stand on.”
Rearz also claims that it had no plans to enforce the trademark in a way that would hurt the community. Its blog post says, “we promise to always be good stewards of the mark and to use it to build and improve the community.”
Joshua Jarvis, a trademark lawyer at the firm Foley Hoag, points out that “[Rearz’s] purported willingness ‘to allow others to have free use of the ABDL trademark’ doesn’t seem consistent with trademark ownership, which as you may know requires that a trademark owner diligently police and enforce its trademark rights so as to avoid consumer confusion.”
Rearz also pointed out that it’s not the first to trademark the term — another seller, TheABDLShop.com, had already trademarked the term “The ABDL Shop” for use in selling apparel. But that trademark has some legal quirks. In its filing, TheABDLShop.com’s lawyer says that “ABDL” has no significance or meaning, even though it is a somewhat well-known term within the community. It’s possible that Rearz’s trademark application would have been rejected for the same reason: the term is well known among people interested in the world of diapers.
Several hours after BuzzFeed News spoke with Laurie about the ABDL community wrath, she told us that she had read through the message boards and decided to drop the trademark. “These are customers we care deeply about, and we don’t want to make them feel like we’re trying to take something away from them that they value.”
Facebook CEO Mark Zuckerberg on Thursday said his annual personal challenge this year will be tackling abuse, hate, foreign interference, and other major problems on Facebook. In essence: His 2018 will be dedicated to fixing a mess that might’ve been prevented with a little more foresight by the social giant. It’s a departure from previous challenges, many of which have been more lighthearted.
Last year was a doozy for Facebook. The company spent much of it on the defensive, explaining its bungled handling of fake news, graphic violent content, Russian interference in US elections on its platform, and its overall contribution to divisiveness and polarization around the globe. Facebook capped the year addressing concerns from early investors and employees that it was "destroying how society works."
"Facebook has a lot of work to do — whether it's protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent," Zuckerberg said in a post announcing the challenge. "My personal challenge for 2018 is to focus on fixing these important issues."
"We won't prevent all mistakes or abuse," Zuckerberg continued. "But we currently make too many errors enforcing our policies and preventing misuse of our tools. If we're successful this year then we'll end 2018 on a much better trajectory."
Zuckerberg's annual challenge seems a bit more serious than those that preceded it. In 2009 he committed to wearing a tie every day to remind himself to "get serious" about developing a sustainable business model, and other years he resolved to read books, build a personal AI, and run a lot.
Last year, after a turbulent political season, Zuckerberg pledged to visit the approximately 30 US states he hadn't been to yet, generating speculation he'd run for president (he's not).
The seriousness of Zuckerberg's challenge this year seems to show an increasing reckoning with Facebook's place in the world, an important if perhaps overdue step for the steward of an opinion-shaping social platform with more than 2 billion users.
Here's the full post.
Peter Thiel addresses the final night of the 2016 Republican National Convention at Quicken Loans Arena in Cleveland, Ohio.
Billionaire venture capitalist Peter Thiel wants to create a new conservative cable news network and his representatives have engaged the powerful Mercer family to help with funding, according to two sources familiar with the situation.
Thiel, a Facebook board member who secretly funded lawsuits to bring down Gawker Media, had originally explored a plan to create the network along with Roger Ailes, the late founder of Fox News, according to a soon-to-be published book by journalist Michael Wolff. But BuzzFeed News has learned that Thiel has continued looking into fashioning a Fox News competitor even after the May 2017 death of Ailes, according to the two sources familiar with the matter.
Wolff writes that on May 12 of last year, Ailes was scheduled to fly from Palm Beach, Florida, to New York to meet with Thiel to discuss the launch of a new cable news network that would compete with Fox News, which Ailes nurtured into a conservative powerhouse before he was ousted in the summer of 2016 in a sexual harassment scandal. Both men, Wolff writes in Fire and Fury: Inside the Trump White House, “worried that Trump could bring Trumpism down.”
The plan, according to Wolff, was that Thiel — a rare tech mogul who openly supported Trump — would pay for the network. Ailes would come along and bring loyal Fox News talent Sean Hannity and Bill O’Reilly, who was forced out at Fox last year following reports about settlements he had reached with multiple women.
But two days before the meeting, Ailes fell and hit his head. Ailes told his wife, Elizabeth, not to reschedule the meeting before he slipped into a coma, Wolff writes. He died a week later.
A spokesperson for Thiel declined to comment for this story. Elizabeth Ailes could not be immediately reached for comment.
A spokesperson for the Mercer family, which is led by hedge fund billionaire and Breitbart patron Robert Mercer, didn’t return a request for comment. Robert’s daughter, Rebekah Mercer, served on the president’s executive transition team with Thiel following the election.
One person close to Thiel said they were not aware of any plans to create a news network and were surprised when asked about the idea. That person noted that Thiel had said in private conversations that media companies were traditionally bad investments.
Thiel’s exploration, though, highlights a frequent point of discussion among conservative media executives — that there is room to flank Fox News from the right. Still, creating a cable network from scratch is an incredibly expensive endeavor that would require hundreds of millions of dollars and no guarantee of success (Rupert Murdoch, who controls Fox News' parent, 21st Century Fox, had to endure years of losses before Fox became the financial giant it is today).
For his part, Thiel has made few investments in media. He was a seed investor in the technology news site PandoDaily, and as of late last year, was trying to bid for the archives of Gawker.com, which is being shopped around by the estate of its bankrupt parent company. A person familiar with his thinking noted that the billionaire saw an opportunity to expand his influence after speaking at the RNC and serving on the transition team.
In just a matter of hours, Wolff faced pushback from some politicians and operatives featured in the book, including vehement denials from the White House and the president himself, who contended the book is the work of a disgruntled Steve Bannon, the former White House chief strategist. Ailes and Wolff were longtime friends (and Ailes was a generous source to Wolff). After the news executive’s death, Wolff reported for the Hollywood Reporter that he had spoken to Ailes a week prior about “the possibilities for a new conservative network.” Wolff noted that Ailes was bound by a noncompete agreement with Fox News, but that he said he was taking calls.
With reporting from Charlie Warzel and Joseph Bernstein.
Logan Paul attends The Thinning Meet & Greet during the 2016 New York Comic Con.
Nicholas Hunt / Getty Images
YouTube has a content crisis — again. On the heels of the company’s child exploitation problem, it finds itself facing a new wave of criticism after high-profile YouTuber Logan Paul posted a video of a dead body while filming in Aokigahara, Japan’s so-called “suicide forest.” The Logan Paul controversy is just the latest for a company that has increasingly had to contend with criticism over what kind of content is appropriate on its platform — and how it unevenly applies its own community guidelines.
YouTube, after a decade of being the pioneer of internet video, is at an inflection point as it struggles to control the vast stream of content flowing across its platform, balancing the need for moderation with an aversion toward censorship. In the past 12 months alone, it has been embroiled in controversies including anti-Semitic rhetoric found in videos of its biggest star, PewDiePie, an advertiser exodus over videos featuring hate speech or extremist content, and the disturbing and potentially child-exploitative content promoted by its algorithm. With every new misstep, it has alternately angered the creators it depends on for content, turned off advertisers, and confused users about how, exactly, it makes decisions about which videos can remain on its platform, what should be taken down, and what can be monetized. The Paul video is just the latest manifestation of that struggle.
In this case, the sensational video of a dead body, an apparent death by suicide, was live for more than 24 hours before being taken down by Paul himself after mounting public backlash. (Paul’s PR representative did not return a request for comment.) In that time span it was viewed more than 6.3 million times, according to New York magazine. The video fits within a larger pattern of controversial content and highlights how YouTube has created a system of incentives for creators on its platform to push boundaries.
"To what extent is YouTube overtly and tacitly encouraging individuals to push on the outrageousness factor?"
“Let’s be honest, this flare-up on Logan Paul is going to die out eventually,” Sarah Roberts, an assistant professor at UCLA who has been studying content moderation for seven years, told BuzzFeed News. “But there’s a bigger conversation to be had: To what extent is YouTube overtly and tacitly encouraging individuals to push on the outrageousness factor [in producing content]? Do they need that to keep the engagement going?”
YouTube on Tuesday acknowledged that the video did violate its policies for being a graphic video posted in a “shocking, sensational or disrespectful manner.” “If a video is graphic, it can only remain on the site when supported by appropriate educational or documentary information and in some cases it will be age gated,” a company spokesperson wrote in an emailed statement to BuzzFeed News.
But the Logan Paul incident highlights the consistently inconsistent application of YouTube’s content moderation rules. YouTube did not respond when asked if it had initially reviewed and approved the video to remain on the platform. According to a member of YouTube’s Trusted Flagger program, however, when the company manually reviewed Paul’s video, it decided that the video could remain online and didn’t need an age restriction.
YouTube also said when it removes a video for violating community guidelines, it applies a “strike” to a channel. It was unclear whether it did so with Logan Paul’s channel because Paul deleted his own video. If a channel accrues three strikes within a three-month period, YouTube shuts the channel down, per the company’s community guidelines. (YouTube did not respond to follow-up questions from BuzzFeed News asking whether it had indeed applied a strike to Paul’s account.) Notably, Paul had demonetized the video when he first posted it — meaning neither he nor YouTube earned any advertising revenue from it.
Paul’s video isn’t something AI-driven moderation could catch on its own, two experts with a focus on content moderation told BuzzFeed News. “What is obscene is having shown and been disrespectful about the body of a suicide victim,” said Tarleton Gillespie, who studies the impact of social media on public discourse at Microsoft Research. “This is the kind of contextual and ethical subtlety that automated tools are likely never to be able to approximate.”
What’s more, the decision that Logan Paul crossed the line is one that fundamentally involves an exercise of moral judgment, according to James Grimmelmann, a professor of law who studies social networks at Cornell. “You have to look at what's considered decent behavior in the user community YouTube has and wants to have,” Grimmelmann said. “You can't just turn a crank and have the algorithm figure out your morality for you.” In that sense, YouTube did ultimately make a value judgment on the Logan Paul video, based on the reaction of its own community, by publicly saying it violated its policies.
"You can't just turn a crank and have the algorithm figure out your morality for you."
Of course, that’s not how the company wants the public to view its role. YouTube has remained largely silent on the fiasco, while Paul has issued two apologies. “Firms have done such a good job of positioning themselves so that when something like this happens, they can wash their hands of it and say, ‘We’re just the dissemination channel,’” said Roberts. “But I would push on that and ask — what’s YouTube’s relationship with Logan Paul?”
Paul is a marquee YouTube star. He is a main character in The Thinning and Foursome, two ongoing YouTube Red Original series — high-quality exclusive shows that the company distributes on its paid subscription service, YouTube Red. Paul has had a YouTube channel since 2015, and in that time he’s accumulated 15 million subscribers and nearly 3 billion views. YouTube knows Paul’s irreverent style of video, and Paul knows what does well on the platform. “In this case, this guy is a top producer for YouTube,” said Roberts. “It becomes harder to argue the video wasn’t seen in-house.”
Compounding the problem is that YouTube itself likely has no way of knowing exactly what content is on its platform at all times — especially with users uploading nearly 600,000 hours of new video to YouTube daily. “The problem with current digital distribution platforms is the micro-targeting of content to users,” said Bart Selman, a Cornell University professor of artificial intelligence. “In fact, a well-tuned ranking algorithm will make sure that extreme content is only shown to people who will not feel offended — or may even welcome it — and won't be shown to others.” The bubble of micro-targeting is pierced when disturbing videos go viral and attract a lot of public attention and media scrutiny. But that’s the exception, not the norm.
And that leaves the public to exert pressure on YouTube. Still, exactly how YouTube’s complex system of human moderators, automated algorithms, policy enforcement, and revenue generation works to police and promote videos remains a black box — and that’s an issue. “Those key ingredients are under lock and key,” UCLA’s Roberts said. “One positive outcome of these incidents is that the public asks new questions of YouTube.”
“We are all beta testers and a focus group, including how content moderation is applied,” Roberts continued. Now, YouTube will likely throw even more resources at its content moderation problem and communicate its strategy even more loudly to the public — something it has already begun to do — in an effort to outpace any regulation that might come down on the platform.
Students at the University of Tehran Saturday.
Str / AFP / Getty Images
The Iranian government's anxiety over the widespread protests that have roiled the country for the past week may be best shown by one action: the government's decision to censor Telegram, the most popular foreign messaging app still being used by average Iranians.
The country already blocks many of the world's most popular internet services, preventing its citizens from directly accessing news, human rights, and LGBT sites. It blocks popular services like Google, YouTube, Facebook, and the main download page for the Tor Browser, which lets users easily circumvent such restrictions. Tor connections from Iran have nevertheless skyrocketed in the past year.
But Telegram had remained in widespread use until this past weekend, when the government's media monopoly, Islamic Republic of Iran Broadcasting, announced it had suspended Telegram and Instagram “to preserve the peace and security of citizens.”
In a tweet — an ironic medium, as Twitter is also banned in Iran — Azari Jahromi, the country's minister of information and communications technology, insisted Monday that the block was only temporary and that rumors to the contrary were rooted in social discontent and pessimism.
Telegram founder Pavel Durov, in a defense of his company's policies, blamed the Iranian government's action on Telegram's refusal to comply with the government's recent request to shut down channels used by peaceful protesters. Even with the blockage, “many are still reaching Telegram via VPN,” Markus Ra, a spokesperson for the company, told BuzzFeed News.
Durov has long expressed a fierce anti-censorship stance, though his company reportedly has previously complied with requests to take down porn bots and remove insulting stickers.
Telegram also touts its end-to-end encryption, in which messages are encrypted with keys stored only on users' devices, which in theory would prevent a government that intercepted Telegram messages from reading them. However, cryptography experts have resoundingly criticized Telegram's encryption as insufficiently vetted, and they say it's possible governments can decipher users' messages.
Instagram declined to respond to a BuzzFeed News question about whether it was actively aiding the Iranian government’s censorship.
How broadly Tehran is interfering with internet access remains uncertain.
Many users from within Iran have reported severe difficulty in accessing any foreign websites, though domestic sites seem unaffected, said Amir Rashidi, who works as an internet security researcher at the Center for Human Rights in Iran, and who has collected dozens of complaints about usage.
“As the protests grow in Iran, the internet is getting worse and worse and worse,” Rashidi told BuzzFeed News.
“Usually it’s the afternoon, when people go out and join the protests,” he said. “Even in working hours, internet is not really that normal. It’s better, but it’s not like other days, where there wasn’t anything. There’s a high disruption.”
On Sunday, President Trump tweeted that Iran “has now closed down the Internet so that peaceful demonstrators cannot communicate.”
However, researchers at Oracle’s Internet Intelligence Team, which tracks global internet outages, say that while Iran might selectively block any number of sites from being accessed directly, the only wholesale internet outage in Iran took place on Monday and lasted about 13 minutes.
“That’s kind of a normal day in Iran, having watched this for a long time, so I don’t know that it’s that significant,” Doug Madory, the team’s director of internet analysis, told BuzzFeed News. “Even the one yesterday, which is pretty big, could be coincidental. I don’t know that a 13-minute event defines what’s going on there.
“All we can basically say is what’s happening in Iran is not like the Egyptian shutdown in January 2011, where they just pulled the plug on everything. We’re not looking at that,” Madory said. “It may give Iranians cold comfort, in the end it may not be that big a distinction, but what Iran is doing is a bit more sophisticated.”
Mike Theiler / AFP / Getty Images
Twitter temporarily locked the account of prominent Trump supporter and former Milwaukee sheriff David Clarke after Clarke encouraged violence on the platform, breaking Twitter's rules.
The lock put Clarke's account in read-only mode, preventing him from basic activities such as tweeting and retweeting. He could, however, read tweets and send direct messages to people who followed him. CNN first reported the news.
Clarke regained full access to his account only after deleting three offending tweets, including one encouraging his followers to hit the "LYING LIB MEDIA" in the face.
"Punch them in the nose & MAKE THEM TASTE THEIR OWN BLOOD," Clarke tweeted Saturday, providing no evidence of lies. He attached an image of two wrestlers beating up another that was labeled "CNN." Clarke's and President Trump's faces were superimposed on the wrestlers delivering the beatdown.
A Twitter user reported the tweet, and the company then locked Clarke's account as a result, according to an email sent to the user. A Twitter spokesperson confirmed to BuzzFeed News that the email is authentic.
"We have reviewed the account you reported and have locked it because we found it to be in violation of the Twitter Rules: https://support.twitter.com/articles/18311," the email said. "If the account owner complies with our requested actions and stated policies, the account will be unlocked."
Clarke responded to the account lock Tuesday morning. "I will NOT be Intimidated into silence by LYING LIB MEDIA," he said.
@SheriffClarke / Twitter
Social media companies are currently under immense pressure to regulate themselves after their mishandling of the Kremlin-sponsored campaign that used their platforms to sow discord before and after the US presidential election. These companies have also struggled to come up with a solution for the fake news and abuse that flourishes on their products. As a result, they're becoming more interventionist. Facebook is in the process of hiring 4,000 moderators, for instance, and Twitter took the rare step of publishing its road map to tackle abuse.