Showing posts with label Technology. Show all posts

Saturday, November 21, 2020

Mind-Boggling Artificial Intelligence

It's Kashmir Hill, a technology reporter at the New York Times, who used to be a tech blogger back in the day. Once she commented on a blog post of mine thanking me for a link. I'm still blogging. She's at the Old Gray Lady. And I know. I know. It's a despicable left-wing partisan propaganda outlet, but even a broken clock is right twice a day. 

In any case, this is cool.


The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

“When the tech first appeared in 2014, it was bad — it looked like the Sims,” said Camille François, a disinformation researcher whose job is to analyze manipulation of social networks. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”

Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features. You can use your face to unlock your smartphone, or tell your photo software to sort through your thousands of pictures and show you only those of your child. Facial recognition programs are used by law enforcement to identify and arrest criminal suspects (and also by some activists to reveal the identities of police officers who cover their name tags in an attempt to remain anonymous). A company called Clearview AI scraped the web of billions of public photos — casually shared online by everyday users — to create an app capable of recognizing a stranger from just one photo. The technology promises superpowers: the ability to organize and process the world in a way that wasn’t possible before...

Keep reading.

 

'I love podcasts too...'

A classic:


Monday, September 14, 2020

Between 30 and 50 Percent of West Virginia Students Lack Internet Access at Home

Fascinating. Kinda sad, but fascinating.

Not sure if the figures include private schools, but either way, it's unreal.

At WSJ, "Remote Schooling Out of Reach for Many Students in West Virginia Without Internet":
HARTS, W.Va.—Just before 9 a.m., Hollee Blair sat in her boyfriend’s Toyota Tacoma in the parking lot of Chapmanville Regional High School and waited for attendance to be taken.

With no broadband internet at home, Ms. Blair, a 17-year-old honors student who plans to study nursing after high school, used her boyfriend’s iPhone to connect to the school’s Wi-Fi for an hour-long orientation over Zoom.

“I’ll do whatever it takes to keep up,” said Ms. Blair, shielding her eyes so she could see the phone’s screen. “If it means doing this every day, I’ll do it. It’s worth it.”

Much of southern West Virginia had already been struggling with a drug epidemic and persistent poverty before the coronavirus pandemic took hold here. Now, as students return to school online, the region is coming up against another longstanding challenge: a lack of broadband internet access.

Nationwide, about 21 million people lack access to broadband, according to the Federal Communications Commission. When people with slow or unreliable internet connections are included, the number swells to 157 million, nearly half the U.S. population, according to a study by Microsoft Corp.

Providing service in sparsely populated areas is typically more costly and less profitable than in suburbs and cities. In Appalachia, the terrain has made it difficult to install and maintain the infrastructure necessary for broadband.

In West Virginia, between 30% and 50% of K-12 students don’t have internet access at home, according to the state Department of Education. By the start of school on Tuesday, the state had set up nearly 850 Wi-Fi hot spots at schools, libraries, National Guard armories and state parks for students.

So far, nine of West Virginia’s 55 counties, including Logan County, where Ms. Blair lives, are teaching all classes remotely after spikes in Covid-19 cases pushed them above a threshold for new daily cases set by the state.

But in the state’s other 46 counties, many students will still need to connect online as some districts choose a blended model that mixes in-person and remote classes. Counties may also be required to halt in-person classes if case levels rise too high.

Logan County has had 536 cases of Covid-19 and 36 related deaths.

This week, Gov. Jim Justice lifted a $50 million cap on how much the state can receive from a fund created by the FCC to bring high-speed broadband to rural areas. But it isn’t clear how much the state will ultimately receive and how long it will take providers to connect homes.

“You’ve just got to step up and meet this challenge,” the governor said.

Meanwhile, Sen. Joe Manchin of West Virginia, a Democrat, is seeking federal funding to set up broadband hot spots across the country to aid remote learning during the pandemic.

“This is a short-term fix to a long-term problem, but until we treat access to broadband like the need for electricity was treated in the 1930s, our students will fall behind,” he said.

In Logan County, which is blanketed by rugged mountains, nearly a quarter of residents live below the federal poverty line, according to census data. At Logan High School, the hallways and classrooms are empty, and teachers are troubleshooting tech problems as they begin broadcasting their classes to students from laptops.

Jennifer Stillwell, a history teacher, said some poorer students won’t have transportation to get to a hot spot. She is giving students the option to use a photo of themselves rather than live video, in case they don’t feel comfortable having their home appear on screen.

She was encouraged that after three days of classes, only five of the 105 students on her roster remained unreachable, and those five may lack internet access.

On Thursday, her AP history class got off to a smooth start, with 16 students logging in. “Let’s see if we can chat,” she said brightly, as she introduced herself from her neat classroom.

The Logan County school district is using a $375,000 grant from the state to get students connected. Patricia Lucas, the district’s superintendent, said as many as 40% of K-12 students in the county might not have internet at home...
Still more.


Saturday, February 23, 2019

The Air Force is Buying New F-15s

This is really cool.

At Popular Mechanics, "The U.S. Air Force Is Buying New F-15s After All: The F-15X will complement the F-22 and F-35 in tomorrow's aerial battlefields."


Sunday, February 3, 2019

Life Without the 'Big 5' Tech Giants

Some time ago I posted the link to "Social Media Self-Defense."

I have not yet implemented the plan, but I do think about it often.

And it turns out, an operational defense plan for social media should be just a start. To be truly free in this day and age, you've got to unplug from all the biggies: Amazon, Apple, Facebook, Google, and Microsoft.

Who does that? Probably no one, but Kash Hill is giving it a go. She's a warrior, dang!

See, "Life Without the Tech Giants," and "I Cut Google Out Of My Life. It Screwed Up Everything."

From the latter:


Long ago, Google made the mistake of adopting the motto, “Don’t be evil,” in a jab at competitors who exploited their users. Alphabet, Google’s parent company, has since demoted the phrase in its corporate code of conduct presumably because of how hard it is to live up to it.

Google is no stranger to scandals, but 2018 was a banner year. It covered up the potential data exposure of a half million people who probably forgot they were still using Google+. It got caught trying to build a censored search engine for China. Its own employees resigned to protest Google helping the Pentagon build artificial intelligence. Thousands more employees walked out over the company paying exorbitant exit packages to executives accused of sexual misconduct. And privacy critics decried Google’s insatiable appetite for data, from capturing location information in unexpected ways—a practice Google changed when exposed—to capturing credit card transactions—a practice Google has not changed and actually seems proud of.

I’m saying goodbye to all that this week. As part of an experiment to live without the tech giants, I’m cutting Google from my life both by abandoning its products and by preventing myself, technologically, from interacting with the company in any way. Engineer Dhruv Mehrotra built a virtual private network, or VPN, for me that prevents my phone, computers, and smart devices from communicating with the 8,699,648 IP addresses controlled by Google. This will cause some huge headaches for me: The company has created countless genuinely useful products, some that we use intentionally and some invisibly. The trade-off? Google tracks us everywhere.

I’m apprehensive about entirely blocking Google from my life because of how dependent I am on its products; the company has basically taken up residence in my brain somewhere near the hippocampus.

Google Calendar tells me what I need to do any given day. Google Chrome is how I browse the internet on my computer. I use Gmail for both work and personal email. I turn to Google for every question and search. Google Docs is the home of my story drafts, my half-finished zombie novel, and a running tally of my finances. I use Google Maps to get just about everywhere.

So I am shocked when cutting Google out of my life takes just a few painful hours. Because I’m blocking Google with Dhruv’s VPN, I have to find replacements for all the useful services Google provides and without which my life would largely cease to function:

I migrate my browser bookmarks over to Firefox (made by Mozilla).
I change the default search engine on Firefox and my iPhone from Google—a privilege for which Google reportedly pays Apple up to $9 billion per year—to privacy-respecting DuckDuckGo, a search engine that also makes money off ads but doesn’t keep track of users’ searches.
I download Apple Maps and the Mapquest app to my phone. I hear Apple Maps is better than it used to be, and damn, Mapquest still lives! I don’t think I’ve used that since the 90s/a.k.a. the pre-smartphone age, back when I had to print directions for use in my car.
I switch to Apple’s calendar app.
I create new email addresses on Protonmail and Riseup.net (for work and personal email, respectively) and direct people to them via autoreplies in Gmail. Lifehack: The easiest way to get to inbox zero is to start a brand new inbox.

Going off Google doesn’t come naturally. In addition to mentally kicking myself every time I talk about “Googling” something, I have to make a “banned apps” folder on my iPhone, because otherwise, my fingers keep straying out of habit to Gmail, Google Maps, and Google Calendar—the three apps that, along with Instagram and Words With Friends, are in heaviest rotation in my life.

There’s no way I can delete my Gmail accounts completely as I did with Facebook. First off, it would be a huge security mistake; freeing up my email address for someone else to claim is just asking to be hacked. (Update: While other companies recycle email addresses, many Googlers have informed me since this piece came out that Google does not.) Secondly, I have too many documents, conversations, and contacts stored there. The infinite space offered by the tech giants has made us all digital hoarders.

And that hoarding can be a bonanza for tech giants, allowing Google, for example, to create a “Smart Reply” feature that crawls billions of emails on Gmail to predict how you’d like to respond to a friend’s missive. Yay?

This experiment is not just about boycotting Google products. I’m also preventing my devices from interacting with Google in invisible or background ways, and that makes for some big challenges.

One morning, I have a meeting downtown. I leave my apartment with enough time to get there via Uber, but when I open the app, it won’t work. Same thing with Lyft. It turns out they’re both dependent on Google Maps such that I can’t even enter my destination while blocking Google. I’m astounded. There are no taxis around, so I have to take the bus. I wind up late to the meeting.

Google is a behemoth when it comes to maps. According to various surveys, the vast majority of consumers—up to 77 percent—use Google Maps to navigate the world. And a vast majority of companies rely on Google Maps’ API to power the mapping on their websites and apps, according to data from iDataLabs, Stackshare, and BuiltWith.

Even Google’s mortal enemy, Yelp, uses it for mapping on its website (though it taps Apple maps for its iPhone app). Luther Lowe, head of policy and Google critic-in-chief at Yelp, says there aren’t great alternatives to Google when it comes to mapping, forcing the company to pay its foe for the service.

In its Maps API, Google has long offered a free or very cheap product, allowing it to achieve market dominance. Now it’s making a classic monopolistic move: Google announced last year that it’s raising its mapping prices significantly, leading developers across the web to freak out because Google Maps is “light years ahead of its competitors.”

I become intimately acquainted with Google Maps competitors’ drawbacks using Mapquest for navigation; it keeps steering me into terrible traffic during my commute (probably because it doesn’t have the real-time movements of millions of people being sent to it).

Google, like Amazon, is woven deeply into the infrastructure of online services and other companies’ offerings, which is frustrating to all the connected devices in my house.

“Your smart home pings Google at the same time every hour in order to determine whether or not it’s connected to the internet,” Dhruv tells me. “Which is funny to me because these devices’ engineers decided to determine connectivity to the entire internet based on the uptime of a single company. It’s a good metaphor for how far the internet has strayed from its original promise to decentralize control.”

In some cases, the Google block means apps won’t work at all, like Lyft and Uber, or Spotify, whose music is hosted in Google Cloud. The more frequent effect of the Google block though is that the internet itself slows down dramatically for me...
Keep reading.
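The Google block described above boils down to a membership test: is a given destination address inside any of the CIDR ranges on the blocklist? Here is a minimal sketch using Python's standard ipaddress module. The ranges below are illustrative stand-ins, not the actual 8,699,648-address list Dhruv compiled.

```python
import ipaddress

# Hypothetical stand-ins for the Google-controlled ranges the VPN
# refused to route; the real blocklist was far longer.
BLOCKED_RANGES = [
    ipaddress.ip_network("8.8.8.0/24"),
    ipaddress.ip_network("142.250.0.0/15"),
    ipaddress.ip_network("172.217.0.0/16"),
]

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside any blocked CIDR range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_RANGES)

print(is_blocked("142.250.72.14"))   # inside 142.250.0.0/15 -> True
print(is_blocked("151.101.1.140"))   # outside every range -> False
```

A real VPN would apply the same test per-packet at the routing layer; the point is that "blocking Google" is conceptually just this check, repeated millions of times.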

If you could create an anonymous identity as part of a social media defense plan, I suspect you could continue to use the Big Five relatively safely (anonymously), although you'd still be handing over all your data, which is valuable whether you're identified or not.

What a crazy world we live in!

Shoshana Zuboff, The Age of Surveillance Capitalism

*BUMPED.*

Released on Tuesday last week.

At Amazon, Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.



Sunday, January 27, 2019

Jacob Silverman, Terms of Service

At Amazon, Jacob Silverman, Terms of Service: Social Media and the Price of Constant Connection.



Fixing Facebook

I doubt it can be fixed, but this is interesting.

From Time's cover last week:


Monday, December 31, 2018

The Coming Age of Post-Truth Geopolitics

At Foreign Affairs, "Deepfakes and the New Disinformation War":

A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else’s account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.

Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.

Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.

DAWN OF THE DEEPFAKES

Deepfakes are the product of recent advances in a form of artificial intelligence known as “deep learning,” in which sets of algorithms called “neural networks” learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANs. In a GAN, one algorithm, the “generator,” creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the “discriminator,” tries to spot the artificial content (pick out the fake cat images). Since each algorithm is constantly training against the other, such pairings can lead to rapid improvement, allowing GANs to produce highly realistic yet fake audio and video content.

This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one’s ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.

Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people’s faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections.

Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public. Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms’ algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before.

These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics. Russia’s attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better...
Keep reading.
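The generator-versus-discriminator loop described in the excerpt can be illustrated with a toy NumPy sketch: a one-parameter "generator" learns to mimic a one-dimensional "real" distribution by training against a logistic-regression "discriminator." Everything here (the target distribution, the model sizes, the learning rate) is invented for illustration; real GANs use deep networks and image data, but the alternating push-and-pull is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "real" training data: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: one affine map from noise to a sample, fake = g_w*z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(d_w*x + d_b).
d_w, d_b = 0.1, 0.0

lr, batch = 0.05, 64
history = []
for step in range(2000):
    z = rng.normal(size=batch)
    fake = g_w * z + g_b
    real = real_batch(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # For binary cross-entropy, d(loss)/d(logit) = p - label.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * np.mean((p_real - 1.0) * real + p_fake * fake)
    d_b -= lr * np.mean((p_real - 1.0) + p_fake)

    # Generator step: push D(fake) toward 1 (label 1 from G's viewpoint).
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (p_fake - 1.0) * d_w   # chain rule through the logit
    g_w -= lr * np.mean(grad_fake * z)
    g_b -= lr * np.mean(grad_fake)
    history.append(g_b)

# The generator's offset should settle near the real mean of 4.
print(round(float(np.mean(history[-500:])), 2))
```

The "rapid improvement" the article describes shows up even here: the generator starts out producing samples centered at 0, and the adversarial pressure alone drags it toward the real distribution, with no one ever telling it what the target was.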


Monday, November 26, 2018

Cyber Monday

I'm a bit late posting my links, but it's a workday for me, and I've been trying to post some regular content as well.

More promotions later. Meanwhile, you can check out all the Cyber Monday sales at my Amazon associate's links.

See, Cyber Monday Deals.

And especially, LG Gram Thin and Light Laptop - 15.6" Full HD IPS Display, Intel Core i7 (8th Gen), 16GB RAM, 256GB SSD, 2.4lbs, (15Z975-U.AAS7U1).

Thanks for your support!

Friday, November 23, 2018

Perpetual War Over Political Culture

The big question is who's to blame?

Both sides?

I don't think so, personally. It was back in 1992 that Pat Buchanan declared America had entered a state of cultural warfare to determine the "soul" of the country.

What's different today is the breakdown of the old media hierarchy and the institutionalization of the demonizing, destructive, anti-American ideologies of the campus left inside America's top ranks of cultural, educational, and economic power.

But see Politico:



Wednesday, October 24, 2018

Elon Musk's Secret Tunnel

At the Los Angeles Times, "Plans offer a peek into Elon Musk's tunnel in Hawthorne, including an elevator hidden in a garage":

When Elon Musk’s tunneling firm began digging in Hawthorne last year, the construction site next to SpaceX headquarters was barely noticeable, sandwiched between a home improvement store and a parking garage.

The engineers at work on the Boring Co.’s tunnel, which now runs for a mile beneath city streets, have signaled that they intend to finish as they started: away from the public eye.

But documents submitted to city officials by Musk’s tunneling company offer a sneak peek at the company’s plans.

The most futuristic is a blueprint for a steel elevator shaft inside the garage of a shabby house near the Hawthorne Municipal Airport that would connect with the test tunnel 40 feet below.

“We’ll be completely contained within the garage,” Boring Co. employee Brett Horton told officials last month when the project received approval from the Hawthorne City Council. “You won’t be able to see or hear it.”

The structure would serve as a covert place for engineers to practice raising and lowering vehicles into the test tunnel, a key element of the transportation system known as “Loop.”

Musk envisions a transportation network where commuters in cars, on foot or on bicycles can board platforms the size of parking spaces, dotted across the city. The platforms, called “skates,” would sink through elevator shafts, merge seamlessly into the tunnel network and whisk riders to their destinations at speeds of up to 130 mph.

Musk said Sunday that the company’s first tunnel will open in December, with free rides for the public. If that happens, it will be the first chance many residents have to learn anything about the tunnel, where engineers have been honing their digging skills for a year.

The tunnel has been built quietly, with comparatively little noise, congestion — or public communication. Milestones have mostly popped up through Musk’s Twitter feed, sparking excitement from traffic-weary Angelenos and skepticism from locals about the project’s feasibility.

Transportation planners and officials say they worry about the system’s effect on traffic and whether Musk can deliver on his ambitious visions. As one example, critics say, the tunnel in Hawthorne is shorter than the two-mile route that city officials approved last year.

The route was truncated because a property “became available” where the company could extricate a piece of digging equipment known as a cutter head that otherwise would have been abandoned underground, company representative Jane Labanowski said at City Hall last month...
More.

Saturday, October 20, 2018

#DeleteFacebook

Well, I rarely use it, so deleting my account won't affect me much either way. I guess I'd lose a few valuable connections to people. Maybe I could message my important contacts, get their cellphone numbers, and then delete the monstrosity.

I hadn't really thought of it until now, and that sounds pretty good actually, heh.

In any case, Jacob Weisberg reviews two books that I've promoted here, Siva Vaidhyanathan's Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, and Jaron Lanier's Ten Arguments for Deleting Your Social Media Accounts Right Now.

At the New York Review, "The Autocracy App":


Facebook is a company that has lost control—not of its business, which has suffered remarkably little from its series of unfortunate events since the 2016 election, but of its consequences. Its old slogan, “Move fast and break things,” was changed a few years ago to the less memorable “Move fast with stable infra.” Around the world, however, Facebook continues to break many things indeed.

In Myanmar, hatred whipped up on Facebook Messenger has driven ethnic cleansing of the Rohingya. In India, false child abduction rumors on Facebook’s WhatsApp service have incited mobs to lynch innocent victims. In the Philippines, Turkey, and other receding democracies, gangs of “patriotic trolls” use Facebook to spread disinformation and terrorize opponents. And in the United States, the platform’s advertising tools remain conduits for subterranean propaganda.

Mark Zuckerberg now spends much of his time apologizing for data breaches, privacy violations, and the manipulation of Facebook users by Russian spies. This is not how it was supposed to be. A decade ago, Zuckerberg and the company’s chief operating officer, Sheryl Sandberg, championed Facebook as an agent of free expression, protest, and positive political change. To drive progress, Zuckerberg always argued, societies would have to get over their hang-ups about privacy, which he described as a dated concept and no longer the social norm. “If people share more, the world will become more open and connected,” he wrote in a 2010 Washington Post Op-Ed. This view served Facebook’s business model, which is based on users passively delivering personal data. That data is used to target advertising to them based on their interests, habits, and so forth. To increase its revenue, more than 98 percent of which comes from advertising, Facebook needs more users to spend more time on its site and surrender more information about themselves.

The import of a business model driven by addiction and surveillance became clearer in March, when The Observer of London and The New York Times jointly revealed that the political consulting firm Cambridge Analytica had obtained information about 50 million Facebook users in order to develop psychological profiles. That number has since risen to 87 million. Yet Zuckerberg and his company’s leadership seem incapable of imagining that their relentless pursuit of “openness and connection” has been socially destructive. With each apology, Zuckerberg’s blundering seems less like naiveté and more like malignant obliviousness. In an interview in July, he contended that sites denying the Holocaust didn’t contravene the company’s policies against hate speech because Holocaust denial might amount to good faith error. “There are things that different people get wrong,” he said. “I don’t think that they’re intentionally getting it wrong.” He had to apologize, again.

It’s not just external critics who see something fundamentally amiss at the company. People central to Facebook’s history have lately been expressing remorse over their contributions and warning others to keep their children away from it. Sean Parker, the company’s first president, acknowledged last year that Facebook was designed to cultivate addiction. He explained that the “like” button and other features had been created in response to the question, “How do we consume as much of your time and conscious attention as possible?” Chamath Palihapitiya, a crucial figure in driving Facebook’s growth, said he feels “tremendous guilt” over his involvement in developing “tools that are ripping apart the social fabric of how society works.” Roger McNamee, an early investor and mentor to Zuckerberg, has become a full-time crusader for restraining a platform that he calls “tailor-made for abuse by bad actors.”

Perhaps even more damning are the recent actions of Brian Acton and Jan Koum, the founders of WhatsApp. Facebook bought their five-year-old company for $22 billion in 2014, when it had only fifty-five employees. Acton resigned in September 2017. Koum, the only Facebook executive other than Zuckerberg and Sandberg to sit on the company’s board, quit at the end of April. By leaving before November 2018, the WhatsApp founders walked away from $1.3 billion, according to The Wall Street Journal. When he announced his departure, Koum said that he was “taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate Frisbee.”

However badly he felt about neglecting his Porsches, Koum was thoroughly fed up with Facebook. He and Acton are strong advocates of user privacy. One of the goals of WhatsApp, they said, was “knowing as little about you as possible.” They also didn’t want advertising on WhatsApp, which was supported by a 99-cent annual fee when Facebook bought it. From the start, the pair found themselves in conflict with Zuckerberg and Sandberg over Facebook’s business model of mining user data to power targeted advertising. (In late September, the cofounders of Instagram also announced their departure from Facebook, reportedly over issues of autonomy.)

At the time of the acquisition of WhatsApp, Zuckerberg had assured Acton and Koum that he wouldn’t share its user data with other applications. Facebook told the European Commission, which approved the merger, that it had no way to match Facebook profiles with WhatsApp user IDs. Then, simply by matching phone numbers, it did just that. Pooling the data let Facebook recommend that WhatsApp users’ contacts become their Facebook friends. It also allowed it to monetize WhatsApp users by enabling advertisers to target them on Facebook. In 2017 the European Commission fined Facebook $122 million for its “misleading” statements about the takeover.

Acton has been less discreet than Koum about his feelings. Upon leaving Facebook, he donated $50 million to the Signal Foundation, which he now chairs. That organization supports Signal, a fully encrypted messaging app that competes with WhatsApp. Following the Cambridge Analytica revelations, he tweeted, “It is time. #deletefacebook.”

The growing consensus is that Facebook’s power needs checking. Fewer agree on what its greatest harms are—and still fewer on what to do about them. When Mark Zuckerberg was summoned by Congress in April, the toughest questioning came from House Republicans convinced that Facebook was censoring conservatives, in particular two African-American sisters in North Carolina who make pro-Trump videos under the name “Diamond and Silk.” Facebook’s policy team charged the two with promulgating content “unsafe to the community” and indicated that it would restrict it. Facebook subsequently said the complaint was sent in error but has never explained how that happened, or how it decides that some opinions are “unsafe.”

Democrats were naturally more incensed about the twin issues of Russian interference in the 2016 election and the abuse of Facebook data by Cambridge Analytica in its work for Trump’s presidential campaign.
Keep reading.
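The profile matching described in the WhatsApp passage above (pairing Facebook accounts with WhatsApp accounts by phone number) is, mechanically, just a join on a shared key. A toy sketch with invented records; the real matching happened across billions of accounts:

```python
# Invented example records, keyed by phone number.
facebook_profiles = {
    "+15550001": "fb_user_a",
    "+15550002": "fb_user_b",
}
whatsapp_accounts = {
    "+15550002": "wa_account_x",
    "+15550003": "wa_account_y",
}

# Matching profiles is an intersection on the phone-number key:
# any number present in both datasets links the two accounts.
matched = {
    phone: (facebook_profiles[phone], whatsapp_accounts[phone])
    for phone in facebook_profiles.keys() & whatsapp_accounts.keys()
}
print(matched)  # {'+15550002': ('fb_user_b', 'wa_account_x')}
```

That such a trivial operation sufficed is the point: Facebook had told the European Commission it had "no way" to match the two user bases.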