
Thursday, November 22, 2018

Unfriend Facebook

Heh.

On Twitter:


Saturday, October 20, 2018

#DeleteFacebook

Well, I rarely use it, so deleting my account won't affect me much either way. I guess I'd lose a few valuable connections. Maybe I could message my important contacts, get their cellphone numbers, and then delete the monstrosity.

I hadn't really thought of it until now, and that sounds pretty good actually, heh.

In any case, Jacob Weisberg reviews two books that I've promoted here: Siva Vaidhyanathan's Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, and Jaron Lanier's Ten Arguments for Deleting Your Social Media Accounts Right Now.

At the New York Review, "The Autocracy App":


Facebook is a company that has lost control—not of its business, which has suffered remarkably little from its series of unfortunate events since the 2016 election, but of its consequences. Its old slogan, “Move fast and break things,” was changed a few years ago to the less memorable “Move fast with stable infra.” Around the world, however, Facebook continues to break many things indeed.

In Myanmar, hatred whipped up on Facebook Messenger has driven ethnic cleansing of the Rohingya. In India, false child abduction rumors on Facebook’s WhatsApp service have incited mobs to lynch innocent victims. In the Philippines, Turkey, and other receding democracies, gangs of “patriotic trolls” use Facebook to spread disinformation and terrorize opponents. And in the United States, the platform’s advertising tools remain conduits for subterranean propaganda.

Mark Zuckerberg now spends much of his time apologizing for data breaches, privacy violations, and the manipulation of Facebook users by Russian spies. This is not how it was supposed to be. A decade ago, Zuckerberg and the company’s chief operating officer, Sheryl Sandberg, championed Facebook as an agent of free expression, protest, and positive political change. To drive progress, Zuckerberg always argued, societies would have to get over their hang-ups about privacy, which he described as a dated concept and no longer the social norm. “If people share more, the world will become more open and connected,” he wrote in a 2010 Washington Post Op-Ed. This view served Facebook’s business model, which is based on users passively delivering personal data. That data is used to target advertising to them based on their interests, habits, and so forth. To increase its revenue, more than 98 percent of which comes from advertising, Facebook needs more users to spend more time on its site and surrender more information about themselves.

The import of a business model driven by addiction and surveillance became clearer in March, when The Observer of London and The New York Times jointly revealed that the political consulting firm Cambridge Analytica had obtained information about 50 million Facebook users in order to develop psychological profiles. That number has since risen to 87 million. Yet Zuckerberg and his company’s leadership seem incapable of imagining that their relentless pursuit of “openness and connection” has been socially destructive. With each apology, Zuckerberg’s blundering seems less like naiveté and more like malignant obliviousness. In an interview in July, he contended that sites denying the Holocaust didn’t contravene the company’s policies against hate speech because Holocaust denial might amount to good faith error. “There are things that different people get wrong,” he said. “I don’t think that they’re intentionally getting it wrong.” He had to apologize, again.

It’s not just external critics who see something fundamentally amiss at the company. People central to Facebook’s history have lately been expressing remorse over their contributions and warning others to keep their children away from it. Sean Parker, the company’s first president, acknowledged last year that Facebook was designed to cultivate addiction. He explained that the “like” button and other features had been created in response to the question, “How do we consume as much of your time and conscious attention as possible?” Chamath Palihapitiya, a crucial figure in driving Facebook’s growth, said he feels “tremendous guilt” over his involvement in developing “tools that are ripping apart the social fabric of how society works.” Roger McNamee, an early investor and mentor to Zuckerberg, has become a full-time crusader for restraining a platform that he calls “tailor-made for abuse by bad actors.”

Perhaps even more damning are the recent actions of Brian Acton and Jan Koum, the founders of WhatsApp. Facebook bought their five-year-old company for $22 billion in 2014, when it had only fifty-five employees. Acton resigned in September 2017. Koum, the only Facebook executive other than Zuckerberg and Sandberg to sit on the company’s board, quit at the end of April. By leaving before November 2018, the WhatsApp founders walked away from $1.3 billion, according to The Wall Street Journal. When he announced his departure, Koum said that he was “taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate Frisbee.”

However badly he felt about neglecting his Porsches, Koum was thoroughly fed up with Facebook. He and Acton are strong advocates of user privacy. One of the goals of WhatsApp, they said, was “knowing as little about you as possible.” They also didn’t want advertising on WhatsApp, which was supported by a 99-cent annual fee when Facebook bought it. From the start, the pair found themselves in conflict with Zuckerberg and Sandberg over Facebook’s business model of mining user data to power targeted advertising. (In late September, the cofounders of Instagram also announced their departure from Facebook, reportedly over issues of autonomy.)

At the time of the acquisition of WhatsApp, Zuckerberg had assured Acton and Koum that he wouldn’t share its user data with other applications. Facebook told the European Commission, which approved the merger, that it had no way to match Facebook profiles with WhatsApp user IDs. Then, simply by matching phone numbers, it did just that. Pooling the data let Facebook recommend that WhatsApp users’ contacts become their Facebook friends. It also allowed it to monetize WhatsApp users by enabling advertisers to target them on Facebook. In 2017 the European Commission fined Facebook $122 million for its “misleading” statements about the takeover.

Acton has been less discreet than Koum about his feelings. Upon leaving Facebook, he donated $50 million to the Signal Foundation, which he now chairs. That organization supports Signal, a fully encrypted messaging app that competes with WhatsApp. Following the Cambridge Analytica revelations, he tweeted, “It is time. #deletefacebook.”

The growing consensus is that Facebook’s power needs checking. Fewer agree on what its greatest harms are—and still fewer on what to do about them. When Mark Zuckerberg was summoned by Congress in April, the toughest questioning came from House Republicans convinced that Facebook was censoring conservatives, in particular two African-American sisters in North Carolina who make pro-Trump videos under the name “Diamond and Silk.” Facebook’s policy team charged the two with promulgating content “unsafe to the community” and indicated that it would restrict it. Facebook subsequently said the complaint was sent in error but has never explained how that happened, or how it decides that some opinions are “unsafe.”

Democrats were naturally more incensed about the twin issues of Russian interference in the 2016 election and the abuse of Facebook data by Cambridge Analytica in its work for Trump’s presidential campaign.
Keep reading.

Monday, September 3, 2018

Twitter Struggles to Police Bad Actors

If there were a decent alternative I'd use it.

But that alternative is not Gab.ai, which, as far as I can tell, is mostly home to white supremacists.

Twitter is so bad, though, I doubt it can continue to grow and maintain viability. It's too partisan and hideously biased against conservatives.

It's a joke frankly.

But like I said, it's the place for politics on social media until a genuine alternative emerges.

At WSJ, "Inside Twitter’s Long, Slow Struggle to Police Bad Actors":


When Twitter Inc. Chief Executive Jack Dorsey testifies before Congress this week, he’ll likely be asked about an issue that has been hovering over the company: Just who decides whether a user gets kicked off the site?

To some Twitter users—and even some employees—it is a mystery.

In policing content on the site and punishing bad actors, Twitter relies primarily on its users to report abuses and has a consistent set of policies so that decisions aren’t made by just one person, its executives say.

Yet, in some cases, Mr. Dorsey has weighed in on content decisions at the last minute or after they were made, sometimes resulting in changes and frustrating other executives and employees, according to people familiar with the matter.

Understanding Mr. Dorsey’s role in making content decisions is crucial, as Twitter tries to become more transparent to its 335 million users, as well as to lawmakers, about how it polices toxic content on its site.

In a hearing Wednesday morning before the Senate Intelligence Committee, Mr. Dorsey will appear alongside Facebook Inc. Chief Operating Officer Sheryl Sandberg to discuss how foreign actors can use the social-media platforms to spread misinformation and propaganda. Later in the day, the House Commerce Committee will question Mr. Dorsey individually in a Republican-led look at how Twitter treats conservative voices.

The latter hearing “is about pulling back the curtain on Twitter’s algorithms, how the company makes decisions about content, and how those decisions impact Americans,” said Rep. Greg Walden (R., Ore.), the chairman of the House Commerce Committee.

Twitter and rival Facebook are increasingly caught in a Catch-22 situation—criticized by some users for allowing hateful posts, but blasted by others for removing content because it curtails free speech.

Twitter has taken a different approach than Facebook, which has hired thousands of content reviewers in the last couple of years to review posts and built out technology to flag inappropriate content. Twitter has far less staff and typically only investigates harassment and abuse that has been reported by users.

Last month, after Twitter’s controversial decision to allow far-right conspiracy theorist Alex Jones to remain on its platform, Mr. Dorsey told one person that he had overruled a decision by his staff to kick Mr. Jones off, according to a person familiar with the discussion. Twitter disputes that account and says Mr. Dorsey wasn’t involved in those discussions.

Twitter’s initial inaction on Mr. Jones, after several other major tech companies banned or limited his content, drew fierce backlash from the public and Twitter’s own employees, some of whom tweeted in protest.

A similar chain of events unfolded in November 2016, when the firm’s trust and safety team kicked alt-right provocateur Richard Spencer off the platform, saying he was operating too many accounts. Mr. Dorsey, who wasn’t involved in the initial discussions, told his team that Mr. Spencer should be allowed to keep one account and stay on the site, according to a person directly involved in the discussions.

Twitter says Mr. Dorsey doesn’t overrule staffers on content issues. The company declined to make Mr. Dorsey available...
Keep reading.


Wednesday, August 8, 2018

The New Corporate Censorship

Alex Jones got the boot from four major platforms on Monday. Here's Laura Ingraham's take:



Wednesday, July 25, 2018

Facebook Shares Tumble; Mark Zuckerberg Hardest Hit

Hehe.

I haven't checked my Facebook feed all summer. I don't care about it anymore.

And I certainly don't care about Mark Zuckerberg. He lost $16 billion of his net worth today? Well, that's just too bad. I'm all torn up about it.

At Bloomberg, "Zuckerberg Loses $16.8 Billion in a Snap as Facebook Plunges."



Wednesday, July 18, 2018

Link Found Between Screen Time and ADHD

Well, I didn't need a large-N study to tell me this, heh.

At LAT, "Los Angeles high school students reveal a link between copious amounts of screen time and ADHD":
What with all the swiping, scrolling, snap-chatting, surfing and streaming that consume the adolescent mind, an American parent might well watch his or her teen and wonder whether any sustained thought is even possible.

New research supports that worry, suggesting that teens who spend more time toggling among a growing number of digital media platforms exhibit a mounting array of attention difficulties and impulse-control problems.

In a group of more than 2,500 Los Angeles-area high school students who showed no evidence of attention challenges at the outset, investigators from USC, UCLA and UC San Diego found that those who engaged in more digital media activities over a two-year period reported a rising number of symptoms linked to attention-deficit/hyperactivity disorder.

The association between digital media use and ADHD symptoms in teens was modest. But it was clear enough that it could not be dismissed as a statistical fluke. On average, with each notch a teen climbed up the scale of digital engagement, his or her average level of reported ADHD symptoms rose by about 10%.

The results do not show that prolific use of digital media causes ADHD symptoms, much less that it results in a level of impairment that would warrant an ADHD diagnosis or pharmaceutical treatment.

Indeed, it’s possible the relationship is reversed — that attention problems drive an adolescent to more intensive online engagement.

But at a time when 95% of adolescents own or have access to a smartphone and 45% said they are online “almost constantly,” the new study raises some stark concerns about the future of paying attention. It was published Tuesday in the Journal of the American Medical Assn.

The findings come as mental health professionals are rethinking their understanding of ADHD, a psychiatric condition that was long thought to start in early childhood and last across a lifetime. Marked by impulsivity, hyperactivity and difficulty sustaining attention, ADHD is estimated to affect about 7% of children and adolescents.

But the disorder is increasingly being diagnosed in older teens and adults, and in some it waxes and wanes across a lifespan. Whether its symptoms were missed earlier, developed later or are brought on by changing circumstances is unclear.

The new research, involving 2,587 sophomores and juniors attending public schools in Los Angeles County, raises the possibility that, for some, ADHD symptoms are brought on or exacerbated by the hyper-stimulating entreaties of a winking, pinging, vibrating, always-on marketplace of digital offerings that is as close as the wireless device in their pocket.

“We believe we are studying the occurrence of new symptoms that weren’t present at the beginning of the study,” said USC psychologist Adam M. Leventhal, the study’s senior author.

The study “is just the latest in a series of research findings showing that excessive use of digital media may have consequences for teens' well-being,” said San Diego State University psychologist Jean M. Twenge, who has conducted research on teens and smartphone use but was not involved in the new work.

Twenge’s research, published this year in the journal Emotion, explored a sharp decline in U.S. teens’ happiness and satisfaction since 2012. Combing through the data from 1.1 million teens, Twenge and her colleagues found dissatisfaction highest among those who spent the most time locked onto a screen. As time spent in offline activities increased, so did happiness.

Leventhal and his colleagues assessed the digital engagement of their 15- and 16-year-old subjects five times over a two-year period — when they first entered the study and four more times at six-month intervals. They asked the students to think back over the last week and report whether and how much they had engaged in 14 separate online activities. Those included checking social media sites, browsing the web, posting or commenting on online content, texting, playing games, video chatting, and streaming TV or movies...

Wednesday, July 4, 2018

Facebook Algorithm Flags, Removes Declaration of Independence Text as Hate Speech

This is just so, so bad. It's un-American.

At Reason, "A post consisting almost entirely of text from the Declaration of Independence was flagged by Facebook, which said the post 'goes against our standards on hate speech'."


Tuesday, March 20, 2018

Facebook's Existential Crisis

Following up from yesterday, "Facebook Breach Ignites Uproar."

I'm actually getting a kick out of this.

At CNN, "Facebook is facing an existential crisis."


And from January, at Vanity Fair, a great piece, "'This Is Serious': Facebook Begins Its Downward Spiral":
Facebook was always famous for the sign that hung in its offices, written in big red type on a white background, that said “Move Fast and Break Things.” Every time I think about the company, I realize it has done just that — to itself.

Years ago, long before Mark Zuckerberg became Mark Zuckerberg, the young founder reached out to a friend of mine who had also started a company, albeit a considerably smaller one, in the social-media space, and suggested they get together. As Facebook has grown into a global colossus that connects about a third of the globe, Zuckerberg has subsequently assumed a reputation as an aloof megalomaniac deeply out of touch with the people who use his product. But back then, when he only had 100 million users on his platform, he wasn’t perceived that way. When he reached out to my friend, Zuckerberg was solicitous. He made overtures that suggested a possible acquisition—and once rebuffed, returned with the notion that perhaps Facebook could at least partner with my friend’s company. The chief of the little start-up was excited by the seemingly harmless, even humble, proposition from the growing hegemon. Zuckerberg suggested that the two guys take a walk.

Taking a walk, it should be noted, was Zuckerberg’s thing. He regularly took potential recruits and acquisition targets on long walks in the nearby woods to try to convince them to join his company. After the walk with my friend, Zuckerberg appeared to take the relationship to the next level. He initiated a series of conference calls with his underlings in Facebook’s product group. My friend’s small start-up shared their product road map with Facebook’s business-development team. It all seemed very collegial, and really exciting. And then, after some weeks passed, the C.E.O. of the little start-up saw the news break that Facebook had just launched a new product that competed with his own.

Stories about Facebook’s ruthlessness are legend in Silicon Valley, New York, and Hollywood. The company has behaved as bullies often do when they are vying for global dominance—slurping the lifeblood out of its competitors (as it did most recently with Snap, after C.E.O. Evan Spiegel also rebuffed Zuckerberg’s acquisition attempt), blatantly copying key features (as it did with Snapchat’s Stories), taking ideas (remember those Winklevoss twins?), and poaching senior executives (Facebook is crawling with former Twitter, Google, and Apple personnel). Zuckerberg may look aloof, but there are stories of him giving rousing Braveheart-esque speeches to employees, sometimes in Latin. Twitter, Snap, and Foursquare have all been marooned, at various points, because of Facebook’s implacable desire to grow. Instagram, WhatsApp, Oculus VR, and dozens of others are breathing life because they assented to Facebook’s acquisition desires. Meanwhile, Zuckerberg moved quickly to circumnavigate regulations before governments realized the problems that Facebook created—and certainly before they understood exactly how dangerous a social network can be to their citizens’ privacy, and to a democracy as a whole.

From a business standpoint, Facebook’s barbarism seemed to work out well for the company. The social network is worth over half-a-trillion dollars, and Zuckerberg himself is worth some $76 billion. Facebook has some of the smartest engineers and executives in the entire industry. But the fallout from that success has also become increasingly obvious, especially since the 2016 election, which prompted a year of public relations battles over the company’s most fundamental problems. And now, as we enter 2018, Zuckerberg is finally owning up to it: Facebook is in real trouble.

During the past six months alone, countless executives who once worked for the company are publicly articulating the perils of social media on both their families and democracy. Chamath Palihapitiya, an early executive, said social networks “are destroying how society works”; Sean Parker, its founding president, said “God only knows what it’s doing to our children’s brains.” (Just this weekend, Tim Cook, the C.E.O. of Apple, said he won’t let his nephew on social media.) Over the past year, people I have spoken to internally at the company have voiced concerns for what Facebook is doing (or most recently, has done) to society. Many begin the conversation by rattling off a long list of great things that Facebook inarguably does for the world—bring people and communities together, help people organize around like-minded positive events—but, as if in slow motion, those same people recount the negatives. Unable to hide from the reality of what social media has wrought, Facebook has been left with no choice but to engage with people and the media to explore if it is possible to fix these problems. Zuckerberg determined that his 2018 annual challenge would be fixing his own Web site, noting that “the world feels anxious and divided,” and that Facebook might—just maybe—be contributing to that. “My personal challenge for 2018 is to focus on fixing these important issues,” he wrote. Now, the company has said it’s going to change the focus of the site to be less about news and more about human connections.

The question, of course, revolves around this underlying motivation. Is Zuckerberg saying this because he really does worry what the world might look like tomorrow if we continue headed in the direction we’re going? Is Facebook eliminating news from its site because it realizes that spotting “fake news” is too difficult to solve—even for Facebook? Or, as some people have posited to me, is Facebook rethinking the divide it has created in order to keep growing? After all, much of Zuckerberg’s remaining growth opportunity centers upon China, and the People’s Republic won’t let any product (digital or otherwise) enter its borders if there’s a chance it could disrupt the government’s control. Why would the Chinese Politburo open its doors to a force that could conspire in its own Trumpification or Brexit or similar populist unrest?

There’s another theory floating around as to why Facebook cares so much about the way it’s impacting the world, and it’s one that I happen to agree with. When Zuckerberg looks into his big-data crystal ball, he can see a troublesome trend occurring. A few years ago, for example, there wasn’t a single person I knew who didn’t have Facebook on their smartphone. These days, it’s the opposite. This is largely anecdotal, but almost everyone I know has deleted at least one social app from their devices. And Facebook is almost always the first to go. Facebook, Twitter, Instagram, Snapchat, and other sneaky privacy-piercing applications are being removed by people who simply feel icky about what these platforms are doing to them, and to society.

Some people are terrified that these services are listening in to their private conversations. (The company’s anti-privacy tentacles go so far as to track the dust on your phone to see who you might be spending time with.) Others are sick of getting into an argument with a long-lost cousin, or that guy from high school who still works in the same coffee shop, over something that Trump said, or a “news” article that is full of more bias and false facts. And then there’s the main reason I think people are abandoning these platforms: Facebook knows us better than we know ourselves, with its algorithms that can predict if we’re going to cheat on our spouse, start looking for a new job, or buy a new water bottle on Amazon in a few weeks. It knows how to send us the exact right number of pop-ups to get our endorphins going, or not show us how many Likes we really have to set off our insecurities. As a society, we feel like we’re at war with a computer algorithm, and the only winning move is not to play...
Still more.

Monday, March 19, 2018

Facebook Breach Ignites Uproar

The Facebook breach is all the rage at Memeorandum, and I love this headline, at Bloomberg, "Facebook's Mark Zuckerberg Under Pressure Over Data Breach."

Also, at LAT, "Exploiting Facebook data to influence voters? That’s a feature, not a bug, of the social network":

With each comment, like and share, users provide Facebook with a deeply personal window into their lives.

The result of that voluntary behavior? Advertisers looking to finely target their pitches can glean someone's hobbies, what they like to eat and even what makes them happy or sad — propelling Facebook's ad revenue to $40 billion last year.

This trove of rich information is now at the center of a rapidly growing controversy involving one of President Trump's campaign consultants, Cambridge Analytica, which reportedly took the advertising playbook and exploited it in a bid to influence swing voters.

Former employees accuse the firm, owned by the conservative billionaire Robert Mercer and previously headed by Trump's former chief strategist Steve Bannon, of taking advantage of ill-gotten data belonging to millions of unwitting Facebook users. News of the breach was met with calls over the weekend for stricter scrutiny of the company.

Sen. Amy Klobuchar (D-Minn.) demanded that Mark Zuckerberg, Facebook's chief executive, appear before the Senate Judiciary Committee. Maura Healey, attorney general for Massachusetts, said her office was launching an investigation. And the head of a British parliamentary inquiry into fake news called on Facebook to testify before his panel again, this time with Zuckerberg.

The accusations raise tough questions about Facebook's ability to protect user information at a time when it's already embroiled in a scandal over Russian meddling during the 2016 presidential campaign and under pressure to adhere to new European Union privacy rules.

They also highlight the power and breadth of the data Facebook holds over its 2 billion users. Whether used to sway voters or sell more detergent, the information harvested by the world's biggest social network is proving to be both vital and exploitable regardless of who's wielding it.

"The data set assembled on people by Facebook is unrivaled," said Scott Galloway, a professor of marketing at New York University Stern School of Business and author of "The Four: The Hidden DNA of Amazon, Apple, Facebook and Google." "The bad news is, people are discovering this can be used as a weapon. The worse news is that people are learning how to detonate it."

The controversy began late Friday when Facebook's vice president and deputy general counsel, Paul Grewal, announced in a blog post that the social network was suspending Strategic Communication Laboratories and its affiliate, Cambridge Analytica.

Facebook said the companies failed to delete user data they had acquired in 2015 in violation of the platform's rules. The data were supplied by a University of Cambridge psychology professor, Aleksandr Kogan, who built an app that was supposed to collect details on Facebook users for academic research. Kogan was not supposed to pass that information to a third party for commercial purposes under Facebook guidelines.

Facebook said the data collection was contained to 270,000 people who downloaded Kogan's app as well as "limited information" about their friends.

Sunday, August 27, 2017

FOMO

At Wired, "We Can't Stop Checking the News Either. Welcome to the New FOMO."

It's "Fear of Missing Out."

I've had it before, although not so much lately.



Wednesday, May 31, 2017

Mark Zuckerberg Calls for Universal Basic Income

Fine.

He should pay for it with his own money.


Wednesday, April 26, 2017

Robert Godwin's Murder Was Replayed 1.6 Million Times

That's an astonishing number, considering the subject.

See Jason Riley, "Who Watches a Murder Streamed Live on Facebook?":


The most shocking aspect of the Easter Sunday Facebook murder of 74-year-old Robert Godwin, Sr. might be that this sort of social media mayhem is losing its ability to shock.

In March, a video of a 15-year-old girl being sexually assaulted by several teenage boys was streamed on Facebook.

In February, a teenager was convicted of fatally shooting his friend; the killer implicated himself by sending a selfie with the dying victim on Snapchat.

In January, four people were arrested after broadcasting a video on Facebook that showed them taunting and beating a mentally disabled teenager who had been bound and gagged.

Already this year, a 14-year-old girl in Florida and a 33-year-old man in California have committed suicide on Facebook.

Last year, an armed woman in Maryland live-streamed her fatal standoff with the police, and a 12-year-old in Georgia recorded her own suicide by hanging via the Live.me app.

Shortly after Facebook launched its new video-streaming service last April, CEO Mark Zuckerberg told BuzzFeed that the goal was to support “the most personal and emotional and raw and visceral ways people want to communicate.” But preventing abuse of these platforms has been a challenge.

There’s been a smattering of calls from public officials and activists to suspend these streaming capabilities until better filters are in place, but the popularity and profitability of live video make that course of action unlikely. Besides, the safe-harbor provisions of the federal Communications Decency Act, passed by Congress two decades ago, give operators broad protection from liability for content posted by their users.

Sure, some grandstanding member of Congress can call for a hearing, or a state attorney general looking to boost his profile can announce a lawsuit, but neither is really necessary. Social media behemoths like Facebook, Twitter and YouTube currently have every incentive to protect their services from the freaks, sociopaths and others intent on spreading violent or disturbing images. “Facebook Murderer” or “YouTube Shooter” pasted in CNN bulletins and newspaper headlines is the kind of publicity that companies work to minimize without any prompting.

With nearly two billion users, Facebook wants to be not only the place where you connect with family and friends but also your main source of news and information...
More.

And still more at the Independent U.K., below, "Facebook under fire for failing to remove footage of Thai man killing baby daughter for almost 24 hours: Man's wife says she does not blame 'outraged' viewers for sharing disturbing footage."


Let's hope this rash of atrocious Facebook deaths causes real damage --- even death --- to the social media platform.

Sunday, March 12, 2017

Internet Addiction Resistance

Following up on "Adam Alter, Irresistible."

Here's a great piece from Ross Douthat, with a reference to the book:


Saturday, October 22, 2016

Facebook Employees Pushed to Remove Trump's Posts as Hate Speech

Facebook employees threatened to quit over Trump, angry that Mark Zuckerberg allowed Trump's posts to stay up, even though, according to employees, his comments violated the social network's terms of service.

At WSJ.

Good on Zuck. But what you're seeing here is the coming axis of ideological conflict, and it's starting very soon. And leftists are going to win, more and more. Conservative speech will be shut down as "hate speech." First it'll be on private services like Facebook, but if Hillary gets a couple of Supreme Court nominees confirmed, Court rulings may well chip away at longstanding protections for speech. Look for cases arising from the hotbeds of political correctness: America's college campuses.

Wednesday, September 21, 2016

Low-income Families Face Eviction to Make Way for Facebook Employees in Silicon Valley

You can see why I hate Mark Zuckerberg and the entire culture he represents.

Entitled spoiled leftists kicking hard-working Latinos out of their homes.

Hey, that's progressive values for you! Democrat values!

At Truth Revolt, "Liberal Leftist Hypocrites: Low-Income, Hispanic Residents of Silicon Valley Apartment Evicted to Make Room for Facebook Staff."

Friday, September 9, 2016

Dear Mark Zuckerberg

At Norway's Aftenposten, "Dear Mark":
I am writing this to inform you that I shall not comply with your requirement to remove this picture.

Also at USA Today, "Facebook reinstates iconic photo of 'napalm girl'."

I hate Zuckerberg. And I don't "hate" very often. But that's the emotion that comes to me right now. I just hate that dude.