Showing posts with label Artificial Intelligence. Show all posts

Sunday, February 14, 2021

Slate Star Codex? The New York Times Slammed Again for Shoddy, Muckraking 'Journalism'

I guess it really was (is) a bad week for the Old Gray Lady, as I argued yesterday, here: "The 'Woke' Takeover at the New York Times Facing Pushback."

The NYT author is Cade Metz, whom I'd never heard of before, but who was getting slammed yesterday on Twitter, along with his newspaper, for an article on Scott Alexander, a psychiatrist by training who blogged at Slate Star Codex. (I only vaguely recall the blog, and that's after myself being immersed in online debates and flame wars for over a decade; so you can see, perhaps, that a lot of the NYT's reporting here is "inside baseball." One of the biggest critiques of Metz is that he gets just about everything wrong in the article, entitled "Silicon Valley’s Safe Space.")

Below is Alexander's own response, at his Substack blog, as well as a screenshot with some criticism pulled from Twitter earlier. (I can't seem to cut and paste from Alexander's Substack blog, and maybe that's by design, considering.)

See, "Statement on the New York Times Article."


Saturday, November 21, 2020

Mind-Boggling Artificial Intelligence

It's Kashmir Hill, a technology reporter at the New York Times, who used to be a tech blogger back in the day. Once she commented on a blog post of mine thanking me for a link. I'm still blogging. She's at the Old Gray Lady. And I know. I know. It's a despicable left-wing partisan propaganda outlet, but even a broken clock is right twice a day. 

In any case, this is cool.


The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

“When the tech first appeared in 2014, it was bad — it looked like the Sims,” said Camille François, a disinformation researcher whose job is to analyze manipulation of social networks. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”

Advances in facial fakery have been made possible in part because technology has become so much better at identifying key facial features. You can use your face to unlock your smartphone, or tell your photo software to sort through your thousands of pictures and show you only those of your child. Facial recognition programs are used by law enforcement to identify and arrest criminal suspects (and also by some activists to reveal the identities of police officers who cover their name tags in an attempt to remain anonymous). A company called Clearview AI scraped the web of billions of public photos — casually shared online by everyday users — to create an app capable of recognizing a stranger from just one photo. The technology promises superpowers: the ability to organize and process the world in a way that wasn’t possible before...
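The adversarial back-and-forth the Times describes can be sketched in plain code. This is a toy one-dimensional version of the idea, not anything like Nvidia's actual GAN software: "real data" is just numbers drawn from a bell curve, the generator is a simple linear function that learns to mimic them, and the discriminator is a logistic classifier that learns to tell real from fake. All the names, numbers, and update rules here are illustrative assumptions chosen to keep the example self-contained.

```python
# Toy 1-D GAN: a generator and discriminator trained against each other.
# "Real photos" are stood in for by samples from a normal distribution.
import math
import random

random.seed(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real" data distribution

def sample_real():
    return random.gauss(REAL_MEAN, REAL_STD)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: maps random noise z to a sample, fake = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.01
for step in range(5000):
    z = random.gauss(0.0, 1.0)
    fake = a * z + b
    real = sample_real()

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake)))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator
    # (gradient ascent on log d(fake), chained through fake = a*z + b)
    d_fake = sigmoid(w * fake + c)
    grad_fake = (1 - d_fake) * w
    a += lr * grad_fake * z
    b += lr * grad_fake

# After training, the generator's output distribution has drifted
# toward the real one -- the "ever more indistinguishable" dynamic.
fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
fake_mean = sum(fakes) / len(fakes)
print(f"real mean: {REAL_MEAN}, generator mean: {fake_mean:.2f}")
```

The key structural point is that neither network trains against a fixed target: the discriminator's improvements become the generator's training signal, and vice versa, which is why the fakes keep getting harder to detect.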

Keep reading.