Add Photographs To The List Of Things That Can Now Be Faked Too Convincingly To Be Trustworthy

And the race to make sure that we’ll eventually not be able to trust any goddamn thing ever again continues. Thanks, Nvidia.

The woman in the photo seems familiar.
She looks like Jennifer Aniston, the “Friends” actress, or Selena Gomez, the child star turned pop singer. But not exactly.
She appears to be a celebrity, one of the beautiful people photographed outside a movie premiere or an awards show. And yet, you cannot quite place her.
That’s because she’s not real. She was created by a machine.
The image is one of the faux celebrity photos generated by software under development at Nvidia, the big-name computer chip maker that is investing heavily in research involving artificial intelligence.
At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.
The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.

Nvidia’s images can’t match the resolution of images produced by a top-of-the-line camera, but when viewed on even the largest smartphones, they are sharp, detailed, and, in many cases, remarkably convincing.

A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into Van Goghs. DeepMind, a London-based A.I. lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.

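For the curious: the technique behind all of this is, as far as I can tell, a generative adversarial network, a pair of neural networks where one generates images and the other tries to tell the fakes from real photos, each getting better by competing with the other. Here's a rough toy sketch of that idea in PyTorch; the network sizes, data, and training details below are made up for illustration and have nothing to do with Nvidia's actual research code.

```python
# Toy sketch of a generative adversarial network (GAN).
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy dimensions

# Generator: turns random noise into a fake "image" vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks.
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = G(noise)

    # Discriminator learns to tell real photos from generated ones.
    d_loss = loss_fn(D(real_images), torch.ones(batch, 1)) + \
             loss_fn(D(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in for a batch of real training photos.
train_step(torch.randn(32, image_dim))
```
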
The technology behind this is all extremely cool and I don’t doubt for a moment that it’s going to be used extensively for its intended purpose. But like I’ve said before, I also don’t doubt for a moment that it’s eventually going to fall into the wrong hands. And when that happens, we’re all doomed. Even if a majority of us somehow simultaneously develop a keen ability to think critically and put partisan agendas aside in the name of truth, what is truth going to be when it’s so easily manipulated into whatever this or that bad actor wants it to be?

I don’t have a good answer to any of this, and I’m not saying that we shouldn’t be doing any of this research. But I certainly don’t think that the people who are doing this research are giving nearly enough thought to the implications of it, at least not publicly.

Mr. Lehtinen, one of the Nvidia researchers, downplays the effect his research will have on the spread of misinformation online. But he does say that, as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.

What a completely irresponsible, kick-the-can-down-the-road statement.

We aren’t approaching anything, sir. The moment you figured out that these things are possible, those questions arrived. And before the problem has a chance to get out of anyone’s control, it’s time to start thinking about answers.
