the brave new world of deepfakes (edition #001)

what do you do when there's a deepfake of you?


Deepfakes are going mainstream. And they’ll be even more commonplace as the 2024 election’s dawn turns to dusk. Whether you're running for office or just sitting in one, deepfakes are now so easy, a 12-year-old can make one of their friends.

Given that manipulated video is now a feature of the arena we all live and work in, we have to start asking ourselves hard, uncomfortable questions. If we handwave the magnitude and power of this technology, we’ll all simply be at the wrong end of a manipulation engine designed to get us to think, feel, vote, buy and communicate in exactly the way monetization-focused engineers (and their AI-infused algos) want us to.

Thoughtful, clear-eyed, intellectual engagement has been replaced, at least this past decade, with a default adherence to “what gets traction” and “what advertisers will pay for” without regard to whether what’s being shared is true. Those attempting to be honest, culturally aware, difference-making individuals are left with no option but to keep up with the times, operating with a new level of savvy on the socially-driven marketplaces of the modern world.

Fast Company | August 1st, 2024

By asking the right questions, we can begin to conceptually wrap our minds around this technology’s unsettling potential and expanding influence — and use the wisdom at our disposal to grapple with it.

This week’s 1-5 breakdown:

1 question waterfall

2 real-world stories

3 long-term dilemmas

4 quotes

5 predictions

1 question waterfall | a series of increasingly personal questions that make the deepfake era’s uneasiness felt

  1. How would you react if someone made a deepfake of someone you were voting for?

  2. How would you react if someone made a deepfake of someone in your community?

  3. How would you react if someone made a deepfake of your friend?

  4. How would you react if someone made a deepfake of someone in your family?

  5. How would you react if someone made a deepfake of you?

2 real-world stories | a new level of high-school bullying; the dehumanization of women trying to grow up in america

Real-World Story #1: If the question waterfall still felt too this-couldn’t-happen-to-me hypothetical — Exhibit A: this past week’s New York Times Magazine cover:

The New York Times Magazine

There are some images and videos you can’t un-see. Once they’re out there, a sinking, gut-level dread sets in for anyone unfortunate enough to be placed centerstage. Because everything sits entirely outside your control, your fate becomes more and more obvious the longer you sit with the idea that nothing’s preventing these images from being created and shared with anyone and everyone you know. This worst-case scenario becomes not just possible, but altogether likely.

“In the 1990s and aughts, as access to camera phones and the internet became widespread, it became easy for a person to share intimate photos of another person without their consent on sites like Pornhub, Reddit and 4chan. Explicit videos followed, with people uploading hundreds of thousands of hours of non-consensual content online. Deepfakes added a new element: an individual could find themselves appearing in explicit content without ever engaging in sexual activity.”

Coralie Kraft, Times Culture

They cripple a victim’s psyche, creating a combination of shame, confusion and paranoia:

As Javellana registered that the police wouldn’t be able to help her remove the images, she began to panic. She had worked hard to create a career in politics for herself and earn the respect of her older colleagues. Now she felt a surge of dread as she imagined people at City Hall scrutinizing the pictures. And what about her family, her friends, her neighbors? Even if she convinced everyone that the images were fake, would the people in her life ever look at her the same way? Would she ever have professional prospects again? Shortly before discovering the images, she had decided to sign up for that spring’s state teaching-certification exam in the hope of getting a job at one of her old schools. Now she imagined herself explaining to future employers — or members of the school board — that someone had created fake explicit images without her consent, and that the images were openly accessible on the internet. Why would anyone hire her and risk damaging the school’s reputation? But if she didn’t disclose the existence of the images and someone stumbled upon them online, she would almost certainly lose her job.

Coralie Kraft, Times Culture

If high-schoolers can do it, imagine someone with serious skills at the wheel. After being threatened again, Javellana began drafting Florida Senate Bill 1798 to curb the spread of deepfakes across the state, a bill whose provisions include “Sexual Contact with an Animal,” which amended an existing Florida statute (s. 828.126) to increase “the penalty for sexual contact with an animal…from a first degree misdemeanor to a third degree felony.” Sadism is the mother of deception.

Shortly after [her Senate committee meeting on deepfakes], someone on the internet shared Javellana’s personal phone number and address. Comments directed at her described sexual assault, and one user threatened to drive to her house. Then came the nadir: Harassers exposed her mother’s name and phone number and texted her the explicit images. Javellana was horrified. Her worst fears were becoming reality. The more she talked about her experience, the more harassment she endured.

Coralie Kraft, Times Culture

Cruel and unusual — and if we ignore it, we’re only asking for more. Which is the subject of this second example in all-too-recent memory.

Real-World Story #2: These aren’t one-offs; they’re part of a larger pattern that includes students creating sexually disturbing and violent images of their classmates without their consent:

Unfortunately, the police can do next to nothing, and school administrators are even less equipped to handle these trends.

High-school kids are going to (a) be irreversibly traumatized by this technology, and/or (b) engage in activity that — in part because they’re hardly mature adults yet — they don’t understand and which could put them in near-criminal territory when the law finally limps its way to mile two of the deepfake marathon.

But the AI-based tools that create these images are already in place for any interested high-schooler to get their hands on:

After she accepted [an Instagram] request, the male student copied photos of her and several other female schoolmates from their social media accounts, court documents say. Then he used an A.I. app to fabricate sexually explicit, “fully identifiable” images of the girls and shared them with schoolmates via a Snapchat group, court documents say.

Natasha Singer, Times Privacy Writer

Repercussions? The 15-year-old boy received just a 1-2-day suspension; the school denied the images “were widely shared” (of course they were); the police were entirely inept from start to finish. The interviewed superintendent, however, had something intelligible to say:

Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because the abuse of A.I. was making students feel unsafe in schools.

“You hear a lot about physical safety in schools,” he said. “But what you’re not hearing about is this invasion of students’ personal, emotional safety.”

Natasha Singer, Times Privacy Writer

These AI-based tools — like DreamGF and MyPornDeepFake (which we will not be hyperlinking!) — are weapons first. They’re manufactured specifically to allow bad actors to take advantage of innocent people who don’t know any better, and they’re built on the backs of everything Silicon Valley would like you to believe is going to help bring the world into the future. Which it will. Just not quite the future they’re painting it to be.

We love having Snoop Dogg read our emails to us over morning coffee, but is it worth traumatizing thousands — if not millions — of high-schoolers the world over? Seems like the only way anyone would make that trade is…if they were being blackmailed with deepfakes.

3 long-term dilemmas | how deepfakes will shape our relationship to truth

  1. When everything online could be fake, who becomes the arbiter of truth? Who or what will end up being the architects of truth in an era characterized by widespread disinformation (with infinite permutations)? Through the warped situations deepfakes create and the uniquely disturbing nature of cases involving their use, are we only now seeing one of the unique weaknesses of a centralized internet infrastructure—an internet that contains essential information, but which allows its puppetmasters a cloak of convenient invisibility? And if the puppetmasters’ strings are invisible, what’s stopping them from tweaking what we see to serve their own (commercial and regulatory) interests from time to time—or all the time?

  2. When you (or someone you know) inevitably ends up in a deepfake, will the justice system be there for you? What happens if it isn’t? What happens when black-hat deepfake creators know they can dodge any material consequences? How far might they go when there’s no threat of punishment?

  3. Can deepfakes metastasize into the black plague of the internet? Do they contain the potential to become so widespread and lethal to truth that they begin consuming the usefulness of the internet itself? If what you want is truth, how will you know whether a candidate said X or Not-X when two equally credible sources offer diametrically opposed “truths”? If a candidate does say something awful or hypocritical, how will anyone hold them to account, when any smoking gun can simply be handwaved away as “doctored”? Will our entire conception of, and criteria for, digital “evidence” need a moment with the drawing board?

4 quotes | on the fragility of truth, the structural integrity of the internet, and the future of information

Trust is fragile. It is as easily destroyed by suspicion as by proof.

Michael Josephson (Founder of the Joseph and Edna Josephson Institute of Ethics)

"Any sufficiently advanced technology is indistinguishable from magic."

Arthur C. Clarke, Profiles of the Future: An Inquiry into the Limits of the Possible (1962)

"The real question is not whether machines think — it’s whether men do."

B.F. Skinner

"Everybody gets so much information all day long that they lose their common sense."

Gertrude Stein, Reflection on the Atomic Bomb (1946)

5 predictions | how will we combat deepfakes? what will major players do in response to their percolation?

  1. Individuals and new companies will need to invest in “anti-album” technology — irrefutable, timestamped, digital-forensic evidence that you are who you say you are, and that you didn’t do the things deepfakes will portray you as doing.

    • Deepfakes invert our normal U.S.-centric justice logic: they force us to disprove our guilt — so tools will need to be developed that help us prove our innocence

    • Expect advanced detection mechanisms, but ones that will eventually go B2C — consumer-facing ways to build your own personal shield of innocence against how your face and body will be shapeshifted internet-wide

  2. Big tech players will strive to be the arbiters of “real” vs. “fake” video content in much the same way they’ve attempted to govern what we search for, what we find, and how we interpret it.

    • Why wouldn’t Google, Meta and the rest use the same proprietary AI that continues to be responsible for generating so many of these images to build the counter-technology? If it’s monetizable, they’ll do it.

  3. Women — especially young women — will be prime targets in the short-term, given they’re already the subject of the vast majority of deepfakes and pornographic imagery.

    • Over 96% of deepfake videos online are pornographic (DeepTrace, 2019)

    • Over 99% of the individuals depicted in deepfake pornography are women (ibid.; and there isn’t a single report of a woman consenting to this)

  4. Legislation will come fast and furious, but it remains to be seen whether it will be comprehensive enough, punitive enough, or backed by the enforcement mechanisms needed in the U.S. to make a real difference in the 2020s. Florida’s S.B. 1798, discussed above, is one domestic bill worth following.

  5. Deepfakes will reshape American politics by manipulating voters — just as they’ve reshaped politics across the globe in recent years:

Even in countries with no elections in 2024, deepfake scams are advancing at unprecedented rates year on year: this includes China (2800%), Turkey (1533%), Singapore (1100%), Hong Kong (1000%), Brazil (822%), Vietnam (541%), Colombia (433%), Ukraine (394%), Japan (243%), Ecuador (200%), Argentina (83%).
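Prediction #1’s “anti-album” idea boils down to content provenance: bind a cryptographic hash of a file to a capture timestamp and a signature the moment it’s created, so any later manipulation is detectable. Here’s a minimal sketch of that logic — the key, field names, and timestamp are all hypothetical, and real provenance standards (like C2PA) use public-key certificates rather than a shared secret so that anyone can verify:

```python
import hashlib
import hmac
import json

# Hypothetical device secret for illustration only; production systems
# would use per-device public-key certificates instead.
SECRET_KEY = b"hypothetical-device-secret"

def sign_capture(media_bytes: bytes, captured_at: int) -> dict:
    """Bind a content hash to a capture timestamp with an HMAC."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": captured_at,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    mac = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "mac": mac}

def verify_capture(media_bytes: bytes, signed: dict) -> bool:
    """Check both the content hash and the integrity of the record itself."""
    record = signed["record"]
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False  # pixels were altered after signing
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])

original = b"raw pixels straight from the camera sensor"
proof = sign_capture(original, captured_at=1722470400)

print(verify_capture(original, proof))      # True: untouched copy checks out
print(verify_capture(b"deepfaked", proof))  # False: content no longer matches
```

The point of the sketch: once the record exists, a deepfake of you fails verification automatically — you never have to argue pixel-by-pixel, which is exactly the “shield of innocence” the prediction describes.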

final thoughts

When you share anything about yourself in a capitalist system structured to capitalize on what you’ve shared, deepfakes and other forms of exploitation aren’t “bad outcomes” — they’re precisely the outcomes the system was designed to create.

Ben L., Staying Human

Enjoy this newsletter? Earn money by sharing Staying Human

  • If 10 (real) people sign up, we’ll send you a $25 Amazon Gift Card

  • If 25 (real) people sign up, we’ll send you another $50 Amazon Gift Card

  • If 100 (real) people sign up, we’ll send you a final $100 Amazon Gift Card

Use the custom referral link below to make that happen.

In weeks and months to come, we’ll be sharing out a referral leaderboard to give everyone the props they’ve earned.
