Thursday, February 29, 2024

Porn deepfakes: How to talk to your kids about explicit fake images

If the day hasn’t come yet, it’s coming: you should talk to your child about explicit deepfakes.

The issue may have seemed abstract until fake artificial intelligence-generated pornographic images of Taylor Swift went viral on social media platform X/Twitter. Now the problem simply cannot be ignored, online child safety experts say.

“When it happens to (Swift), I think kids and parents start to realize that no one is immune to this,” says Laura Ordoñez, executive editor and director of digital and family at Common Sense Media.

Whether you’re explaining the concept of deepfakes and AI image-based abuse, talking about the pain such images cause victims, or helping your child develop the critical thinking skills to make ethical decisions about deepfakes, there is a lot parents can and should cover in ongoing conversations on the topic.

SEE ALSO:

What to do if someone deepfakes you

Before you get started, here’s what you need to know:

1. You don’t have to be an expert in deepfakes to talk about them.

Adam Dodge, founder of The Tech-Savvy Parent, says parents who feel they need to fully understand deepfakes before talking with their children shouldn’t worry about becoming experts first.

Instead, all that is required is a basic understanding of the concept that AI-powered software and algorithms make it surprisingly easy to create realistic, explicit or pornographic deepfakes, and that such technology is easy to access online. In fact, children as young as elementary school students can find apps or software with this capability and use them to create deepfakes with few challenges or technical barriers.

“What I tell parents is, ‘Look, you need to understand how early and how often kids are exposed to this technology, that it’s happening sooner than you realize, and appreciate how dangerous it is.'”

Dodge says parents should be prepared to address three possibilities: that their child will be targeted by the technology, that they will see inappropriate content, or that they will take part in creating or sharing fake explicit images.

2. Make it a conversation, not a lecture.

If you are sufficiently alarmed by these possibilities, try to avoid rushing into a hasty discussion about deepfakes. Instead, Ordoñez recommends approaching the topic openly and without prejudice, asking your child what they know or have heard about deepfakes.

She adds that it’s important to think of AI image-based abuse as a form of online manipulation that exists on the same spectrum as misinformation or disinformation. In this framework, reflecting on deepfakes becomes an exercise in critical thinking.

Ordoñez says parents can help their children learn the signs that images have been manipulated. Although the rapid evolution of AI means some of these telltale signs no longer appear, Ordoñez says it is still useful to note that any deepfake (not just the explicit kind) may be identifiable through facial discoloration, lighting that looks off, or blurriness where the neck and hair meet.

Parents can also learn alongside their children, says Ordoñez. This could involve reading and talking together about non-explicit AI-generated fake content, such as the song “Heart on My Sleeve,” released in 2023, which used AI-generated versions of Drake and The Weeknd’s voices. While that story has relatively low stakes for children, it can spark a meaningful conversation about how it would feel to have their own voice used without their consent.

Parents can also take an online quiz with their children that asks participants to identify which face is real and which is AI-generated, another low-risk way to confront together how easily AI-generated images can deceive the viewer.

The goal of these activities is to teach your child how to engage in ongoing dialogue and develop critical thinking skills that are sure to be put to the test when encountering explicit deepfakes and the technology that creates them.

3. Put your children’s curiosity about deepfakes in the right context.

While explicit deepfakes amount to digital abuse and violence against their victim, your child may not fully understand this. Instead, they might be curious about the technology and even eager to try it out.

Dodge says that while this is understandable, parents typically put reasonable limits on their children’s curiosity. Alcohol, for example, is kept out of reach. R-rated movies are prohibited until they reach a certain age. They are not allowed to drive without proper instruction and experience.

Parents should think about deepfake technology in a similar way, Dodge says: “You don’t want to punish children for being curious, but if they have unfiltered access to the Internet and artificial intelligence, that curiosity will lead them down dangerous paths.”

4. Help your child explore the consequences of deepfakes.

Children may view non-explicit deepfakes as a form of entertainment. Tweens and teens may even buy into the argument some make: that pornographic deepfakes are not harmful because they are not real.

Still, they can be persuaded to view explicit deepfakes as AI image-based abuse when the discussion incorporates concepts like consent, empathy, kindness, and bullying. Dodge says invoking these ideas while talking about deepfakes can get a child to pay attention to the victim.

If, for example, a teenager knows to ask permission before taking a physical object from a friend or classmate, the same goes for digital objects, such as photos and videos posted on social media. Using these digital files to create a naked deepfake of another person is not a joke or a harmless experiment, but a kind of theft that can cause deep suffering for the victim.

Similarly, Dodge says that just as a young person wouldn’t assault someone on the street out of the blue, assaulting someone virtually doesn’t align with their values either.

“These victims are not made up or fake,” Dodge says. “These are real people.”

Women, in particular, have been targeted by technology that creates explicit deepfakes.

Overall, Ordoñez says parents can talk about what it means to be a good digital citizen, helping their children think through whether it’s okay to deceive people, what the consequences of deepfakes are, and how viewing the images, or being a victim of them, might make others feel.

5. Model the behavior you want to see.

Ordoñez points out that adults, including parents, are not immune to enthusiastically joining the latest digital trend without thinking through the implications. Take, for example, how quickly adults started generating stylized AI self-portraits with the Lensa app in late 2022. Beyond the hype, there were significant concerns about privacy, user rights, and the app’s potential to steal from or displace artists.

Moments like these are an ideal opportunity for parents to reflect on their own digital practices and model the behavior they would like their children to adopt, Ordoñez says. When parents pause to think critically about their online choices and share that experience with their children, it demonstrates how they can take the same approach.

6. Use parental controls, but don’t rely on them.

When parents learn about the dangers posed by deepfakes, Ordoñez says they often want a “quick fix” to keep their children away from apps and software that implement the technology.

It’s important to use parental controls that restrict access to certain downloads and sites, Dodge says. However, these controls are not infallible. Children can and will find a way around these restrictions, even if they don’t realize what they are doing.

Additionally, Dodge says a child could see deepfakes or find the technology at a friend’s house or on another person’s mobile device. That’s why it’s still critical to have conversations about AI image-based abuse, “even if we impose powerful restrictions through parental controls or take devices away at night,” Dodge says.

7. Empower instead of scare.

The prospect of your child hurting their peers with AI-based abuse, or becoming a victim themselves, is terrifying. But Ordoñez warns against using scare tactics as a way to discourage a child or teenager from interacting with technology and content.

When speaking to young girls in particular, whose social media photos and videos could be used to generate explicit deepfakes, Ordoñez suggests talking with them about how posting images of themselves makes them feel, and about the potential risks. These conversations shouldn’t place blame on girls who want to participate on social media. However, talking about the risks can help girls reflect on their own privacy settings.

While there’s no guarantee that a photo or video of them won’t be used against them at some point, they can feel empowered by making intentional choices about what they share.

And all teenagers can benefit from knowing that encountering technology capable of making explicit deepfakes, in a period of development when they are very vulnerable to making rash decisions, can lead to decisions that seriously harm others, Ordoñez says.

Encouraging young people to learn to take a step back and ask themselves how they feel before doing something like making a deepfake can make a big difference.

“When you take a step back, (our children) do have this awareness, you just have to empower it, support it and guide it in the right direction,” says Ordoñez.

Topics: Social Good, Family and Parenting
