How to talk to your kids and grandkids about AI

“The thing that feels the most daunting to parents is learning about it, trying it and having to step out of your comfort zone,” says Jasmine Hood Miller, the Philadelphia-based director of community content and engagement at Common Sense Media, a nonprofit advocacy group for families headquartered in San Francisco. Common Sense has launched an AI “nutrition label” and rating scale that it says is designed to assess the ethical use, transparency, safety and impact of AI products. Bard (before its name change to Gemini) and ChatGPT each received three out of five stars.

2. DISCOVER AI TOGETHER

Experiment with your kids. Try different prompts. Remind them that exchanging messages with an AI bot can seem like engaging with a person, but no human being is at the other end of an AI conversation. Explain the risks associated with AI exchanges, such as misinformation, the potential for plagiarism and a lack of privacy, and stress that everything needs to be verified. AIs may deliver answers that sound authoritative or plausible but are just plain wrong, what the tech industry calls “hallucinations.” “These tools are unreliable,” says Jennifer King, privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence in California. “We’re already seeing what happens to lawyers that ask ChatGPT to write a brief in a case. It literally makes up citations and facts that don’t exist.”

3. CHECK FOR SCHOOL REGULATIONS

Just as adults should consult their bosses about acceptable office use of AI, check what policies your kid’s school has around AI. Even in the absence of fully baked rules, “someone already is thinking about it,” Miller says. If in doubt, ask your child’s teachers to weigh in on what’s proper.

4. DON’T CHEAT. IT’S MORALLY WRONG

“Plagiarism and cheating [are] a human issue. That’s not a technology issue,” says Jenny Maxwell, head of Grammarly for Education. The Grammarly writing tool (free and fee-based versions are available) can change a writer’s tone, supply real-time feedback, provide citations and help students brainstorm ideas. The San Francisco company has partnerships with more than 3,000 educational institutions, and it has launched generative AI features that Grammarly officials say will augment, not replace, a student’s critical thinking skills.

Cheating robs a student of the opportunity to learn, says Michael Steven Marx, an associate professor of English and the director of expository writing at Skidmore College in Saratoga Springs, New York. Sure, ChatGPT can correct simple errors in a school paper. “But it’s like a free lunch, and we know there’s no free lunch,” he says. “It’s going to come back to haunt you. You’re going to be sitting somewhere [and] have to write something, and you still don’t know how to do it.”

Apart from the moral issue, cheaters are often caught. And AI-produced school papers may come across as generic, homogeneous or, on the other end of the spectrum, total bull. “It’s often obvious to a teacher, who might [say], ‘Wow, your writing has really changed. This doesn’t sound like you,’” King says.