OpenAI is giving parents more control over how their kids use ChatGPT. The new parental controls arrive at a critical moment, as families, schools and advocacy groups voice concerns about the potentially dangerous role AI chatbots can play in the development of teenagers and children.

Parents will have to link their own ChatGPT account with their child’s to access the new features. However, OpenAI said that these features do not give parents access to their child’s conversations with ChatGPT and that, in cases where the company identifies “serious safety risks,” a parent will be alerted “only with the information needed to support their teen’s safety.” 

It’s a “first-of-its-kind safety notification system to alert parents if their teen may be at risk of self-harm,” said Lauren Haber Jonas, OpenAI’s head of youth well-being, in a LinkedIn post.

Once the accounts are linked, parents can set quiet hours during which their kids won’t be able to use ChatGPT, as well as turn off image generation and voice mode. On the technical side, parents can opt their kids’ chats out of model training and choose to have ChatGPT not save or remember their kids’ previous conversations. Parents can also reduce sensitive content, which applies additional restrictions around things like graphic material. Teens can unlink their account from a parent’s, but the parent will be notified if that happens.

OpenAI announced last month that it would introduce more parental controls in the wake of a lawsuit filed against it by a California family. The family alleges the AI chatbot is responsible for their 16-year-old son’s suicide earlier this year, calling ChatGPT his “suicide coach.” A growing number of people have AI chatbots take on the role of a therapist or confidant. Therapists and mental health experts have expressed concern about this, saying AI like ChatGPT isn’t trained to accurately assess, flag and intervene when it encounters red-flag language and behaviors.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.


