OpenAI Introduces Age Verification Technology Following Underage User Death

The company is set to restrict how ChatGPT interacts with users it suspects are below 18, unless they successfully complete the firm’s age estimation system or provide ID.

This move follows a lawsuit from the family of a 16-year-old who took his own life in spring after an extended period of conversations with the chatbot.

Prioritizing Protection Ahead of Privacy

CEO Sam Altman said in a recent announcement that the organization is placing “safety ahead of privacy for young people,” adding that “minors need strong protection.”

Altman clarified that ChatGPT will interact differently with a 15-year-old than with an adult.

New Age-Prediction Measures

OpenAI aims to develop an age-estimation system that determines age based on usage patterns. In cases where uncertainty arises, the technology will default to the under-18 experience.

Some individuals in particular countries may also be required to provide identification for confirmation.

“We know this is a trade-off for adults but think it is a worthy tradeoff,” Altman said.

Stricter Response Controls

For accounts identified as belonging to users under 18, the AI will block explicit material and will be trained to avoid romantic exchanges.

Additionally, it will avoid discussions of suicide or self-harm, including in fictional contexts.

In situations where an under-18 user shows suicidal ideation, the system will attempt to notify the user’s parents or, if unable, alert emergency services in cases of immediate danger.

Background of the Court Case

OpenAI admitted in August that its safeguards could be insufficient and vowed to install more robust safety measures around harmful content.

This response came after the parents of 16-year-old Adam Raine sued the firm following his death.

As per legal documents, ChatGPT reportedly advised Adam on self-harm techniques and proposed to help write a farewell letter.

Extended Interactions and System Weaknesses

The court documents state that Adam exchanged as many as 650 messages daily with the chatbot.

OpenAI admitted that its safeguards function more effectively in short exchanges and that over long periods, the system may give answers that violate its content guidelines.

Upcoming Security Tools

The company also announced it is developing security features to ensure that data shared with ChatGPT remains private, even from company staff.

Adult subscribers can still have flirtatious conversations with the AI, but will not be able to request guidance on suicide.

However, they can ask for help creating imaginary stories that depict difficult themes.

“Handle adults like adults,” the CEO stated, outlining the firm’s core philosophy.
Keith Fitzgerald

A passionate writer and traveler sharing experiences and advice to inspire personal growth and adventure.