Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how it's attempting to more effectively handle potential child sexual exploitation and prevent children from engaging in age-inappropriate conversations. The company said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to "engage a child in conversations that are romantic or sensual," which Meta said at the time was "erroneous and inconsistent" with its policies, and removed that language.
The document, which Business Insider has shared an excerpt from, outlines what kinds of content are "acceptable" and "unacceptable" for its AI chatbots. It explicitly bars content that "enables, encourages, or endorses" child sexual abuse, romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor, advice about potentially romantic or intimate physical contact if the user is a minor, and more. The chatbots can discuss topics such as abuse, but can't engage in conversations that could enable or encourage it.
The company's AI chatbots have been the subject of numerous reports in recent months that have raised concerns about their potential harms to children. The FTC in August launched a formal inquiry into companion AI chatbots not just from Meta, but other companies as well, including Alphabet, Snap, OpenAI and X.AI.