OpenAI did not want 'Girlfriend' or 'Waifu' apps on its GPT Store. Techies had other ideas

OpenAI’s freshly launched GPT store is encountering early challenges in content moderation. The platform, which provides customized versions of ChatGPT, is grappling with users who are generating bots that violate OpenAI’s guidelines.

A recent report by Quartz reveals that searching for the terms “girlfriend” or “waifu” on the platform yields at least eight AI chatbots promoted as virtual companions.

“Waifu” is a term derived from the English word “wife” and is used in the anime and manga community to refer to a fictional female character with whom someone has a strong emotional attachment or affection. It’s often used humorously or playfully to describe a character fans find particularly appealing or endearing. The concept emphasizes a fan’s admiration for a character rather than any real-life romantic involvement, at least in the traditional sense of the term.

However, with the advent of AI, many men and women have reportedly developed romantic feelings for AI chatbots or waifus.

These bots, with names like “Your AI girlfriend, Tsu,” enable users to tailor a romantic partner, contravening OpenAI’s prohibition on bots exclusively designed for fostering romantic relationships.

In an attempt to address such issues, OpenAI updated its policies concurrently with the store’s launch on January 10th, 2024.

However, the appearance of policy-violating bots within a day of launch underscores the formidable challenges of moderation.

The demand for relationship-oriented bots further complicates the situation. In the United States, seven out of the 30 most downloaded AI chatbots last year were virtual friends or partners, according to reports.

These applications, often seen as a response to the loneliness epidemic, pose ethical questions about whether they genuinely aid users or exploit their emotional vulnerabilities.

OpenAI asserts that a combination of automated systems, human reviews, and user reports is employed to assess GPTs. Those deemed harmful may receive warnings or face sales bans. Nevertheless, the persistence of girlfriend bots raises scepticism about the effectiveness of these measures.

The moderation struggles echo those encountered by other AI developers. OpenAI, which has had an uneven safety track record with earlier models such as GPT-3, faces the challenge of maintaining effective safeguards, especially with the GPT Store accessible to a broader audience.

Despite the hurdles, OpenAI has a vested interest in implementing stringent policies. In the competitive landscape of general AI development, establishing effective governance is crucial for maintaining a reputable standing.

Similar to other tech firms, OpenAI is compelled to promptly address AI-related issues to uphold its image as the race for AI advancement intensifies.

However, the early policy violations underscore the formidable moderation challenges ahead, even with narrowly-focused GPT store bots, as AI technologies continue to advance.

(With inputs from agencies)
