OpenAI Media Manager Faces Controversy
The development of a highly anticipated tool from OpenAI, aimed at addressing criticisms and avoiding legal disputes, has been met with delays and uncertainty. This initiative, known as "Media Manager," was first announced in May 2024, promising creators the power to specify whether their works could be included in AI training datasets. However, more than seven months later, the company has provided no updates on the tool's progress, leading to speculation about its significance within the organization. Sources close to OpenAI have revealed that "Media Manager" is not viewed as a priority project internally, with one former employee remarking, "I don't even remember if anyone was really working on it."
Given the rapid evolution of artificial intelligence and its implications for creators' rights, the potential of "Media Manager" was initially met with enthusiasm. OpenAI outlined that the tool was intended to identify copyrighted materials across various formats, such as text, images, audio, and video.
This would allow the firm to blunt the strongest criticism it faced over its practices while helping it avoid potential legal exposure over copyright infringement. Despite these good intentions, the internal perception of the tool's importance seems to lack conviction.
One collaborator, not directly affiliated with OpenAI, stated in December that they had engaged in discussions about "Media Manager," yet there had been no recent updates. This reflects a broader concern among creators who feel uncertain about how their intellectual property may be used without their consent. The individual also noted that Fred von Lohmann, a member of OpenAI's legal team who previously led the "Media Manager" project, transitioned to a part-time consulting role in October of the previous year, raising further questions about the company's commitment to the tool and its progress.
The issue of intellectual property rights has become increasingly contentious in the age of AI.
Models like those developed by OpenAI learn to make predictions by analyzing patterns within vast datasets. For instance, these models enable tools like ChatGPT to generate coherent emails and articles, while Sora, another of OpenAI's products, can produce relatively realistic videos. This capability brings significant challenges, particularly when the outputs closely resemble the original training data, even if that data is publicly available.
One illustration of this controversy can be found in Sora-generated videos, which may feature iconic TikTok branding or well-known video game characters. The New York Times has reported instances where ChatGPT could reproduce its articles verbatim; OpenAI characterized this phenomenon as a "bug" rather than a feature. Such practices have understandably angered content creators whose works have been used without authorization; many have sought legal redress.
At present, OpenAI finds itself embroiled in multiple lawsuits brought by plaintiffs who include artists, writers, YouTube content creators, computer scientists, and media organizations, all alleging that their works were used without permission.
High-profile figures such as comedian Sarah Silverman, author Ta-Nehisi Coates, and major media organizations like The New York Times and the Canadian Broadcasting Corporation are among those pursuing legal action. While OpenAI has struck licensing agreements with certain partners, many creators feel that the terms offered do not provide adequate protection or appeal.
To give creators some options, OpenAI has introduced limited measures that allow them to opt out of AI training. In September 2023, the company unveiled a form for artists to request that their works be excluded from future datasets. Additionally, site administrators can restrict web crawlers from scraping their content. However, many creators argue that these methods are inadequate and fragmented: the current opt-out procedures for written works, videos, and audio recordings are vague, and the process for removing images is cumbersome and complex.
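For site administrators, the crawler restriction mentioned above is typically expressed in a site's robots.txt file; OpenAI documents a dedicated user agent, GPTBot, for its training-data crawler. A minimal sketch of a site-wide block (the paths shown are illustrative, and compliance depends on the crawler honoring the file):

```
# Block OpenAI's training-data crawler from the entire site
User-agent: GPTBot
Disallow: /

# All other crawlers remain unrestricted
User-agent: *
Disallow:
```

A site can also allow GPTBot into some directories while excluding others by listing multiple `Allow`/`Disallow` rules under the same user agent.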
Envisioned as a comprehensive upgrade to existing opt-out solutions, "Media Manager" was expected to streamline this process.
The announcement OpenAI made in May suggested that the forthcoming tool would leverage "state-of-the-art machine learning research" to help creators assert ownership of their content. OpenAI said it was collaborating with regulatory bodies to develop the tool, with the aim of establishing it as a standard throughout the AI industry. Yet since that first mention of "Media Manager," there has been no official commentary on its status. A spokesperson indicated in August that the tool was "still in development," but subsequent inquiries in December went unanswered.
Even if "Media Manager" eventually sees the light of day, experts urge caution, suggesting it may not significantly alleviate creators' concerns or resolve the legal disputes surrounding AI's relationship with intellectual property. Adrian Cyhan, an intellectual property attorney with Stubbs Alderton & Markiles, contended that the ambitions for "Media Manager" are considerable.
Large platforms like YouTube and TikTok have struggled with large-scale content recognition; can OpenAI do any better?
"Ensuring compliance with legal creator protections and potential compensation requirements is a challenge," Cyhan stated, pointing out that varying legal landscapes across jurisdictions can complicate matters. Ed Newton-Rex, founder of Fairly Trained, a nonprofit that certifies AI companies that respect creators' rights, expressed concern that "Media Manager" might unjustly shift the responsibility for managing AI training usage onto creators. If a creator opts not to use the tool, that could be construed as implicit consent for their works to be used.
"The majority of creators may not even be aware of this tool, let alone use it. Yet it could be used to justify extensive use of works against creators' wishes," Newton-Rex added. Joshua Weigensberg, a media and intellectual property attorney at Pryor Cashman, highlighted that creators often find their content hosted on third-party platforms, which complicates their control.
"Even if creators inform all AI platforms of their opt-out choices, these firms may still train their models using copies of works found on third-party websites and services," Weigensberg observed.
As the situation stands, OpenAI has put in place some filtering mechanisms designed to prevent its models from directly copying training examples, though these measures are far from perfect. In the meantime, the company continues to assert a "fair use" defense in response to lawsuits, arguing that the outputs generated by its models are transformative rather than mere reproductions of original works.
Ultimately, courts may rule in OpenAI's favor in these copyright disputes, echoing the precedent set a decade ago in the publishing industry's lawsuit against Google, in which the court ruled that Google was permitted to copy millions of books for its Google Books digital archive.