
As Cloud Giants Wrestle with AI-Generated Content, AWS Demands Kindle Notifications

As cloud giants continue to wrestle with AI-generated content causing copyright issues, Amazon Web Services (AWS) is now demanding that self-publishers on its Kindle platform indicate whether their new content comes from machines.

The move comes as fellow cloud giant Microsoft announced it would assume responsibility for copyright risks associated with the use of its Copilot AI assistants.

"As customers ask whether they can use Microsoft's Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved," said Microsoft in a Sept. 7 post, "Microsoft announces new Copilot Copyright Commitment for customers."

The day before that announcement, AWS unveiled the "Addition of AI Questions to KDP Publishing Process."

"We are actively monitoring the rapid evolution of generative AI and the impact it is having on reading, writing, and publishing, and we remain committed to providing the best possible shopping, reading, and publishing experience for our authors and customers," AWS said. "To that end, beginning today, when you publish a new title or make edits to and re-publish an existing title through KDP, you will be asked whether your content is AI-generated. Learn more about how we define AI-generated content."

AWS didn't explicitly say what it would do if publishers indicated their content was generated by AI, but its KDP Content Guidelines clearly state that content will be rejected or removed from the Kindle platform if it violates the guidelines. And those guidelines -- along with prohibiting illegal or infringing content, offensive content, content contributing to a poor customer experience, or public domain content -- include this passage:

Artificial intelligence (AI) content (text, images, or translations)
We require you to inform us of AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP. AI-generated images include cover and interior images and artwork. You are not required to disclose AI-assisted content. We distinguish between AI-generated and AI-assisted content as follows:
  • AI-generated: We define AI-generated content as text, images, or translations created by an AI-based tool. If you used an AI-based tool to create the actual content (whether text, images, or translations), it is considered "AI-generated," even if you applied substantial edits afterwards.
  • AI-assisted: If you created the content yourself, and used AI-based tools to edit, refine, error-check, or otherwise improve that content (whether text or images), then it is considered "AI-assisted" and not "AI-generated." Similarly, if you used an AI-based tool to brainstorm and generate ideas, but ultimately created the text or images yourself, this is also considered "AI-assisted" and not "AI-generated." It is not necessary to inform us of the use of such tools or processes.
You are responsible for verifying that all AI-generated and/or AI-assisted content adheres to all content guidelines. For example, to confirm an AI-based tool did not create content based on copyrighted works, you're required to review and edit any AI tool outputs.
[Image: Jane Friedman on fake books under her name: 'Some huckster generated them using AI' (source: X).]

The updated guidelines follow complaints lodged last month by author Jane Friedman about "garbage books getting uploaded to Amazon where my name is credited as the author."

Her series of posts included this: "We desperately need guardrails on this landslide of misattribution and misinformation. Amazon and Goodreads, I beg you to create a way to verify authorship, or for authors to easily block fraudulent books credited to them. Do it now, do it quickly."

Apparently, Amazon did, or is trying to. It's not clear what the company will do if publishers answer "yes" when asked if their content is generated by AI, or how it will determine if new AI-generated content could cause legal problems and should be removed. The amended guidelines appear to put the onus on creators to ensure their AI-generated content doesn't infringe on copyrights, which wouldn't do much to address the "huckster" reverse plagiarism conundrum.

While Friedman apparently hasn't taken any formal legal action, others have.

For example, the GitHub Copilot "AI pair programmer" from the Microsoft-owned code repository and development platform came under legal fire last November when attorney Matthew Butterick announced: "Today, we've filed a class-action lawsuit in US federal court in San Francisco, CA on behalf of a proposed class of possibly millions of GitHub users. We are challenging the legality of GitHub Copilot (and a related product, OpenAI Codex, which powers Copilot). The suit has been filed against a set of defendants that includes GitHub, Microsoft (owner of GitHub), and OpenAI."

Many similar lawsuits are now emerging, typically alleging intellectual property (IP) infringement, and Microsoft wants to ease the concerns of users worried about being mired in legal complications simply because they used a Copilot AI assistant, which Microsoft has infused throughout a wide swath of its products and services, including its flagship Windows OS itself.

"While these transformative tools open doors to new possibilities, they are also raising new questions," Microsoft said. "Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI. This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services."

The third "Big 3" cloud giant, Google, has faced its own class-action problems, as detailed in the July Reuters article, "Google hit with class-action lawsuit over AI data scraping."

The whole AI-generated content mess is explored further in the April article from Harvard Business Review titled, "Generative AI Has an Intellectual Property Problem," which discusses questions like, "Does copyright, patent, trademark infringement apply to AI creations? Is it clear who owns the content that generative AI platforms create for you, or your customers? Before businesses can embrace the benefits of generative AI, they need to understand the risks -- and how to protect themselves."

It looks like the cloud giants are trying to get answers to those questions and protect themselves.

Stay tuned to see how it all shakes out.

About the Author

David Ramel is an editor and writer at Converge 360.
