
Questions for Consideration on AI & the Commons

“Eight eyes. Engraving after C. Le Brun” by Charles Le Brun is licensed via CC0.

The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves: what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?

History and Evolution

When Creative Commons was founded over 20 years ago, sharing on the internet was broken. With the introduction of the CC licenses, the commons flourished. Licenses that enabled open sharing were perfectly aligned with the ideal of giving creators a choice over how their works were used.

Those who embrace openly sharing their work have a myriad of motivations for doing so. Most could not have anticipated how their works might one day be used by machines: to solve complex medical questions, to create other-worldly pictures of dogs, to train facial recognition systems – the list goes on.

Can we continue to foster a vibrant and healthy commons in today’s technological environment? How can we think innovatively about creator choice in this context?

Preference Signals

Preference signals for AI refer to the idea that an agent (a creator, rightsholder, or other entity) can signal its preference for how its work is used to train AI models. Last year, we started thinking more about this concept, as did many in the responsible tech ecosystem. But to date, the dialogue remains fairly binary, offering only all-or-nothing choices, with little imagination for how creators or communities might actually want their work to be used.
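
To make that all-or-nothing framing concrete, here is a minimal, purely illustrative sketch of a binary preference signal. The field names and values (such as ai_training_allowed) are hypothetical; they are not an existing standard or a CC tool.

```python
# A purely illustrative sketch of today's mostly binary preference signals.
# The field names and values are hypothetical, not any existing standard.

from dataclasses import dataclass

@dataclass
class WorkMetadata:
    title: str
    creator: str
    ai_training_allowed: bool  # current proposals mostly reduce to one yes/no flag

def may_train_on(work: WorkMetadata) -> bool:
    """A trainer honoring the signal sees only an all-or-nothing choice."""
    return work.ai_training_allowed

photo_album = WorkMetadata(title="Family album", creator="A. Creator", ai_training_allowed=False)
print(may_train_on(photo_album))  # False -- no way to say "yes, but only for research"
```

A richer framework would let that single flag carry conditions rather than a bare yes or no, which is where the “no, unless…” idea discussed below comes in.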

Enabling Commons-Based Participation in Generative AI

What was once a world of creators making art and researchers furthering knowledge risks being reduced to a world of rightsholders owning, controlling, and commercializing data. In this bleak future, it’s no longer a photo album, a poetry book, or a family blog. It’s content, it’s data, and eventually, it’s tokens.

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To overcome that risk, or at least to better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that views on these questions vary widely across our community.

If these views are as wide-ranging as we perceive, we feel it is imperative that we explore an intervention and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?
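
One way to picture such a framework, offered only as an illustrative sketch with an invented condition vocabulary (none of these condition names exist in any CC tool or standard), is a default refusal paired with explicit, creator-chosen exceptions that a model trainer would check before use:

```python
# A hypothetical "no, unless…" signal: training is refused by default, with
# explicit, creator-chosen exceptions. The condition names below are invented
# for illustration; they are not part of any CC tool or existing standard.

from dataclasses import dataclass, field

@dataclass
class TrainingPreference:
    default: str = "deny"                              # the "no" in "no, unless…"
    exceptions: set[str] = field(default_factory=set)  # the "unless…" part

def training_permitted(pref: TrainingPreference, purpose: str) -> bool:
    """Return True only when the stated purpose matches a creator-chosen exception."""
    if pref.default == "allow":
        return True
    return purpose in pref.exceptions

# A creator who refuses commercial training but welcomes research and
# cultural heritage uses:
pref = TrainingPreference(exceptions={"research", "cultural-heritage"})
print(training_permitted(pref, "research"))             # True
print(training_permitted(pref, "commercial-imagery"))   # False
```

Whether a check like this would need to be legally enforceable, or could rely on the pull of social norms, is exactly the open question above.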

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon.

Posted 24 July 2024
