"AI is both a magical tool and a Pandora’s box." That’s how Professor Anjana Susarla frames the promise and peril of working with AI in creative settings. For her, the story isn’t about whether AI can produce something, but about what happens to creativity, trust, and human skill when we let machines step in as collaborators.
In a recent episode of the Tech Tomorrow podcast, host David Elliman, Chief of Software Engineering at Zühlke, sat down with Anjana Susarla, Professor of Responsible AI at Michigan State University. Together, they explored the future of creativity in an AI-powered world to unpack this critical question: Can we trust AI as a creative collaborator?
Meet the guest: Professor Anjana Susarla
Anjana Susarla is a leading voice in the field of Responsible AI, holding the Omura-Saxena Professorship at the Eli Broad College of Business. With a multidisciplinary approach spanning technology, business ethics, and data science, her research focuses on AI governance, algorithmic bias, and the evolving relationship between humans and intelligent systems.
Her unique lens brings academic rigor and real-world relevance, making her the perfect guide through this rapidly shifting landscape.
Key takeaways from the episode
Creativity meets automation
AI is already helping us draft essays, generate music, and even write code. But creativity isn’t a single step — it’s an iterative process of trial, error, and refinement.
Susarla’s research reveals two opposing dynamics when people use AI in creative work: automation bias (trusting everything the system suggests) and algorithm aversion (rejecting it because it comes from a machine). The reality of collaboration sits uneasily between the two.
The de-skilling dilemma
AI can accelerate productivity, but at what cost? Susarla points to experiments where students solved problems more effectively with AI support, only to perform worse once it was taken away. Over-reliance risks what she calls “cognitive debt” — a gradual erosion of the skills needed for deep, independent thinking.
Used mindfully, AI can accelerate idea generation and expand access. Used blindly, it can dull skills and foster dependency, especially among students and early-career professionals. This is a reminder that AI should be a collaborator, not a crutch.
The homogenisation risk
Host and guest wrestled with the blurred line between influence and originality. If an AI generates a piece of music by drawing on decades of human work, is it creative expression or sophisticated plagiarism?
If everyone relies on the same models, individuality could be flattened. From Spotify playlists to TikTok trends, algorithms already dictate much of what we see and hear. Pair that with AI-generated content and the danger becomes clear: creativity could lose its premium, replaced by an endless stream of sameness.
Instead of authentic, human-curated experiences, we risk being trapped in feedback loops where algorithms create and algorithms recommend. What gets lost is the serendipity of a friend’s book suggestion or the unique taste of a colleague’s music recommendation. As Susarla puts it:
“Where is that element of real human curated experiences, that authenticity? Are we going to pay a premium for it in the future?”
Transparency and auditability
For Susarla, the key isn’t rejecting AI outright but insisting on transparency. We must ask: where does the training data come from? What inferences are being made? And how do we verify outputs?
Auditability, she argues, is both technical and social. It requires not just code checks but ethical frameworks that ensure creative outputs are trustworthy, representative, and responsibly used.
The big takeaway
So, can we trust AI as a creative partner? The answer, unsurprisingly, is nuanced. AI can be a powerful assistant — a research aid, a sparring partner, a source of inspiration. But true creativity still belongs to humans.
Susarla’s parting advice captures that balance:
“I’m all for transparent use of AI. Just use it with some caution, and be mindful of where the training data comes from.”