A Deep Dive into AI Ethics and Creativity

Post by Remco Livain

Hello Everyone!

Let me take you through an insightful journey we had at GANDT during an all-hands presentation led by Marcela (Ulloa) on December 13, 2023. The topic? Artificial Intelligence (AI), a field that’s becoming increasingly relevant in our daily lives.

Marcela kicked off by defining AI as the development of computer systems that replicate intelligent human behavior. She brought up examples of early AI systems, such as GPS navigation and Deep Blue, contrasting them with modern data-driven models built on machine learning techniques, such as ChatGPT. It was a fantastic way to showcase the broad spectrum of AI technologies and their applications.

“AI is the study and development of computer systems that can copy intelligent human behaviour”—Oxford Dictionary Definition December 2023

We then delved into the nuances of supervised and unsupervised machine learning models. It’s fascinating to see how these models learn and evolve, shaping the AI landscape.
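To make that distinction concrete, here is a minimal sketch (not from the talk; all names are illustrative) contrasting the two paradigms: a supervised model learns from labelled examples, while an unsupervised one finds structure in unlabelled data on its own.

```python
# Supervised learning: we have labelled examples (feature, label)
# and learn a decision rule from them.
labelled = [(1.0, "cold"), (2.0, "cold"), (8.0, "hot"), (9.0, "hot")]

def fit_threshold(data):
    """Learn a single split point separating the two labels."""
    cold = [x for x, y in data if y == "cold"]
    hot = [x for x, y in data if y == "hot"]
    return (max(cold) + min(hot)) / 2  # midpoint between the classes

threshold = fit_threshold(labelled)
predict = lambda x: "hot" if x > threshold else "cold"

# Unsupervised learning: only unlabelled points; the algorithm
# discovers the grouping itself (a tiny 1-D k-means with k=2).
points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]

def two_means(xs, steps=10):
    """Group points around two centres by repeated reassignment."""
    c1, c2 = min(xs), max(xs)  # initialise centres at the extremes
    for _ in range(steps):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return c1, c2

centres = two_means(points)
```

The supervised model needs someone to supply the "cold"/"hot" labels up front; the unsupervised one is handed raw numbers and recovers the two clusters anyway. Real-world systems like ChatGPT combine both ideas at vastly larger scale.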

However, the highlight of the discussion was the ethical implications of and biases within AI, particularly in Large Language Models (LLMs) and image generation models. A striking example came from a Washington Post article about Stable Diffusion's responses to several prompts: asked to create images of toys for kids in Iraq, it produced teddy bears with guns and full army gear. It raised important questions about the context and ethical considerations AI tools must be programmed with.

[Image: Stable Diffusion's "toys in Iraq" output (Image Source)]


The most heated debate centered on credit attribution for AI-generated content. In the U.S., works created solely by AI can't be copyrighted, but our team pondered: should the person who inputs the prompt be credited, especially when the output is generic?

“Should the person who inputs the prompt be credited, especially when the output is generic?”—Marcela Ulloa

We agreed that the nature of the prompt and input should dictate credit attribution. For instance, asking ChatGPT to write a 5000-word book in the style of Charles Dickens shouldn't earn the prompter a writing credit. However, a blog post infused with personal quotes and a unique tone should be credited to its author.

Another troubling issue we discussed was the impact of AI on academic integrity. With students increasingly turning to AI for writing papers, it’s becoming difficult to ascertain the authenticity of their work. This challenges the very essence of academic learning and calls for future solutions to uphold the integrity of academic research.

The presentation ended on a high note, with everyone appreciating Marcela’s in-depth coverage and the stimulating discussions that ensued. It’s clear that AI is a double-edged sword, offering incredible potential while posing significant ethical challenges.

As we continue to explore AI’s possibilities at GANDT, it’s these discussions that keep us grounded and thoughtful about our approach. It’s not just about harnessing AI’s power; it’s also about understanding its impact on society and our responsibilities as innovators.

What are your thoughts on the ethical implications of AI? How should we balance creativity and credit in the age of AI? Share your views and let’s keep this important conversation going!

Interested in us hosting this discussion in your company or team?
Contact us now and we'll be happy to deep-dive into AI with your colleagues, online or in person.