
The AI Compromise

December 20, 2023 | Warrick Long

We cannot escape AI, and more often than not, we don’t even realize we have been interacting with it. For all you know, this could be AI instead of the real me! There are also some societal issues we need to be aware of when we use AI. A recent online article from KelloggInsight [READ IT HERE] calls out two specific issues that could counteract the benefits of AI.

The article delves into the paradox of generative AI, exemplified by tools like ChatGPT, which promise increased individual efficiency but may raise societal red flags. Sébastien Martin, a Kellogg assistant professor, points out the inherent conundrum: to harness the efficiency of generative AI, individuals must sacrifice fidelity, i.e., how closely the output mirrors their unique style. For individuals, the choice is binary: settle for AI-generated content as "good enough" or invest extra time in customization. However, the article scrutinizes the broader societal repercussions of this dilemma. Research by Martin, Francisco Castro, and Jian Gao of UCLA suggests that widespread generative AI use could breed content homogeneity and amplify biases from the AI's training data throughout society.

Employing a mathematical model, the study unveils a bleak trajectory where AI-generated content becomes progressively more homogeneous than user-generated content, leading to a "homogenization death spiral." As AI-generated content trains the next AI generation, uniformity compounds, exacerbating the issue. The model also hints at biases ingrained during AI training evolving into pervasive societal biases.
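To make the feedback loop a little more concrete, here is a toy simulation of that dynamic. It is not the model from Castro, Gao and Martin’s paper, just an illustrative sketch under simple assumptions: each piece of content is a single “style” number, each new AI learns the distribution of whatever content is already out there, and most users accept the AI’s “good enough” draft with only a light personal touch.

# Toy simulation of the homogenization feedback loop described above.
# NOT the researchers' actual model -- an illustrative sketch only.

import numpy as np

rng = np.random.default_rng(0)

TRUE_STYLE_STD = 1.0      # diversity of genuine human preferences
N_USERS = 10_000          # pieces of content produced per generation
AI_SHARE = 0.8            # fraction of users who accept the AI draft as-is
FIDELITY = 0.3            # how much of their own style accepting users add back

# Generation 0: purely human-written content.
content = rng.normal(0.0, TRUE_STYLE_STD, N_USERS)

for generation in range(1, 11):
    # The next AI is trained on whatever content is currently out there.
    ai_mean, ai_std = content.mean(), content.std()

    # Each user still has their own preferred style, drawn from the
    # (unchanged) human distribution.
    preferences = rng.normal(0.0, TRUE_STYLE_STD, N_USERS)

    # The AI drafts content clustered near the average of its training data.
    ai_drafts = rng.normal(ai_mean, 0.2 * ai_std, N_USERS)

    # Most users keep the draft, nudging it only slightly toward their own
    # style; the rest write fully personalized content themselves.
    accepts = rng.random(N_USERS) < AI_SHARE
    content = np.where(
        accepts,
        (1 - FIDELITY) * ai_drafts + FIDELITY * preferences,
        preferences,
    )

    print(f"generation {generation:2d}: content std = {content.std():.3f}")

Running this, the spread of published content shrinks generation after generation to a level well below the spread of actual human preferences; push AI_SHARE higher or FIDELITY lower and the collapse is sharper, which is exactly the point the next paragraph’s remedies are aimed at.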

Despite these challenges, the article offers potential remedies. Interactive AI tools encouraging user input and manual edits could counteract adverse effects. Allowing users to express preferences and edit AI-generated content might prevent excessive homogeneity and bias. This user-involved approach may entail the AI asking users clarifying questions or presenting multiple contrasting outputs for user selection. While these solutions might momentarily impede the user experience, the article contends they are indispensable for the sustained viability of generative AI. By actively engaging users in shaping content, these measures aim to uphold diversity and mitigate the risk of biased outputs influencing society.
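Here is a minimal sketch of what that user-involved pattern could look like in practice: ask a clarifying question, show several deliberately contrasting drafts, and let the user pick and edit one. The generate() helper is hypothetical (stand in your own model call for it); nothing below is a real library API or the article’s own implementation.

# Sketch of a "user-involved" drafting loop, assuming a hypothetical
# generate() model call. Illustrative only.

import random

def generate(prompt: str, angle: str) -> str:
    """Hypothetical model call: returns a draft written from a given angle."""
    return f"[draft written from a {angle} angle for: {prompt!r}]"

def user_involved_draft(task: str) -> str:
    # 1. Clarifying question: pull the user's own preference into the prompt.
    tone = input("What tone do you want (e.g. formal, playful, blunt)? ")

    # 2. Contrasting outputs: vary the angle so the drafts genuinely differ,
    #    instead of returning one "average" answer.
    angles = random.sample(["personal", "data-driven", "contrarian", "storytelling"], k=3)
    drafts = [generate(f"{task} (tone: {tone})", angle) for angle in angles]

    for i, draft in enumerate(drafts, start=1):
        print(f"\nOption {i}:\n{draft}")

    # 3. User selection and manual edits keep the final text closer to the
    #    user's own style -- the "fidelity" the article talks about.
    choice = int(input("\nWhich option do you want to start from? ")) - 1
    edited = input("Edit the draft (or press Enter to keep it as-is): ")
    return edited or drafts[choice]

if __name__ == "__main__":
    print(user_involved_draft("Write a short post about our new MBA unit"))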

Martin remains hopeful about generative AI's potential in content creation, especially if it continues to encapsulate the full spectrum of human preferences. He acknowledges AI's capacity to draw more individuals into creative pursuits, enriching diversity. However, the article underscores the imperative of responsible development and usage of generative AI, ensuring it benefits individuals and society without compromising diversity, personalization, or succumbing to potential biases.

So, it’s not a case of IF you use AI, but rather HOW you use it. Remember that investing the time to train it helps build a better AI for us all. In case you were wondering, I wrote the first and last paragraphs of this article, but I let AI write the rest. Could you tell?

About the author: Dr Warrick Long is an experienced chief financial officer, company secretary and company director, having worked for more than 35 years in the not-for-profit sector. From 2013 to 2024, he was part of the Avondale Business School (ABS), lecturing as a leadership and governance specialist and coordinating the Master of Business Administration. Since late 2024, Dr Long has been serving as the Chief Financial and Operations Officer for Avondale University and undertaking some casual lecturing in the ABS. LinkedIn
