Transparency is Crucial for User-Centered AI, or is it? How this Notion Manifests in the UK Press Coverage of GPT


Transparency is a core principle of user-centered AI, present in all recent regulatory initiatives. Is it equally present in the public discourse? In this study, we focus on a type of AI that has reached the media, namely GPT. We collected a corpus of national newspaper articles published in the United Kingdom (UK) while GPT-3 was the latest version (June 2020 – November 2022) and investigated whether transparency was mentioned and, if so, in which terms. We used a mixed quantitative and qualitative approach, parsing the articles for word frequency and coding them manually. The results show that transparency was rarely mentioned explicitly, but issues underpinning transparency were addressed in most texts. As a follow-up to the initial study, the scant presence of the term transparency is confirmed in an additional corpus of UK national newspaper articles published since the launch of ChatGPT (November 2022 – May 2023). The implications of transparency's absence as a reference point for AI ethical concerns in the public discourse are discussed.

Read the paper here

By Mariavittoria Masotina, Elena Musi, Anna Spagnolli.