
Transparency is Crucial for User-Centered AI, or is it? How this Notion Manifests in the UK Press Coverage of GPT

Transparency is a core principle of user-centered AI, present in all recent regulatory initiatives. Is it equally present in the public discourse? In this study, we focus on a type of AI that has reached the media, i.e., GPT. We collected a corpus of national newspaper articles published in the United Kingdom (UK) while GPT-3 was the latest version (June 2020 – November 2022) and investigated whether transparency was mentioned and, if so, in which terms. We used a mixed quantitative and qualitative approach, through which the articles were both parsed for word frequency and manually coded. The results show that transparency was rarely mentioned explicitly, but issues underpinning transparency were addressed in most texts. As a follow-up to the initial study, the scant presence of the term 'transparency' is confirmed in an additional corpus of UK national newspaper articles published since the launch of ChatGPT (November 2022 – May 2023). The implications of the absence of transparency as a reference for AI ethical concerns in the public discourse are discussed.

Read the paper here

By Mariavittoria Masotina, Elena Musi, Anna Spagnolli.

