IRIS Publications | Interchange Forum for Reflecting on Intelligent Systems (SRF IRIS) | University of Stuttgart (2024)

  • Williams, J. R., Sindermann, C., Yang, H., Montag, C., & Elhai, J. D. (2023). Latent profiles of problematic smartphone use severity are associated with social and generalized anxiety, and fear of missing out, among Chinese high school students. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 17(5), Article 5. https://doi.org/10.5817/CP2023-5-7

    Abstract

    We explored problematic smartphone use (PSU) using latent profile analysis (LPA) and relationships with anxiety variables, including severity of generalized anxiety disorder (GAD), social anxiety disorder (SAD), and Fear of Missing Out (FoMO) in a non-clinical sample. We conducted a web-based survey (during the COVID-19 pandemic from February to March 2020) with high school students (N = 1,797; 1,164 female; ages 13–19 years) in Tianjin, China, administering the Smartphone Addiction Scale-Short Version (SAS-SV) to assess PSU, Generalized Anxiety Disorder (GAD-7) Scale, Social Interaction Anxiety Scale (SIAS), and Fear of Missing Out (FoMO) Scale. Using Mplus 8.7, we conducted LPA on SAS-SV item responses to uncover latent profiles and relations with anxiety and fear measures. A three-profile PSU model fit the data according to fit indices and likelihood ratio tests. SAS-SV item responses were lowest in profile 1, moderate in profile 2, and most severe in profile 3. Individual PSU profiles modeled by LPA demonstrated significant differences in social and generalized anxiety severity and FoMO. Controlling for age and sex, adolescents with higher levels of anxiety were more likely to be classified as profiles 2 and 3 rather than profile 1. These findings will hopefully inspire future studies and treatments concerning the severity of PSU as it relates to various psychopathology constructs.
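    The profile extraction above was run in Mplus, which is proprietary. As a rough open-source illustration (not the authors' code), a comparable step can be sketched in Python with scikit-learn's GaussianMixture, selecting the number of profiles by BIC. The data below are synthetic stand-ins for SAS-SV item responses, and treating ordinal items as continuous Gaussian indicators is a simplification of LPA.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for SAS-SV item responses:
# 600 respondents, 10 items, three simulated severity profiles
# (low / moderate / severe), mixed 50/35/15.
profile_means = np.array([1.5, 3.0, 4.5])
true_labels = rng.choice(3, size=600, p=[0.50, 0.35, 0.15])
X = rng.normal(profile_means[true_labels][:, None], 0.5, size=(600, 10))

# Fit 1- to 5-profile models and keep the one with the lowest BIC,
# mirroring the fit-index comparison used to settle on three profiles.
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X))
profiles = fits[best_k].predict(X)  # profile assignment per respondent
```

    With well-separated simulated profiles, BIC recovers the three-profile structure; the assignments in `profiles` could then be related to external measures such as anxiety scores.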

  • Hagendorff, T. (2023). Information Control and Trust in the Context of Digital Technologies. In C. Eisenmann, K. Englert, C. Schubert, & E. Voss (Eds.), Varieties of Cooperation (pp. 189–201). Springer Fachmedien Wiesbaden.

  • Hagendorff, T., Fabi, S., & Kosinski, M. (2023). Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nature Computational Science, 1–9.

  • Hagendorff, T., Bossert, L. N., Tse, Y. F., & Singer, P. (2023). Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals. AI and Ethics, 3(3), Article 3.

    Abstract

    Massive efforts are made to reduce biases in both data and algorithms to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals. This paper is a critical comment on current fairness research in AI. It is the first to describe the "speciesist bias" and investigate it in several different AI systems by reflecting on the problem via a normative analysis and by probing, in several case studies, image recognition, word embedding, and language models with established methods for bias detection. We claim that animals matter morally and that discriminating against them is unethical. Furthermore, we provide evidence for speciesist biases in all the mentioned areas of AI. We find that speciesist biases are solidified by many mainstream AI applications, especially in the fields of computer vision as well as natural language processing. In both cases, this occurs because the models are trained on datasets in which speciesist patterns prevail. Therefore, AI technologies currently play a significant role in perpetuating and normalizing violence against animals. To change this, AI fairness frameworks must widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence that is inflicted on animals, especially on farmed animals.

  • Bossert, L., & Hagendorff, T. (2023). The ethics of sustainable AI: Why animals (should) matter for a sustainable use of AI. Sustainable Development, 31(5), Article 5.

  • Vetter, D., Amann, J., Bruneault, F., Coffee, M., Düdder, B., Gallucci, A., Gilbert, T. K., Hagendorff, T., van Halem, I., Hickman, E., Hildt, E., Holm, S., Kararigas, G., Kringen, P., Madai, V. I., Wiinblad Mathez, E., Tithi, J. J., Westerlund, M., Wurth, R., & Zicari, R. V. (2023). Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society, 2(3), Article 3.

  • Masur, P. K., Hagendorff, T., & Trepte, S. (2023). Challenges in Studying Social Media Privacy Literacy. In S. Trepte & P. K. Masur (Eds.), The Routledge Handbook of Privacy and Social Media (pp. 110–124). Routledge.

  • Erhard, L., Hanke, S., Remer, U., Falenska, A., & Heiberger, R. H. (2023). PopBERT. Detecting populism and its host ideologies in the German Bundestag. CoRR, abs/2309.14355. https://doi.org/10.48550/ARXIV.2309.14355

  • Erhard, L., & Heiberger, R. (2023). Regression and Machine Learning. In J. Skopek (Ed.), Research Handbook on Digital Sociology (pp. 129–144). Edward Elgar Publishing. https://www.e-elgar.com/shop/gbp/research-handbook-on-digital-sociology-9781789906752.html

    Abstract

    Machine learning (ML) techniques have become one of the most successful scientific tools and changed the everyday life of people around the globe (e.g., search engines). A vast amount of digital data sources on human behaviour has emerged due to the rise of the internet and opened the door for computer scientists to apply ML on social phenomena. In the social sciences, however, the adoption of ML has been less enthusiastic. To investigate the relation of traditional statistics and ML, this paper shows how ML might be used as regression analysis. For that purpose, we illustrate what a typical social science approach might look like and how using ML techniques could contribute additional insights when it comes to estimators (non-linearity) or the assessment of model fit (predictive power). In particular, we reveal how epistemological differences shape the potential usage of ML in the social sciences and discuss the methodological trade-off of applying ML compared to traditional statistics.
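    As a minimal illustration of the trade-off the abstract describes (this is not code from the chapter), the following Python sketch fits an ordinary linear regression and a random forest to synthetic data containing a non-linear effect, then compares out-of-sample predictive power via R²:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic outcome with sinusoidal and quadratic effects
# that a purely linear specification cannot represent.
X = rng.uniform(-3, 3, size=(1000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Held-out R²: model fit assessed as predictive power
r2_ols = ols.score(X_te, y_te)
r2_rf = rf.score(X_te, y_te)
```

    On this synthetic outcome the linear model misses both non-linear terms, so the forest's held-out R² is substantially higher: the gain in predictive power the chapter attributes to ML, bought at the cost of the directly interpretable coefficients a regression provides.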

  • Ðula, I., Berberena, T., Keplinger, K., & Wirzberger, M. (2023). Hooked on artificial agents: a systems thinking perspective. Frontiers in Behavioral Economics, 2, 1223281. https://doi.org/10.3389/frbhe.2023.1223281

    Abstract

    Following recent technological developments in the artificial intelligence space, artificial agents are increasingly taking over organizational tasks typically reserved for humans. Studies have shown that humans respond differently to this, with some being appreciative of their advice (algorithm appreciation), others being averse toward them (algorithm aversion), and others still fully relinquishing control to artificial agents without adequate oversight (automation bias). Using systems thinking, we analyze the existing literature on these phenomena and develop a conceptual model that provides an underlying structural explanation for their emergence. In doing so, we create a powerful visual tool that can be used to ground discussions about the impact artificial agents have on organizations and humans within them.

  • Runstedler, C. (2023). Alchemy and Exemplary Poetry in Middle English Literature. Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-26606-5

  • Zhang, Y., Yao, S., Sindermann, C., Rozgonjuk, D., Zhou, M., Riedl, R., & Montag, C. (2023). Investigating autistic traits, social phobia, fear of COVID-19, and internet use disorder variables in the context of videoconference fatigue. Telematics and Informatics Reports, 11, 100067. https://doi.org/10.1016/j.teler.2023.100067

    Abstract

    In response to the coronavirus disease 2019 (COVID-19) pandemic, many individuals turned to synchronous online video communication technologies as a substitute for real-world face-to-face interactions. Evidence indicates that some users of such technologies show symptoms of exhaustion and fatigue during and after videoconferences (VCs) – this phenomenon is referred to as Videoconference Fatigue (VC fatigue). Research characterizing the possible vulnerability factors for VC fatigue is still scarce and considered to be in its early stage. Contributing to closing this gap in the existing literature is the motivation for the present study. Survey data was collected from 311 German-speaking participants to explore the relationships of VC fatigue with several psychological factors including autistic traits, social phobia, Fear of COVID-19, tendencies towards Internet Use Disorders (IUD tendencies), and Fear of Missing Out (FoMO, trait and state variables). Results showed that VC fatigue was significantly positively correlated with all of these psychological factors except state-FoMO, and cross-sectional mediation analyses provided further evidence for the positive association between autistic traits and VC fatigue. Specifically, the relationship between autistic traits and VC fatigue was mediated by Fear of COVID-19 and IUD tendencies rather than social phobia, with the latter being a preregistered hypothesis. This study adds to the literature by revealing several possible vulnerability factors associated with VC fatigue. In essence, the present work sheds light on the complex association between autistic traits and VC fatigue. We discuss the implications of our study as well as its limitations and potential avenues for future research.

  • Hagendorff, T., & Danks, D. (2023). Ethical and methodological challenges in building morally informed AI systems. AI and Ethics, 3(2), Article 2.

    Abstract

    Recent progress in large language models has led to applications that can (at least) simulate possession of full moral agency due to their capacity to report context-sensitive moral assessments in open-domain conversations. However, automating moral decision-making faces several methodological as well as ethical challenges. They arise in the fields of bias mitigation, missing ground truth for moral "correctness", effects of bounded ethicality in machines, changes in moral norms over time, risks of using morally informed AI systems as actual advice, as well as societal implications an increasing importance of algorithmic moral decision-making would have. This paper comments on all these challenges and provides critical considerations for future research on full artificial moral agency. Importantly, some of the adduced challenges can be met by more careful technology design, but others necessarily require engagement with core problems of meta-ethics.

  • Hagendorff, T., & Fabi, S. (2023). Methodological reflections for AI alignment research using human feedback. arXiv, 1–9.

    Abstract

    The field of artificial intelligence (AI) alignment aims to investigate whether AI technologies align with human interests and values and function in a safe and ethical manner. AI alignment is particularly relevant for large language models (LLMs), which have the potential to exhibit unintended behavior due to their ability to learn and adapt in ways that are difficult to predict. In this paper, we discuss methodological challenges for the alignment problem specifically in the context of LLMs trained to summarize texts. In particular, we focus on methods for collecting reliable human feedback on summaries to train a reward model which in turn improves the summarization model. We conclude by suggesting specific improvements in the experimental design of alignment studies for LLMs' summarization capabilities.

  • Hagendorff, T., & Fabi, S. (2023). Why we need biased AI: How including cognitive biases can enhance AI systems. Journal of Experimental & Theoretical Artificial Intelligence, 1–14.

  • Fanton, N., Falenska, A., & Roth, M. (2023). How-to Guides for Specific Audiences: A Corpus and Initial Findings. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), 321--333. https://doi.org/10.18653/v1/2023.acl-srw.46

    Abstract

    Instructional texts for specific target groups should ideally take into account the prior knowledge and needs of the readers in order to guide them efficiently to their desired goals. However, targeting specific groups also carries the risk of reflecting disparate social norms and subtle stereotypes. In this paper, we investigate the extent to which how-to guides from one particular platform, wikiHow, differ in practice depending on the intended audience. We conduct two case studies in which we examine qualitative features of texts written for specific audiences. In a generalization study, we investigate which differences can also be systematically demonstrated using computational methods. The results of our studies show that guides from wikiHow, like other text genres, are subject to subtle biases. We aim to raise awareness of these inequalities as a first step to addressing them in future work.

  • Hagendorff, T. (2023). AI ethics and its pitfalls: not living up to its own standards? AI and Ethics, 3(1), Article 1.

  • Sindermann, C., Scholz, R. W., Löchner, N., & Montag, C. (2023). The revenue model of mainstream social media: advancing discussions on social media based on a European perspective derived from interviews with scientific and practical experts. International Journal of Human–Computer Interaction. https://doi.org/10.1080/10447318.2023.2278292

    Abstract

    Potential benefits and risks related to mainstream social media platforms and their revenue model are vigorously debated. However, a comprehensive framework of performance criteria to evaluate social media platforms and suggestions for transforming them are rare. Employing a transdisciplinary approach, the present work aimed to close these gaps through semi-structured interviews with experts from academia and industry, coupled with exploratory thematic content/topic analysis.
