
    Fatemeh Alizadeh, M.Sc.


    Mail: fatemeh.alizadeh(at)uni-siegen.de

    Room: –

    Phone: –

    Vita

    Fatemeh Alizadeh is a research assistant and Ph.D. candidate at the Chair of Information Systems, in particular IT Security and Consumer Informatics, at the University of Siegen. After her Bachelor’s degree in Computer Engineering, she continued her studies with a first Master’s degree in Artificial Intelligence and a second in Human-Computer Interaction at the University of Siegen, during which she won the usability challenge award in Germany. Fatemeh’s research interest lies in designing new and creative communication techniques between users and opaque AI-algorithmic systems to provide users with a more satisfying and engaging interaction.

    Publications

    2024


    • Alizadeh, F., Tolmie, P., Lee, M., Wintersberger, P., Pins, D. & Stevens, G. (2024) Voice Assistants’ Accountability through Explanatory Dialogues

      Proceedings of the 6th ACM Conference on Conversational User Interfaces. New York, NY, USA, Publisher: Association for Computing Machinery, Pages: 1–12 doi:10.1145/3640794.3665557

      As voice assistants (VAs) become more advanced leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs’ accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa’s accounts are often single, decontextualized responses that led to users’ alternative repair strategies over the long term, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users’ emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.

      @inproceedings{alizadeh_voice_2024,
      address = {New York, NY, USA},
      series = {{CUI} '24},
      title = {Voice {Assistants}' {Accountability} through {Explanatory} {Dialogues}},
      isbn = {9798400705113},
      url = {https://doi.org/10.1145/3640794.3665557},
      doi = {10.1145/3640794.3665557},
      abstract = {As voice assistants (VAs) become more advanced leveraging Large Language Models (LLMs) and natural language processing, their potential for accountable behavior expands. Yet, the long-term situational effectiveness of VAs’ accounts when errors occur remains unclear. In our 19-month exploratory study with 19 households, we investigated the impact of an Alexa feature that allows users to inquire about the reasons behind its actions. Our findings indicate that Alexa's accounts are often single, decontextualized responses that led to users’ alternative repair strategies over the long term, such as turning off the device, rather than initiating a dialogue about what went wrong. Through role-playing workshops, we demonstrate that VA interactions should facilitate explanatory dialogues as dynamic exchanges that consider a range of speech acts, recognizing users’ emotional states and the context of interaction. We conclude by discussing the implications of our findings for the design of accountable VAs.},
      urldate = {2024-07-11},
      booktitle = {Proceedings of the 6th {ACM} {Conference} on {Conversational} {User} {Interfaces}},
      publisher = {Association for Computing Machinery},
      author = {Alizadeh, Fatemeh and Tolmie, Peter and Lee, Minha and Wintersberger, Philipp and Pins, Dominik and Stevens, Gunnar},
      month = jul,
      year = {2024},
      pages = {1--12},
      }


    • Amirkhani, S., Alizadeh, F., Randall, D. & Stevens, G. (2024) Beyond Dollars: Unveiling the Deeper Layers of Online Romance Scams Introducing “Body Scam”

      Extended Abstracts of the CHI Conference on Human Factors in Computing Systems. New York, NY, USA, Publisher: Association for Computing Machinery, Pages: 1–6 doi:10.1145/3613905.3651004

      As online romance surges, so does the popularity of online romance scam. While existing research predominantly emphasizes financial scams, our study introduces “body scam” which is not for financial gain, but just for sexual abuse. We conducted interviews with 20 victims of online dating fraud with sexual scam experience in Iran, delving into their story in a context where dating is legally, socially, and normatively complex. Through inductive coding, our findings reveal a notable shift in victimization risks, particularly for women, with a growing emphasis on predatory sexual intent rather than purely financial motivation. This study not only sheds light on the evolving landscape of online dating risks but also underscores the significance of understanding the nuances of sexual intentions in the Iranian context.

      @inproceedings{amirkhani_beyond_2024,
      address = {New York, NY, USA},
      series = {{CHI} {EA} '24},
      title = {Beyond {Dollars}: {Unveiling} the {Deeper} {Layers} of {Online} {Romance} {Scams} {Introducing} “{Body} {Scam}”},
      isbn = {9798400703317},
      shorttitle = {Beyond {Dollars}},
      url = {https://dl.acm.org/doi/10.1145/3613905.3651004},
      doi = {10.1145/3613905.3651004},
      abstract = {As online romance surges, so does the popularity of online romance scam. While existing research predominantly emphasizes financial scams, our study introduces “body scam” which is not for financial gain, but just for sexual abuse. We conducted interviews with 20 victims of online dating fraud with sexual scam experience in Iran, delving into their story in a context where dating is legally, socially, and normatively complex. Through inductive coding, our findings reveal a notable shift in victimization risks, particularly for women, with a growing emphasis on predatory sexual intent rather than purely financial motivation. This study not only sheds light on the evolving landscape of online dating risks but also underscores the significance of understanding the nuances of sexual intentions in the Iranian context.},
      urldate = {2024-05-16},
      booktitle = {Extended {Abstracts} of the {CHI} {Conference} on {Human} {Factors} in {Computing} {Systems}},
      publisher = {Association for Computing Machinery},
      author = {Amirkhani, Sima and Alizadeh, Fatemeh and Randall, Dave and Stevens, Gunnar},
      month = may,
      year = {2024},
      keywords = {Iran, Body Scam, Online Dating Fraud, Online Romance Scam, Sextortion, Sexual Intention},
      pages = {1--6},
      }

    2023


    • Pins, D., Jakobi, T., Stevens, G., Alizadeh, F. & Krüger, J. (2023) Finding, getting and understanding: The User Journey for the GDPR’S Right to Access

      IN Behaviour & Information Technology, Vol. 41, Pages: 2174–2200

      @article{pins_finding_2023,
      title = {Finding, getting and understanding: {The} {User} {Journey} for the {GDPR}’{S} {Right} to {Access}},
      volume = {41},
      url = {https://www.researchgate.net/publication/370490058_Finding_getting_and_understanding_The_User_Journey_for_the_GDPR'S_Right_to_Access},
      author = {Pins, Dominik and Jakobi, Timo and Stevens, Gunnar and Alizadeh, Fatemeh and Krüger, Jana},
      month = may,
      year = {2023},
      pages = {2174--2200},
      }


    • Alizadeh, F., Stevens, G., Jakobi, T. & Krüger, J. (2023) Catch Me if You Can: “Delaying” as a Social Engineering Technique in the Post-Attack Phase

      IN Proceedings of the ACM on Human-Computer Interaction, Vol. 7, Pages: 32:1–32:25 doi:10.1145/3579465

      Much is known about social engineering strategies (SE) during the attack phase, but little is known about the post-attack period. To address this gap, we conducted 17 narrative interviews with victims of cyber fraud. We found that while it was seen to be important for victims to act immediately and to take countermeasures against attack, they often did not do so. In this paper, we describe this “delay” in victims’ responses as entailing a period of doubt and trust in good faith. The delay in victim response is a direct consequence of various SE techniques, such as exploiting prosocial behavior with subsequent negative effects on emotional state and interpersonal relationships. Our findings contribute to shaping digital resistance by helping people identify and overcome delay techniques to combat their inaction and paralysis.

      @article{alizadeh_catch_2023,
      title = {Catch {Me} if {You} {Can} : "{Delaying}" as a {Social} {Engineering} {Technique} in the {Post}-{Attack} {Phase}},
      volume = {7},
      shorttitle = {Catch {Me} if {You} {Can}},
      url = {https://dl.acm.org/doi/10.1145/3579465},
      doi = {10.1145/3579465},
      abstract = {Much is known about social engineering strategies (SE) during the attack phase, but little is known about the post-attack period. To address this gap, we conducted 17 narrative interviews with victims of cyber fraud. We found that while it was seen to be important for victims to act immediately and to take countermeasures against attack, they often did not do so. In this paper, we describe this "delay" in victims' responses as entailing a period of doubt and trust in good faith. The delay in victim response is a direct consequence of various SE techniques, such as exploiting prosocial behavior with subsequent negative effects on emotional state and interpersonal relationships. Our findings contribute to shaping digital resistance by helping people identify and overcome delay techniques to combat their inaction and paralysis.},
      number = {CSCW1},
      urldate = {2023-04-20},
      journal = {Proceedings of the ACM on Human-Computer Interaction},
      author = {Alizadeh, Fatemeh and Stevens, Gunnar and Jakobi, Timo and Krüger, Jana},
      month = apr,
      year = {2023},
      keywords = {comping strategies, cybercrime, digital resilience, post-attack, social computing, social engineering, usable security, user behavior, victim's vulnerabilities},
      pages = {32:1--32:25},
      }

    2022


    • Alizadeh, F., Mniestri, A. & Stevens, G. (2022) Does Anyone Dream of Invisible A.I.? A Critique of the Making Invisible of A.I. Policing

      Nordic Human-Computer Interaction Conference. New York, NY, USA, Publisher: Association for Computing Machinery, Pages: 1–6 doi:10.1145/3546155.3547282

      For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as “us.” This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users, only this time they have to prove their very humanity and there is no court or jury to judge, but an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new “invisible” tests leave no room for the user to question whether the process of questioning is even fair or ethical. We challenge this thesis by offering a juxtaposition with the science fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.

      @inproceedings{alizadeh_does_2022,
      address = {New York, NY, USA},
      series = {{NordiCHI} '22},
      title = {Does {Anyone} {Dream} of {Invisible} {A}.{I}.? {A} {Critique} of the {Making} {Invisible} of {A}.{I}. {Policing}},
      isbn = {978-1-4503-9699-8},
      shorttitle = {Does {Anyone} {Dream} of {Invisible} {A}.{I}.?},
      url = {https://doi.org/10.1145/3546155.3547282},
      doi = {10.1145/3546155.3547282},
      abstract = {For most people, using their body to authenticate their identity is an integral part of daily life. From our fingerprints to our facial features, our physical characteristics store the information that identifies us as "us." This biometric information is becoming increasingly vital to the way we access and use technology. As more and more platform operators struggle with traffic from malicious bots on their servers, the burden of proof is on users, only this time they have to prove their very humanity and there is no court or jury to judge, but an invisible algorithmic system. In this paper, we critique the invisibilization of artificial intelligence policing. We argue that this practice obfuscates the underlying process of biometric verification. As a result, the new "invisible" tests leave no room for the user to question whether the process of questioning is even fair or ethical. We challenge this thesis by offering a juxtaposition with the science fiction imagining of the Turing test in Blade Runner to reevaluate the ethical grounds for reverse Turing tests, and we urge the research community to pursue alternative routes of bot identification that are more transparent and responsive.},
      urldate = {2022-10-04},
      booktitle = {Nordic {Human}-{Computer} {Interaction} {Conference}},
      publisher = {Association for Computing Machinery},
      author = {Alizadeh, Fatemeh and Mniestri, Aikaterini and Stevens, Gunnar},
      month = oct,
      year = {2022},
      keywords = {Biometric data, Invisible AI, reCAPTCHA, Verification systems, Voight-Kampff test},
      pages = {1--6},
      }


    • Pins, D., Jakobi, T., Stevens, G., Alizadeh, F. & Krüger, J. (2022) Finding, getting and understanding: the user journey for the GDPR’S right to access

      IN Behaviour & Information Technology, Pages: 1–27 doi:10.1080/0144929X.2022.2074894

      In both data protection law and research of usable privacy, awareness and control over the collection and use of personal data are understood to be cornerstones of digital sovereignty. For example, the European General Data Protection Regulation (GDPR) provides data subjects with the right to access data collected by organisations but remains unclear on the concrete process design. However, the design of data subject rights is crucial when it comes to the ability of customers to exercise their right and fulfil regulatory aims such as transparency. To learn more about user needs in implementing the right to access as per GDPR, we conducted a two-step study. First, we defined a five-phase user experience journey regarding the right to access: finding, authentication, request, access and data use. Second, and based on this model, 59 participants exercised their right to access and evaluated the usability of each phase. Drawing on 422 datasets spanning 139 organisations, our results show several interdependencies of process design and user satisfaction. Thereby, our insights inform the community of usable privacy and especially the design of the right to access with a first, yet robust, empirical body.

      @article{pins_finding_2022,
      title = {Finding, getting and understanding: the user journey for the {GDPR}’{S} right to access},
      volume = {0},
      issn = {0144-929X},
      shorttitle = {Finding, getting and understanding},
      url = {https://doi.org/10.1080/0144929X.2022.2074894},
      doi = {10.1080/0144929X.2022.2074894},
      abstract = {In both data protection law and research of usable privacy, awareness and control over the collection and use of personal data are understood to be cornerstones of digital sovereignty. For example, the European General Data Protection Regulation (GDPR) provides data subjects with the right to access data collected by organisations but remains unclear on the concrete process design. However, the design of data subject rights is crucial when it comes to the ability of customers to exercise their right and fulfil regulatory aims such as transparency. To learn more about user needs in implementing the right to access as per GDPR, we conducted a two-step study. First, we defined a five-phase user experience journey regarding the right to access: finding, authentication, request, access and data use. Second, and based on this model, 59 participants exercised their right to access and evaluated the usability of each phase. Drawing on 422 datasets spanning 139 organisations, our results show several interdependencies of process design and user satisfaction. Thereby, our insights inform the community of usable privacy and especially the design of the right to access with a first, yet robust, empirical body.},
      number = {0},
      urldate = {2022-06-01},
      journal = {Behaviour \& Information Technology},
      author = {Pins, Dominik and Jakobi, Timo and Stevens, Gunnar and Alizadeh, Fatemeh and Krüger, Jana},
      month = may,
      year = {2022},
      note = {Publisher: Taylor \& Francis
      \_eprint: https://doi.org/10.1080/0144929X.2022.2074894},
      keywords = {GDPR, usability, usable privacy, Data literacy, human and societal aspects of security and privacy, right to access, Security and privacy, usability in security and privacy, user journey},
      pages = {1--27},
      }


    • Alizadeh, F., Mniestri, A., Uhde, A. & Stevens, G. (2022) On Appropriation and Nostalgic Reminiscence of Technology

      CHI ’22 Extended Abstracts. New Orleans, LA, USA, Publisher: Association for Computing Machinery, Pages: 6 doi:10.1145/3491101.3519676

      Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention, “outdated technologies.” In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.

      @inproceedings{alizadeh_appropriation_2022,
      address = {New Orleans, LA, USA},
      title = {On {Appropriation} and {Nostalgic} {Reminiscence} of {Technology}},
      isbn = {978-1-4503-9156-6},
      doi = {10.1145/3491101.3519676},
      abstract = {Technological objects present themselves as necessary, only to become obsolete faster than ever before. This phenomenon has led to a population that experiences a plethora of technological objects and interfaces as they age, which become associated with certain stages of life and disappear thereafter. Noting the expanding body of literature within HCI about appropriation, our work pinpoints an area that needs more attention, "outdated technologies." In other words, we assert that design practices can profit as much from imaginaries of the future as they can from reassessing artefacts from the past in a critical way. In a two-week fieldwork with 37 HCI students, we gathered an international collection of nostalgic devices from 14 different countries to investigate what memories people still have of older technologies and the ways in which these memories reveal normative and accidental use of technological objects. We found that participants primarily remembered older technologies with positive connotations and shared memories of how they had adapted and appropriated these technologies, rather than normative uses. We refer to this phenomenon as nostalgic reminiscence. In the future, we would like to develop this concept further by discussing how nostalgic reminiscence can be operationalized to stimulate speculative design in the present.},
      booktitle = {{CHI} ’22 {Extended} {Abstracts}},
      publisher = {Association for Computing Machinery},
      author = {Alizadeh, Fatemeh and Mniestri, Aikaterini and Uhde, Alarith and Stevens, Gunnar},
      month = apr,
      year = {2022},
      pages = {6},
      }


    • Alizadeh, F., Stevens, G., Vereschak, O., Bailly, G., Caramiaux, B. & Pins, D. (2022) Building Appropriate Trust in Human-AI Interactions

      doi:10.48340/ecscw2022_ws04

      AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?

      @article{alizadeh_building_2022,
      title = {Building {Appropriate} {Trust} in {Human}-{AI} {Interactions}},
      issn = {2510-2591},
      url = {https://dl.eusset.eu/handle/20.500.12015/4407},
      doi = {10.48340/ecscw2022_ws04},
      abstract = {AI (artificial intelligence) systems are increasingly being used in all aspects of our lives, from mundane routines to sensitive decision-making and even creative tasks. Therefore, an appropriate level of trust is required so that users know when to rely on the system and when to override it. While research has looked extensively at fostering trust in human-AI interactions, the lack of standardized procedures for human-AI trust makes it difficult to interpret results and compare across studies. As a result, the fundamental understanding of trust between humans and AI remains fragmented. This workshop invites researchers to revisit existing approaches and work toward a standardized framework for studying AI trust to answer the open questions: (1) What does trust mean between humans and AI in different contexts? (2) How can we create and convey the calibrated level of trust in interactions with AI? And (3) How can we develop a standardized framework to address new challenges?},
      language = {en},
      urldate = {2022-06-27},
      author = {Alizadeh, Fatemeh and Stevens, Gunnar and Vereschak, Oleksandra and Bailly, Gilles and Caramiaux, Baptiste and Pins, Dominik},
      year = {2022},
      note = {Accepted: 2022-06-22T04:34:50Z
      Publisher: European Society for Socially Embedded Technologies (EUSSET)},
      }

    2021


    • Pins, D., Jakobi, T., Boden, A., Alizadeh, F. & Wulf, V. (2021) Alexa, We Need to Talk: A Data Literacy Approach on Voice Assistants

      Designing Interactive Systems Conference 2021. New York, NY, USA, Publisher: Association for Computing Machinery, Pages: 495–507 doi:10.1145/3461778.3462001

      Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users in making better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with data, for instance allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.

      @inproceedings{pins_alexa_2021,
      address = {New York, NY, USA},
      series = {{DIS} '21},
      title = {Alexa, {We} {Need} to {Talk}: {A} {Data} {Literacy} {Approach} on {Voice} {Assistants}},
      isbn = {978-1-4503-8476-6},
      shorttitle = {Alexa, {We} {Need} to {Talk}},
      url = {https://doi.org/10.1145/3461778.3462001},
      doi = {10.1145/3461778.3462001},
      abstract = {Voice assistants (VA) collect data about users’ daily life including interactions with other connected devices, musical preferences, and unintended interactions. While users appreciate the convenience of VAs, their understanding and expectations of data collection by vendors are often vague and incomplete. By making the collected data explorable for consumers, our research-through-design approach seeks to unveil design resources for fostering data literacy and help users in making better informed decisions regarding their use of VAs. In this paper, we present the design of an interactive prototype that visualizes the conversations with VAs on a timeline and provides end users with basic means to engage with data, for instance allowing for filtering and categorization. Based on an evaluation with eleven households, our paper provides insights on how users reflect upon their data trails and presents design guidelines for supporting data literacy of consumers in the context of VAs.},
      urldate = {2021-07-05},
      booktitle = {Designing {Interactive} {Systems} {Conference} 2021},
      publisher = {Association for Computing Machinery},
      author = {Pins, Dominik and Jakobi, Timo and Boden, Alexander and Alizadeh, Fatemeh and Wulf, Volker},
      month = jun,
      year = {2021},
      pages = {495--507},
      }


    • Alizadeh, F., Stevens, G. & Esau, M. (2021) I Don’t Know, Is AI Also Used in Airbags?

      IN i-com, Vol. 20, Pages: 3–17 doi:10.1515/icom-2021-0009

      @article{alizadeh_i_2021,
      title = {I {Don}’t {Know}, {Is} {AI} {Also} {Used} in {Airbags}?},
      volume = {20},
      url = {https://doi.org/10.1515/icom-2021-0009},
      doi = {10.1515/icom-2021-0009},
      number = {1},
      journal = {i-com},
      author = {Alizadeh, Fatemeh and Stevens, Gunnar and Esau, Margarita},
      year = {2021},
      pages = {3--17},
      }


    • Jakobi, T., Alizadeh, F., Marburger, M. & Stevens, G. (2021) A Consumer Perspective on Privacy Risk Awareness of Connected Car Data Use

      doi:10.1145/3473856.3473891

      New cars are increasingly “connected” by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.

      @article{jakobi_consumer_2021,
      title = {A {Consumer} {Perspective} on {Privacy} {Risk} {Awareness} of {Connected} {Car} {Data} {Use}},
      url = {http://dl.gi.de/handle/20.500.12116/37266},
      doi = {10.1145/3473856.3473891},
      abstract = {New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of perceived risks themselves. Taking the perspective of consumers, we argue that these can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on risks of data use.},
      language = {en},
      urldate = {2021-09-16},
      author = {Jakobi, Timo and Alizadeh, Fatemeh and Marburger, Martin and Stevens, Gunnar},
      year = {2021},
      note = {Accepted: 2021-09-03T19:10:19Z
      Publisher: ACM},
      }

    2020


    • Alizadeh, F. (2020) “Exploration of Cyber Victimology through Victims’ Narrations to Design for Digital Resilience”

      University of Siegen, Siegen

      @phdthesis{alizadeh_exploration_2020,
      address = {Siegen},
      title = {“{Exploration} of {Cyber} {Victimology} through {Victims}’ {Narrations} to {Design} for {Digital} {Resilience}”},
      url = {https://www.wineme.uni-siegen.de/wp-content/uploads/2020/11/Masterarbeit-Alizadeh.pdf},
      school = {University of Siegen},
      author = {Alizadeh, Fatemeh},
      month = feb,
      year = {2020},
      keywords = {thesis},
      }


    • Alizadeh, F., Esau, M., Stevens, G. & Cassens, L. (2020) eXplainable AI: Take one Step Back, Move two Steps forward

      doi:10.18420/muc2020-ws111-369

      In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of “where is AI” from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users’ awareness and of a mutual understanding of AI-enabled systems between designers and users, informal theories held by users about how a system works (“folk theories”) become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of users’ current perception of AI is still missing. In this study, we introduce the term “Perceived AI” (PAI) as “AI defined from the perspective of its users”. We then present preliminary results from in-depth interviews with 50 users of AI technology, which provide a framework for our future research approach towards a better understanding of PAI and users’ folk theories.

      @article{alizadeh_explainable_2020,
      title = {{eXplainable} {AI}: {Take} one {Step} {Back}, {Move} two {Steps} forward},
      shorttitle = {{eXplainable} {AI}},
      url = {http://dl.gi.de/handle/20.500.12116/33513},
      doi = {10.18420/muc2020-ws111-369},
      abstract = {In 1991 the researchers at the center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question of “where is AI” from the users, who were interacting with AI but did not realize it. Three decades of research and we are still facing the same issue with the AItechnology users. In the lack of users’ awareness and mutual understanding of AI-enabled systems between designers and users, informal theories of the users about how a system works (“Folk theories”) become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, explainable AI has been suggested by AI practitioners. However, a profound understanding of the current users’ perception of AI is still missing. In this study, we introduce the term “Perceived AI” as “AI defined from the perspective of its users”. We then present our preliminary results from deep-interviews with 50 AItechnology users, which provide a framework for our future research approach towards a better understanding of PAI and users’ folk theories.},
      language = {en},
      urldate = {2021-04-15},
      author = {Alizadeh, Fatemeh and Esau, Margarita and Stevens, Gunnar and Cassens, Lena},
      year = {2020},
      note = {Accepted: 2020-08-18T15:19:49Z
      Publisher: Gesellschaft für Informatik e.V.},
      }

    2019


    • Alizadeh, F., Jakobi, T., Boldt, J. & Stevens, G. (2019)GDPR-Reality Check on the Right to Access Data: Claiming and Investigating Personally Identifiable Data from Companies

      Proceedings of Mensch und Computer 2019. New York, NY, USA, Publisher: Association for Computing Machinery, Pages: 811–814 doi:10.1145/3340764.3344913
      [BibTeX] [Abstract] [Download PDF]

      Loyalty programs are early examples of companies commercially collecting and processing personal data. Today, more than ever before, personal information is being used by companies of all types for a wide variety of purposes. To limit this, the General Data Protection Regulation (GDPR) aims to provide consumers with tools to control data collection and processing. What this right concretely means, which types of tools companies have to provide to their customers and in which way, is currently uncertain because precedents from case law are missing. Contributing to closing this gap, we turn to the example of loyalty cards to supplement current implementations of the right to claim data with a user perspective. In our hands-on approach, we had 13 households request their personal data from their respective loyalty program. We investigate expectations of GDPR in general and the right to access in particular, observe the process of claiming and receiving, and discuss the provided data takeouts. One year after the GDPR has come into force, our findings highlight the consumer’s expectations and knowledge of the GDPR and in particular the right to access to inform design of more usable privacy enhancing technologies.

      @inproceedings{alizadeh_gdpr-reality_2019,
      address = {New York, NY, USA},
      series = {{MuC}'19},
      title = {{GDPR}-{Reality} {Check} on the {Right} to {Access} {Data}: {Claiming} and {Investigating} {Personally} {Identifiable} {Data} from {Companies}},
      isbn = {978-1-4503-7198-8},
      shorttitle = {{GDPR}-{Reality} {Check} on the {Right} to {Access} {Data}},
      url = {https://doi.org/10.1145/3340764.3344913},
      doi = {10.1145/3340764.3344913},
      abstract = {Loyalty programs are early examples of companies commercially collecting and processing personal data. Today, more than ever before, personal information is being used by companies of all types for a wide variety of purposes. To limit this, the General Data Protection Regulation (GDPR) aims to provide consumers with tools to control data collection and processing. What this right concretely means, which types of tools companies have to provide to their customers and in which way, is currently uncertain because precedents from case law are missing. Contributing to closing this gap, we turn to the example of loyalty cards to supplement current implementations of the right to claim data with a user perspective. In our hands-on approach, we had 13 households request their personal data from their respective loyalty program. We investigate expectations of GDPR in general and the right to access in particular, observe the process of claiming and receiving, and discuss the provided data takeouts. One year after the GDPR has come into force, our findings highlight the consumer's expectations and knowledge of the GDPR and in particular the right to access to inform design of more usable privacy enhancing technologies.},
      urldate = {2021-04-16},
      booktitle = {Proceedings of {Mensch} und {Computer} 2019},
      publisher = {Association for Computing Machinery},
      author = {Alizadeh, Fatemeh and Jakobi, Timo and Boldt, Jens and Stevens, Gunnar},
      month = sep,
      year = {2019},
      keywords = {Claim personal data, Data takeout, GDPR, Usable Privacy},
      pages = {811--814},
      }




    • Pins, D. & Alizadeh, F. „Im Wohnzimmer kriegt die schon alles mit“ – Sprachassistentendaten im Alltag

      IN Verbraucherdatenschutz – Technik und Regulation zur Unterstützung des Individuums, Vol. Schriften der Verbraucherinformatik Band 1, Pages: 20
      [BibTeX] [Abstract]

      Voice assistants such as Alexa or Google Assistant have become an integral part of many consumers’ everyday lives. They are particularly appealing because of their voice-based, hands-free control and, at times, their entertaining character. As the center of domestic life, the living room and the kitchen are the most common places of installation, since household members spend most of their time there and everyday life unfolds there. However, this also means that a great deal of data not intended for the voice assistant can potentially be captured and collected in these places. Consequently, it cannot be ruled out that the voice assistant is activated, even if inadvertently, by conversations or sounds and stores recordings, including when the activation is triggered unknowingly by those present, by other devices (e.g., a television), or from other rooms. As part of a research project, we interviewed users about their usage and placement practices for voice assistants and also tested a prototype that makes the stored interactions with the voice assistant visible. Based on the insights from the interviews and the guidelines derived from the subsequent user tests of the prototype, this contribution presents an application for requesting and visualizing interaction data from the voice assistant. It makes it possible to represent interactions and the situation surrounding them by showing, for each interaction, the time, the device used, and the command, and by making unexpected behaviors such as accidental or incorrect activation visible. In this way, we aim to sensitize consumers to the error-proneness of these devices and to enable a more self-determined and safer use.

      @article{pins_im_nodate,
      title = {„{Im} {Wohnzimmer} kriegt die schon alles mit“ – {Sprachassistentendaten} im {Alltag}},
      volume = {Schriften der Verbraucherinformatik Band 1},
      abstract = {Sprachassistenten wie Alexa oder Google Assistant sind aus dem Alltag vieler VerbraucherInnen nicht mehr wegzudenken. Sie überzeugen insbesondere durch die sprachbasierte und somit freihändige Steuerung und mitunter auch den unterhaltsamen Charakter. Als häuslicher Lebensmittelpunkt sind die häufigsten Aufstellungsorte das Wohnzimmer und die Küche, da sich Haushaltsmitglieder dort die meiste Zeit aufhalten und das alltägliche Leben abspielt. Dies bedeutet allerdings ebenso, dass an diesen Orten potenziell viele Daten erfasst und gesammelt werden können, die nicht für den Sprachassistenten bestimmt sind. Demzufolge ist nicht auszuschließen, dass der Sprachassistent – wenn auch versehentlich – durch Gespräche oder Geräusche aktiviert wird und Aufnahmen speichert, selbst wenn eine Aktivierung unbewusst von Anwesenden bzw. von anderen Geräten (z. B. Fernseher) erfolgt oder aus anderen Räumen kommt. Im Rahmen eines Forschungsprojekts haben wir dazu NutzerInnen über Ihre Nutzungs- und Aufstellungspraktiken der Sprachassistenten befragt und zudem einen Prototyp getestet, der die gespeicherten Interaktionen mit dem Sprachassistenten sichtbar macht. Dieser Beitrag präsentiert basierend auf den Erkenntnissen aus den Interviews und abgeleiteten Leitfäden aus den darauffolgenden Nutzungstests des Prototyps eine Anwendung zur Beantragung und Visualisierung der Interaktionsdaten mit dem Sprachassistenten. Diese ermöglicht es, Interaktionen und die damit zusammenhängende Situation darzustellen, indem sie zu jeder Interaktion die Zeit, das verwendete Gerät sowie den Befehl wiedergibt und unerwartete Verhaltensweisen wie die versehentliche oder falsche Aktivierung sichtbar macht. Dadurch möchten wir VerbraucherInnen für die Fehleranfälligkeit dieser Geräte sensibilisieren und einen selbstbestimmteren und sichereren Umgang ermöglichen.},
      language = {de},
      journal = {Verbraucherdatenschutz – Technik und Regulation zur Unterstützung des Individuums},
      author = {Pins, Dominik and Alizadeh, Fatemeh},
      pages = {20},
      }