Carnegie Mellon University

Current Topics in Privacy Seminar

The Current Topics in Privacy Seminar is a three-credit course (17-702) taught in the Fall and Spring semesters. Members of the university community are invited to participate in the seminar even if they are not enrolled in the course. In this seminar course students will discuss recent papers and current public policy issues related to privacy. Privacy professionals from industry, government, and non-profits will deliver several guest lectures each semester. 

Members of the CMU community interested in receiving notifications about the seminars each week should contact the course instructor to request to be added to the email list. 

Instructors:  Hana Habib and Norman Sadeh

Time and location: Tuesdays, 12:30-1:50 PM. All seminars will be held in Hamburg Hall 1002 and will be available via Zoom.

Title: 
Investigating Influencer VPN Ads on YouTube
Abstract:
One widespread, but frequently overlooked, source of security information is influencer marketing ads on YouTube for security and privacy products such as VPNs. This talk examines how widespread these ads are, what kind of information they convey, and what impact they might have. Starting from a random sample of 1.4% of YouTube, we identify 243 videos containing VPN ads with a total of 63 million views. Our estimates suggest that this scales up to billions of views across all of YouTube. Using qualitative analysis, we find that these ads commonly discuss broad security guarantees as well as specific technical features, frequently focus on internet threats, and sometimes emphasize accessing otherwise unavailable content. We find a number of potentially misleading claims, including overpromises and exaggerations that could negatively influence viewers’ mental models of internet safety. As such, we explore the relationship between YouTube VPN ad exposure and users’ mental models of VPNs, security, and privacy. We use a novel VPN ad detection model to calculate the ad exposure of 217 participants via their YouTube watch histories, and we develop scales to characterize their mental models in relation to claims commonly made in VPN ads. We find that exposure to VPN ads is significantly correlated with familiarity with VPN brands and increased belief in often-unrealistic threats. In contrast, we find no significant correlation between exposure to VPN ads and belief in factual or misleading mental models about VPNs themselves. These findings suggest that the impact of VPN ads on user mental models is predominantly emotional (i.e., perception of threats) rather than technical. We recommend increased oversight of such ads to alleviate belief in unrealistic threats.
Bio:
Omer Akgul is a postdoctoral researcher at CyLab, Carnegie Mellon University. His research broadly spans human factors in security & privacy, with a focus on investigating and improving mental models of secure communication tools, uncovering friction in security professionals’ workflows, and understanding ads on the internet. His work is regularly accepted to prominent security & privacy venues and has received a best paper award at USENIX Security ’23. Advised by Michelle Mazurek, Omer received his PhD in Computer Science from the University of Maryland, College Park.
Title: 
Privacy Risk: What Is It?
Abstract:
Most organizations think of privacy risk as risk to the enterprise: the risk of regulatory action, customer backlash, and reputational damage, and the costs associated with dealing with privacy breaches. Privacy risk is best thought of, however, as risk to individuals, which is an externality to the organizations generating it. In this talk, R. Jason Cronk, president of the Institute of Operational Privacy Design, will discuss how to think about privacy risk and opportunities for improving our understanding of risk. 
Bio:
R. Jason Cronk is the author of the book Strategic Privacy by Design, published by the IAPP. He is the principal consultant with Enterprivacy Consulting Group, a boutique privacy consulting firm, where his current focus is on helping companies overcome the challenges of privacy through privacy engineering and Privacy by Design. He is an IAPP Fellow of Information Privacy, a CIPP/US, CIPT, CIPM, a Privacy by Design ambassador, a licensed attorney in Florida, a blogger, speaker and passionate advocate for understanding privacy. His unique background includes a combination of entrepreneurial ventures, work in small and large businesses, strong information technology experience and legal training.

Title: 
Meet the Game Changers: Privacy Engineers at Mastercard


Abstract: 
As our lives have become increasingly digital, more data is being collected, used, and shared about important aspects of our lives: our financial activity, our health, our friends and families. This interplay between data and innovation has transformed the role of privacy professionals from a niche compliance function into a strategic one. This comes with huge opportunities as well as novel challenges. Enter Privacy Engineers: the Game Changers. Walking through some of the core tenets of Mastercard’s Privacy Program, Caroline, Rachel and Grady will explain the critical role that the company’s Privacy Engineering team plays in translating legal concepts into engineering guidance, accelerating product and technology design, and instilling a “Privacy-First” mindset in the design of Mastercard’s newest suite of technological innovations.


Bio:
Caroline Louveaux is the Chief Privacy Officer for Mastercard. She leads the company’s work at the forefront of the policy, regulatory and legal compliance on privacy and data protection globally. Caroline spearheaded Mastercard’s global adoption of the EU General Data Protection Regulation as well as the adoption of Mastercard’s Binding Corporate Rules and APEC Cross-Border Privacy Rules to safeguard the future of Mastercard’s global data flows. She advises the company on issues that support Mastercard’s technology leadership, including cybersecurity, data portability and open banking, data localization, digital identity, blockchain, machine learning and artificial intelligence.
Caroline was recently appointed to the UK Expert Council for International Data Transfers and serves on the board of IAPP. She is a member of the advisory expert group on the OECD Privacy Guidelines and participates in the OECD Network of Experts on National AI strategies. She is also a member of the ENISA Working Group on AI Cybersecurity, the IEEE AI Systems Risk and Impact Executive Committee, and co-chairs the Privacy Project led by the US Chamber of Commerce.
Caroline is a committed privacy advocate and is passionate about the legal and societal implications of new technologies. She is also a lecturer at leading academic institutions, including Oxford Cyber Futures, IMD Business School, KU Leuven and Washington University.


Title:
Engineers and Lawyers: Collaboration in the Organization

Abstract:
As the privacy industry grows and becomes more complex, achieving privacy compliance requires collaboration between engineers and lawyers. In essence, both need to be in the same room: engineers build the systems, and lawyers interpret the requirements. Engineers and lawyers differ in that engineers focus on building systems by seeking practical, precise solutions, whereas lawyers focus on legal compliance by analyzing legal issues and risks. Despite these differences, they are similar in that both are problem solvers with strong analytical skills. Understanding these differences and similarities is crucial to building synergy between engineers and lawyers. 
Bio:
Jayson Jin is AdTech and AI Privacy Counsel at Intuit. He works on enterprise and product privacy, including data privacy stewardship, adtech, artificial intelligence, and data privacy legislative trends. He previously worked as privacy counsel at Electronic Arts, supporting game studios globally on legal compliance. Prior to becoming a lawyer, he spent a decade as a tech professional supporting data strategy and analysis across various industries. 

 

Title: 
Privacy Technology Careers
Abstract:
George will walk us through the current state of the privacy job market, covering current trends and his expectations for the space over the coming year.
Bio:
George heads up Stott & May's Privacy search practice. He brings almost a decade of experience overseeing high-profile searches across three continents and many industries, including tech, banking, and law. Specializing in high-touch relationship management, George delivers on aggressive timescales while maintaining quality, becoming a critical part of both talent attraction and retention strategies.

Through his exposure to companies at various stages, he serves as a strategic talent advisor to technology-driven companies of all shapes and sizes. George is passionate about his work and enjoys digging deep to map the market and research partner companies.
Title: Privacy-Empowering Mechanisms in a Hyper-Connected World
Abstract:
The opaque nature of today’s Internet has made it increasingly challenging for individuals to make informed decisions about their online privacy. In this talk, I will share my recent research on how we can empower individuals to take control of their privacy through various mechanisms and techniques. I will discuss three research projects that focus on 1) exploring an LLM-enabled, empathy-based approach to increase individuals’ privacy literacy, 2) combating dark patterns on the web using a user-centric strategy, and 3) facilitating privacy negotiation among multiple stakeholders in smart homes. I will briefly discuss key areas where research is needed to develop privacy-empowering mechanisms as well as my research agenda to address those gaps.
Bio:
Yaxing Yao is an Assistant Professor in the Department of Computer Science at Virginia Tech. His research lies at the intersection of human-computer interaction, usable privacy, accessibility, and emerging technologies. He aims to understand individuals’ privacy needs and expectations in our increasingly complicated, hyper-connected world, and then build privacy-enhancing technologies and privacy education interfaces to enhance individuals’ privacy literacy and empower them with more control over their privacy. His recent work covers various technological contexts (e.g., smart homes, virtual reality, online privacy) and different user groups, including at-risk populations (e.g., people with disabilities, children, and teenagers). His research has been generously supported by the National Science Foundation, Google, and Meta. 
Yaxing completed his postdoc in the Software and Societal Systems Department at Carnegie Mellon University, his PhD in the School of Information Studies at Syracuse University, and his MS in Information Management at the Information School at the University of Washington. 
Title:
Privacy in the age of AI: What's changed and what should we do about it?
 
Abstract: 
Privacy is a core tenet for engineering ethical AI products, but does AI change privacy risk? If so, what barriers do practitioners face in their privacy work for AI products? And, finally, what are ways we might address these barriers? Without an answer to these questions, we cannot hope to better support practitioners in engineering privacy-respecting AI products. To begin answering these questions, I will first present a taxonomy of AI privacy risk in which we codify how the unique capabilities and requirements of AI technologies create new privacy risks (e.g., deepfake pornography, physiognomic classifiers) and exacerbate known ones (e.g., surveillance, aggregation). I will then present an interview study with 35 industry practitioners who work on AI products. We asked these practitioners to discuss how they approach privacy for AI products, and found that practitioners often have little awareness of the ways in which AI can create new or exacerbate existing privacy threats, face significant motivational barriers in their privacy work, and have little support for AI-specific privacy work. Finally, I will present our emerging work on "Privacy through Design," where we are exploring how we might develop turnkey design methods and tools that help practitioners foreground and mitigate privacy risks in their AI design concepts.
Bio: 
Dr. Sauvik Das is an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where he directs the SPUD (Security, Privacy, Usability and Design) Lab. His work, at the intersection of HCI, AI, and cybersecurity, is oriented around answering the question: How can we design systems that empower people with improved agency over their personal data and experiences online? His work has been recognized with several awards: a best paper at UbiComp (2013), a distinguished paper at SOUPS (2020), three best paper honorable mentions at CHI (2016, 2017, 2020), a best paper honorable mention at CSCW (2021), and an honorable mention for the NSA's Best Scientific Cybersecurity Paper (2014). He was awarded an NSF CRII in 2018 and an NSF CAREER in 2022, and has otherwise been PI on several other grants from the NSF, Meta, and Oracle. His work has also been covered by the popular press, including features in The Atlantic, The Financial Times, and Dark Reading. Dr. Das received his Ph.D. in Human-Computer Interaction from Carnegie Mellon University in 2017, following a B.S. in Computer Science at Georgia Tech. Prior to joining CMU in 2022, he was an Assistant Professor at Georgia Tech’s School of Interactive Computing from 2018 to 2022.

Title: The Digital Markets Act and the Ads Ecosystem

Bio:

Eunice Wells is currently Head of Ads Privacy, User and Regulatory at Google. Prior to her current role, she oversaw ads privacy as a Product Manager at companies such as Snap, LiveRamp, and Workday. She considers herself a 'privacy nerd' and is passionate about privacy issues in the technology space. 

Title:  Implementing Privacy Technology Solutions

Abstract:  The privacy legal landscape has been evolving quickly for the last 6+ years.  Remaining compliant with these laws requires the use of various technology solutions, which are evolving just as quickly.  In this session Mark Melnychenko, Privacy Technology Practice Leader for BDO, will provide an overview of the types of consulting services his team provides and dive deep into some of the most common types of privacy technology solutions being implemented for BDO’s clients.

Mark is a Managing Director in the firm’s Privacy & Data Protection practice and leads the privacy technology practice area. With over 25 years of experience in privacy, technology, and engineering, he helps his clients evaluate and implement global programs. Mark provides deep experience in building robust data inventories and implementing privacy rights automation, consent/preference solutions, and vendor risk management processes based on US state and international privacy laws. He works closely with large, global organizations in the retail, travel and hospitality, food manufacturing, quick service restaurant, entertainment and sports, and technology industries.

Mark is a business leader who works closely with his teams to deliver quality and actionable deliverables to clients. He has formerly served in executive leadership roles as Chief Technology Officer, Vice President, and Engineering Director in the professional services, commercial real estate, healthcare, manufacturing, and digital entertainment industries.

Mark focuses his practice on privacy technology transformation, which allows his clients to move from manual to automated processes, comply with state, federal, and international laws, and develop sound risk management practices. He is an expert across multiple platforms, providing architectural guidance and implementation services for tools such as OneTrust, DataGrail, BigID, and others.

Title: Data Minimization and Pseudonymization

Bio:
Saima has over 20 years of professional experience spanning the chemical engineering, privacy engineering, law, data privacy, and security sectors. She is passionate about protecting the privacy rights of consumers and facilitating cross-functional partnerships to design and implement privacy-enhancing solutions. Saima has a proven track record of advising project teams, reviewing and contributing to technical systems and data and product development life cycles, providing subject matter expertise, and monitoring privacy laws, trends, and best practices. She contributes to her professional ecosystems by presenting talks, lecturing at universities, appearing as a guest on podcasts, moderating panel discussions, and mentoring, and she is a strong ally for privacy, cybersecurity, and AI professionals and for STEM advocates for women and girls. Saima is an active member and volunteer with the IAPP, All Tech is Human, WomenTech Network, and the Governing Council at the UofT Faculty of Engineering. 

 

Details to come
Details to come
Details to come