Learnings at the intersection of Ethics, Technology and Public Policy
Reflections on Stanford's Ethics, Technology and Public Policy for Practitioners program
As I built AI tools and used them in my everyday life, both professionally and personally, I felt the need to educate myself on their ethical dimensions. So I started looking for programs at the intersection of ethics, technology, and public policy.
I was already familiar with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and its research. However, I was looking for a program that would equip me with practical, tactical frameworks and resources I could use in my work, and I came across the Ethics, Technology, and Public Policy for Practitioners program offered by Stanford’s School of Engineering.
Over the past seven weeks, I have learned and worked with frameworks that have shaped my conversations around Responsible AI and how I approach building applications with AI.
The very first reading of the course presented a striking moral dilemma: someone is given the choice either to live in a utopia knowing that its comfort depends on another person’s suffering, or to walk away from that knowledge. Do you leave, or do you willfully ignore what you know so you can live in comfort?
The course continued by delving into technical topics such as algorithmic decision-making and fairness, data collection, privacy, and civil liberties, always with a philosophical dilemma at the crux of each topic.
In the first lecture, we had an enlightening conversation between Professor Mehran Sahami, Tencent Chair of the Computer Science Department, and Joaquin Quiñonero Candela, currently Head of Preparedness at OpenAI and previously Director of the Society and AI Lab at Facebook (Meta). At Facebook, he initially built algorithms to increase user interaction with ads and later with the news feed. His work significantly enhanced Facebook’s ability to tailor content to individual users, driving engagement by surfacing content likely to provoke strong reactions. Over time, this approach fueled Facebook’s engagement and revenue growth, but it also amplified misinformation and divisive content.
I had the opportunity to ask him a question that had been on my mind as someone using ChatGPT and other OpenAI products: “Now that OpenAI has changed from a non-profit to a for-profit organization, what metrics, goals, and objectives have changed in measuring success in your role as Head of Preparedness at OpenAI?”
As he answered and the session ended, I realized not all algorithmic problems are technical; many are inherently human issues. There will always be trade-offs in the impact of algorithms, but it is crucial to carefully consider the threshold at which harms occur. Joaquin emphasized the significant role organizational culture plays in making the right decisions in these situations. He also highlighted the importance of prioritizing long-term outcomes over short-term profits when assessing the broader implications of these decisions.
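That trade-off maps onto a simple ranking pattern. Below is a minimal, hypothetical Python sketch, not a description of Facebook’s actual system: the fields, scores, and penalty term are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_engage: float      # predicted probability the user reacts, comments, or shares
    divisiveness: float  # predicted "provocativeness" score in [0, 1]

def rank_by_engagement(posts: list[Post]) -> list[Post]:
    # Pure engagement optimization: provocative content often wins because
    # strong reactions correlate with clicks, comments, and shares.
    return sorted(posts, key=lambda p: p.p_engage, reverse=True)

def rank_with_penalty(posts: list[Post], alpha: float = 0.5) -> list[Post]:
    # One mitigation pattern: penalize predicted divisiveness, trading some
    # short-term engagement for longer-term outcomes.
    return sorted(posts, key=lambda p: p.p_engage - alpha * p.divisiveness, reverse=True)

feed = [
    Post("calm-explainer", p_engage=0.30, divisiveness=0.05),
    Post("outrage-bait", p_engage=0.45, divisiveness=0.90),
]
print([p.post_id for p in rank_by_engagement(feed)])  # ['outrage-bait', 'calm-explainer']
print([p.post_id for p in rank_with_penalty(feed)])   # ['calm-explainer', 'outrage-bait']
```

The interesting part is not the arithmetic but where the judgment lives: someone has to choose the penalty weight, and that threshold, as Joaquin noted, is an organizational and human decision rather than a purely technical one.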
The next part of the course focused on the political economy of technology. We explored the development of Silicon Valley and other centers of power in the U.S. tech industry, including the ways in which government and policy environments have shaped their growth. Among the readings and discussions with my cohort, one report stood out: “Confronting Tech Power” by AI Now, an institute producing diagnostics and actionable policy research on AI.
Some of the key points from the report:
• It emphasizes the need for public oversight of AI, urging action to prevent unchecked corporate influence, and highlights the concentration of Big Tech’s power over AI resources, data, and infrastructure, a dominance that limits competition and stifles innovation from smaller players in sectors such as healthcare, education, and finance.
• Big Tech companies maintain this dominance through vast data collection and exclusive access to computing resources, making it hard for new entrants to compete. The report also examines the political implications of Big Tech aligning with governments to preserve economic and security advantages.
• The report ends with a call for stronger regulation to counterbalance Big Tech’s power, advocating policies that place the burden of proof on companies to demonstrate their practices are not harmful, break down silos, and prevent misuse of these technologies.
In this session, we also heard from Professor K. Sabeel Rahman of Cornell Law School. He likened present-day technology systems such as AI to foundational infrastructure like roads or electricity, highlighting that control over such infrastructure constitutes a form of power.
He illustrated tech’s embedded inequities through historical examples such as Robert Moses’s low parkway overpasses on Long Island, which restricted beach access for lower-income residents by barring buses. He ended the lecture by emphasizing that technology systems, shaped by human agency, can be remade, urging action toward a more democratically accountable tech ecosystem.
The course weaves ethics, technology, and policy together throughout. As we delved into data collection, privacy, and civil liberties, Professor Lowry Pressly introduced Reiman’s four types of privacy harms, asking, “What do we stand to lose when our lives become generally visible?” The question is especially pertinent given the younger generation’s willingness to share their every moment on platforms like Instagram, YouTube, and TikTok.
Reiman identifies four types of privacy risk: extrinsic loss of freedom, intrinsic loss of freedom, symbolic risk, and psycho-political metamorphosis. These risks surface in everyday life: a lack of privacy leaves people vulnerable to having their behavior controlled by others, making them fundamentally less free in thought and action.
Another highlight was the opportunity to hear from Dame Jacinda Ardern, former Prime Minister of New Zealand, who discussed her response to the Christchurch shooting, which was streamed on social media platforms for 17 minutes before being taken down. Despite the clip’s removal, it resurfaces periodically on the same platforms.
One emphasis of her conversation was on being a kind leader. When asked how she could remain kind when the world around her was harsh, she responded that leading with kindness centers decision-making around humanity. She acknowledged that not every leader can do this because of the conditions or limitations of their power. She also addressed the misconception that making strong decisions lacks kindness or that being kind diminishes one’s strength as a leader.
Toward the course’s end, we explored Generative AI & the Future of Work with Professor Mehran Sahami and James Manyika, Senior Vice President at Google-Alphabet, where he leads Research, Technology & Society.
Sahami introduced the “employment test,” assessing how many jobs AI can perform comparably to humans. Contrary to past expectations that AI would primarily affect manufacturing, Sahami noted that AI increasingly impacts knowledge and white-collar jobs, such as in the legal, architectural, and engineering fields.
AI itself does not decide job cuts; it is a tool enabling executives and decision-makers to make choices about labor. Sahami shared two examples from publishing: one company used AI to pursue more projects, while another used it to halt hiring for open positions. This demonstrated that AI’s workforce impact varies based on company strategy.
As the course concluded, it returned us to the opening dilemma: is walking away the answer, or is it staying and trying to fix things? Staying and working to address these issues feels like the right choice, yet the methods we use to drive change matter just as much.
As I wrap up this program, I plan to incorporate frameworks from the course, such as the Ethics Canvas, to outline ethical considerations and the actions taken in response. The canvas encourages teams to clearly articulate the ethical implications of their work, fostering a shared understanding of their technology’s potential outcomes, and to document the specific actions or adjustments that address those implications.
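As a rough illustration, a team could record its canvas entries in a structure like the following so that unaddressed implications stay visible. This is a minimal sketch; the field names and example entries are my own assumptions, not the canvas’s official blocks.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsCanvas:
    product: str
    affected_individuals: list[str] = field(default_factory=list)
    affected_groups: list[str] = field(default_factory=list)
    potential_harms: list[str] = field(default_factory=list)
    mitigations: dict[str, str] = field(default_factory=dict)  # harm -> committed action

    def open_risks(self) -> list[str]:
        # Harms the team has named on the canvas but not yet acted on.
        return [h for h in self.potential_harms if h not in self.mitigations]

canvas = EthicsCanvas(
    product="AI resume screener",  # hypothetical product, for illustration only
    affected_individuals=["job applicants"],
    affected_groups=["candidates from underrepresented backgrounds"],
    potential_harms=["biased rankings", "opaque rejections with no recourse"],
    mitigations={"biased rankings": "audit for disparate impact before each release"},
)
print(canvas.open_risks())  # ['opaque rejections with no recourse']
```

Keeping the harm-to-mitigation mapping explicit makes it easy to flag, in review, any implication the team has named but not yet acted on.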
Another framework I want to highlight is Weighing Options, designed to help teams evaluate multiple courses of action by comparing their societal impacts, organizational implications, and alignment with the team’s values. It provides a systematic approach to ethical decision-making, letting teams consider each option in depth before committing to one.
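To make that concrete, here is a minimal decision-matrix sketch; the criteria, weights, options, and scores are hypothetical placeholders rather than anything the course prescribes.

```python
# Hypothetical criteria and weights; a real team would set these together.
CRITERIA_WEIGHTS = {
    "societal_impact": 0.5,   # who is helped or harmed, and how severely
    "org_implications": 0.2,  # cost, feasibility, precedent for the organization
    "values_alignment": 0.3,  # fit with the team's stated principles
}

# Each option is scored 1-5 per criterion (illustrative numbers only).
OPTIONS = {
    "ship now, monitor harms":  {"societal_impact": 2, "org_implications": 5, "values_alignment": 2},
    "delay for fairness audit": {"societal_impact": 4, "org_implications": 3, "values_alignment": 5},
}

def weighted_score(scores: dict[str, int]) -> float:
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(OPTIONS.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
# delay for fairness audit: 4.1
# ship now, monitor harms: 2.6
```

The numbers do not make the decision for the team; they force everyone to state explicitly how much each consideration counts and why one option outranks another.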
I now have the knowledge and language to voice the need for Responsible AI, equitable tech design, and regulatory collaboration with professionals across functions within a technology organization.
I would like to end with a call to action: if you want to chat about Responsible AI, you can find me on LinkedIn.
Note: I used Grammarly AI to correct spelling and grammar.