
Q&A: The risks and rewards of AI in social impact


Responsible AI in social impact
What social impact leaders should know about responsible artificial intelligence.

It’s impossible to read the news without seeing artificial intelligence (AI) in the headlines. From machine learning technology that can write realistic emails to automated music generators, AI promises to streamline our lives, improve our professional processes, and deliver hyper-personalized support to nearly every aspect of society and industry.


Many social impact leaders may be wondering how AI could enhance their initiatives and make a larger positive impact in the communities where their employees live and work.


To learn what AI might mean for the social impact industry, Visit.org turned to Alyssa Lefaivre Škopac, acting executive director for the Responsible AI Institute, a member-driven nonprofit created to help companies implement and assess responsible AI practices.


We caught up with Alyssa to unpack the enormous potential, and the serious considerations, of what she calls a world-changing opportunity.


Visit.org: What types of responsible AI use cases excite you most?


Alyssa Lefaivre Škopac: I’m glad we’re talking, because the conversation around AI often defaults to a doom-and-gloom perspective. I think it’s important to balance that with the huge opportunity AI gives us to make a positive social impact.


The strides we’re going to make in health excite me the most.


These include personalized medicine and virtual assistants, new drug discovery, and augmented predictions that help radiologists detect cancers. The efficiencies and cost savings that AI can bring to complex healthcare systems could have a massive impact on how we receive care.


From an environmental and sustainability standpoint, consider smart cities and smart buildings. The amount of carbon emissions coming from commercial buildings is staggering. We could soon be able to optimize the performance of buildings and the electrical grid, and in doing so tackle the truly massive challenge of climate change.


I could go on and on … I’m so excited!


What are some of the limitations and considerations of using AI in social impact work?


We think about these things all day at the Responsible AI Institute: How do we harness the world-changing opportunity of AI without causing unintended harm? Here are the main considerations that come to mind:

  • There's a risk of introducing bias into our systems at scale. If you’re building an automated lending solution for a financial institution, for example, you’re suddenly at risk of incorporating systemic bias into every decision, rather than the bias of just one individual. (One simple way to measure this is sketched after this list.)

  • We also need to think about transparency. In many cases, we don’t know where data is coming from or the reasoning behind AI outputs. This can hinder accountability.

  • Labor displacement is a real concern. The productivity gains and the opportunities with AI are massive, but what happens when some of these jobs are subsumed by technology? And what kinds of social innovation are we going to direct toward upskilling, retraining, and making sure we have an economic engine that’s supported by human workers?

  • We need to make sure the technology is readily available to all types of people. This includes factors unrelated to AI itself, such as the lack of internet access in remote locations. If this isn’t considered thoughtfully, we risk widening economic disparity and the digital divide.
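
To make that first bullet concrete, here is a minimal sketch, in Python, of one common bias check: the impact ratio, which compares the approval rate a model gives each demographic group against the rate for the most-favored group. The function names, the toy decision data, and the 0.8 threshold (borrowed from the widely cited four-fifths rule) are all illustrative assumptions, not the Responsible AI Institute’s methodology.

    # Hedged sketch: computing impact ratios for a hypothetical
    # automated lending model. 1 = approved, 0 = denied.

    def approval_rate(decisions):
        """Share of positive (approved) decisions in a group."""
        return sum(decisions) / len(decisions)

    def impact_ratios(decisions_by_group):
        """Each group's approval rate divided by the highest group's rate."""
        rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Invented outputs from the hypothetical model.
    decisions = {
        "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75% approved
        "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
    }

    for group, ratio in impact_ratios(decisions).items():
        flag = "review" if ratio < 0.8 else "ok"  # 0.8 mirrors the four-fifths rule
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")

Run as-is, group_b’s ratio comes out at 0.50, well under the 0.8 threshold, which is exactly the kind of systemic skew a bias audit is meant to surface before a model reaches production.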

What are some guardrails that can be established to protect against bias in AI — especially in human resources?


New York City had one of the first regulations on automated employment decision tools and systems. They’re requiring an audit to demonstrate that bias hasn’t been introduced. It’s not perfect, but it’s a good bellwether for how some of these decisioning systems may come to be managed.


In the absence of regulation, there’s a great deal of work being done in the AI ecosystem to create frameworks, assessments, and tests to make sure biases aren’t being introduced unintentionally. Responsible AI Institute’s soon-to-launch certification will address this, for example. It’s really about organizations thoughtfully introducing responsible AI governance to make sure they are adhering to the latest and greatest best practices.

[Image: The Responsible AI Institute’s certification]


The awareness and zeitgeist around AI are so powerful because these conversations are happening. Large enterprises that use large-scale AI software to manage some of their HR functions know they need to be on top of this type of risk.


What are the biggest AI opportunities relating to CSR and ESG work?


This is a new and emerging space. AI’s possibilities are rarely discussed in the context of corporate social responsibility and ESG objectives. So first things first: it needs to be on the executive agenda.


I’m most excited about AI’s possibilities in impact assessment, because we can use data to maximize a corporation’s impact. How many of a company’s CSR dollars are being used effectively? Can we analyze past initiatives and better understand their outcomes? Can we identify patterns that guide where we invest in the future?
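
As a sketch of what that kind of retrospective analysis could look like, the short Python example below ranks past initiatives by dollars spent per measured outcome. The program names, spend figures, and outcome counts are all invented for illustration, and pandas is simply one convenient tool for this sort of tabulation, not a tool the interview itself prescribes.

    # Hedged sketch: comparing hypothetical past CSR initiatives by
    # cost per measured outcome, to guide future allocation.
    import pandas as pd

    past_initiatives = pd.DataFrame({
        "program":   ["literacy", "literacy", "food_bank", "food_bank", "tree_planting"],
        "year":      [2022, 2023, 2022, 2023, 2023],
        "spend_usd": [40_000, 55_000, 30_000, 35_000, 20_000],
        "outcomes":  [800, 1_250, 6_000, 7_700, 5_000],  # e.g. learners, meals, trees
    })

    summary = (
        past_initiatives
        .groupby("program")
        .agg(total_spend=("spend_usd", "sum"),
             total_outcomes=("outcomes", "sum"))
    )
    summary["usd_per_outcome"] = summary["total_spend"] / summary["total_outcomes"]

    # Rank programs by cost-efficiency to inform next year's budget.
    print(summary.sort_values("usd_per_outcome"))

The same grouping logic could run over real program data pulled from a reporting platform, which is where the real-time monitoring Alyssa describes next would come in.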


I think there’s also an opportunity to incorporate AI in real-time reporting and monitoring to understand the potential of data in a safe, fair, and ethical way. If we remove the friction of gathering and analyzing data, companies can better tell the story of their CSR and quickly adjust strategies for optimal impact, hopefully in a way that drives accountability.


The above interview has been edited for clarity and brevity.


Get in touch with us to learn how Visit.org leverages tools like real-time reporting to help social impact professionals achieve their goals.
