Insights and Highlights: AI Governance in Life Insurance
Exploring the ethical and regulatory challenges of AI in life insurance.
August 23, 2024
The American College Cary M. Maguire Center for Ethics in Financial Services held an AI Ethics in Financial Services Summit on April 2, 2024. This immersive and educational event brought together financial experts from various corporate roles, aiming to help leaders frame governance and ethics considerations related to the use of AI.
The afternoon panel on unfair discrimination in insurance underwriting featured Azish Filabi, JD, MA, managing director of the Center for Ethics in Financial Services, and Sophia Duffy, JD, CPA, AEP®, associate professor of business planning at The American College of Financial Services, who presented on the ethical and governance challenges of artificial intelligence (AI) in the life insurance industry.
The presentation drew on insights from a 2022 academic paper with the National Association of Insurance Commissioners (NAIC), "AI-Enabled Underwriting Brings New Challenges for Life Insurance: Policy and Regulatory Considerations," and a 2021 white paper, "AI Ethics and Life Insurance: Balancing Innovation with Access."
The panelists emphasized that AI differs from traditional algorithms because complex machine learning systems can obscure the decision-making rationales in underwriting, which creates new legal and ethical challenges. Moreover, once AI systems are embedded within a process, their operations become difficult to disentangle. The opacity of these systems, often referred to as "black box" systems, poses significant technical challenges, necessitating increased technical literacy and education. The proprietary nature of many AI systems adds another layer of complexity. This opacity and complexity make it difficult to ensure that these systems comply with anti-discrimination laws, particularly those that prohibit discrimination based on legally protected characteristics, like race.
AI systems can inadvertently result in unfair discrimination by using data sources that have a historical bias or serve as proxies for protected characteristics, the panelists shared. This can lead to outcomes that are not just unfair, but also potentially illegal. However, determining who is responsible for these decisions is not straightforward. The chain of data ownership involves big data aggregators, algorithm developers, and insurers/lenders. While insurers are ultimately accountable for their products, they may lack the technical expertise to fully understand the intricacies of the AI systems they use. This creates a disconnect where insurers may not have the ability to shape or even fully comprehend the systems they deploy.
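To see how proxy effects can arise, the short sketch below simulates a purely hypothetical underwriting rule that never sees a protected attribute yet still produces different approval rates across groups, because its only input is correlated with group membership. The group labels, score distribution, and approval cutoff are invented for illustration and do not reflect any insurer's actual model or data.

```python
# Illustrative sketch only: how a facially neutral proxy can reproduce group
# disparities. All data and the scoring rule are hypothetical assumptions.
import random

random.seed(0)
N_PER_GROUP = 5_000  # hypothetical applicant counts

def neighborhood_score(group):
    """Hypothetical proxy input: the decision rule never uses 'group', but the
    score's distribution differs by group because of assumed historical patterns."""
    base = 650 if group == "A" else 600
    return base + random.gauss(0, 40)

def underwrite(score):
    """A facially neutral rule that looks only at the proxy score."""
    return score >= 620

rates = {}
for group in ("A", "B"):
    approved = sum(underwrite(neighborhood_score(group)) for _ in range(N_PER_GROUP))
    rates[group] = approved / N_PER_GROUP

print(rates)  # Group B's approval rate trails Group A's even though the rule
              # never sees group membership directly.
```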
Another issue presented was the difficulty in defining and measuring proxy discrimination when it comes to AI-enabled underwriting. Insurers are permitted to use an underwriting factor if it’s related to actual or reasonably anticipated experience, but there’s no clear-cut standard for how effective that factor needs to be. This ambiguity means each insurer’s justification for using a particular factor can be unique, making regulation even more challenging.
Ensuring that insurers' systems align with regulations while integrating external consumer data is crucial. A major concern is that consumers may remain unaware of which data are used, such as credit scores, credit history, and social media data, raising questions about fairness and their ability to correct inaccuracies. Irrelevant or incorrect data can introduce mistakes early in the data chain that then carry through to later decisions. Such embedded mistakes are particularly pernicious in complex AI systems that rely on proxy factors, where an erroneous input can produce an incorrect decision that is difficult to detect or trace.
To mitigate these risks, researchers at The College recommend a three-part framework: establishing national standards to set boundaries for acceptable design and behavior, implementing a certification system to verify that systems are developed in accordance with these standards, and conducting periodic audits of system outputs to ensure ongoing compliance.
Developing nationally accepted standards would involve the creation of guidelines to ensure AI systems adhere to best practices in system design and actuarial principles. This process requires collaborative research and careful consideration of who should define these standards. Key areas to address include: behavioral validity, or ensuring that data accurately reflects the behavior of interest; actuarial significance, assessing how inputs contribute to risk evaluation; and social welfare outcomes, defining a financially inclusive marketplace.
As the panel discussion ended, the conversation turned to the importance of testing for unfair discrimination in AI-enabled underwriting. Emerging rules suggest both objective and subjective approaches. For instance, an objective method might involve a 5% threshold for evaluating disparate impact on race, while a subjective approach would permit insurers to develop their own AI testing methodologies.
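As a concrete illustration of the objective approach, the sketch below shows one possible reading of such a threshold test: compare each group's approval rate against a reference group's and flag any gap larger than five percentage points. The data, group labels, and comparison logic are hypothetical assumptions for illustration, not the methodology any regulator or insurer has adopted.

```python
# Illustrative sketch only: one possible way to operationalize an objective
# disparate-impact check with a 5% threshold. Actual regulatory tests and
# insurer methodologies may differ substantially.
from collections import defaultdict

THRESHOLD = 0.05  # 5 percentage-point tolerance (assumed interpretation)

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, reference_group):
    """Flag any group whose approval rate trails the reference group's rate
    by more than THRESHOLD."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: (ref - r) > THRESHOLD for g, r in rates.items() if g != reference_group}

# Hypothetical underwriting outcomes: (group label, approved?)
sample = ([("A", True)] * 90 + [("A", False)] * 10 +
          [("B", True)] * 82 + [("B", False)] * 18)

print(approval_rates(sample))               # {'A': 0.9, 'B': 0.82}
print(disparate_impact_flags(sample, "A"))  # {'B': True}: an 8-point gap exceeds 5%
```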
Critical questions remain. Should there be a unified approach to testing for unfair discrimination resulting from insurance underwriting? Who should have the authority to determine this approach? And how transparent should insurers be with consumers about data usage and privacy rights? These considerations are essential as we navigate the complexities of AI-enabled underwriting and strive for a fair and equitable system.
The future of insurance underwriting is undoubtedly tied to AI, and regulators and industry can work together to ensure that future is fair and equitable. We hope our study sparks a necessary conversation within the industry and among regulators.
To learn more about AI in financial services, explore research from the Center for Ethics in Financial Services.