Deploying a multidisciplinary strategy with embedded responsible AI

The finance sector is among the keenest adopters of machine learning (ML) and artificial intelligence (AI), whose predictive powers have been demonstrated everywhere from back-office process automation to customer-facing applications. AI models excel in domains requiring pattern recognition based on well-labeled data, such as fraud detection models trained on past behavior. ML can both support employees and enhance the customer experience, for example through conversational AI chatbots that assist consumers or decision-support tools for staff. Financial services companies have used ML for scenario modeling and to help traders respond quickly to fast-moving and turbulent financial markets. As a leader in AI, the finance industry is spearheading these and dozens more uses of the technology.
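The fraud-detection pattern mentioned above, learning from labeled past behavior, can be illustrated with a deliberately simplified sketch. Everything here (the nearest-centroid rule, the two toy features, the sample transactions) is invented for illustration and is not any firm's actual model:

```python
# Toy fraud-detection sketch: classify a transaction by whether it lies
# closer to the centroid of past "legitimate" or past "fraudulent"
# transactions. Features and data are hypothetical.
from statistics import mean

def train_centroids(transactions, labels):
    """Compute per-class feature centroids from labeled transaction history."""
    legit = [t for t, y in zip(transactions, labels) if y == 0]
    fraud = [t for t, y in zip(transactions, labels) if y == 1]
    centroid = lambda rows: [mean(col) for col in zip(*rows)]
    return centroid(legit), centroid(fraud)

def score(tx, legit_c, fraud_c):
    """Return 1 (flag as fraud) if tx is nearer the fraud centroid, else 0."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 if dist(tx, fraud_c) < dist(tx, legit_c) else 0

# Invented history: each row is [amount_usd, transactions_in_last_hour]
history = [[20, 1], [35, 2], [15, 1], [900, 9], [1200, 12]]
labels = [0, 0, 0, 1, 1]
legit_c, fraud_c = train_centroids(history, labels)

print(score([1000, 10], legit_c, fraud_c))  # burst of large transactions -> 1
print(score([25, 1], legit_c, fraud_c))     # typical small purchase -> 0
```

Production systems use far richer features and models, but the core idea is the same: well-labeled historical behavior defines the patterns against which new activity is judged.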

In a highly regulated, systemically important sector like finance, companies must also proceed carefully with these powerful capabilities, both to ensure compliance with existing and emerging regulations and to maintain stakeholder trust by mitigating harm, protecting data, and using AI to help customers, clients, and communities. “Machine learning can improve everything we do here, so we want to do it responsibly,” says Drew Cukor, firmwide head of AI/ML transformation and engagement at JPMorgan Chase. “We view responsible AI (RAI) as a critical component of our AI strategy.”

Understanding the risks and rewards

The risk landscape of AI is broad and evolving. For instance, ML models, which are often developed using vast, complex, and continuously updated datasets, require a high level of digitization and connectivity in software and engineering pipelines. Yet eliminating IT silos, both within the enterprise and potentially with external partners, increases the attack surface for cyber criminals and hackers. Cybersecurity and resilience are therefore essential components of the digital transformation agenda on which AI depends.
