The rapid growth of generative artificial intelligence has created a looming cybersecurity risk for the banking sector, the Federal Reserve’s vice chair for supervision, Michael Barr, warned Thursday.
As use cases for generative AI continue to evolve, “an arms race between those who are using generative AI to attack and those who are trying to use generative AI to block the attack” will ensue in the financial sector, Barr said during a live-streamed cyber risk conference held by the Federal Reserve Bank of Cleveland.
“Technology is always evolving, and so we have to keep up with the next threat. Generative AI could be a really important new concept for the economy, for finance, for really many aspects of economic life. And as with any new technology, it will have risks,” he said.
Barr said it is critical for financial institutions to invest in generative AI to safeguard against cyberattacks.
“There’s a real risk that we have a cyber arms race using generative AI with defenders and attackers in a constant struggle,” Barr said. “So we do need to make sure that we are, and banks are, investing in the kind of technology that is useful, not only today, but in the near future.”
Barr also highlighted third-party relationships, emphasizing that banks of all sizes have a responsibility to ensure their vendors are properly managing cyber risk.
“The banks need to treat the third-party risk management as if it were their own risk inside the institution, because legally it is, and from a supervisory perspective, it’s required. It’s just essential in terms of the safety and soundness of the financial system,” he said.
While it is mostly smaller institutions that rely heavily on third-party service providers, Barr noted that large institutions moving to cloud-based solutions also need to be vigilant.
“It’s absolutely essential that as they do that, banks are ensuring that their relationships with third parties, not only are not hurting them, but are actively helping them to manage their cyber risk,” he said. “We really need to have resiliency in critical service providers, and the banks have an absolutely central role in making sure that that’s the case.”
Due to the tightly interconnected nature of the U.S. financial system, a data breach at one firm can have devastating ripple effects across the financial services sector, even if that firm is a smaller bank, Barr said.
“If there’s a problem at a small bank, it can then infect the rest of the financial system,” Barr said. “We’re not any more safe if it’s a small bank than a big bank, because that problem can then grow into the rest of the system. We have to worry about even the weakest link in the system.”
A cyberattack at a financial institution could disrupt a firm’s payment system and liquidity provision channels, Barr said.
“If a bank has to cut itself off in order to protect the rest of the system from a cyberattack, that cutting off has ripple effects that could cause bank failures,” he said.
Barr emphasized the importance of firms engaging in internal and external testing, adding that banks need to find creative ways to ensure they can recover and resume operations with minimal client disruption.
Barr’s comments come as Consumer Financial Protection Bureau Director Rohit Chopra has also warned of the risks of generative AI, albeit from a consumer perspective.
Chopra, in April, said the bureau is closely monitoring how generative AI and ChatGPT-like technologies, when used by banks, could undermine or create risks in customer care.
“We’re doing some work right now on how [generative AI] might undermine or create risks in customized customer care, to the extent that biases are introduced, or frankly, even the wrong information,” he said.
Meanwhile, an interagency initiative launched by the Justice Department, Federal Trade Commission and Equal Employment Opportunity Commission in April aims to crack down on “unchecked AI” in lending, housing and employment.