As businesses continue to adopt artificial intelligence technologies, corporate lawyers and in-house data scientists should prepare to get better acquainted. Lawmakers are increasingly indicating that A.I. regulations are coming, which means that businesses will need to ensure that their machine learning systems aren’t violating laws governing privacy, security, and fairness.
One upstart law firm specializing in A.I.-related legal matters is betting that companies will increasingly need to investigate the various ways their machine learning systems could land them in legal hot water. The bnh.ai law firm, based in Washington, D.C., pitches itself as a boutique practice that caters to lawyers and technologists alike.
Having a solid understanding of A.I. and related technologies like computer vision and deep learning is crucial, the firm’s founders believe, because solving complicated legal issues related to A.I. isn’t as simple as patching a software bug. Ensuring that machine learning systems are secure from hackers and that they don’t discriminate against certain groups of people requires a deep understanding of how the software operates. Businesses need to know what went into the underlying datasets used to train the software, how that software can change over time as it feeds on new data and user behavior, and the various ways hackers can break into the software—a difficult task considering researchers keep discovering new ways miscreants can tamper with machine learning software.
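To give a flavor of the kind of check such an audit might involve, the sketch below uses entirely invented data and group labels (it is not bnh.ai’s methodology or any regulator’s standard) to compute a disparate-impact ratio over a hypothetical model’s approval decisions, borrowing the “four-fifths rule” screening threshold from U.S. employment-discrimination guidance.

```python
# Illustrative sketch only: auditing a hypothetical model's outcomes for
# potential disparate impact. Data, group labels, and rates are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan-approval decisions (True = approved) for two groups.
group = rng.choice(["A", "B"], size=10_000)
approved = np.where(group == "A",
                    rng.random(10_000) < 0.60,   # ~60% approval for group A
                    rng.random(10_000) < 0.42)   # ~42% approval for group B

def approval_rate(g: str) -> float:
    """Share of applicants in group g that the model approved."""
    return approved[group == g].mean()

# Disparate-impact ratio: the four-fifths rule flags ratios below 0.80
# as a signal of potential adverse impact worth investigating.
ratio = approval_rate("B") / approval_rate("A")
print(f"Approval rate A: {approval_rate('A'):.2%}")
print(f"Approval rate B: {approval_rate('B'):.2%}")
print(f"Disparate-impact ratio: {ratio:.2f} (flag if < 0.80)")
```

A screening statistic like this is only a starting point; a flagged ratio would still need legal and technical analysis of the training data and the model’s behavior over time.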
Source: https://fortune.com/2021/08/17/why-this-law-firm-only-works-on-artificial-intelligence