Tuesday, January 28, marked a special occasion for privacy professionals—Data Privacy Day, which is part of Data Privacy Week. Last year, I reflected on the history of Data Privacy Day, which commemorates the opening for signature in 1981 of the first internationally binding instrument incorporating privacy and data protection obligations—the Council of Europe’s Convention 108.
Convention 108 was created to help address risks arising from the exponential growth in large-scale computing and large-volume global data transfers. Over time, it has evolved to address more sophisticated technologies while remaining centered on core privacy principles.
We’re certainly seeing a global proliferation of laws, regulations, and guidance addressing emerging technologies, most notably AI. Yet as much as things change, much also stays the same.
Last April, I participated in the fourth memorial lecture series honoring my former internet law professor at Fordham, Joel Reidenberg. Joel was a visionary thinker whose research, advocacy, and mentorship since the late 1990s laid the foundation for privacy legislation, policy positions, and the technical standards that support them (for some inspirational privacy reading, I recommend his seminal “Lex Informatica” from 1997). His thought leadership continues to influence privacy scholarship and development today.
The speakers that day included a healthy cross-section of perspectives, from the FTC, the media, academia, and industry. One common theme across our remarks was the application of existing principles and laws to AI as a new technology use case: global privacy, consumer protection, intellectual property, competition, employment, products liability, and other sectoral laws grounded in longstanding core principles apply to AI just as they would to any other technology processing personal data. AI is not an exception. This position has been echoed by legislators and regulators worldwide, including in recent supporting guidance from various U.S. attorneys general.
The data privacy tradition in insurance
The U.S. insurance industry is well-versed in these concepts as one of the most highly regulated sectors for privacy at both the federal and state levels, complemented by state insurance laws addressing unfair competition and unfair or deceptive practices. When I started my career in the insurance industry over a quarter century ago, federal laws like HIPAA, GLBA, and GINA were coming into being and working in conjunction with state laws to govern the predictive underwriting and other algorithmic models already present in the industry. They continue to do so, and more than 20 state insurance departments have released bulletins on the use of AI systems by insurers, affirming this position.
The insurance industry continues to be at the forefront of the intersection between privacy and emerging technologies. I am proud to work for an organization that is an essential partner to an industry developing innovative technical solutions to foster greater resilience for the economy, consumers, and society as a whole, guided by responsible analytics and ethical privacy and AI principles and governance. We also promote these concepts in the marketplace through offerings such as our FairCheck solution, which helps insurers assess their models for potential unintended bias consistent with methodologies reflected in emerging U.S. insurance regulations and related guidance.
At Verisk, we celebrate data privacy as an IAPP Diamond Member, which gives the team members most involved in data processing and governance direct access to privacy best practices and other resources, including the IAPP’s AI Governance Center.