
Data Privacy Compliance Challenges

  • Writer: Arthur Rothrock
  • Nov 23, 2022
  • 3 min read

AI requires vast amounts of data to function effectively, which can create tensions with data protection laws around the world. These laws govern the collection, use, processing, disclosure, retention, storage, security, and cross-border transfer of personal information.


In the US, there is no single comprehensive federal privacy law. Instead, there is a patchwork of sector-specific federal laws and state laws that may regulate AI and automated decision-making, such as:


  • The Fair Credit Reporting Act (FCRA), which regulates the use of consumer information in credit, employment, and other decisions.

  • The Illinois Artificial Intelligence Video Interview Act, which imposes consent, transparency, and data destruction requirements on employers using AI to analyze video interviews.

  • Recent state consumer privacy laws in California, Colorado, Connecticut, Virginia, and other states, which provide rights to opt out of profiling in furtherance of automated decisions that produce legal or similarly significant effects.


The California Privacy Protection Agency is also considering regulations under the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) to address automated decision-making.


Other jurisdictions, particularly in Europe, have more comprehensive data protection regimes. The EU’s General Data Protection Regulation (GDPR), for example, imposes extensive restrictions on the use of personal data in solely automated decisions that significantly impact individuals. The GDPR also enshrines data protection principles that can be in tension with AI, such as:


  1. Fairness and transparency in processing

    • GDPR Article 5(1)(a) emphasizes the need for processing personal data lawfully, fairly, and in a transparent manner in relation to the data subject. For AI systems, this means being clear with individuals about how their data will be used, not using data in ways that are unjustifiably detrimental or unexpected, and avoiding discriminatory outcomes. The complexity and opacity of AI algorithms can make it difficult to provide meaningful transparency.

  2. Purpose limitation

    • Article 5(1)(b) of the GDPR requires that personal data be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. AI systems often involve the re-use of data for novel purposes not initially anticipated. Continual learning and evolving functionality in AI can strain purpose limitation.

  3. Data minimization

    • According to GDPR Article 5(1)(c), personal data must be “adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed.” AI’s effectiveness often depends on ingesting and analyzing vast quantities of data, including data that may not seem directly relevant. Minimizing data use can be at odds with AI optimization.

  4. Accuracy

    • As stipulated by GDPR Article 5(1)(d), personal data must be accurate and, where necessary, kept up to date; any inaccurate data must be erased or rectified without delay. If AI systems are trained on inaccurate or biased data, they may produce flawed outputs. The dynamic nature of AI can also make it difficult to keep data current.

  5. Storage limitation

    • GDPR Article 5(1)(e) demands that personal data be kept in a form that permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed. The need to retain data for AI training or refinement may conflict with storage limitation, as it can be difficult to define necessity.

  6. Accountability

    • Under GDPR Article 5(2), the controller is responsible for, and must be able to demonstrate, compliance with the other principles. This means thoroughly vetting AI vendors, implementing robust contracts, auditing AI outcomes, and documenting governance measures. The complexity of AI systems and the potential need to explain how individual decisions are made can make accountability challenging.


These principles are not unique to the GDPR – they form the backbone of many global privacy frameworks. However, the GDPR is notable for its particularly stringent requirements and penalties. Complying with this complex web of global privacy laws is a significant challenge for organizations deploying AI systems.


The Federal Trade Commission (FTC) has issued guidance on managing consumer protection risks from AI, stressing that algorithms should be transparent, explainable, fair, empirically sound, and accountable. The FTC has also warned against discriminatory outcomes and consumer injury.


To help organizations develop trustworthy AI, the National Institute of Standards and Technology (NIST) recently released an AI Risk Management Framework. The White House Office of Science and Technology Policy has also issued an AI Bill of Rights. And the UK has developed guidelines for secure AI development.


