Do Existing Laws Apply to AI?


Incorporating artificial intelligence (AI) into business operations is becoming more widespread globally. A survey of Global 500 companies found that leaders investing in AI and automation tools and software expect significant growth within the next few years. According to PwC’s Global Artificial Intelligence Study, AI could contribute $15.7 trillion to the global economy by 2030, with 45% of those gains coming from product enhancements that stimulate consumer demand. In manufacturing, this includes predicting unexpected machine and equipment breakdowns, identifying yield losses, and locating detractors. Over one-third of companies now use AI, with another 42% investigating its potential to enhance business operations. Among major corporations, such as those in the Fortune 500, the adoption rate is as high as 99%.

AI offers significant advantages, such as reducing the load of mundane tasks, improving efficiency, and enabling more tailored services for businesses and consumers. However, its extensive use also introduces risks, particularly if adequate protective measures are not in place.

AI is therefore coming under increasing scrutiny from legislative and regulatory bodies around the world, including at various levels of government in the US and in the EU, where the EU AI Act is poised to set a leading standard for AI regulation with its risk-based approach. Sector-specific legislation, such as in HR technology and insurance, is also being proposed to govern AI and automation.

Although legislation has not caught up with the rapid advance of AI technology, it is crucial to remember that AI technologies remain subject to existing laws, and the rise of automation does not exempt companies from legal compliance.

Regulatory authorities, such as the Equal Employment Opportunity Commission (EEOC), the UK’s Financial Conduct Authority (FCA), the Federal Trade Commission (FTC), and the Consumer Financial Protection Bureau (CFPB), have emphasized this point. Many legal actions have already been brought against firms that deployed AI without suitable safeguards, leading to violations of existing legislation.

Many of these legal challenges target companies using AI systems that rely on biometrics. Existing biometric and data protection laws are being used to challenge both the deployment of these systems and the data used to train them. Key technology issues in these lawsuits include:

  1. Facial Recognition Systems: These are used in surveillance and identity verification but raise issues of privacy rights violations and consent under regulations like the GDPR in the EU and BIPA in Illinois, USA.
  2. Fingerprint and Iris Scanning: Common in access control and law enforcement, these systems face legal scrutiny over consent, data minimization, and storage limitations.
  3. Voice Recognition and Analysis: Deployed in customer service and security, these technologies must navigate consent and data retention concerns, especially under privacy laws like the GDPR.
  4. Health Data Analysis: Used in personalized medicine and patient monitoring, this area must comply with stringent laws like HIPAA in the USA and GDPR in the EU for sensitive health data handling.
  5. Emotion Recognition: Applied in marketing and surveillance, these tools face privacy issues, particularly regarding consent and legitimate interest under data protection laws.
  6. Gait Analysis: Utilized in sports analytics and healthcare, these technologies potentially infringe on privacy, requiring explicit consent under various laws.
  7. AI-Powered Personal Assistants: These systems process personal data for personalized interactions, raising issues with user consent, data minimization, and transparency under laws like the GDPR.
  8. Behavioral Prediction: Used in advertising and e-commerce, these technologies encounter legal challenges related to profiling, consent, and the right to explanation under data protection regulations.

In summary, AI applications are increasingly scrutinized for compliance with biometric and data protection laws, particularly concerning privacy, consent, and data handling practices.

As an example:
Clearview AI, a company that builds a facial image database from internet and social media images, has faced legal actions in several countries over its practices. The company neither informed individuals that their facial images were being collected nor disclosed any storage period, breaching data protection laws. In Italy, the Garante per la Protezione dei Dati Personali imposed a €20 million fine under the GDPR, prohibited Clearview AI from processing the biometric data of people in Italy, and ordered the deletion of all such existing data. Similarly, in Illinois, USA, the American Civil Liberties Union took legal action against Clearview AI for violating the state’s Biometric Information Privacy Act (BIPA).

Other areas ripe for legal challenge include bias, copyright infringement, and health care analysis. In the case of copyright:

  • Generating and Modifying Content:
    • What’s the issue? AI can generate new text, images, music, or video. Sometimes these outputs closely resemble works that someone else already created and holds legal rights to.
    • What’s the legal worry? If the AI produces something substantially similar to an existing copyrighted work without permission, that can constitute infringement.
  • AI That Aggregates News Stories:
    • What’s the issue? Some AI programs select news stories from different sources and compile them, which can involve copying portions of articles that someone else wrote.
    • What’s the legal worry? Copying those news stories, or parts of them, without proper permission, and without qualifying as “fair use” (such as excerpting for education or review), could be unlawful.

Courts and government bodies are getting tougher on the misuse of AI under existing laws, underscoring the importance of continuously managing risk and following the rules when deploying this technology.

Robert Bergman

Robert Bergman with Next Level Mediation provides full mediation services - including proprietary and confidential Decision Science (DS) analysis that assists each party in understanding their true litigation priorities as aligned with their business objectives. Each party receives a one-time user license to access our exclusive DS Application Cloud. We…
