AI and You: Let’s Be Careful Out There

William “Bill” C. Davell, Esq.

The Good News provides a monthly column covering important topics from the legal community. This month features a conversation with Paul Lopez, COO of Tripp Scott.

The ChatGPT artificial intelligence app and its competitors can do very cool things. 

More than 100 million consumer and business users have reportedly employed these AI tools to compose articles, letters, essays, scripts, academic papers, resumes and job applications, work assignments and homework; generate images; develop code and software; analyze customer data; and create business plans and strategies. People in our profession have even used AI apps to write legal briefs!

But there are reasons for caution when it comes to AI apps’ originality, accuracy and potential bias. As the fictional Sgt. Phil Esterhaus would caution in the 1980s police drama Hill Street Blues, organizational users in particular need to “be careful out there.”

Bill Davell: What are AI apps’ issues with “originality”?

Paul Lopez: AI works by “scraping” billions of bits of information from across cyberspace and “training” the app to replicate patterns or recombine that content into seemingly “new” output. Some artists, writers, photographers, videographers, software coders and others have argued that their works – frequently protected under the law as “intellectual property” – have been used without proper licenses, permission or attribution, and some have sued.

It’s unclear at this point whether courts will consider the use of any given item of content substantial enough to trigger copyright or trademark infringement claims against the apps or their users, given how many sources are drawn upon and that any one item may be used only for this “training.”

After all, people with “human intelligence” do much the same thing: learn and develop writing styles, artistic and photographic techniques, business planning skills and acumen, and programming chops based on millions of impressions of others’ words and work. As Supreme Court Justice Joseph Story once pointed out, “In truth, in literature, in science and in art, there are, and can be, few, if any, things, which in an abstract sense, are strictly new and original throughout.”

Also falling under “originality” is the concern about users presenting AI’s output as their own. For example, is it appropriate for students to use AI for homework? A Pew survey indicates that majorities or pluralities of students think using it for research and even for solving math problems is acceptable, but draw the line at writing essays.

BD: How about the other issues: “accuracy” and “bias”?

PL: These might be bigger concerns. It turns out that, much like people spreading disinformation, AI apps make things up – even in their initial “demos” for the media! It’s called “hallucination,” and it has included not only incorrect facts but also false accusations of wrongdoing and bad financial advice. One lawyer was sanctioned after a legal brief he filed turned out to cite fake cases generated by AI.

Sometimes this “made-up” information is generated on purpose, as with nude images of New Jersey high school girls that were faked using AI.

As for bias, AI may inadvertently discriminate when guiding hiring and financial-services decisions; it may also access and release information protected by privacy laws or policies.

Apps have also been known to produce so-called “hate speech” and, despite developers’ efforts to prevent it, can be “hacked” to work around controls and generate various kinds of offensive content (such as profanity, erotica or deliberate insults or misinformation).


BD: What kind of problems can those issues create?

PL: It depends on what AI is used for. If it’s just to communicate or have fun with friends or family, sharing inaccurate information or material that inappropriately uses someone’s intellectual property is surely not as big an issue.

It’s quite another thing to do any of that in a business or public church setting where users can be seen as publishing, performing, displaying or otherwise appropriating the work of others without legal permission or licensing, and where penalties for infringement can be high. 

Or where false information can cause damage to reputations, or lead to inaccurate or discriminatory decision-making and even release of private information – all of which can open businesses, churches and even individuals to serious liability.


BD: What should organizations do to protect themselves?


PL: Until AI apps improve safeguards against these problems, it’s best to limit their use outside of the personal context. If they are used at all by organizations, it might be advisable to employ them to guide decision-making but not make decisions, generate ideas for creative content but not final output, and aid in research but with the rigorous fact-checking users and organizations should normally engage in. 

Individuals have to consult with their own consciences on presenting AI’s work as their own, although schools might consider policies on AI-produced homework, as hard as they would be to enforce.

Churches might be advised to take the same care with AI-generated material as with any outside content, endeavoring to ensure that any use is licensed or falls under the fair-use or very narrow religious services exceptions. Churches should in fact have policies and safeguards in place protecting against unauthorized use of intellectual property, an especially problematic issue with the increased frequency of webcasting.


Depending on your organization’s role, Tripp Scott attorneys would be happy to sit down and help formulate more specific policies for using AI apps. Contact us at 954-525-7500 or via trippscott.com/contact-us.

Read more Ask Bill at: https://www.goodnewsfl.org/author/william-c-davell/

————————————-

If you have any topics you think may be of interest to our readers, we encourage you to email us at [email protected].
