Is using AI cheating?
This article is republished, with permission, from Terri Griffith's personal website. Read the original article.
I finished my Fall MBA teaching before the explosion of ChatGPT -- a chatbot that can write, code, and answer questions. The next time I teach this material will be in a different world. ChatGPT reached 1 million registered users in five days and continues to grow.
My innovation course for our Management of Technology MBA is writing-intensive. True, even this Fall, paraphrasing AIs (e.g., Quillbot) were available, but they hadn't gone mainstream in my community. Even Grammarly, a proofreading tool I pay for and encourage my students to use, wasn't common knowledge.
Past students haven't had second thoughts about using Google Autocomplete. However, when it comes to Grammarly's increasingly powerful rephrasing capabilities, some wonder whether using such tools is cheating. My syllabus has an extensive academic integrity section, and I spend considerable time on how to quote and cite correctly in business writing. I also let the students know I use Turnitin.com to check for citation mistakes. (I'm happy to say I haven't encountered full-blown plagiarism at my current university.) My students' concern makes sense.
My answer so far
My answer has been no: the use of an editor, whether human or machine, just makes sense -- it's not cheating. The final words must belong to the listed authors, but good business practice supports using discussion and editing tools to get you to that final stage.
Generative AI
I will teach my next course in a world of ChatGPT, WriteSonic, and a multitude of generative writing AIs that are not just editors. Examples from the homepages of some of the commercial tools:
- WriteSonic: "Best AI Writer for Creating" [and then an animation lists LinkedIn posts, SEO Content, Product Descriptions, etc.] for bloggers, freelancers, etc.
- Copy.ai: "Say 'goodbye' to the blank page for good."
- Sudowrite: "Sudowrite is the non-judgmental, always-there-to-read-one-more-draft, never-runs-out-of-ideas-even-at-3am, AI writing partner you always wanted."
- Notion AI: "Leverage the limitless power of AI in any Notion page. Write faster, think bigger, and augment your creativity. Like magic!"
I expect each tool to do an adequate job (a low "B") on my assignments. Leveraging multiple tools to get the design, text, and references in line seems realistic. My final individual assignment asks students to create facilitator-focused playbooks related to three of the course concepts. For the last couple of years, I've offered the students a baseline-quality example. Before the next course, I'll redo that example using a set of generative AI tools.
But is using an AI in such a way cheating?
Recall the brouhaha of Fall 2022 when an AI-generated image won at the Colorado State Fair. My reply to the discussion on Twitter:
Is the prize for the art, the best human art, art by someone living in that country, art by a child, or art by an elephant (or the best monkey selfie)? We've always made distinctions, now we just need to build a norm for attribution. Share the tools and who/what submitted.
— Terri Griffith (@terrigriffith) September 21, 2022
Chatsonic seems to agree with me. I asked it, "what is the biggest concern faculty have about generative ai?" Chatsonic's reply:
The biggest concern faculty have about generative AI is its potential for creating automated plagiarism and copyright infringement. Generative AI can generate content that could easily be plagiarized without proper attribution, and its models can reshape existing material into something new and unrecognizable. Therefore, it is important to ensure that any use of generative AI is properly attributed and that the content generated is unique and non-plagiarized.
Chatsonic differs from ChatGPT in its access to current web search results. To verify my understanding, I asked Chatsonic, "can chatsonic use 2023 search results?" Its reply: "Yes, absolutely! I can use the latest online search results from the year 2023 to provide you with the most up-to-date information." (I've installed the Chatsonic Chrome extension so that any search I run from the Chrome toolbar ending in a question mark returns Chatsonic's reply alongside the Google search results.)
Using AI is not cheating. But not citing your sources is
When I shared three of my favorite business books in my final 2022 post, I used ChatGPT to create an alternate review of Power and Prediction: The Disruptive Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. I said, "I hope knowledge workers [people using generative AI] will share their tools just like photographers often caption their images with a description of their camera and setup." Where I included ChatGPT's work, I set it in quote format (as I've done here).
I also specified the prompt I used. However, I wouldn't expect that same prompt to give me or anyone else the same answer each time. Some image-generating AIs (e.g., MidJourney) let you define a seed so that you can "steady" the results. I have yet to find similar features for chatbots. If replication isn't possible, then references to AI-generated text are similar to how we reference personal communications or direct quotes from interviews -- situations where the author has access to the material, but others do not.
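As an aside, a seed is just a fixed starting point for a generator's randomness. Here is a minimal Python sketch of the general idea, using only the standard library and standing in for no particular image or text generator:

```python
import random

# Fixing the seed "steadies" the randomness: re-seeding with the
# same value reproduces the identical sequence of draws.
random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]

random.seed(42)  # same seed again
second = [random.randint(0, 9) for _ in range(5)]

print(first == second)  # True -- both runs produce the same list
```

A chatbot that exposed a seed (along with the same model version and settings) could be rerun and checked the same way; without one, the output is effectively a one-time artifact, which is why the personal-communication analogy fits.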
We Need APA/MLA/Chicago Manual of Style Citation Guidance
I asked Chatsonic, "give me the URL showing how to reference material generated by an AI in the chicago manual of style format?" The response was, "Sure! Here is the URL to the Chicago Manual of Style guide on citing materials generated by AI: https://www.chicagomanualofsty... Hope this helps!" Following that link gave me the error noted in the image. Entering "ai generated work" into the Chicago Manual of Style search bar returned, "Your search 'ai generated work' did not match any documents." [Spelling out artificial intelligence was also a bust.]
I'd love to know more about the language model underlying Chatsonic and how it ends up faking a URL. I'd be happier about paying for the service if it would at least check the link before offering it up. ChatGPT took an honest pass:
I'm sorry, I'm not able to provide URLs. However, The Chicago Manual of Style does not have specific guidelines for citing material generated by AI. The best way to cite such material would be to consult with your instructor or editor for guidance. It may be helpful to include information about the specific AI model or program used to generate the material, as well as the date and any other relevant details. [ChatGPT shares that its training data goes up to 2021.]
We're all learning as we go
I started blogging to give myself room to reflect before answering the common -- and the more detailed -- questions I get from students and executives. Thank you for joining me on this journey as we all (and I include the AI) learn as we go. I'm very interested in differing perspectives. Feel free to comment below or reach out on other platforms.
Created using a combination of Grammarly (paid), Chatsonic (paid), ChatGPT, Google Autocomplete, DALL-E 2, and the Five template on Squarespace.