ChatGPT FAQs 6 Months In: A Campuswide Message from OVPTL and the Campus Writing & Communication Coordinator

May 19, 2023 | Digital Learning Blog, DTEI Stories

This message is designed for instructors of undergraduate writing at UC Irvine (UCI), not primarily for students, staff, or field experts. It is distilled from conversations and projects happening on campus and beyond. If you have further questions, or if you would like to tell us how you are using ChatGPT in your UCI writing classroom, please email Daniel M. Gross, Campus Writing & Communication Coordinator, at dgross@uci.edu.

Q: Is there a campuswide policy on ChatGPT?

A: Just as we have no campuswide policy on graphing calculators or Wolfram Alpha, we have no campuswide policy on ChatGPT. See below for the current Composition Program policy on academic integrity and ChatGPT.

Q: Can students use ChatGPT?

A: Yes! If you as the instructor say so and explain exactly how students should and should not use it. The wisdom of using ChatGPT (or not) completely depends on your learning objectives and your pedagogical approach. At the bottom of this message, you can see how some UCI instructors are currently teaching with ChatGPT.

Q: Do I have to think about ChatGPT if I don’t want to or don’t have the time?

A: Academic freedom says “no,” but you would ignore ChatGPT at your own peril, and at the peril of your students. At the very least, students already know about the tool and are making decisions about if, when, and how to use it; informed instructor guidance will only help them make better decisions. Your students will benefit greatly from an informed explanation of your policy and approach, even if that policy is a total ban.

Q: What are the steps in the writing process, and are there any that we can now automatically offload to ChatGPT? 

A: Thought of in certain technical terms, all sorts of student writing already get offloaded: to a learning apparatus created by someone else, to digital technologies, and so on. Even so, we always want to encourage instructors who want to teach toward any component of the writing process, including content development, research, thinking, collaborative work, feedback, formal organization, and technical production.

Q: But can’t ChatGPT function like spellcheck or autocomplete, tools that we use passively all the time? And aren’t we using tools like Grammarly already?

A: An instructor might want to help students learn new things about spelling and grammar by asking them to work without these tools. It is incumbent on individual instructors to lay out explicit expectations for how students may use tools like ChatGPT to polish their writing, and to teach them how they are expected to get there.

Q: Isn’t ChatGPT basically like the UCI Writing Center (CEWC), but even more accessible?

A: No. The distinctly human feedback and instruction students receive at the CEWC are not comparable to what students may get from ChatGPT.

Q: Can a good prompt keep students from cheating with ChatGPT?

A: Yes, for now and up to a point. The more local, resource-specific, point-of-view-dependent, and unique a prompt response would have to be, the less likely it is that a student would be tempted to use ChatGPT illicitly. For example, a prompt that asks students to connect an argument to this week’s class discussion or to their own earlier drafts is much harder to outsource than a generic essay question. But be aware: this bar will keep moving as students gain access to generative AI that trains on their own writing and can be prompted to do personalized writing tasks. More information for Faculty/Staff about academic integrity may be found at: https://aisc.uci.edu/faculty-staff/academic-integrity.php.

Q: What is the current UCI Composition Program syllabus language when it comes to academic integrity and ChatGPT?

A: “The Composition Program and its teachers assume that work submitted by students—all process work, drafts, low-stakes writing, final versions, and all other submissions—will be generated by the students themselves, working individually or in groups. This means that the following would be considered violations of academic integrity by the Composition Program: if a student has another person/entity do the writing of any substantive portion of an assignment for them, which includes hiring a person or a company to write essays and drafts and/or other assignments, research-based or otherwise, and using artificial intelligence affordances like ChatGPT.”

Q: Should we be using generative AI detection tools?

A: No, they are not adequately reliable, and using them contributes both to a climate of mistrust and to the cheating-detection arms race. Also, there are problems around student privacy, unauthorized data sharing, etc.

Q: So what should we do with text that we think was generated illicitly by ChatGPT?

A: First of all, think about how you might adjust your teaching so that such opportunities are diminished. Second, evaluate and respond to the writing as you receive it, considering the new evaluation category “ChatGPT-like.” That evaluation is never good, and it can be part of your feedback. But make sure to teach students why the writing appeared that way (even if ChatGPT wasn’t used), and how they can revise.

Q: When it comes to STEM, isn’t there a virtue to depersonalizing and objectifying language so that the data can speak for itself? That is, shouldn’t students learn how to produce a polished product using ChatGPT, which handles certain kinds of conventions well?

A: Even the most technical material is situated in a human context, which is best orchestrated by the human who is writing. Students need to learn how to construct a scientifically sound point of view on the topic based on relevant, credible research. An offloaded task might be a missed opportunity to develop the knowledge, skill set, and confidence needed to do STEM work in the world. That said, you as an individual instructor might very well identify clever ways to use ChatGPT to support your learning objectives—see examples below.

Q: What are some examples of teaching with ChatGPT from across campus?

Social Sciences (Ian Straughn): “For one of my lower-level GE archaeology courses, students had the option to use ChatGPT for a short writing assignment in which they were asked to develop their own version of an archaeological conspiracy theory. In doing so, they were required to utilize some of the various pseudoscience tropes and rhetorical devices often employed by antagonists of the ‘archaeological establishment.’ About two-thirds of the students took up this opportunity. Using the program presented some interesting challenges, as the AI is designed not to engage in conspiracy theories or pseudoscience. Students easily found workarounds to this safeguard, often by allowing ChatGPT to frame its product as fiction or by adding a mild disavowal of the work as one theory among many. Students then had to evaluate what ChatGPT produced, finding it pedestrian in its examples, full of logical contradictions, reliant on speculative language, and prone to dubious statements of fact. Ultimately, students exposed how the AI reified notions of an archaeological ‘mainstream’ as the only scientific arbiter of the archaeological record.”

Humanities (Peter Krapp): “In a 139W course, you can hand out ChatGPT-generated responses to an assignment prompt and have students review and revise them. In my experience this year, they catch on quickly to what it does or does not do well, and they see that it would be an inadequate cheating tool. The experiment also illustrated certain dimensions of what a good Humanities assignment asks for. In another course (on the history of computing), we spent a session interacting with a range of chatbots, including ChatGPT 3.5 and ChatGPT 4.0, some of its ancestors back to Eliza, and some of its competitors, to get a sense of their capabilities and their use cases. Playful interactions lower the threshold (whether of hype or anxiety or whatever).”

Physical Sciences (Steve Mang): “I have used ChatGPT in all of my classes for the last two quarters in various ways. In my two writing classes, I used ChatGPT to generate writing samples that my students practiced peer reviewing. They identified the writing as technically competent, but also noted that ChatGPT was repetitive and unable to completely address the prompts. A big point of emphasis in my classes is consideration of the audience, and ChatGPT usually fails at this as well. We also asked ChatGPT to summarize some peer-reviewed articles and noted that it can sometimes produce a small part of an annotated bibliography entry, but that it’s prone to inventing authors, journals, scientific facts, and so on. In my physical chemistry and instrumental analysis lab classes, we’ve asked ChatGPT to write MATLAB code, explain the design of instruments, and relate experimental results to theory. I really like these exercises because in all cases, ChatGPT produces output that is superficially plausible but gets more and more incorrect the more closely you look at it. Finding the inaccuracies in the output seems to be an enjoyable experience for the students (they like to make fun of the robot), and it gets them thinking in a different way about the material I want them to learn.”

Engineering (Patrick Hong): “In my ENGR 190W Communications in the Professional World course, I use an in-class group activity to discuss AI-generated content in the context of intellectual property ownership and ethical usage considerations. Each group is assigned a question and presents a YES or NO point of view supported by cited evidence. Question 1: There is a difference between intellectual property (creative content: ideas, text, images, music, voices, etc.) generated by a real person and that generated by assistive tools powered by artificial intelligence (ChatGPT, Dall-E, etc.). Question 2: Anyone can claim ownership of the intellectual property (creative content: ideas, text, images, music, voices, etc.) generated by creative assistive tools powered by artificial intelligence (ChatGPT, Dall-E, etc.). Question 3: It is ethical to use creative content (ideas, text, images, music, voices, etc.) generated by assistive tools (ChatGPT, Dall-E, etc.) in school and the workplace.”

Campus Writing & Communication Coordinator (Daniel M. Gross): “A student can learn some writing technicalities with ChatGPT. You, as the instructor, can analyze with your students what ChatGPT produces when it comes to grammar, style, and rhetoric. There’s much more going on than the simple observation that ChatGPT is ‘grammatically correct.’ What does that mean, and what does it look like in this instance? What happens formally when you prompt ChatGPT to write for different audiences, at different levels of expertise, in different styles? This sort of analysis will teach your students new things about grammar, style, and rhetoric, while also teaching them about current ChatGPT limitations.”


About the Author

Daniel M. Gross, Campus Writing & Communication Coordinator

PhD, Rhetoric, University of California, Berkeley, 1998

Daniel M. Gross is a Professor of English and Affiliate Faculty in the Critical Theory Emphasis. As the Campus Writing & Communication Coordinator, he and his office are responsible for UCI Writing Across the Curriculum and Writing in the Disciplines (WAC/WID). His research in rhetoric runs along three tracks: writing and communication, history of the disciplines, and medical humanities.