COS 436 Human-Computer Interaction

Princeton University
Fall 2023
HCI Final Projects

Photo from Project Fair Round II.

Darius Jankauskas Teams 1-9 Reviewer

Michael Scornavacca Teams 10-17 Reviewer

Harvey Wang News Article Author

Kritin Vongthongsri Video Creator

Kuba Alicki Site Creator

Presenting 16 Novel Systems and Studies

December 7, 2023

On a sunny Thursday afternoon, the Friend Center atrium bustled with activity as students gathered around orange tables adorned with presentation slides and exciting projects. It was November 30th, the day of the final Project Fair for students enrolled in COS 436: Human-Computer Interaction.

The course, taught by Professors Andrés Monroy-Hernández and Parastoo Abtahi, offers a survey of the broad field of Human-Computer Interaction (HCI), with a focus on interactive and social computing. Over the semester, the students completed group projects that either implemented an interactive system or conducted a study on an HCI topic.

The 16 projects covered a wide variety of subfields within HCI, from exploring how people interact with AI tools such as ChatGPT and DALL-E, to building systems that support mental health, to leveraging AR to improve education. Here is a summary of the HCI Projects of 2023:


Summary Video


These projects not only pushed the boundaries of HCI but also illuminated the dynamic interplay between humans and technology. From AI's influence on socialization to AR's role in robotics, these projects exemplify the diverse facets of HCI and its multidisciplinary nature. We hope you have been inspired to reflect on how you interact with technology in your day-to-day life, and how systems like these can transform the way you work, learn, or communicate.

As this year draws to a close, we are grateful to the students and professors of COS 436 for contributing and sharing these incredible projects. We look forward to the work of the next students of COS 436: Human-Computer Interaction.



System Implementation Projects

Team 1
How do conversational UIs (CUIs) compare to textual prompts for AI-generated images?
Group 1 investigated whether users find single-prompt or conversational approaches to prompting image generation more desirable. They found that most users preferred the conversational approach.
A picture of a dog on a beach, generated with Group 1’s conversational image generation prompting system.
[Team 1’s Presentation]
Mike Scornavacca, Pierce Maloney, Ava Crnkovic-Rubsamen, Henry Knoll
Team 2
BÆ-I
Group 2 devised a system to test how AI-generated responses could support remote socialization. Their system, BÆ-I, lets partners ask each other questions and then try to pick out their partner’s genuine response from two AI-generated alternatives. Couples reported improvements in communication quality and other relationship metrics after playing.
The flow of a BÆ-I game on Discord, from start to an incorrect guess.
[Team 2’s Project Fair Round 2 report]
Alison Lee, Ambri Ma, Christine Sun, Harvey Wang
Team 6
Exploring the Visuo-Vibrotactile Modality for Real-Time Low-Friction Feedback in the Classroom
Group 6 applied real-time, low-friction feedback to the problem of delayed instructor reactions. They built a system that lets students send haptic feedback, based on their current sentiment, to an instructor’s smartwatch.
An instructor’s visual view of live student reactions.
[Group 6’s presentation video]
Oleg Golev, Michelle Huang, Chanketya Nop, Kritin Vongthongsri
Team 7
How well does AI-generated speculative design solve the problem of myopic ethical considerations for emerging technologies?
Group 7 explored using AI generation to prompt participants to engage with ethical questions. They built a system in which users could converse with AI-generated clones of themselves and receive live feedback. Users held a wide variety of conversations with their clones, but these generally did not touch on ethics.
AI-generated Bill Gates with miscellaneous beaker.
[Quartz-generated, from Group 7]
Libo Tan, Andrew Mi, Kok Wei Pua, and Jordan Bowman-Davis
Team 8
How well do AR visualizations solve the problem of bystander safety, comfort, and trust for virtual mobile robots?
Group 8 investigated how effectively AR can help bystanders understand the movements of robots. To do so, they emulated a delivery robot virtually in AR and overlaid a system of cues that signal its upcoming actions.
A delivery robot rendered in AR, along with a directional cue indicating its future direction.
[Group 8’s aero_recording.mov]
Maya Jairam, Devansh Sharma, Ananya Grover
Team 9
Breaking The Plane: How do 3D Augmented Reality (AR) visualizations with AR headsets compare to mobile AR and flat-screen visualizations for understanding multidimensional mathematical concepts?
Group 9 aimed to address the difficulty of understanding higher-dimensional mathematics using AR. To do so, they built a system enabling users to visualize and manipulate three-dimensional equations on a Quest 3, combined with OCR equation input. They found users generally preferred this system to alternatives.
A screen capture of the system’s AR visualization from a Quest 3 device.
[Group 9’s presentation]
Liam Esparraguera, Brian Lou, Kris Selberg, Jenny Sun
Team 16
RateRight: Reducing Extremity Bias in Online Reviews - A Low-Friction, Time, and Location-Based Approach
This project proposed a system for producing more authentic and accurate lecture ratings. It uses native iOS and Android notifications, filtered by time and by the user’s location, to let attendees rate and give feedback on a lecture immediately after it ends (a rough sketch of such a trigger appears below). The results showed that users were more likely to leave a review, and that capturing this broader range of responses could reduce the extremity bias of online reviews.
[Generated by DALL-E]
Arnav Kumar, Darius Jankauskas, Stephen Dong
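The write-up does not detail RateRight’s trigger logic, so the following Python sketch is only an illustration of the kind of time-and-location gate described: the function names, 75-meter radius, and 15-minute window are our own assumptions, not the team’s implementation.

    import math
    from datetime import datetime, timedelta

    EARTH_RADIUS_M = 6_371_000

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two (latitude, longitude) points.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def should_prompt(user_loc, hall_loc, lecture_end, now, radius_m=75, window_min=15):
        # Fire the rating notification only if the user is still near the lecture
        # hall shortly after it ends; both thresholds are illustrative guesses.
        near = haversine_m(*user_loc, *hall_loc) <= radius_m
        recent = timedelta(0) <= now - lecture_end <= timedelta(minutes=window_min)
        return near and recent

    # Example: a user about 40 m from the hall, five minutes after the lecture ends.
    end = datetime(2023, 11, 30, 13, 20)
    print(should_prompt((40.3500, -74.6525), (40.3503, -74.6523), end, end + timedelta(minutes=5)))
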
Team 17
Evaluating the Effectiveness of Asynchronous Collaboration in Remote Design Environments
This project focuses on remote collaboration on 3D designs, as opposed to document collaboration (like Google Docs) or 2D design (like Figma). The study followed six participants with artistic design experience as they collaborated on coloring and adjusting a scanned model. The results show that despite some promising applications, such a tool’s effectiveness is limited by the interface and by communication between participants.
[Generated by DALL-E]
Darren Alexis, Jakob Nogler, Ruyu Yan


Study Projects

Team 3
Artists and AI: The Opinions of Young Artists on Generative Art
Group 3 investigated the attitudes of student artists at Princeton toward AI-generated art, along with their ability to distinguish human-made art from AI-generated art. They found attitudes ranging from excitement to disillusionment, and that AI-generated art with human elements was more likely to fool the student artists.
A Princeton student concerned by his experience with AI-generated art.
[Generated by DALL-E 3]
Kellen Cao & Rhim Andemichael
Team 4
ChatGPT in CS Education
Group 4 asked a wide range of educators about their views on the use of ChatGPT in CS education, covering problems such as LLM hallucination, attribution of generated code, and LLMs’ relationship to critical thinking skills. The educators surveyed were more likely than not to incorporate LLMs into course policies or classroom activities, but were often concerned about the technology’s long-term impacts.
Students and educators are grappling with the promise and perils of LLMs in an educational context.
[Group 4’s presentation video]
Alexis Wu, Ariana Lujan, Divraj Singh, Joy Patterson
Team 10
Lemme See That!: Examining how AI Art impacts engagement with textual posts on X
This project leveraged X’s broad reach (and the enormous amount of data available there) to explore AI’s potential in social media contexts such as marketing, advertising, and research. The team used Selenium to scrape tweets from the “Latest” section and then manually labeled features of the attached images (a sketch of the scraping step appears below). They found higher engagement on posts with human-made images than on posts with AI-generated images or no images at all.
[Generated by DALL-E]
Okezie Eze, Samyukta Neeraj, Meet Patel, Dylan Tran
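For the curious, here is a minimal Python/Selenium sketch of the kind of scraping described, not Team 10’s actual pipeline: the search query, CSS selectors, and CSV output are assumptions, X’s markup changes frequently, and its search generally requires a logged-in session.

    import csv
    import time
    from urllib.parse import quote

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    QUERY = "AI art"  # hypothetical query; the team's actual search terms aren't listed

    driver = webdriver.Chrome()
    # f=live selects the "Latest" tab; X may require logging in first.
    driver.get(f"https://x.com/search?q={quote(QUERY)}&f=live")
    time.sleep(5)  # crude wait for the dynamic feed; real code would use WebDriverWait

    rows = []
    # The data-testid selectors are illustrative and break whenever X updates its markup.
    for post in driver.find_elements(By.CSS_SELECTOR, 'article[data-testid="tweet"]'):
        try:
            text = post.find_element(By.CSS_SELECTOR, 'div[data-testid="tweetText"]').text
        except Exception:
            continue  # skip posts without a text body
        has_image = len(post.find_elements(By.CSS_SELECTOR, 'div[data-testid="tweetPhoto"]')) > 0
        rows.append({"text": text, "has_image": has_image})
    driver.quit()

    # Dump to CSV for the manual labeling pass (e.g., human-made vs. AI-generated image).
    with open("tweets.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["text", "has_image"])
        writer.writeheader()
        writer.writerows(rows)
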
Team 11
How do college students use ChatGPT to learn to code?
This project explores how university students use ChatGPT when learning new programming languages. The team surveyed and interviewed fellow Princeton students about their experiences with ChatGPT and its educational potential. Results showed that while almost all participants used ChatGPT to check syntax or generate snippets, beginners relied too heavily on its output, leading to less effective learning.
[Generated by DALL-E]
Kuba Alicki, Tolulope Oshinowo, Yuhan Zheng, Seanna Zhang
Team 12
How do non-native English speakers use ChatGPT as a tool for writing?
This project centers on how ChatGPT can improve the efficiency and accuracy of writing by non-native English speakers. The study found that ChatGPT helped participants construct long-form responses significantly more efficiently, but for short-form writing, such as text messages, efficiency actually decreased.
[Generated by DALL-E]
Jasmine Zhang, Desmond Devalu, Theo Knoll, and Emmy Song
Team 13
Understanding Visual Artists' Perspectives on Their Use of Generative AI
This study explores how artists perceive generative AI tools, with a focus on ethical and artistic considerations. The team interviewed both professional and student artists about their use of AI. The results revealed a sharp division between artists who do not use AI tools at all and those who use them extensively, and the two groups held very different views on AI’s ethical and artistic place in art.
[Generated by DALL-E]
Genie Choi, Lu Esteban, Kirsten Pardo, Warren Quan
Team 14
Understanding College Students’ Opinions on LLM-assisted Mental Health Services
This project explores the potential of Large Language Models (LLMs) in campus mental health services. The researchers asked college students about their expectations, anticipated benefits, and concerns around LLM-based applications for mental health support. Preliminary findings showed that participants expected such tools to be proactive, and raised general concerns about the variety of mental health data in LLMs’ training sets.
[Generated by DALL-E]
Owen Zhang, Shuyao Zhou, Jiayi Geng
Team 15
Public Perception of Front-Facing Display Headsets
This study focuses on how the public perceives virtual reality headsets that feature front-facing displays. Information was gathered via online surveys and follow-up interviews with volunteer participants, with questions centered on how natural conversations felt through such a headset and how the display affected those conversations. The results revealed widespread hesitation toward the technology, suggesting that current headsets are not yet suitable for everyday social interaction.
[Generated by DALL-E]
Dariya Brann, Ryan Vuono, Henry Wertz