Haichang Li
I am currently looking for a PhD position for Fall 2025 :)
Open to a11y, LLM-based applications & evaluation, and general HCI
LinkedIn / Twitter / Github / Google Scholar / Résumé
HelloWorld ✍
I am Haichang (Charles) Li (李海畅), currently a senior undergraduate student at Purdue University studying Information and Communication. At Purdue, I was affiliated with the DE4M Lab and AIM, and I was lucky enough to be mentored by Prof. Liang He and Prof. Yung-Hsiang Lu. I am also a founding member of SOUNDING.AI, a startup that has raised 10M+ CNY to explore how to build AI-based systems, where I built the AI engineering team. Currently, I am collaborating with MEMO.AI on a project called MEMO Recall, which explores using contextual information through AI-integrated glasses to enhance human-AI interaction. :)
Before coming to Purdue University, I withdrew from an EEE joint program, where I had earned first-class honors, to shift paths. Those days working with circuits and chips made me realize that I enjoy creating tools and collaborating with people more than focusing on hardware. This realization clarified my research interest in the "symbiosis between humans and machines."
My research is inspired by the 1997 debate on "Direct Manipulation" versus "Interface Agents," the concept of Mixed-Initiative Interaction, and the philosophy of "Taiji." It focuses on letting humans and machines each contribute their strengths at the most appropriate times. I want to balance the duality of humans and machines like Taiji: machines work naturally and intelligently like humans, and humans retain control to ensure that machines conform to their intentions and expectations.
Specifically, I am dedicated to designing and evaluating next-generation intelligent systems that dynamically allocate initiative between humans and machines based on contextual information and human intent to support productivity and accessibility.
My goal is to design systems where machines act as human "friends," helping with inefficient tasks (e.g., repetitive work) and transcending human limitations (e.g., vision-dependent tasks for BLV users). These systems aim to enhance productivity and accessibility while remaining intuitive, controllable, and seamlessly integrated into workflows. If you share similar interests, feel free to drop me an email! :)
News 🌊
[2024.10] I have finished writing Code2Fab but unfortunately missed the CHI and IMWUT deadlines. We plan to submit the manuscript to the upcoming UIST. This was my first time independently writing a full paper; thanks to my advisor and collaborator for their help! 💪
[2024.4] The user study of "Mus2Vid" was accepted by IEEE CAI 2024! The results of our user survey will be presented in Singapore. 🇸🇬🦁 We will officially start creating Mus2Vid and submit our technical paper at the end of the year!
[2024.1] The final version of "Shine Resume" has been released in Chinese! I hope it helps some of the "hidden" crowd shine! Thanks to all the partners for their efforts this summer. This was my first time completing a commercial project from 0 to 1, from design to implementation and promotion. 🎨
Past 🎯
An AI Assistive 3D Modeling Tool for Accessibility
[Equal 1st author] Supervised by a team of faculty in HCI and accessibility research
Author information and detailed contents are anonymized to avoid potential review-process conflicts.
WEB / CODE / PDF (Coming Soon)
A Literature Review of Accessibility Artifacts
[Near Completion] Supervised by faculty specializing in accessibility research
Author information and detailed contents are anonymized to avoid potential review-process conflicts.
WEB / PDF (In Preparation)
MEMO: AI-Assisted Memory Aid and Visualization
[In Progress] A full system with SOTA AI glasses and mobile app integration
The current prototype of MEMO is an AR-integrated AI glasses system that serves as an innovative platform for human-AI interaction. My research explores how to leverage contextual information through MEMO, moving beyond traditional screen-based interactions. By using the physical world as a source for AI to understand user behavior and intent, MEMO aims to provide proactive memory support and seamless integration into daily life.
WEB / Research Prototype (In Progress)
Mus2Vid: Music Visualization Based on Synesthesia
[Leading Project] Supervised by Prof. Yung-Hsiang Lu and Prof. Yeon-Ji Yun; near completion
Mus2Vid explores how AI can simulate human cognitive processes to handle abstract and subjective tasks, such as music visualization. Using large language models (LLMs), the system bridges human reasoning and machine generation by hierarchically structuring storyboard-based video creation. This approach enables AI to "appreciate" music, leveraging human mental models to guide video generation. The project integrates contextual understanding and cinematic storytelling to align machine outputs with human perception, creating continuity and enhancing user acceptance. It also introduces novel evaluation metrics for assessing alignment and intent clarity in human-AI collaboration.
Prior Survey (2024 IEEE Conference on Artificial Intelligence) / WEB / CODE / Technical Paper (Coming Soon)
ShineResume: Resume Writing System for Confused Graduates
[Founding Member] A "0 to 1" entrepreneurial experience that secured 10M+ CNY in funding support
ShineResume leverages AI to dynamically allocate initiative between users and the system, addressing the uncertainty faced by graduates navigating the post-COVID job market. When users are unable to articulate their needs, the system takes the lead by recommending career paths and optimizing resumes based on cognitive and contextual data. Once users gain clarity, the initiative shifts back to them for decision-making, ensuring a balance between machine agency and human control. This project demonstrates how AI can simulate human reasoning to assist underserved groups in achieving their goals while maintaining ethical and compliant behavior.
WEB (Chinese)
Social Robot for the Depressed and Lonely
[Project Prototype] Project for Assistive Tech with Taehyeon Kim, instructed by Prof. Byung-Cheol Min
To support the mental health of people with special needs, such as the depressed or isolated elderly, we are developing social robots that use multimodal interaction, sentiment analysis, and natural language processing. This approach ensures accessibility by accommodating different communication methods and provides a more empathetic, human-like interaction. Our goal is to enhance emotional well-being by deeply understanding and supporting the mental health needs of our users.
PROPOSAL / WEB / CODE / PDF
Contact Me 🍔
I always enjoy discussing and exchanging ideas with others; feel free to reach out! 📤