ProfilePhoto

Haichang Li

Senior Undergraduate
Human-AI Collaboration
Observer Thinker Designer

Information | Communication
Expected Class of 2025
Purdue University
Email: li4650@purdue.edu

I am currently looking for a PhD position for Fall 2025 :)
Open to a11y, LLM-based applications & evaluation, and general HCI

LinkedIn / Twitter / Github / Google Scholar / Résumé

HelloWorld ✍

I am Haichang (Charles) Li (李海畅), currently a senior undergraduate student at Purdue University studying Information and Communication. At Purdue, I am affiliated with the DE4M Lab and AIM, and I have been lucky enough to be mentored by Prof. Liang He and Prof. Yung-Hsiang Lu. In addition, I am a founding member of SOUNDING.AI, a startup that has received 10M+ CNY in funding to explore how to build AI-based systems.

Before coming to Purdue University, I left an EEE joint program with first-class honors to shift paths. My time working with circuits and chips made me realize that I enjoy creating tools and collaborating with people more than focusing on hardware. This realization clarified my research interest:

Create productivity tools¹ to help users, especially those with disabilities², increase efficiency and improve their experience³

¹ My belief that technology should serve humanity fuels my passion for developing assistive tools in creative and productivity contexts, such as writing, music, and design.
² This effort extends beyond supporting people with disabilities to include underserved groups. Instead of incrementally enhancing experiences for those already served, we can direct our efforts toward crafting new solutions for those previously without access (like helping deaf people experience music).
³ Improving user experiences often goes beyond artifact creation. I also evaluate AI technologies, exploring areas like LLMs' synesthetic abilities and legal standards.

In general, my research interests lie at the intersection of generative AI and HCI, within the areas of productivity (and creativity) support and accessibility, exploring how to amplify the help AI can offer us. If you are also interested, feel free to drop me an email 😀

News 🌊

[2024.10] Code2Fab is finished, but to ensure the quality of the writing, we are making further rounds of revisions and have withdrawn the CHI 25 submission. This work is expected to be submitted to IMWUT in November; we believe the quality of the work is more important than meeting a deadline! 💪

[2024.4] The user study of "Mus2Vid" was accepted by IEEE CAI 2024! The results of our user survey will be presented in Singapore. 🇸🇬🦁 We will officially start building Mus2Vid and submit our technical paper at the end of the year!

[2024.1] The final version of "Shine Resume" has been released in Chinese! I hope this will help some of the "hidden" crowd and shine with them! Thanks to all the partners for their efforts this summer. This was my first time completing a commercial project from 0 to 1, from design to implementation and promotion. 🎨

Past 🎯

Experience Image

Code2Fab: 3D Modeling Support for Blind and Low-Vision Programmers

[Equal 1st author] Supervised by Prof. Liang He; collaborating faculty: Prof. Angus Forbes and Prof. Anhong Guo

This project addresses the barriers blind and low-vision (BLV) users face in creating and validating 3D models, tasks traditionally reliant on tactile feedback, by leveraging LLM technology. It enables BLV individuals to engage with 3D modeling and printing through code-based rendering and intelligent assistance. The system transforms the 3D modeling workflow by allowing model generation in OpenSCAD using either code or natural language. It supports hierarchical component selection and dual interaction modes, utilizing a multi-agent LLM to interpret visual information and facilitate model modifications.

WEB / CODE / PDF (Coming Soon)
Experience Image

A11Y Review: Literature Review of the Accessible Artifacts

[Near completion] Supervised by Prof. Liang He and Prof. Huaishu Peng; to be submitted in November.

The A11Y Review project involves a comprehensive analysis of accessibility artifacts from leading conferences (CHI, ASSETS, UIST) over the past 15 years. The goal is to understand design and evaluation patterns in the accessibility domain. The project includes summarizing development trends, creating a dataset with integrated coding criteria, and developing a database with dynamic visualization tools.

WEB / PDF (Coming Soon)
Experience Image

Mus2Vid: Music Visualization based on Synesthesia

[Leading project] Supervised by Prof. Yung-Hsiang Lu and Prof. Yeon-Ji Yun, near completion

Mus2Vid addresses synesthetic alignment and continuity in autonomous music visualization by developing a recurrent architecture that ensures consistency in long video generation through iterative parsing and regex-based referencing. It transforms music visualization into storyboard design, then generates key frames and assembles them hierarchically into videos, adapting to continuous music performances. Additionally, it introduces a novel criterion for evaluating alignment, ensuring that visual elements consistently match the musical features.

User Study (2024 IEEE Conference On Artificial Intelligence) / WEB / CODE / Technical Paper (Coming Soon)
Experience Image

ShineResume: Resume Writing System for Confused Graduates

[Founding member] A "0 to 1" entrepreneurial experience that obtained 10M+ CNY in funding support

The post-COVID-19 era has brought a cold winter to the Chinese job market, with layoffs and hiring freezes. Many graduates from ordinary schools, lacking clear career goals or job-search strategies, are stuck in an information gap. We aim to use human-centered AI to help them by recommending industries, optimizing resumes, and offering professional photo style transfer. Our goal is to make AI valuable for underserved groups, helping them find opportunities and succeed.

WEB(CHINESE)
Experience Image

Social Robot for the Depressed and Lonely

[Project prototype] Project for Assistive Tech with Taehyeon Kim, instructed by Prof. Byung-Cheol Min

To address the mental health of people with special needs, such as those experiencing depression or isolated older adults, we are developing social robots that use multimodal interaction, sentiment analysis, and natural language processing. This approach ensures accessibility by accommodating different communication methods and provides a more empathetic, human-like interaction. Our goal is to enhance emotional well-being by deeply understanding and supporting the mental health needs of our users.

PROPOSAL / WEB / CODE / PDF

Contact Me 🍔

I always enjoy discussing and exchanging ideas with others; you are welcome to reach out to me. 📤