Haichang Li

Junior Student
Purdue University
Human-AI Collaboration
Observer Thinker Designer

Information | Communication
Expected Class of 2025
Purdue University
Email: li4650 [at] purdue (dot) edu

LinkedIn / Twitter / Github


I am Haichang (Charles) Li (李海畅), currently a junior undergraduate student at Purdue University studying Information and Communication. Before coming to Purdue, I withdrew from the EEE joint program between XJTLU and UoL for personal planning reasons, leaving with First Class Honours standing.

At Purdue, I have mainly been advised by Professor Yung-Hsiang Lu and Professor Liang He, but I have also been co-supervised by Professor Yeon-Ji Yun in the Department of Music and worked closely with Professor Fenglong Ma at PSU on NLP (listed in chronological order). My interests are interdisciplinary and these professors' fields do not overlap, and I am grateful for the privilege of receiving guidance from such different directions during my undergraduate years.

I am currently interested in Human-AI Collaboration, specifically at the intersection of AGI and HCI (especially in creative work such as music and modeling). My ideal role is to be the bridge between external observers and designers in this process. I hope to explore how AI can help humans achieve a benign coexistence with it, for example in multimodal accessibility (a11y): helping those who need help most first, while exploring AI's impact on the world. If you are also interested, feel free to drop me an email😀


This website temporarily stopped updating in October 2023! I have stopped looking for research internship opportunities and am currently working on multi-agent LLM projects for fabrication and medical applications, which will be posted here after February 2024. Thank you for your visit, and peers are welcome to email me to discuss and get my latest updates! :)

[2023.11] The final version of "Shine Resume" has been released in Chinese! I hope it helps some of the "hidden" crowd shine! Thanks to all the partners for their efforts this summer. This was my first time taking a commercial project from 0 to 1, from design through implementation and promotion.🎨

[2023.10] I am currently fascinated by using LLMs and affective computing to help people with psychological needs. Please feel free to check out our proposal and leave comments if you are also interested.🤖

[2023.9] "Visual Music for the Hearing Impaired through Synaesthesia" has entered the user-study phase! Ideally, the paper will be completed by December. Coming soon!🎼



Synesthesia: Music visualization for the hearing impaired

[Coming Soon] Supervised by Prof. Yung-Hsiang Lu and Prof. Yeon-Ji Yun

Have you ever thought about synesthesia? There are hundreds of millions of hearing-impaired people in the world, and the music the rest of us casually enjoy is a luxury for them. When we hear grand piano music, we may conjure up images of medieval battlefields, but people who are equally imaginative yet hard of hearing cannot. Our project aims to be their ears: building on emotion recognition, AIGC, and LLMs, we let them "hear" music by "seeing" it. In the face of the injustice they suffer, our answer takes its own approach, using technology to offer them a potentially viable path.

CODE / WEB (Long-term project; the first paper will be released in January 2024)

Social Robot for the Depressed and Lonely

[Project] Assistive-technology project with Taehyeon Kim, instructed by Prof. Byung-Cheol Min

To help people with special needs, such as those with depression or elderly people living alone, cope with mental-health challenges, we propose an approach that uses multimodal social robots combined with sentiment analysis and natural-language interaction technologies.

We use multiple modalities to improve both accessibility and accuracy: the modalities cross-validate one another, and the system still works when a user cannot express themselves through a particular channel (limited facial expressions, typing difficulties, or an inability to speak). Sentiment analysis, which helps us understand users' emotions, and an LLM, which provides a more human conversational experience through natural language processing, play key roles in this approach. Through this integrated, multimodal design, our social robots can more fully understand and support the mental-health needs of our target population, contributing to their emotional well-being.


ShineResume: Multimodal AIGC platform for confused graduates

[Founding Member] A "0 to 1" entrepreneurial experience that secured 10M+ CNY in funding support

The post-COVID-19 era has brought a cold winter to the Chinese job market, with companies announcing layoffs and hiring freezes. In this environment there is a group of underserved individuals who come from ordinary schools and who, during their college years, simply followed the prescribed curriculum. They don't know what they're passionate about, what kind of work suits them, or how to start a job search. They find themselves trapped in an information cocoon.

We aim to have AI collaborate with these individuals, putting people at the center and leveraging multimodal AIGC and LLMs. From nothing more than a user's background information, we can recommend industries and positions, extract key terms, and then create and optimize a resume to match. We can also apply style transfer to their official photos to achieve the desired look.

Our purpose is to make AI valuable for underserved people, bring benefits to humanity, and give neglected people the opportunity to find themselves and shine.

WEB (CHINESE) (EC: the 5th avatar in "WHO ARE WE" is mine, designed by the first UI designer we recruited)

Data-driven model for human thermal comfort during sports

[SURF] Supervised by Prof. Long Huang

To help architects design indoor environments, the interaction between the human body's subjective sense of thermal comfort and the objective external environment is worth studying. Subjective feelings can be captured through a designed user study, and the PMV model can be combined with various data-analysis and processing methods to quantify and predict these perceptions, thereby improving the fit between people's comfort and the physical environment.
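For readers unfamiliar with it, the PMV (Predicted Mean Vote) index mentioned above is Fanger's comfort model standardized in ISO 7730; its core equation (stated here for reference only, not drawn from the poster) is:

```latex
% Fanger's PMV: predicted comfort vote on the seven-point thermal
% sensation scale (-3 = cold, 0 = neutral, +3 = hot).
% M : metabolic rate (W/m^2)
% L : thermal load -- internal heat production minus heat loss
%     to the actual environment (W/m^2)
PMV = \left(0.303\, e^{-0.036 M} + 0.028\right) L
```

The load L itself depends on air temperature, mean radiant temperature, air velocity, humidity, and clothing insulation, which is why a user study combined with regression over measured environmental data can refine or replace the analytic form.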

POSTER (It's rough and no longer relevant, but I'll always remember my first enlightening work)

Contact With Me🍔

I always enjoy discussing and exchanging ideas with others; you are welcome to reach out to me.📤