4.1 Questioning ChatGPT
Within the 10-minute session, participants could interact with ChatGPT in any form and language they preferred until they obtained satisfactory results. At the beginning of the conversation, half of the students directly asked ChatGPT to propose ideas: “How to reuse waste towels” [S1]; “Help me to find 3 ways to reuse waste towels” [S3]; “Please help me to use waste towels to make different products” [S5]; “…using towel to reuse and transform, give examples” [S6]; and “What creative product can be made by waste towels” [S10]. From the results ChatGPT generated for these five questions, we found that without specifying “the purpose” (for the hotel to delight guests) and the target users, ChatGPT provided many ideas that did not match the purpose stated in the scenario sheet, for example cleaning cloths, wipes, kitchen towels, construction bricks, and gift wrap.
The most frequently asked question was “more ideas.” One-third of the questions belonged to this category, such as “any newer ideas” [S5], “I need something more creative” [S6], and “Can you provide me with more ideas?” [S8], among others.
Two participants assigned roles to ChatGPT. S9 instructed the chatbot, “Your role is a fashion designer” [S9]. S4 even assigned six different roles, including recycled-product designer, artist, designer, engineer, Nobel Prize winner, and Elon Musk, so that the chatbot could provide a variety of ideas. However, these questions also did not specify the purpose or the target users, so the generated results (e.g., biodegradable towels, recycled fibre, and garments) did not align with the intended purpose.
S7 and S8 began their conversations by providing ChatGPT with all the required criteria, including the competition, the waste towels, the four market segments, and the goal of delighting customers. After the chatbot suggested items such as shopping bags, slippers, toiletry bags, facial cotton pads, and yoga mats, both asked it for more ideas four times before ending the conversation.
Only three participants sought advice from ChatGPT, asking questions like “Do you think… is suitable for hotel guests?” [S3], “What is the meaning of having…” [S4], and “Do you think… is feasible?” [S9]. Moreover, S9 was the only participant who engaged in humanised conversations with the chatbot. However, rather than discussing idea generation, she pressured the chatbot for more useful ideas.
Most participants focused primarily on the outcome: a creative idea for repurposing waste towels into a product. They often overlooked the importance of considering the target market and its needs. Of the 90 ideas generated, only 13 (14%) mentioned the target users or their specific needs, while the rest focused solely on the product.
In conclusion, throughout their conversations with ChatGPT, most participants did not engage in in-depth discussions to explore their ideas further. None of them sought advice from the chatbot to refine their ideas or overcome creative fixation. Instead, they were eager to obtain creative outputs directly from ChatGPT.
4.2 Perceived Usefulness of Idea Inspiration Tools
In this study, students perceived ChatGPT as a fast and efficient tool [S10] that inspired them to think across different dimensions and disciplines [S3, S6], simplified convergent thinking across different ideas [S7], and reminded them of disciplines that had been overlooked [S5]. Interestingly, some participants did not consider the AI chatbot a good tool for idea inspiration. Three of them felt that the ideas provided by ChatGPT were too direct [S2], standard and generic [S5], and uncreative [S8]. S9 pointed out that whenever similar questions and requests were entered, ChatGPT provided similar answers; therefore, the answers were not creative at all.
Regarding the perceived usefulness of the question guide, all Year 2 students indicated that it restricted their imagination and exploration of new ideas [S1, S2, S3]. S4 agreed that the question guide “can alter thinking logic but did not strongly impact the final outcome”. On the other hand, Year 1 students had completely different perspectives. They felt that the question guide could guide them to think in different dimensions [S6] and lead them in a creative direction [S8]. The question guide offered a story setting, so S5 could understand the target user’s needs and had a clear direction for thinking of good ideas. S9 pointed out that the question guide helped her explore more ideas without barriers and to “visualise” the situation [S10].
4.3 The Proposed Creative Ideas
Participants were asked to self-evaluate the creativity of the ideas they proposed in each round using a 5-point Likert scale, where 1 indicated “strongly disagree” and 5 indicated “strongly agree.” The ANOVA results showed that individuals with high self-creative concept scores felt significantly more creative (mean = 5.0; STD = 0; F = 13.3; p = 0.004) when suggesting ideas in Round 1 than those with medium (mean = 3.75; STD = 0.5) and low creative profile scores (mean = 4.0; STD = 0). However, participants with high self-creative concept scores perceived that the ideas they proposed in Round 3, after using the inspiration tools (ChatGPT [F = 0.7; p = 0.528] and question guide [F = 0.122; p = 0.887]), were not as creative as those in Round 1.
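The Round 1 statistic can be reconstructed from score sets consistent with the group means and standard deviations reported above; the group sizes used here (3 high, 4 medium, 3 low) are an assumption for illustration, as the exact partition is not stated. A minimal one-way ANOVA sketch:

```python
def one_way_anova(*groups):
    """Return (F, p) for a one-way ANOVA over lists of scores.

    The closed-form p below is exact only when df_between = 2
    (i.e., three groups), which holds for this sketch.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    assert df_b == 2  # survival function below is the df1 = 2 special case
    p = (df_w / (df_w + 2 * f)) ** (df_w / 2)
    return f, p

# Hypothetical score sets matching the reported means/STDs (sizes assumed)
high = [5, 5, 5]        # mean 5.0, STD 0
medium = [4, 4, 4, 3]   # mean 3.75, STD 0.5
low = [4, 4, 4]         # mean 4.0, STD 0

f, p = one_way_anova(high, medium, low)
print(round(f, 1), round(p, 3))  # → 13.3 0.004
```

These assumed group sizes reproduce the reported F = 13.3 and p = 0.004 exactly; in practice the same result is obtained with `scipy.stats.f_oneway(high, medium, low)`.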
At the end of the second questionnaire, participants were asked to select the three best ideas from the nine they had proposed and to rate the perceived usefulness of the two inspiration tools. Eleven of the selected ideas came from Round 1 (no tool) and eleven from Round 3 (question guide), while six came from Round 2 (ChatGPT).
The results revealed two intriguing phenomena. First, participants favoured the ideas they had generated themselves over those the AI recommended. Second, despite the expectation that ChatGPT would outperform the other tools, participants did not consider the chatbot’s ideas creative enough to rank among their top three choices.