January 12, 2026
Rethinking How AI Design Tools Are Evaluated

A key insight from the study, published in the ACM journal Transactions on Interactive Intelligent Systems, is that traditional ways of evaluating AI design tools may be too narrow. Metrics such as how often users click or copy suggestions fail to capture the emotional, cognitive, and behavioral dimensions of engagement. The Swansea team argues for more holistic evaluation methods that consider how AI systems influence how people feel, think, and explore.

Dr. Walton added: “Our study highlights the importance of diversity in AI output. Participants responded most positively to galleries that included a wide variety of ideas, including bad ones! These helped them move beyond their initial assumptions and explore a broader design space. This structured diversity prevented early fixation and encouraged creative risk-taking.

“As AI becomes increasingly embedded in creative fields, from engineering and architecture to music and game design, understanding how humans and intelligent systems work together is essential. As the technology evolves, the question is not only what AI can do but how it can help us think, create, and collaborate more effectively.”

Reference: “From Metrics to Meaning: Time to Rethink Evaluation in Human–AI Collaborative Design” by Sean P. Walton, Ben J. Evans, Alma A. M. Rahat, James Stovold and Jakub Vincalek, 7 March 2024, ACM Transactions on Interactive Intelligent Systems. https://dl.acm.org/doi/epdf/10.1145/3773292
DOI: 10.1145/3773292