Since Stanley Kubrick’s 2001: A Space Odyssey (1968), and the rebellion of the computer HAL 9000 against its human peers in outer space, there has been an almost morbid desire to one day witness the rise of sentient artificial machines (Terminator, A.I., I, Robot, etc.) capable not only of high-order cognitive processes but, moreover, of emotional, affective, and creative behavior. Artificial intelligence guru Hugo de Garis, who has worked extensively with neural networks, genetic algorithms, and self-evolvable hardware, claims that we are not far from producing artificial brains that will be vastly more intelligent than we humans may consciously be (Satinover, 2001).
Within the field of creativity, it is starting to be acknowledged that further psychological experimentation will yield little additional insight into the underlying principles of creativity. Hence, despite the flashes of fiction and fantasy (which grow more real by the day), AI becomes an immensely rich field of study that might indeed yield new insight into the creative phenomenon. The main reasons for this claim are, first, that AI lets us observe models of both the structure and the function of the brain (where creativity lives in the human being), areas that were dark territory for cognitive science in past decades (of course, the brain-imaging revolution in neuroscience opens another door to the exploration of the human mind). Second, an advantage of AI for the study of cognitive functions, even over neuroscientific brain-imaging experiments, is that there are no ethical issues concerning experimentation and the manipulation of variables, as there are in brain research on humans and non-human species.
Now, there are two main questions with regard to AI and creativity. The first, and possibly the sexier one, is: are AI models (machines) capable of human creativity? The second is: how do existing AI models of creativity, though imperfect, help us understand human creativity? I would say that, with the evidence we have today, the answer to the first question is no, although AI models have been capable of replicating some aspects of human creativity, especially the production of products that meet the criteria of novelty and usefulness (Hennessey & Amabile, 1988; Stein, 1974; Boden, 1998). With regard to the second question, and following from the evidence on the first, the limitations of AI models in capturing the essence of human creative behavior cast light on those critical components of human creativity that are unique to the species (for now…).
Boden (1998) provides a good framework for analyzing AI models and creativity. She distinguishes three types of creativity: (i) combinatorial (mainly operationalized through analogies), (ii) exploratory (within a conceptual space), and (iii) transformational, involving shifts of the conceptual space itself so that the new space allows alternatives that would have been impossible in the old one. Looked at closely, this typology resembles Kirton’s (1976) adaptive–innovative continuum: the first two types fall toward the adaptive end, producing novelty within the prevailing system, while the third corresponds to the innovative type, which redefines the system.
With regard to combinatorial processes, there are AI models capable of generating analogies and relationships that produce novel products, as judged by humans. This is the case of JAPE (among several models), a program that relies on domain-mapping processes to generate jokes and riddles with fairly reliable humorous results. The problem with this model, however, is that the domains and the algorithms by which it seeks analogies to create novel combinations are predetermined by the programmer and remain unaltered once a combination has been made (Boden, 1998). This is clearly not the case in human associative processing, where, as an effect of the analogical process, the combined elements are themselves modified from their original states. Moreover, the model is incapable of innovating to incorporate analogical domains different from those that were initially pre-programmed, so the degree of novelty is rather constricted.
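To make that constraint concrete, here is a minimal sketch of a pre-programmed combinatorial generator. It is emphatically not JAPE’s actual algorithm; the homophone table, clue phrases, and riddle template are invented for illustration. The point is simply that when both the domains and the combination rule are fixed at design time, the entire space of possible “jokes” is enumerable in advance:

```python
# A toy sketch of pre-programmed combinatorial generation (NOT JAPE itself).
# The hand-built pun table below plays the role of JAPE's lexical domains;
# the single template plays the role of its combination rules. Nothing here
# is modified by the act of combining, and no new domain can ever enter.

HOMOPHONE_PUNS = {
    # answer phrase : (homophone it puns on, hand-written clue)
    "cereal killer": ("serial", "a murderer who eats breakfast"),
    "pane in the neck": ("pain", "a window that annoys you"),
}

def make_riddle(answer, clue):
    # The combination rule never changes, and neither do the domains:
    # every riddle this system can ever produce is determined up front.
    return f"What do you call {clue}? A {answer}."

riddles = [make_riddle(answer, clue)
           for answer, (_, clue) in HOMOPHONE_PUNS.items()]
for r in riddles:
    print(r)
```

Running the loop prints exactly two riddles, and adding a third requires the programmer, not the program, to extend the table; that asymmetry is the limitation the paragraph above describes.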
There is another set of analogical models, such as COPYCAT, which, through bottom-up computation (it learns as it computes), is capable of finding context-sensitive analogies to derive novel solutions. Nonetheless, how does such a model know which contextual domains are relevant for drawing significant analogies? As already noted, this illuminates the complexity of the analogical (associative) process, which has long been recognized as a crucial cognitive process in creative thinking (Runco, 2007). The same has been confirmed by neuroimaging studies that capture the activity of the associative cortices while subjects perform creative tasks (mainly prefrontal lobe activity) and also while they incubate on tasks (almost the whole brain!) (Stein, 2007).
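COPYCAT’s classic domain is letter-string analogies (“abc is to abd as ijk is to ?”). The real program builds its answers bottom-up from many competing micro-processes; the hypothetical sketch below hard-codes just one of the rules such a system might settle on, to show what a context-sensitive reading looks like, namely a rule stated relative to position in the string (“take the successor of the last letter”) rather than tied to particular letters (“change c to d”):

```python
# A toy letter-string analogy in the spirit of COPYCAT's domain.
# This is NOT COPYCAT's architecture; it only illustrates the difference
# between a context-sensitive rule (position-relative) and a literal one.

def successor(ch):
    """Next letter in the alphabet (no wrap-around in this toy)."""
    return chr(ord(ch) + 1)

def infer_rule(src, dst):
    """Infer a tiny rule from one example pair: which position changed,
    expressed relative to the end of the string (context), not as a
    literal letter substitution."""
    diffs = [i for i, (a, b) in enumerate(zip(src, dst)) if a != b]
    assert len(diffs) == 1, "toy version handles single-letter changes only"
    i = diffs[0]
    assert dst[i] == successor(src[i]), "toy version knows only 'successor'"
    return ("successor_at", i - len(src))  # negative = position from the end

def apply_rule(rule, target):
    _, pos = rule
    out = list(target)
    out[pos] = successor(out[pos])
    return "".join(out)

rule = infer_rule("abc", "abd")          # "replace the LAST letter's successor"
print(apply_rule(rule, "ijk"))           # -> "ijl"
```

Because the rule is stored as “last position,” it transfers to any target string; a literal rule (“c becomes d”) would fail on “ijk” entirely. The question the paragraph raises is precisely how a system decides, bottom-up, that position-in-string is the relevant context here rather than, say, letter identity.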
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103, 347-356.
Hennessey, B. A., & Amabile, T. M. (1988). Story-telling: A method for assessing children’s creativity. Journal of Creative Behavior, 22, 235-246.
Stein, K. (2007). The genius engine: Where memory, reason, passion, violence, and creativity intersect in the human brain. Hoboken, NJ: John Wiley & Sons, Inc.