Since Stanley Kubrick’s 2001: A Space Odyssey (1968), and the rebellion of the computer system HAL 9000 against its human peers in outer space, there has been an almost morbid desire to one day see the rise of sentient artificial machines (Terminator, A.I., I, Robot, etc.) capable not only of high-order cognitive processes but, moreover, of emotional, affective and creative behavior. Artificial intelligence guru Hugo de Garis, who has worked extensively with neural networks, genetic algorithms and self-evolvable hardware, claims that we are not far from producing artificial brains infinitely more intelligent than we humans may consciously be (Satinover, 2001).
Within the field of creativity, it is starting to be acknowledged that further psychological experimentation will yield little additional insight into the underlying principles of creativity. Hence, despite the flashes of fiction and fantasy (which grow more real by the day), AI becomes an immensely rich field of study that might indeed yield new insight into the creative phenomenon. The main reason for this claim is that AI gives us the possibility to observe models of both the structure and the function of the brain (where creativity lives in the human being), areas that have remained dark to cognitive science in past decades (of course, the brain-imaging revolution in neuroscience opens another door to the exploration of the human mind). Secondly, an advantage of AI for the study of cognitive functions, even over neuroscientific brain-imaging experiments, is that there are no ethical issues concerning experimentation and the manipulation of variables, as there are in brain research on humans and non-human species.
Now, there are two main questions with regard to AI and creativity. The first, and possibly the sexier one, is: are AI models (machines) capable of human creativity? The second is: how do existing AI models of creativity, though imperfect, help us understand human creativity? I would say that, with the evidence we have today, the answer to the first question is no, but AI models have been capable of replicating some aspects of human creativity, especially the production of products that meet the criteria of novelty and usefulness (Hennessey & Amabile, 1988; Stein, 1974; Boden, 1998). With regard to the second question, and following from the first, the limitations of AI models in capturing the essence of human creative behavior cast light on those critical components of human creativity that are unique to the species (for now…).
Boden (1998) provides a good framework for analyzing AI models and creativity. She says that there are three types of creativity: (i) combinatorial (mainly operationalized with analogies), (ii) exploratory (within a conceptual space) and (iii) transformational, or shifts in the conceptual space such that a new conceptual space allows alternatives that would have been impossible in the old one. A closer look at these types reveals a resemblance to Kirton’s (1976) adaptive–innovative continuum: the first two types of creativity fall toward the adaptive end of the continuum, producing novelty within the prevailing system, while the third falls toward the innovative end, redefining the system itself.
With regard to combinatorial processes, there are AI models capable of generating analogies and relationships that produce novel products as judged by humans. This is the case of JAPE (among several models), a model that relies on domain-mapping processes to generate jokes and riddles with fairly reliable humorous results. Nonetheless, the problem with this model is that the domains and algorithms by which it seeks analogies to create novel combinations are predetermined by the programmer and remain unaltered once the combination has been made (Boden, 1998). This is clearly not the case in the human associative process, in which, as an effect of the analogy, the combined elements are themselves modified from their original state. Moreover, the model is incapable of innovating to incorporate analogical domains different from those initially pre-programmed, so the degree of novelty is rather constricted.
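To make the limitation concrete, here is a minimal, purely hypothetical sketch (in Python, and not JAPE's actual code, data or template) of combinatorial generation over programmer-fixed domains; the concept lists and the combination rule are hard-coded and never change, which is exactly the constraint on novelty described above.

```python
import random

# Hypothetical illustration of combinatorial "creativity" over domains the
# programmer fixes in advance. These lists never change at runtime, so every
# output is bounded by what was pre-programmed.
ANIMALS = ["cat", "owl", "octopus"]
PROFESSIONS = ["lawyer", "barista", "astronaut"]
TEMPLATE = "What do you call a {animal} that works as a {job}? A {pun}."

def combine():
    animal = random.choice(ANIMALS)
    job = random.choice(PROFESSIONS)
    # The "pun" rule is also hard-coded; the model cannot invent new rules
    # or modify the source concepts, unlike human analogical thinking.
    pun = animal[:2] + job[-4:]
    return TEMPLATE.format(animal=animal, job=job, pun=pun)

if __name__ == "__main__":
    for _ in range(3):
        print(combine())
```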
There is another set of analogical models, such as COPYCAT, which, computing in a bottom-up fashion (learning as it computes), is capable of finding context-sensitive analogies from which to derive novel solutions. Nonetheless, how does such a model know which contextual domains are relevant for drawing significant analogies? As already said, this illuminates the complexity of the analogical (associative) process, which has been recognized as a crucial cognitive process in creative thinking (Runco, 2007). The same has been confirmed by neuroimaging studies that capture the activity of the associative cortices while subjects perform creative tasks (mainly prefrontal lobe activity) and also while they incubate on tasks (almost the whole brain!) (Stein, 2007).
The type of creativity that AI has been most successful in replicating is exploratory creativity, where novelty results from digging deeper within a conceptual space. In this sense, computational power, combination and the ability to scan huge databases of information allow models to reach concepts, ideas and/or products that would have been practically impossible for the human mind to reach, given the natural limits of working memory capacity and of our ability to retrieve information and hold it online (Dietrich, 2004; Stein, 2007).
This is the case of the BACON models, geared towards scientific discovery, which have been able to explore their conceptual spaces (mathematics and physics) exhaustively and replicate historical breakthrough concepts, rediscovering Kepler’s law, Boyle’s law and Ohm’s law, among others. In addition, some of these models have even produced new breakthroughs in the field of mathematics that have led to scientific patents.
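As an illustration of this style of exploratory search, here is a toy sketch in the spirit of the BACON programs (not their actual code; the function name, data and candidate set are my own): test a small space of combinations of two measured variables and report any that stay constant across observations. Applied to current and voltage readings, it "rediscovers" Ohm's law.

```python
# Toy BACON-style heuristic: look for an invariant combination of two
# measured variables, i.e. a candidate "law".
def find_invariant(xs, ys, tolerance=1e-6):
    candidates = {
        "y / x": [y / x for x, y in zip(xs, ys)],
        "y * x": [y * x for x, y in zip(xs, ys)],
    }
    for name, values in candidates.items():
        if max(values) - min(values) < tolerance:
            return f"{name} is constant at {values[0]:.3f}"
    return "no simple invariant found"

# Current (amperes) and voltage (volts) measured across a 5-ohm resistor:
current = [0.5, 1.0, 2.0, 4.0]
voltage = [2.5, 5.0, 10.0, 20.0]
print(find_invariant(current, voltage))  # reports V / I constant (Ohm's law)
```

Note that the program only reports a numerical regularity; it is the human partner who recognizes it as Ohm's law and decides that it matters, which is precisely the drawback discussed next.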
However, there are two drawbacks to the above example. The first is the bias, or manipulation, introduced by the experimenter, who wants the model to find something and hence directs its efforts towards a goal the experimenter has preconceived. Indeed, the model has no way of knowing, without its human partner, that it has reached, rediscovered or discovered a breakthrough concept. The second drawback derives from the first: the model lacks a mechanism for making meaning out of what it finds and for assigning value to the different options. This is extremely important, for it depicts a crucial aspect of the creative process in humans that lies beyond the computational power of combinations and permutations. Accordingly, it seems that what defines creativity is the mechanism by which we judge novelty and usefulness and negotiate our perception of novelty and usefulness with society for acceptance (Simonton’s fifth “P”, persuasion).
In another vein of thought, AI experiments reveal that seeking novelty by breaking out of the conceptual space is also a distinctive characteristic of human creativity. For example, the model/machine EMI is capable of reproducing and composing music that resembles Mozart or Bach (or Charlie Parker), and in blind tests human judges have even taken its compositions to be by Mozart or Bach themselves. Yet EMI is incapable of breaking out of a particular composer’s style at will. The same is true of the model AARON, which is capable of producing pictures of human figures (even painting them) that have been regarded as true works of art and exhibited in galleries around the world. Nonetheless, although each picture is unique, each is without doubt bound to a certain style from which the model cannot break away. This is why the third type, transformational creativity, has been the most difficult to model in AI.
Even so, there are a few models that have been able to perform transformational creativity, that is, to define new conceptual spaces with each iteration or production. This is the case of models based on genetic algorithms: basically blind variation of components and, in this case in particular, variation in the heuristics or sets of rules by which the model produces combinations. With this kind of model, it is possible to obtain products that in no way resemble their previous iteration and hence truly define a new conceptual space (as if EMI were capable of composing like Mozart and then, by free choice, like Miles Davis!). Nevertheless, the question is: how does the model know that the new conceptual space is of any value at all (either to the model or to humans)? We could imagine pre-programming a set of values into the model, but with that we are already killing a priori potentially novel landscapes and, moreover, by definition the pre-assigned values will not be valid for the new conceptual space.
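A minimal, hypothetical sketch of this idea (not any published model; the rule names and fitness measure are invented for illustration) makes both points at once: the genome is the rule set itself, so blind variation can transform the very heuristics that generate outputs, while the hard-coded fitness function exposes exactly the valuation problem raised above.

```python
import random

# The "genome" is a set of generative heuristics; mutation varies the rules
# themselves rather than just the outputs they would produce.
RULE_POOL = ["repeat", "invert", "transpose", "silence", "fragment"]

def mutate(rules):
    new = list(rules)
    i = random.randrange(len(new))
    new[i] = random.choice(RULE_POOL)  # blind variation of one heuristic
    return new

def fitness(rules):
    # The crux of the problem: this value function is pre-assigned by the
    # programmer, so it cannot recognize worth in a genuinely new space.
    return sum(1 for r in rules if r in ("repeat", "transpose"))

population = [[random.choice(RULE_POOL) for _ in range(4)] for _ in range(6)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:3]                       # keep the fittest rule sets
    population = survivors + [mutate(random.choice(survivors)) for _ in range(3)]

print("best rule set:", max(population, key=fitness))
```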
This is the major drawback of AI models and, at the same time, a big insight into human creativity: the ability to be self-critical and to evaluate (give meaning) against a set of relevant, contextual criteria. In other words, crucial to creativity is the ability to generate a novel set of criteria with which to appreciate a new conceptual space and a shift of paradigm. In this sense, what we like or dislike, why we are aroused by a piece of artwork or music, or why we get excited about a new scientific breakthrough has to do with our emotional appreciation and perception of our inner and outer worlds, and the latter has not been replicated by any AI model to date. Perhaps an AI model is capable of breaking out of its conceptual space to yield transformational creative products, but we are certain that the machine does not jump with excitement when it does so, it is not aware that it has done so and, furthermore, it has no specific purpose for the creation.
The latter might be the most crucial aspect of human creativity: the overall drive and purpose of the creator, even when he deliberately allows chance to rule his creative process. Somehow, his purpose and motives are embedded in the creation, and we, as observers, are able to appreciate them. Personally, I am thrilled and mesmerized that something novel and useful was the output of the human mind.
Regardless of the past imperfections of AI models, the field keeps moving forward at full thrust in its quest to replicate the human brain. Perhaps a holistic approach, as opposed to trying to replicate individual cognitive functions, might yield the expected result of a sentient model/machine (hence capable of creativity!). This is the case of the BLUE BRAIN project (SEED magazine, February 2008). In the basement of Lausanne University in Switzerland, a group of neuroscientists and computer scientists, using supercomputers, have already successfully replicated a two-week-old rat neocortical column containing 10,000 neurons (building an artificial neural network). The only things stopping them from scaling the project up are computing power and energy. Nonetheless, Henry Markram, head of the project, is confident that in the coming years they will be able to scale the project up to produce the first conscious machine, and he says: “I think it will be just as interesting, perhaps more interesting, if we can’t create a conscious computer. Then the question will be: What are we missing? Why is this not enough?” Personally, I have mixed feelings about whether I want to see conscious machines in my lifetime, but to be honest, curiosity devours me.
References:
Boden, M. A. (1998). Creativity and artificial intelligence. Artificial Intelligence, 103, 347-356.
Dietrich, A. (2004). The cognitive neuroscience of creativity. Psychonomic Bulletin & Review, 11(6), 1011-1026.
Hennessey, B. A., & Amabile, T. M. (1988). Story-telling: A method for assessing children’s creativity. Journal of Creative Behavior, 22, 235-246.
Kirton, M. J. (1976). Adaptors and innovators: A description and measure. Journal of Applied Psychology, 61, 622-629.
Runco, M. A. (2007). Creativity theories and themes: Research, development and practice.
Satinover, J. (2001). The quantum brain: The search for freedom and the next generation of man. New York: John Wiley & Sons, Inc.
Stein, K. (2007). The genius engine: Where memory, reason, passion, violence, and creativity intersect in the human brain. Hoboken, NJ: John Wiley & Sons, Inc.
Stein, M. I. (1974). Stimulating creativity: Individual procedures. New York: Academic Press.
-- Diego Uribe, Graduate Student