However, human language is not the only example of a recursive language. Computer languages, if we are to call them 'languages' at all, are also recursive and can generate infinitely many expressions.
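The point can be illustrated with a minimal sketch (a hypothetical toy grammar of my own, not any actual computer language): just two finite rules are enough to generate expressions of unbounded length.

```python
# A toy recursive 'grammar': two finite rules, unboundedly many expressions.
# (A hypothetical illustration, not any real computer language.)
#   expr -> "1"
#   expr -> "(" expr " + " expr ")"
def expr(depth: int) -> str:
    """Expand the rules to the given recursion depth."""
    if depth == 0:
        return "1"
    inner = expr(depth - 1)
    return f"({inner} + {inner})"

# Each extra level of recursion yields a new, longer expression,
# so the finite rule set generates infinitely many distinct expressions.
for d in range(4):
    print(expr(d))
```

Since `depth` can be any non-negative integer, the two rules alone produce an infinite set of well-formed expressions; in this narrow sense the computer language is 'creative' too.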
Is a computer language 'creative' as well?
Notice my use of quotation marks around the term 'creative'. By that notation I suggest, of course, that my second use of the term may not be exactly the same as the first (producing an infinite number of expressions with a finite number of elements). I question the use of the term, and I am ready to extend its use analogously.
A computer program, say a chess program, is capable of an amazing amount of computation with its limited code. It can deal with any situation the rules of chess permit. It is 'creative' in this sense.
However, could the chess program, current or future, deal with the world beyond the chessboard? Would it suggest to its chess competitor (whom we assume to be human) that they should take a short break when the game runs too long? Would it argue, when bitterly beaten in the game, that true intelligence cannot be measured by chess, and propose to compete in a different game (or even create a new one)? Is the computer program 'creative' in this sense?
In other words, assuming that human intelligence (in comparison with the intelligence of other animals) is made possible by recursive human language, why does a computer language, which is also recursive, NOT make the computer as 'creative' as human beings?
Is this a matter of the computational capacity of the computer? (Yet the computational power of current computers is far superior to that of the human brain.) Is it a matter of the complexity of the computer's intelligence? (Yet that complexity could perhaps rival the complexity of human intelligence if the computer applied its massive computational power recursively to itself.)
So far at least, artificial intelligence is distinctly different from human intelligence. It is smarter (in fact far smarter) than us, but only in the specific domain for which it is made. It wouldn't try to step outside its domain (as we often stupidly do). It is always precise (or too precise) in its deductive computation, whereas our 'inference' is full of deviations, leaps and bounds. It is always right, too right to make a mistake.
Can deviations, leaps and bounds be a source of human intelligence as opposed to computer intelligence? We differ from other animals in recursion. Do we differ from the other recursive being, the computer, in aberration? Are we more intelligent because we sometimes blunder?
Here is what Walter J. Ong says in Orality and Literacy: The Technologizing of the Word (2002, Routledge, also available in Questia). By the word "consciousness" he means something explicitly specified, and by "unconscious" something only implicitly associated.
We are not here concerned with so-called computer ‘languages’, which resemble human languages (English, Sanskrit, Malayalam, Mandarin Chinese, Twi or Shoshone etc.) in some ways but are forever totally unlike human languages in that they do not grow out of the unconscious but directly out of consciousness. Computer language rules (‘grammar’) are stated first and thereafter used. (p. 7)
With a larger definition of 'grammar' or 'rules' than the standard one in linguistics (notice the quotation marks again), Ong continues to argue that while the 'rules' of a computer language's grammar stay as they were specified, those of human language outgrow themselves beyond our epistemological limits.
The ‘rules’ of grammar in natural human languages are used first and can be abstracted from usage and stated explicitly in words only with difficulty and never completely. (p. 7)
Would it be the case, then, that by metaphors and analogies, often regarded as aberrant and anomalous, we self-organize ourselves beyond our preconceptions (no matter how glorious or disastrous our new selves may be)?
Julian Jaynes, the author of The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976, 1990, 2000), also argues that metaphors (and analogies) are essential to understanding our cognition.
Jaynes points to metaphors, rather than recursive syntax, as the 'most fascinating' property of language.
The most fascinating property of language is its capacity to make metaphors. (p. 48)
With examples such as "the head of an army, table, page, bed, ship, household, or nail" or "force, acceleration, inertia, impedance, resistance, fields and charm", Jaynes suggests that our cognition, from daily life to science, is maintained by metaphors, as George Lakoff (with Mark Johnson) argues in Metaphors We Live By (1980, The University of Chicago Press).
The human body is a particularly generative metaphier, creating previously unspeakable distinctions in a throng of areas. (p. 49)
The concepts of science are all of this kind, abstract concepts generated by concrete metaphors. (p. 50)
According to Jaynes, metaphors are the devices for discovery as well as for communication.
All of these concrete metaphors increase enormously our powers of perception of the world about us and our understanding of it, and literally create new objects. Indeed, language is an organ of perception, not simply a means of communication. (p. 50)
If there is syntactic creativity, there can be metaphorical creativity.
The lexicon of language, then, is a finite set of terms that by metaphor is able to stretch out over an infinite set of circumstances, even to creating new circumstances thereby. (p. 52)
We may be more intelligent than computers because we 'err' in metaphors and analogies. (After all, to err is human; to stay correct, computer.)