Digital Burgess Conference Follow-up:
Steven Rooke
The conference was held August 29 - September 1, 1997, in Banff, Alberta, Canada

More on aesthetics vs. physics

Man, it's great to be back here in the boxcar near Thermal, next to these rocky granite hills, 2500 miles after Banff. I have my very own rugged adaptive fitness landscape to hike on every morning.

Bruce wrote under "Open Discussion" in the Digital Burgess webpage:

    "What are the fundamental sources of the aesthetic that can spring equally from nature or a digital ecosystem?"

I don't recall a lot of discussion of this question during the sessions, but Steve Grand and I had a nice conversation about it on our hike down from the Shale. I had described my attempt to build a co-evolutionary system with populations of image generators (like my artwork at the conference) and image commentators (evolving neural networks that initially train themselves on past generations of my own aesthetic fitness selection on the ancestors of the images). Steve pointed out a fatal flaw in my design: there has to be something underlying the whole thing for it to break into new territory once it moves beyond my own aesthetic fitness training set -- like maybe physics. And there would have to be more of a reward/punishment system involved in the co-evolutionary fitness determination of the two populations. You have to skin your knees in order to learn to walk (as I am reminded occasionally hiking around on the nearby granite fitness landscape...).
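The two-population scheme described above can be sketched roughly as follows: generators produce images, commentators score them, and commentators are themselves scored by agreement with past human aesthetic ratings. This is only an illustrative toy -- the `render` and `score` stand-ins, the genome encodings, and all names are placeholder assumptions, not the actual system:

```python
import random

def render(generator_genome, size=8):
    """Placeholder image generator: genome -> pixel grid.
    (A stand-in only; the real generators are evolved expressions.)"""
    rng = random.Random(sum(generator_genome))  # deterministic per genome
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def score(commentator_genome, image):
    """Placeholder commentator: weighted sum of crude image statistics."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    spread = max(flat) - min(flat)
    w_mean, w_spread = commentator_genome
    return w_mean * mean + w_spread * spread

def coevolve_step(generators, commentators, human_ratings):
    """One generation of co-evolutionary fitness determination:
    commentators judge the generators' images, and commentators are
    in turn judged by agreement with past human fitness selections."""
    gen_fitness = [
        sum(score(c, render(g)) for c in commentators) / len(commentators)
        for g in generators
    ]
    com_fitness = [
        -sum(abs(score(c, render(g)) - r)
             for g, r in zip(generators, human_ratings))
        for c in commentators
    ]
    return gen_fitness, com_fitness
```

The reward/punishment coupling Steve called for would live in those two fitness lists: commentators are punished for disagreeing with the (eventually superseded) human ratings, generators rewarded for pleasing the commentators.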

You can't help noticing the natural aesthetic of the Canadian Rockies on that hike down from the Burgess Shale. Would other organisms relate similarly to an aesthetic based on natural landscapes? (Obviously so, some of us would say, but it's unprovable.) Cosmologists and subatomic particle physicists refer to "beauty" in formulations of physical law as something probably fundamental in nature. Several people commented on studies showing the value of facial symmetry in mate selection.

Karl Sims and others pointed out during the sessions that we are a long way from understanding enough about how a cell works to simulate something of that level of complexity. Yet existing Alife systems already show great promise within their own domains.

Is there a domain within which we _do_ know enough to model not real physics, but its simplified equivalent for the interaction with the "environment" and as part of fitness determination? For example, could I substitute human rules of visual aesthetics and design principles for physics in my co-evolutionary image generator / image commentator populations?

A book called "Design Basics" by Lauer & Pentak arrived while I was gone; I know nothing about design principles. If I were coming from the other side, knowing nothing about principles of physics or biology, I would find a plethora of books on the subjects, but it would be futile to try to codify those principles to simulate a cell at present -- yet I wouldn't be able to determine that without years of study.

Does anyone out there have an opinion about the state of our knowledge of visual design basics that might help me decide whether this is a path worth pursuing? I'm picturing a rather narrowly defined system, using my existing image generator code, plus a separate population of virtual organisms that get to "look" at only the pixels (the phenotype) of the image generator population. I would supply a bunch of low-level image processing functions that would sit between the actual pixel values and the part of the commentator genome that would "know" about the design principles, and let evolution by automated selection on a training set take care of wiring the low-level IP functions up with the codified design principles. I'm not above "cheating" to get the ball rolling, i.e., I would probably start with an existing sample of images and my own aesthetic fitness assignments (perhaps analogous to inaugurating Tierra with its initial 80-byte self-replicating program).
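The layered arrangement just described -- low-level IP primitives feeding a commentator genome that is evolved against a training set of human fitness assignments -- might look something like this sketch. The choice of primitives (mean brightness, contrast, left/right symmetry), the flat weight-vector genome, and the simple truncation-selection GA are all assumptions made for illustration, not the proposed system:

```python
import random

def ip_mean(img):
    """Low-level IP primitive: mean pixel brightness."""
    return sum(map(sum, img)) / (len(img) * len(img[0]))

def ip_contrast(img):
    """Low-level IP primitive: brightness range."""
    flat = [p for row in img for p in row]
    return max(flat) - min(flat)

def ip_symmetry(img):
    """Low-level IP primitive: crude left/right symmetry (0 = perfect)."""
    return -sum(abs(row[i] - row[-1 - i])
                for row in img for i in range(len(row) // 2))

IP_FUNCS = [ip_mean, ip_contrast, ip_symmetry]

def commentate(genome, img):
    """Genome = one weight per IP primitive; output = aesthetic score.
    Evolution decides how the primitives get wired to the score."""
    return sum(w * f(img) for w, f in zip(genome, IP_FUNCS))

def evolve_commentators(training_set, pop_size=30, generations=50):
    """Evolve commentator genomes by automated selection against a
    training set of (image, human_rating) pairs."""
    pop = [[random.uniform(-1, 1) for _ in IP_FUNCS]
           for _ in range(pop_size)]

    def error(genome):
        return sum(abs(commentate(genome, img) - r)
                   for img, r in training_set)

    for _ in range(generations):
        pop.sort(key=error)                    # best first
        parents = pop[:pop_size // 2]          # truncation selection
        pop = parents + [                      # mutated offspring
            [w + random.gauss(0, 0.1) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=error)
```

The point of the sketch is the division of labor: the IP primitives are hand-supplied, the human ratings anchor the fitness function, and evolution alone decides the wiring between them.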

But Steve Grand is right: for such a system to be anything more than a copycat laboratory assistant in generating my kind of images, it has to have something like physics, e.g., codified principles of aesthetics. With a genetic/evolutionary system like this, we wouldn't have to know *how* to piece together the parts that look at pixels and the various design principles -- evolution and the design of the image-commentator genotype take care of that. But the design principles themselves would have to be sufficiently sound for me to make the image-commentator population skin its knees while learning to walk. Once it starts walking (evolves commentators within epsilon of a training set of aesthetic fitness assignments), you turn the aesthetic fitness assignment of the image-generator population over to the commentators, step out of the loop, and see where it goes over thousands of generations.
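The hand-off step at the end of that paragraph can be stated compactly: train commentators until their predictions fall within epsilon of the human ratings, then let them assign fitness unattended. A minimal sketch, with `predict` standing in for whatever evolved commentator emerges (all names here are illustrative):

```python
def ready_to_hand_off(predict, training_set, epsilon):
    """True once the commentator is within epsilon of every human
    rating in the training set -- the 'starts walking' criterion."""
    return all(abs(predict(img) - rating) <= epsilon
               for img, rating in training_set)

def run_unattended(predict, generator_images):
    """Human steps out of the loop: the evolved commentator assigns
    aesthetic fitness to new generator images on its own."""
    return [predict(img) for img in generator_images]
```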

(Is this topic too tangential to Bill Riedel's call for writeups of alife systems helping out in the natural sciences to be included there?)

