Remko Scha and Eric Vreedenburgh
Towards a different architecture
Introduction.
Post-war housing
has been characterized by uniformity in space and time. Large numbers
of identically laid-out blocks were built, consisting of large
numbers of identical houses. And these houses seem to be designed
for eternity: there is no room for change.
If we want to build in a more flexible and varied manner, the architect
must give up the idea of being personally responsible for every
detail of every building he designs. He must design at a higher
level of abstraction -- specify rules and boundary conditions, and
leave the concrete instantiation to the end user or to chance.
In contemporary visual art, this more distanced approach already
has a rich tradition. The clearest example is algorithmic art,
where complex, unpredictable processes are defined by means of completely
explicit rules, which are executed consistently and precisely by
the computer.
Inspired by the example of algorithmic art, this article will explore
the possibility of computer-generated architecture -- the possibility
of realizing architectural designs as algorithms which do not specify
individual buildings, but infinitely large classes of different
possible buildings.
Positivist architecture.
Like all other
cultural activities, architecture is first and foremost a matter
of imitation and convention. A house, a department store or a museum
must look like other houses, department stores or museums. But at
the same time (as in all other cultural contexts), the designer
and his client often feel the subjective desire or the economic
necessity to distinguish themselves from their colleagues. Therefore,
buildings may be designed which are intended to be demonstrably
different from their predecessors, or even demonstrably better.
The architect who introduces smaller or larger departures from traditional
shapes, lay-outs, materials or construction methods, often indicates
his reasons. Usually, these refer to the efficient realisation of
the intended function of the building. Architecture thus seems a
positivist discipline, explicitly articulating the goals to be achieved
by a building, deducing its design decisions from these goals, and
evaluating the decisions in terms of a purely economic kind of
rationality. Buildings appear to be designed just like machines.
The ideal house is a dwelling machine, in which every detail
has a fixed, functionally motivated place.
Post-war housing displays a blatant discrepancy between ideal and
reality. The analytical, goal-oriented approach, intended to yield
provably optimal results, is completely counter-productive. A building's
actual functions cannot be specified explicitly -- they only emerge
in the interaction with the actual users, they keep changing in
the course of time, and they need not be fixed unambiguously at
any moment.
The functions employed by the functionalist architect are largely
fictitious. The economic component of the functionalist method
has therefore taken on a life of its own. The notion of a house
was reduced to the specification of a minimal set of requirements,
and economic reality equated the minimal description with the maximal
one. Functionalism served as an excuse to eliminate "superfluous" elements. The serial production method of car manufacturing became
the guiding example.
Postmodernist philosophy.
The hubris
of rational thought that we observe here is equally widespread outside
of architecture. That thinking tends to overestimate itself is almost
inevitable: the rational mind would like the world to be an orderly
and thereby controllable complex. For this reason we can never be
sure where the boundary lies between an open-minded search
for structures and an obsessive projection of them. But we
do have some experience with this issue now. Numerous times we have
witnessed that plausible-sounding ideas about methodology or social
structures were implemented all too literally and flopped completely.
This has been the fate of all explicitly formulated political philosophies,
and of all scientific approaches which involved explicitly formulated
methodologies (in fields like sociology, anthropology, psychology
and linguistics). The failure of functionalist architecture and
rationalist urban planning is just another example of this phenomenon.
The illusion of an individual subject that can articulate what it
knows and justify what it does has been a constitutive ideal in
our culture. This over-estimation of rationality explains the increasingly
comprehensive technocratization of the world. And it explains the
hubris which is often displayed by newly launched artistic, scientific
and social movements which can precisely explain their principles
and methods right away, before they have produced any results.
But perhaps a turning point has come in this development. Philosophers
like Jacques Derrida have popularized notions which allude to the
boundaries of rationality, such as ebb-sense (the withdrawal
of meaning and the meaning of withdrawal) and odder-ness
(the importance of intrinsically elusive things which are incommensurable
with our mental framework). Reality is different from what we know.
Life is more capricious than the schemas which try to capture it.
We therefore cannot control it. Those who do not want to admit this
will find it out all the more painfully.
These themes also appear in recently developed branches of mathematics
and physics which deal with unpredictability. Non-linear dynamics,
for instance, shows that arbitrarily small deviations in the initial
conditions of a physical system may give rise, in the course of
time, to ever increasing variations in the resulting state. In the
beginning of the twentieth century, physicists had already discovered
that it is not possible to observe all details of a physical system
precisely at the same time. This gave rise to quantum mechanics
-- calculations with wave functions which only describe the occurrence
probability of different states. Non-linear dynamics now
shows that the quantized nature of matter is not the only source
of uncertainty when we try to predict future states of a physical
system. In classical, deterministic physics, we cannot predict everything
either -- not even approximately. Our measurements of current states
always have a limited accuracy, and in a non-linear system this
produces an ever-increasing uncertainty about the future.
The issue now is what we should do with such ideas. What kind of
gay science is on the horizon if we make the dialectical step to
a rational consciousness which does not assume an articulation of
its rationality, to a subjectivity which can act sensibly without
trying to control and predict everything? In dealing with this question,
architecture may find inspiration in recent developments in visual
art.
Kantian esthetics.
The insight
that reality does not coincide with our conceptual models of it
is not new. It was formulated with inescapable clarity by Immanuel
Kant. And the idea that our rationality is more limited than our
cognition, that we know more than we can articulate, was stated
quite explicitly by the early nineteenth-century romantic artists
and philosophers. Since those days, art has defined itself against
the increasing rationality of main-stream culture, by being
the symbol of everything that rational thought ignores. Art celebrates
intuition and direct experience.
Whose intuition? Whose experience? Some artists, and many collectors,
critics and other fans would immediately point to the Divine inspiration
of the artistic genius. But there are other traditions in contemporary
art which emphasize the richness of all individual and collective
human experience. Marcel Duchamp, for instance, was very explicit
about this: "The
spectator makes the picture."
Kant viewed
the esthetic as a dimension of perception: perception which becomes
conscious of itself, when the process of input interpretation does
not yield a definite final result, but nevertheless creates a coherent
experience. When Duchamp assigned the status of "art work" to existing readymade objects, he drew a radical consequence from
Kant's point of view: that the input doesn't matter much, as long
as the observer's process of esthetic reflection can take its course.
Kant himself
already pointed out that works of art constitute sub-optimal input
material for this process, because the artist's intentions deflect
this process: they do induce a definite interpretation of
the art work, which ends the interpretive process before the cognitive
resonances which constitute the esthetic experience could have been
built up. For Kant, the paradigmatic esthetic experience does not
involve art, but natural phenomena.
Since Duchamp, increasingly many artists have accepted the challenge
that is implicit in Kant's ideas: to create a non-intentional art,
an art that can be experienced as a natural phenomenon. Several
rather different artistic movements have developed procedures for
generating art through more or less autonomous processes, initiated
by an artist who would not be able to predict the final result:
écriture automatique, action painting, physical
experiments, biological processes, systematic, conceptual, and stochastic
art. Sol LeWitt: "The artist's will is secondary to the
process he initiates from idea to completion. [...] The process
is mechanical and should not be tampered with. It should run its
course." [1]
Perhaps the
clearest example of this development is algorithmic art,
where a process is defined by completely explicit rules, executed
by the computer with extreme consistency and accuracy. By employing
mathematical simulations of chance, the unpredictability of the
outcome can be maximized.
Art is often viewed as a medium that an artist employs to transmit
profound thoughts to his audience. But what an observer considers
important or meaningful in an artwork is often independent of the
artist's intentions. That a computer has no intentions at all is
thus no reason to doubt the possibility of fully automatic computer-generated
art. Precisely the iron consistency and the relentless ardor of
the inhuman computer yield results which people may find interesting.
Analytic art.
But algorithmic
art did not come about as an immediate result of considerations
in philosophical esthetics. There is another source, which is the
tradition of analytical thinking about the structure of the image
-- the kind of thinking we find already in the art and writings
of the Renaissance painter/mathematician Piero della Francesca,
and which in the twentieth century became one of the most important
driving forces behind the artistic developments.
The pioneers of abstract art (Kandinsky, Malewitsch, Mondrian) and
their disciples (such as Lissitzky, Rodschenko, Van Doesburg, Vantongerloo)
always constructed their images by means of a limited repertoire
of elementary building blocks and operations. It almost seems as
if the individual images only served as means to discover an increasingly
pure and sharp visual language. But the results of this exploratory
process never became quite explicit, because, as it happens, painters
make paintings rather than languages.
This tautology became invalid in the sixties, when some new painting
movements were launched which employed all media except painting.
It became increasingly popular, for instance, to make descriptions
of visual situations, rather than actual paintings or sculptures.
Sometimes these descriptions were not meant to be executed, but
to define an art which would only exist at a conceptual level.
The nice thing about a description which is executed (a "visual
score"), is that it can often be realized in many different
ways. In that case, the artist cannot exactly predict the result
of the execution of his work. He fixes only certain properties,
and leaves other aspects to the "performer" or to chance.
If the description of an artwork or a class of artworks is specified
by a mathematically precise algorithm, this is called algorithmic
art. In principle, a simple algorithm can be executed manually
by a human being. But usually, algorithmic art is realized by a
digital computer.
An algorithm may define a large class of different images with complete
precision, for instance by indicating that all variations within
a certain pattern must be enumerated systematically. Or, if the
number of possible choices is too large to be realized one by one,
the algorithm may indicate that random samples are to be drawn from
a set of possibilities. In that case, every new execution of the
algorithm may yield new images.
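As a minimal illustration (the pattern is hypothetical, not taken from any particular artwork), the following Python sketch contrasts the two modes: exhaustive enumeration of every variant of a small pattern, and random sampling when the space of variants is too large to show one by one.

import itertools
import random

# Hypothetical pattern: a 3 x 3 grid of cells, each cell filled or empty.
CELLS = 9
STATES = (0, 1)          # 0 = empty, 1 = filled

def enumerate_all():
    """Systematically enumerate every variation within the pattern."""
    return itertools.product(STATES, repeat=CELLS)    # 2**9 = 512 images

def sample(n, rng=random):
    """Draw n random images when exhaustive enumeration is impractical."""
    return [tuple(rng.choice(STATES) for _ in range(CELLS)) for _ in range(n)]

if __name__ == "__main__":
    print(sum(1 for _ in enumerate_all()))    # 512 distinct images
    for image in sample(3):                   # a new selection on every run
        print(image)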
The algorithm is a "meta-artwork": the mathematical characterization
of a set of possible artworks. The visual language of an artist
is no longer implicitly suggested by an œuvre consisting of
individual works. The language is explicitly specified in the algorithm
which generates arbitrary examples from the œuvre.
Algorithmic art in the nineteen-sixties ties in with the analytical
movements in early abstract art, but it defines visual languages
which are less complex than those of Kandinsky, Malewitsch or Mondrian.
The neo-constructivist chance art of François Morellet and
Herman de Vries, for instance, employs algorithms which put a particular
shape (for instance a square, a circle or a line segment) on randomly
chosen positions on the plane. In fact, these algorithms were still
executed by hand; random choices were made by throwing dice or by
consulting random number tables.
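Rendered as a computer algorithm, such a procedure is only a few lines long. The sketch below is not a reconstruction of any specific work; it is a minimal Python version of the rule "put copies of one fixed shape at randomly chosen positions on the plane", with the dice and random-number tables replaced by a pseudo-random generator.

import random

def chance_composition(n_shapes=40, width=100.0, height=100.0, size=5.0, seed=None):
    """Place n_shapes copies of one fixed shape (here: an axis-aligned square
    of a given side length) at uniformly random positions on the plane."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, width - size), rng.uniform(0.0, height - size))
            for _ in range(n_shapes)]

if __name__ == "__main__":
    for x, y in chance_composition(n_shapes=5, seed=1):
        print(f"square of side 5.0 at ({x:.1f}, {y:.1f})")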
Artificial. [2]
The constructivist
tradition was concerned with harmony and purity. Today, that seems
a somewhat arbitrary and limited ideal. Expressionism taught us
the esthetics of ugliness. Duchamp demonstrated the esthetics of
indifference. The current challenge is an esthetics that encompasses
everything: beautiful, ugly, and indifferent.
Art is not a means of communication. It is meaningless raw material,
interpreted in an absolutely arbitrary way by a culturally heterogeneous
audience. There are no serious reasons for wanting to make certain
artworks rather than other ones. An artistic project that wants
to face this issue, must avoid choices, transcend styles, show
everything: generate arbitrary examples from the set of all
possibilities.
An individual, spontaneous artist cannot live up to this challenge.
What is needed, is a deliberate technological/scientific project,
with a sensible division of labor between man and machine. Human
artists/programmers should develop an algebraic definition of the
space of all possibilities; the computer can then choose and display
random examples from this space.
The image-generation project Artificial uses this approach
to realize the Kantian ideal of an art without artists. The algorithmic
techniques that Artificial deploys for this purpose are based
on the neo-constructivist chance art mentioned above.
One of the prototypical algorithms in sixties chance art, for instance,
puts tokens of a given visual shape on randomly chosen positions
on the plane. A similar algorithm creates arbitrary closed shapes
by combining line segments. These two algorithms can be combined
in an obvious way, so that both the shape and the position of the
image elements are determined at random. Other algorithms generate
a multitude of different regular patterns or regular shapes; these
can also be integrated. We may thus gradually abolish choice, by
avoiding the exclusion of any choice -- by affirming every choice,
and by putting it on a par with all other choices inside an all-encompassing
probabilistic system.
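A schematic sketch of how such an integration could look (an assumption-laden toy, not the actual Artificial code): each partial algorithm becomes one weighted choice inside a single probabilistic system, and every image element is produced by whichever generator chance selects.

import random

def placed_square(rng):
    """Sixties-style rule: one fixed shape at a random position."""
    return {"kind": "square", "pos": (rng.random(), rng.random())}

def random_polygon(rng):
    """Arbitrary closed shape built from randomly chosen line segments."""
    n = rng.randint(3, 8)
    return {"kind": "polygon", "vertices": [(rng.random(), rng.random()) for _ in range(n)]}

def regular_grid(rng):
    """A regular pattern whose density is chosen at random."""
    k = rng.randint(2, 10)
    return {"kind": "grid", "rows": k, "cols": k}

# Every partial algorithm is one choice among others; the weights are arbitrary.
GENERATORS = [(placed_square, 0.4), (random_polygon, 0.4), (regular_grid, 0.2)]

def generate_image(n_elements=20, seed=None):
    rng = random.Random(seed)
    gens, weights = zip(*GENERATORS)
    return [rng.choices(gens, weights)[0](rng) for _ in range(n_elements)]

if __name__ == "__main__":
    for element in generate_image(n_elements=5, seed=42):
        print(element)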
The ultimate consequence of this approach would be a computer program
generating all possible images, with probability distributions
that yield maximal diversity. It will not be easy to develop this
program. But the Artificial programs show that it is possible
now to make significant steps in this direction.
Cage: "I
was driving out to the country once with Carolyn and Earle Brown.
We got to talking about Coomaraswamy's statement that the traditional
function of the artist is to imitate nature in her manner of operation.
This led me to the opinion that art changes because science changes
- that is, changes in science give artists different understandings
of how nature works." [3]
In their metonymic
symbolizations of "chance", "nature" and "objectivity",
the process artists of the sixties manifested a deeply felt emotion
-- the desire for an art that does not originate from the whims
of the individual, but from a deeper necessity. The project of a
total, all-embracing chance art is now initiating the process of
actually satisfying that desire -- by treating absolute randomness
as the deepest necessity.
Cage: "Is
man in control of nature or is he, as part of it, going along with
it? [...] Not all of our past, but the parts of it we are taught,
lead us to believe that we are in the driver's seat. With respect
to nature. And that if we are not, life is meaningless. Well, the
grand thing about the human mind is that it can turn its own tables
and see meaninglessness as ultimate meaning." [4]
Artificial architecture?
Obviously,
Artificial constitutes a very constructive contribution to
contemporary autonomous art, where nothing is created any more without
the painful awareness that there are no good reasons to make exactly
this rather than something entirely different. Artificial
explodes this impasse. It posits a stimulating technical challenge
which takes our current relativistic insights seriously: to show
everything.
In architecture, the same issue is at stake. In many situations
it is inappropriate for the architect to force his individual taste
upon others, but it seems almost unavoidable. Inspired by the Artificial approach to autonomous art, we therefore propose an architecture
of chance, where the architect works at the right level of abstraction.
The designer is no longer concerned with expressive details. He
only defines the "rules of the game", which determine
which situations are possible at all. Rules which specify the size
of the playing field, which pieces are in the game, what groupings
of these pieces are possible, which moves can be made -- just as
in chess. In a particular context the architect can thus (if necessary!)
make decisions about scale, rhythm, or the repertoire of applicable
elements. In this way, a specific situation may get its own morphology.
Concerning the functional dimension of architecture: we view the
function of a building as variable. We choose not to fix
the function as the starting point of the design process. The concept
of "function" is replaced by the concept of "potential".
To take the potential of a building into account, chance architecture
algorithms must be somewhat more complex than algorithms for autonomous
art. To deal with constructive and functional aspects in an optimal
way, they must be integrated with automatic design techniques from
Artificial Intelligence. (Automatic design is one of the most successful
areas in A.I. In designing VLSI circuits, for instance, programs
are by far superior to human designers.)
The Palladio-machine.
The idea of
rule-based architectural design is not new. It occurs implicitly
in the very first theoretical essays about architecture that were
written in our cultural tradition. Especially relevant in this context
is the rediscovery of Vitruvius in the Renaissance, and the interpretation
of his works by Palladio.
Palladio was the first important architect who constructed his designs
by means of rules. This enabled him, for instance, to produce a
large number of variations on the theme of a 'villa'. "I Quattro
Libri dell' Architettura", published in 1570, mentions several
such rules. Other rules can be reconstructed by analyzing his villas,
as was done recently in a study by George Hersey and Richard Freedman. [5]
Hersey and
Freedman tested the correctness of the reconstructed rules by implementing
them in a computer program for generating new designs for Palladian
villas. In this way, they found many mistakes and inaccuracies in
the studies about Palladio from the last 400 years. It also turned
out that Palladio was not consistently precise in applying his own
explicit rules. Palladio's Platonic villa did not always survive
its materialization.
The rules employed by Palladio and his contemporaries were formulated
in terms of elementary transformations such as translation, rotation
and reflection. The same rules were used for designing the overall
structure and for designing component parts. In nature we also see
that simple transformation rules can yield complex results. And
Artificial displays this phenomenon as well.
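To make this concrete: the sketch below is a toy example of rule-based composition in that spirit, not a reconstruction of Palladio's rules or of Hersey and Freedman's program. It shows how the same elementary transformations (translation, reflection in a central axis) can serve both at the level of individual rooms and at the level of the whole plan.

def translate(room, dx, dy):
    """Elementary transformation: shift a rectangular room (x, y, width, height)."""
    x, y, w, h = room
    return (x + dx, y + dy, w, h)

def mirror(room, axis):
    """Elementary transformation: reflect a room in a vertical axis at x = axis."""
    x, y, w, h = room
    return (2 * axis - x - w, y, w, h)

def symmetric_plan(half_plan, axis):
    """Rule in the Palladian spirit: a plan is one half plus its mirror image."""
    return half_plan + [mirror(room, axis) for room in half_plan]

if __name__ == "__main__":
    # Hypothetical left half of a villa plan: three rooms, central axis at x = 10.
    left_half = [(0, 0, 4, 6), (4, 0, 6, 6), (0, 6, 10, 4)]
    plan = symmetric_plan(left_half, axis=10)
    # The same transformations apply at the larger scale, e.g. placing the plan on a site.
    placed = [translate(room, 100, 50) for room in plan]
    for room in placed:
        print(room)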
The game of architecture.
Let us get
back to the issues of today's architecture and urban planning. The
great challenge in this area consists in the necessity of flexibility
and the attractiveness of pluriformity. It is not a good idea to
fix someone's living environment forever, and it is even worse to
do that for everyone in the same way.
The building process must not be based on an unduly precise definition
of the function of "living", if our houses and cities
are not to become alienated from actual life. The very fact that the
building process is organized in a particular way, implies that
certain aspects of "living" will be ignored.
The designer only knows his own preferences, and usually he does
not recognize these as limitations or fixations. There is every
reason, therefore, to split design decisions into several levels.
On a higher level, boundary conditions are specified for the lower
level. The way in which these conditions are satisfied, may then
be determined by chance, or by the end-user, and thus reach beyond
the fixations of an individual designer.
Designing a house or a neighborhood should only consist of articulating
structure-defining rules (for instance about space, material and
capacity), and a specification of the dependencies between the different
rules and decision levels. In this way, for instance, the designer
specifies a set of potential partitions rather than one actual one.
Within the limits of the rules, the game can be played in different
ways, with different possible outcomes,
not necessarily anticipated by the designer.
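As a purely illustrative sketch (the rules, pieces and numbers below are invented for the example), such a game might be encoded as follows: a higher level fixes the playing field and the structure-defining rules, while a lower level, filled in here by chance but equally well by an end-user, produces one concrete partition that respects them.

import random

# Higher level: the designer fixes only the rules of the game.
RULES = {
    "grid": (4, 3),                                     # playing field: 4 x 3 bays
    "pieces": ["living", "bedroom", "kitchen", "bath", "void"],
    "min_bedrooms": 1,
    "max_voids": 2,
}

def respects_rules(layout, rules):
    """Check a candidate partition against the structure-defining rules."""
    return (layout.count("bedroom") >= rules["min_bedrooms"]
            and layout.count("void") <= rules["max_voids"])

def play(rules, rng=None):
    """Lower level: chance (or an end-user) fills in one concrete partition."""
    rng = rng or random.Random()
    cols, rows = rules["grid"]
    while True:
        layout = [rng.choice(rules["pieces"]) for _ in range(cols * rows)]
        if respects_rules(layout, rules):
            return layout

if __name__ == "__main__":
    print(play(RULES, random.Random(7)))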
For a rule-designer who wants to explore the possibilities and limitations
of his game, the computer will be indispensable. The next step would
be to design algorithms which transform or generate rules. Formulating
new rules would then become part of the game. This may engender
growth processes which go much beyond the creativity of the individual
designer.
If we look at the increasing complexity of the 'games' which are
being played in algorithmic art, artificial intelligence, and biological
simulations ('artificial life'), then such a playful architecture
seems possible in principle. But is that going to help things? Is
the realization of a more pluriform architecture not precluded anyway
by practical kinds of problems?
Flexible production methods.
The current
practice in housing-construction suggests a very clear pessimistic
answer to this question. Every step in the direction of a more flexible
procedure tends to be viewed as absolutely impossible, because it
seems incompatible with industrial serial production. Industrial
serial production is considered the only way to take advantage
of technology in the production process, to be used whenever it
is too expensive to manufacture products individually by hand.
It is interesting to note that even in car manufacturing, industrial
serial production is no longer considered the only game in town.
It was introduced in the beginning of the twentieth century by Ford,
but was rejected by the Japanese immediately after the Second World
War, when Toyota developed the lean production [6] method,
which combines elements from handcrafted production with elements
from mass production. Production lines are used, but unlike in classical
production lines, the machines are suited for different operations.
And the machines are designed so that they can be easily and quickly
set up and adjusted. This integration of craftsmanship and automation
makes it possible to produce smaller series with more variety without
significant additional costs.
Lean production involves innovation at a technological as well as
an organizational level -- concerning issues such as logistics (avoiding
stock-piling), quality management, administration and machinery,
as well as the coordination between designer, producer and supplier.
The flexible nature of the resulting production process makes it
possible to produce customized series -- which was altogether out
of the question in Ford's production lines.
And lean production is not the final step. Production machinery
is increasingly computer-controlled, and therefore increasingly
flexible. As a result, industrial manufacturing of completely individualized
products is becoming a real possibility. Automation used to
be equivalent to uniformity. In the future, it will be the prerequisite
of diversity.
The Components House. [7]
In architecture, too,
steps have been taken in the direction proposed here. A beginning
that we can build on was the development of the Components House.
This involved a set of project-independent components that can be
used to put together different series of houses.
For the Music District in Almere, Archipel Ontwerpers realized
two housing projects with very different kinds of houses, combined
in a variety of configurations. This approach was developed further
in a system called the Components House, where the building blocks
are components of houses rather than complete houses. The Components
House is based on a number of very simple rules concerning production,
space and function. Nevertheless, the components can be used to
put together a wide variety of houses and living environments. At
the time we did not yet have the opportunity to develop architectural
results by means of generative algorithms. We therefore employed
handcrafted designs.
To facilitate controlling the logistics of the design and construction
process, the Components House is divided into three subsystems. The
first subsystem contains the elements which determine the space
of the house: the concrete elements [8] for building the basic construction. The second subsystem contains
various systems for finishing and materializing the (outside) space.
The third subsystem contains systems for technical functionality,
including "house equipment" [9],
intermediary conduit systems and inside walls.
The Components House as a system thus consists of subsystems which
in their turn consist of further subsystems, components and elements.
But because these different subsystems are connected with each other
in complex ways, they do not constitute a simple tree structure.
They form a heterarchy rather than a hierarchy.
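The difference is easy to show in data-structure terms. In the sketch below (the component names beyond the three subsystems described above are invented for the illustration), a shared component such as an intermediary conduit interface is referenced by more than one subsystem, so the structure is a graph with multiple parents -- a heterarchy -- rather than a tree.

# Illustrative only: subsystems as a graph of named parts. A node with more
# than one parent makes the structure a heterarchy instead of a tree.
SYSTEM = {
    "components_house": ["space_structure", "finishing", "technical"],
    "space_structure":  ["concrete_elements", "conduit_interface"],
    "finishing":        ["facade_systems", "roof_systems"],
    "technical":        ["house_equipment", "conduit_interface", "inner_walls"],
}

def parents(node, system):
    """In a tree every part has one parent; here some parts have several."""
    return [parent for parent, children in system.items() if node in children]

if __name__ == "__main__":
    print(parents("conduit_interface", SYSTEM))    # ['space_structure', 'technical']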
Conclusion.
The Components
House fits in well with the modus operandi of an art generation
algorithm such as Artificial, which we discussed above. Artificial
works with a specification of a complex set of elements and patterns,
and of all manners of combining these elements within these patterns;
from the set of possibilities defined in that way, the program then
draws random samples.
The Components House also specifies a set of precisely defined elements.
The 'patterns' for the Components House are put together out of
descriptions concerning the architectural language, the materials
to be used and their properties, the desired spatial capacity of
the houses, the scale and the degree of differentiation of the composition,
the financial possibilities, the constraints defined by building
regulations and urban planning, etc., etc. The computer program
can generate series of variants of houses that comply with these
descriptions. Because of their formal complexity, these variants
will have a much more organic "look and feel" than the
unequivocal housing blocks which human architects currently press
through the sieve of traditional production.
Footnotes
[1] Sol LeWitt: "Sentences on Conceptual Art." Art-Language
1,1 (May 1969).
[2]
The Artificial programs were designed and implemented by Remko Scha.
Contributions to various stages of software development by Anthony
Bijnen (Metaform Software, Amsterdam), Vangelis Lykos (Academie
van Bouwkunst, Amsterdam) and Boele Klopman (Technische Universiteit
Twente, Enschede).
[3]
John Cage: Preface to: 'Where are we going? And what are we doing?'
In: Silence. Lectures and Writings by John Cage. Middletown,
Connecticut: Wesleyan University Press, 1973, p. 194.
[4]
From the same preface by John Cage, pp. 194/195.
[5]
George Hersey and Richard Freedman: Possible Palladian
Villas (Plus a Few Instructively Impossible Ones). Cambridge,
Mass.: The MIT Press, 1992.
[6]
The concept of lean production was introduced in the MIT
research reported in: Womack, Jones and Roos: The Machine that
Changed the World. New York: Macmillan. 1990.
[7]
The first version of the Components House was developed by Archipel
Ontwerpers in collaboration with Nevanco housing.
[8]
This first version was based on a concrete construction by Heembeton.
Wood, steel or bricks may be applied as well, though every material
will impose its own conditions concerning span, stability and production.
[9]
By "house equipment" we mean furniture-type realizations
of functions such as cooking, washing, and waste disposal. This
equipment is connected through an intermediary conduit-system to
the central conduit-tube. This creates complete freedom to rearrange
the apartment lay-out. Applying Matura prefab-elements, Archipel
Ontwerpers has recently designed two example projects on the
basis of this concept.