


This paper appeared in the Proceedings of the Conference Vision Plus 4, Carnegie Mellon University, Pittsburgh, 26-29 March 1998, pages 14-25. In the call for papers of this conference, the idea of a "Republic of Information" was emphatically put forward as a topic for discussion.



A republic of information designers


Jos de Bruin and Remko Scha

Institute of Artificial Art Amsterdam

Summary

Information Design requires system architects, not information architects. It will be about developing algorithms for automatic and semi-automatic visualization, not about creating specific designs. It will be about design systems and design theories, not about deciding for the rest of us what the world should look like.

Anyone who can speak or write or draw is by definition involved in the design of information. In the information republic that we envisage, all citizens will have access to all raw information, and use automatic design tools to represent that information for themselves or for others in whatever way they want.

Existing automatic visualization techniques point the way, but are not yet up to this task. This paper stresses the distinction between the communicative and the esthetic dimension of visual information. It argues that indeterminacy is inherent in communication and essential to esthetics, and that this indeterminacy should be incorporated into the generation processes used by visual design systems. It points to some of the relevant research that is going on already, and sketches the architecture of the interactive visualization systems of the future.

Introduction

Talk of a "Republic of Information" raises some uneasy feelings. Its solemn capitals evoke images of a disembodied network in Cyberspace, in which information has taken on a life of its own and individual persons have been reduced to replaceable I/O-devices, controlled by the spin doctors who used to work for Orwell's Ministry of Truth. Such an authoritarian cyber-state is completely at odds with the idea that the digital revolution could lead to more involved citizens.

The introduction of an information elite does little to reassure us. Wurman (1996) sees a heroic role for "a group of people, small in number, deep in passion, called Information Architects", struggling forward through the "field of black volcanic ash" constituted by current design, in order to save humanity from the "tsunami of data that is crashing onto the beaches of the civilized world". This sounds more like a blurb for the next Spielberg blockbuster, with Information Architects as the good guys, than like a serious proposal about the role of information design. However, the conference brochure similarly suggests that the "Republic of Information" is "going to be laid out and planned by a new breed of architects, informed with a new level of understanding and purpose".

We would like to propose an alternative approach. Let's drop the capitals. The "republic of information" could then refer to a worldwide community of people, freely engaged in producing and processing information -- a community that includes virtually everybody. After all, anyone who can speak or write or draw is by definition involved in the design of information. In contrast to what the term Information Age suggests, information is not a commodity, like iron or steam or electricity, that mankind has only recently learned to harness for its needs, and that requires a new type of expert. Humans have always spent most of their lives processing information. What is new today is the role of technology. This is not the Information Age, but the Information Technology Age.

The world's information infrastructure is developing at a very fast rate. The Web epitomizes this trend. Huge databases are or will soon be instantaneously accessible to large numbers of people. These people will increasingly be able to create large and complex databases of their own, by selecting and recombining subsets from existing databases, by employing automatic procedures for collecting their own data, and by performing complex computations. In order to inspect, distribute and publish these databases, people will have to be able to represent them in a visual way.

The need for data visualizations is thus going to explode. Ever more people will regularly be involved in visualizing complex data sets. These people do not want to become professional Information Designers. Nor do they want to hire professional Information Designers, because they want their visualizations immediately. They need computer-aided, interactive or fully automatic data visualization.

In this paper we investigate what is involved in accomplishing this. To do that, we must distinguish two dimensions of information design. The first is the "communicative" dimension: is the visual representation easy to decode, and is it likely to give rise to the interpretation that was actually intended? The second is the "esthetic" dimension: does it give rise to visual pleasure? Does it look fashionable? Does it fit in with the "corporate identity" of the sender?

Note that these two concerns are necessarily in conflict. If communication were the only goal, or had unassailable priority, we would set ISO standards and Deutsche Industrie Norms and stick to them. It is clear that nobody wants to do that. We have seen what happened with functionalism in architecture.

The R&D tradition aiming at automatic visualization algorithms has been almost exclusively concerned with the communicative dimension. It has largely ignored many relevant aspects of visual cognition, and it has completely ignored the esthetic dimension. That is why existing automatic visualization techniques do not yet answer the challenge of providing the world with automatic information design systems. They still need human design experts to tune the systems, to tweak their inputs, to modify their outputs, or even to redo things from scratch when the system fails completely.

This paper proposes to overcome these limitations by accepting that indeterminacy is inherent in communication and essential to esthetics, and incorporating this indeterminacy into the generation processes used by visual design systems. It points to some of the relevant research that is going on already, and sketches the architecture of the interactive visualization systems of the future.

The problem of automatic visualization

Information design as a discipline is based on the assumption that information can be visualized in a systematic manner, i.e. that there are rules that govern the interpretation of visualizations. By taking these rules into account, a designer can create effective visualizations that reflect the relevant properties of the underlying data. Automatic information design takes this idea literally: it tries to make these rules explicit and formal, in order to make it possible to automate their application. This would allow end-users to focus on the information they want to express, leaving decisions on how to express this information to a computer program.

To be able to develop such automatic design systems, we have to be formal, i.e. precise and unambiguous, about each of the steps needed to create an effective visualization. We have to describe in mathematical terms:
  •   the information we want to express;
  •   the graphical means that are available;
  •   which designs are appropriate for what type of information;
  •   what procedure we will use to select a particular design for a given piece of information;
  •   and how we will implement that design and produce an actual visual rendering.
Mathematical data characterization: relational databases

To understand current automatic visualization systems, it is useful to begin by looking in some detail at the first of these issues: how to characterize the information that is to be visualized. This characterization has to be independent of the analysis of graphical attributes; otherwise we will not be able to specify the relations between these two levels clearly and unambiguously. Twyman's otherwise thought-provoking Schema for the Study of Graphical Language (1979) illustrates how a failure to distinguish clearly between these two levels limits the usefulness of a classification scheme.

To specify the data characteristics, we can make use of the formal semantics of information as developed in logic, statistics, linguistics and computer science. One particularly useful formalism is relational algebra (Codd, 1970; Date, 1975), the formalism underlying relational databases. Relational databases specify the extensions of n-place predicates: e.g. the employees of a company are enumerated in a table together with their values for the functions Name, Salary, Department. This is by far the most common type of information considered for visualization.

A basic distinction among the attributes of relations is the way their values can be ordered. In nominal attributes such as Name, the set of values is unordered: the values can be distinguished, but not compared further in any meaningful way. In ordinal attributes, values can be ranked, but their differences or distances cannot. An interval attribute does allow such comparison of differences, while values on a ratio scale can even be compared in absolute terms.
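
As a minimal sketch of such a data characterization (the Scale and Attribute classes and the example values are our own illustrative choices, not part of any particular system), the scale types can be modeled as labels attached to the columns of a relation:

```python
from dataclasses import dataclass
from enum import Enum

class Scale(Enum):
    NOMINAL = "nominal"    # values can only be distinguished
    ORDINAL = "ordinal"    # values can be ranked
    INTERVAL = "interval"  # differences between values are meaningful
    RATIO = "ratio"        # values can be compared in absolute terms

@dataclass
class Attribute:
    name: str
    scale: Scale

# The extension of the 3-place predicate Employee(Name, Salary, Department):
employee_schema = [Attribute("Name", Scale.NOMINAL),
                   Attribute("Salary", Scale.RATIO),
                   Attribute("Department", Scale.NOMINAL)]
employee_table = [("Alice", 52000, "Sales"),
                  ("Bob", 48000, "R&D")]
```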

Automatic visualization: the state of the art

Typical automatic visualization systems such as APT (Mackinlay, 1986) or SAGE (Roth et al., 1994) are built to exploit the properties of certain kinds of relational data sets. These systems typically assume a rather limited set of well-understood graphical representations (graphs, bar-charts, maps) as potential graphical structures. Esthetic details are fixed arbitrarily or must be specified by the end-user.

The visualization problem is dealt with in these systems as a matter of finding the best mapping from the data to the available graphical structures. Systems of this sort take the basic scale types discussed above as a point of departure. In this they follow in the footsteps of Bertin (1983), who used these types as the basis for his seminal treatment of graphs and maps. The scale types have clear implications for the kind of graphical attributes that are appropriate for expressing the data. For instance, shape and color are effective for distinguishing items, but not for ranking them; saturation is good for ranking a limited number of values; and position along a scale is a good representation for quantitative values.
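
A hedged sketch of how such correspondences might be encoded (the rankings below paraphrase Bertin-style guidelines rather than reproduce them, and the channel names are our own):

```python
# Graphical channels, ranked roughly by effectiveness per scale type.
# The exact ordering is an illustrative assumption.
CHANNELS_BY_SCALE = {
    "nominal":  ["position", "color_hue", "shape", "texture"],
    "ordinal":  ["position", "saturation", "size"],
    "interval": ["position_along_scale", "size"],
    "ratio":    ["position_along_scale", "length", "angle"],
}

def assign_channels(scales):
    """Greedily give each data attribute the best channel still free."""
    taken, assignment = set(), {}
    for name, scale in scales:
        channel = next(c for c in CHANNELS_BY_SCALE[scale] if c not in taken)
        taken.add(channel)
        assignment[name] = channel
    return assignment

print(assign_channels([("Salary", "ratio"), ("Department", "nominal")]))
# {'Salary': 'position_along_scale', 'Department': 'position'}
```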

While the distinction between these scale types is a useful starting point, it is not enough. Many researchers have felt the need to add more specific semantic types. Time, space, temperature, money, etc. all have their own special characteristics, which influence the appropriate choice of graphical attributes (see Roth and Mattis, 1990, for some additional types). This is an open issue, with each special domain introducing additional distinctions (see e.g. Zhou and Feiner, 1996 for types required within the medical domain).

The scope of this approach can be extended by distinguishing different types of predicates or relations. Sets, functions, trees and lists can all be specified as n-place predicates with special properties. Each of these has different implications for effective presentations. Kamps et al. (1996) give a taxonomy of binary predicates and their appropriate graphical representations.
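
In the same spirit, the special properties of a binary predicate can be detected automatically. The crude classifier below is a sketch inspired by, but far simpler than, the taxonomy of Kamps et al.:

```python
def classify_binary(pairs):
    """Detect structural properties of a binary predicate and suggest
    a representation. An illustrative sketch, not Kamps et al.'s rules."""
    left = [x for x, _ in pairs]
    if len(left) == len(set(left)):             # each x maps to one y
        return "function -> e.g. bar chart or scatter plot"
    if all((y, x) in pairs for x, y in pairs):  # symmetric
        return "symmetric relation -> e.g. undirected graph"
    return "general relation -> e.g. directed graph or adjacency matrix"

print(classify_binary({("Sales", 12), ("R&D", 7)}))                    # a function
print(classify_binary({("a", "b"), ("b", "a"), ("a", "c"), ("c", "a")}))  # symmetric
```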

A mathematical characterization of the space of possible graphical structures

Current automatic visualization systems consider rather limited sets of rather prototypical visualizations. What would it mean to try to take the whole range of possibilities into account? How can all possible graphics be systematically described? Spreadsheets, statistical and mathematical packages, Geographical Information Systems and other tools with facilities for data presentation offer an ever-increasing set of templates or graph-types. Can we describe these in terms of a shared set of structural properties?

Visualizations are useful for communication to the extent that their interpretations are shared. Some interpretations can only be shared if all participants learn the direct and arbitrary associations between pictures and meanings. Like symbols or words, most iconic pictures can only be understood if their meaning is given. However, the real power of visualizations lies in their ability to express new meanings that go beyond the given meanings of their components or attributes, just like a sentence or larger text expresses more than an unstructured collection of words.

Therefore a first prerequisite for specifying design rules is a good way to describe pictures in terms of their structure. Automatic design presupposes the availability of a set of graphical categories and a generative formalism in which specific visualizations can be described. Just as a language can be concisely specified in terms of a grammar and a lexicon, we can specify an unlimited number of possible graphics by using a limited set of basic terms, plus a set of operators with which new terms or categories can be constructed out of these basic terms.
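
As a toy illustration of such a generative formalism (the basic terms and operators below are our own minimal choices, not a proposed standard):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Dot:                      # a basic term
    color: str

@dataclass
class Beside:                   # operator: horizontal composition
    left: "Picture"
    right: "Picture"

@dataclass
class Above:                    # operator: vertical composition
    top: "Picture"
    bottom: "Picture"

Picture = Union[Dot, Beside, Above]

# A 2x2 checkerboard, specified constructively rather than drawn by hand:
checkerboard = Above(Beside(Dot("black"), Dot("white")),
                     Beside(Dot("white"), Dot("black")))
```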

One place to look for the type of operators required is in drawing tools such as MacPaint or AutoCAD. These contain a large number of operators that are suitable candidates for inclusion in a generative image grammar. Relevant also are interface builders, such as systems for computational steering (van Liere and van Wijk, 1996), which provide users with low-level graphical objects from which to construct interfaces that provide direct control over selected parameters of a simulation.

Mappings from visual Gestalts to information structures

While constructive operators are sufficient to define the space of graphical structures, they provide a misleading angle on the visualization problem. Visualization is not so much a matter of finding the right visual structure for a piece of information as of judging whether a given visual structure does indeed cause the intended perceivers to draw the intended inferences. It is not enough that there is an isomorphism between the information structure and some description of its graphical representation; what matters is that there is such an isomorphism between the information structure and the structure of the graphical representation that is actually perceived. Existing automatic visualization systems do not embody theories about the way in which visual input is perceived by humans. This is an important limitation, even if existing theories are still sketchy.

Gestalt Psychology (Wertheimer, 1938) has distinguished some of the organizing principles, such as proximity, continuity and similarity, that underlie perceptual structures or Gestalts. Gestalt Psychology was brought to the US by German psychologists fleeing the Third Reich. While they survived, their psychology succumbed to the behavioristic mainstream of the time. It resurfaced in the work of Leeuwenberg (1971), who explained the Prägnanz of certain Gestalts (i.e. the perceptual preference for certain structural interpretations) in terms of a complexity measure over expressions that specify the operations needed to recreate the pictures that cause these Gestalts. This work was initially restricted to "turtle graphics", i.e. line-based structures that are essentially one-dimensional. Dastani is currently extending it to real 2-D by developing a mathematical characterization of the space of all possible Gestalts, and a mathematical description of the mappings from visual inputs to visual Gestalts as performed by the human visual/cognitive system.
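
The flavor of Leeuwenberg's complexity measure can be conveyed with a small sketch. The encoding and the cost function below are crude stand-ins for the real perceptual coding language:

```python
def complexity(expr):
    """Count the primitive symbols in a coding expression; repetition
    operators are cheap. A crude stand-in for Leeuwenberg's measure."""
    if isinstance(expr, tuple) and expr[0] == "repeat":
        _, _, body = expr
        return 1 + complexity(body)      # one symbol for the operator itself
    if isinstance(expr, list):
        return sum(complexity(e) for e in expr)
    return 1                             # a primitive element

# Two encodings of the same line pattern "a b a b a b":
verbatim = ["a", "b", "a", "b", "a", "b"]
coded = ("repeat", 3, ["a", "b"])

print(complexity(verbatim), complexity(coded))  # 6 vs. 3: the cheaper code
# predicts that the pattern is perceived as a repeated pair (its Gestalt)
```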

Mappings are sets of rules that specify how the elements and operators that compose a structure in one domain can be translated into the elements and operators that compose the corresponding structure in the other domain. Once we understand which visual structures are perceived, these structures have to be mapped to their corresponding information structures. A mathematical characterization of the space of possible mappings from Gestalts to information structures has to include a preference measure on this space that reflects how easily the human cognitive system performs this mapping. (Cf. the rules for "good" or "effective" design.)

One of the first designers to formulate such mappings was the cartographer Bertin (1983). He based his firm prescriptions for good design on a number of intuitively appealing correspondences between the way people can distinguish colors, saturation levels, positions, shapes, sizes etc. on the one hand, and the properties of different types of data (continuous, ordinal, nominal) on the other hand. Dastani (1997) is formalizing these mappings using isomorphisms between relational and graphical algebras. Engelhardt et al. (1996, 1998) elaborate the semantic use of space. Wang et al. (1997) formalize these ideas in a design theory based on mappings between data signatures and graphical signatures. Design systems such as APT and SAGE use these mappings in a more pragmatic way, and in the other direction, to generate designs based on data characteristics.

Indeterminacy and effective designs

Given the mappings, a visualization algorithm can be developed. This algorithm will have to make a number of indeterminate choices, since the mappings are neither sufficient nor necessary: they do not cover all details, and they hold only as preferences, not as laws. In other words, there will always be many different ways to convey the same information, and this leaves an enormous space for variation.

There are three sources for this indeterminacy. The first has to do with what we have called the esthetic dimension: a design generated by an automatic visualization system will never be complete. The data characteristics will never completely determine fonts, colors, the shape of marks, the use of illustrations and so on. The same design (e.g. a line graph with time as X-axis, temperature on the Y-axis) can be rendered in many ways.

Two other sources of indeterminacy lie within the communicative realm. Firstly, more than one mapping might be applicable, i.e. more than one design might be appropriate. The type of graph, the coupling of data dimensions to scales etc., each may well allow multiple choices. Secondly, the information itself will normally be more or less indeterminate. Initially users will not know exactly what they want to express or what they are looking for.

A visualization algorithm should not only find and apply the appropriate mappings, but should also have a strategy for dealing with the large set of alternative possibilities for visualizing any particular piece of information. The usual approaches are: (1) forced determinism (the system makes certain fixed choices) or (2) user control (the system lets the end-user decide on the steps to take). Both of these strategies are problematic. In the first approach, the set of possibilities is limited in an arbitrary way. If the second approach is taken seriously, end-users must make too many choices which they do not understand, with a similar result: the set of possibilities ends up being limited in an equally arbitrary way.

The VISAGE framework developed by Roth et al. (1997) follows the second approach. In VISAGE, the SAGE design engine is augmented with SAGEBRUSH, a sketch pad for new designs, and SAGEBOOK, a repository of previous designs, both of which are intended to give the user more control over the process. We expect that these will lead to less variation and less surprising designs.

Giving both designer and end-user more control was also the aim of the M modeling tool (de Bruin et al., 1996). M is an integrated simulation and visualization system. It is used to develop user interfaces to simulation models of environmental and public health issues, such as climate change. These complex issues require the involvement of many different experts, affect many people and are surrounded by much uncertainty and controversy. It should be easy for designers of such models to adapt their presentations to different audiences, be they policy makers, other experts or members of the general public. These audiences should also be able to explore these models by themselves, e.g. by focusing on details of particular concern to them.

To achieve this, the M interface design engine uses its knowledge of the model variables and their dependencies to produce appropriate visualizations for different data types, and knowledge of the model equations to produce network diagrams that reflect the dependencies between variables. Users can always override these suggestions by direct manipulation of the interface design. As different occasions or users may require different designs, multiple views on the same model can be generated and presented simultaneously.

M has been used successfully in large modeling efforts involving up to 10 designers over periods of 4-5 years. While users are generally satisfied with the resulting interfaces, they often shy away from taking full advantage of the options provided. They tend to stick to basic templates and pre-selected settings, instead of actively generating alternative views that might provide new or more effective insights.

Random design

Therefore, we propose a different approach, which allows users to take better advantage of the myriad possibilities for expressive designs. The inspiration for this approach comes from the use of mathematical randomness in "chance art", which was introduced in music and visual art in the late fifties. (Cf. Brecht, 1957; Cage, 1973; Morellet, 1962; Nake, 1974.) Artists practicing this genre do not design individual works of art; instead, they specify constructive definitions of large classes of possible artworks, and then execute randomly selected samples from these classes. Chance art was first developed by artists with a "minimalist" frame of mind; their definitions of "artwork schemata" were therefore extremely simple. Typical examples would be: the set of all possible colorings of a grid of squares, using a very small set of colors; the set of all possible positionings on a plane of a random number of identical black dots; or the set of all possible ways to connect randomly chosen points on the plane by means of straight lines with a fixed width and a fixed color.
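
The first of these artwork schemata is simple enough to state in a few lines of code (a sketch; the palette and grid size are arbitrary):

```python
import random

PALETTE = ["black", "white", "red"]   # a very small set of colors

def sample_artwork(n=4):
    """Execute one randomly selected member of the class of all
    colorings of an n x n grid of squares."""
    return [[random.choice(PALETTE) for _ in range(n)] for _ in range(n)]

for row in sample_artwork():
    print(" ".join(f"{color:5}" for color in row))
```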

More recently, this approach has been developed further in two different directions. On the one hand, some artists have implemented complex algorithms that emulate specific artistic styles (cf. Cohen, 1979). More relevant for our current discussion, however, is the "postmodernist" generalization of chance art exemplified by the program "Artificial" (Eberson, 1993; Kluitenberg and Harry, 1998; Scha, 1989.) Here the goal is precisely not to emulate a particular style, but to develop an algorithm that can generate any image, in any possible style -- an algorithm that embodies absolute meaninglessness and total arbitrariness, by implementing an all-encompassing "style to end all styles".

The project of fully automatic, completely random "Artificial Art" is of course not finished yet, and it is not even clear whether it ever will be. But it has been demonstrated that significant steps in this direction can be taken. "Artificial" employs a mathematical picture-description language that incorporates notions from programs like MacPaint and from Gestalt-perception theories like Leeuwenberg's. By executing randomly generated expressions from this language, "Artificial" displays an interesting caricature of art-generating behavior. This technique can obviously be applied to what we have called the "esthetic" dimension of the automatic visualization problem, because here we have the same situation: a wealth of possible choices which are ultimately arbitrary, but which the end-user may nevertheless find significant for unfathomable reasons. Rather than making one fixed, ill-motivated choice, the ideal system should operate with an explicit representation of the whole "choice-space" that is available, and present the user with samples from that space.

The approach we suggest can thus be summarized as "generate and test". Choices should be made by the system, not deterministically and once, but randomly and repeatedly. The system should thus generate many variations of appropriate visualizations, possibly taking into account preference measures and user-defined constraints. The task of the users is to pick the ones they like best.

A requirement is that the visualizations generated are reasonable candidates: they should express the underlying data faithfully and take the known Gestalt principles into account. However, even so there will exist many reasonable variants, some of which will be surprisingly better at conveying particular aspects to particular users. Finding these will be an iterative process: the user can interactively modify constraints and preferences.
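
A minimal sketch of this generate-and-test loop (the choice-space and the example constraint are hypothetical placeholders, not the vocabulary of any existing system):

```python
import random

# A toy choice-space of fully specified designs:
CHARTS = ["line", "bar", "scatter"]
PALETTES = ["monochrome", "warm", "cool"]
FONTS = ["Helvetica", "Times", "Futura"]

def random_design():
    return {"chart": random.choice(CHARTS),
            "palette": random.choice(PALETTES),
            "font": random.choice(FONTS)}

def generate_and_test(constraints, n=5):
    """Sample designs randomly and repeatedly; keep those that satisfy
    the user's hard constraints. The user picks the ones they like."""
    kept = []
    while len(kept) < n:
        design = random_design()
        if all(ok(design) for ok in constraints):
            kept.append(design)
    return kept

# e.g. the data are a time series, so the user insists on a line graph:
for design in generate_and_test([lambda d: d["chart"] == "line"]):
    print(design)
```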

Enlisting the unlimited capacity of the computer to apply the design rules without prejudice, i.e. in a truly random fashion, yields much more variation. The algorithm we have in mind will not be tempted to stick to previously successful designs, except of course when it is told to do so. This freedom to explore uncharted areas is an obvious advantage in art, where there is no need for communication, and the esthetic experience of the observer is the only criterion. Artists perform their function in a suboptimal way, because of their need for subjective expression and story-telling, and their tendency to follow fads and fashions (cf. Harry, 1992; Vreedenburgh and Scha, 1994).

To the extent that esthetic pleasure also plays a role in information design, the same argument applies. But does it also apply to the communicative aspects of information design? Seen as communications, visualizations are expressions of something and not just l'art pour l'art, and one could argue that a search for all possible variants is irrelevant or perhaps even counter-productive. If a visualization works, it works, so why confuse people by straying from generally accepted conventions and familiar images? Why not leave it up to the users to adapt a design to their esthetic tastes, or to change the data specification in order to get another view that might correspond more closely to what they could be looking for?

There are two closely related reasons why the freedom to explore the full space of potential designs is an important advantage for information design: (a) designers exploring a data set seldom know in advance what exactly they are looking for, and (b) even if designers know exactly what they want to tell their audience, they normally have neither full knowledge of this audience nor full control over the circumstances in which this audience will be looking at their message.

In other words, the visualization cannot be fully specified in advance, and a single, once-and-for-all design of the information is not sufficient. In these cases – and they are by far the most common – automatic design has to be part of an iterative search for the right way to present some piece of information. Whether communicating with oneself (i.e. exploring information) or communicating information to others, the computer can help in the search for the most effective presentation by (a) reducing the space of potential forms and (b) quickly testing candidates from the remaining space. Using the computer, this testing does not necessarily have to be done by the sender. A sender could obviously send multiple versions of a message in the hope of thus increasing the chances of conveying his message to a diverse audience. However, it is simpler and more appropriate to send the contents, and let recipients use design algorithms and decide for themselves on the precise form in which to look at the message.

The role of style in the generation of graphics

Information design wears two hats, often at the same time. Wearing one hat, information design is about expressing information in an objective, scientific manner, about "not lying with visualizations" and improving insight (see Tufte, 1997, for some rather high-minded opinions on this point, and some nice illustrations). Wearing its other hat, information design is about helping customers to find a corporate style, i.e. forms that express their individual and distinguishing characteristics: what they stand for or, at least, what they would like their customers to believe about them. In this role designers play the facilitator and act as a kind of medium for the corporation-as-artist.

The term information architect suggests that information designers feel that they should look to their colleagues of the bricks-and-mortar variety for inspiration. Whether by inclination or because the technology for another approach has not yet been fully developed, real architects (the ones that do not need a qualifier) generally believe that they can objectively translate the functions required of a building into a particular, fixed form. This presumption often leads to arbitrary straitjackets for the users of their buildings.

Instead of taking this approach to design as their inspiration, information designers should take up the challenge and translate requirements not into final answers, but into further restrictions on the rules or mappings that specify the space of possible answers. Information design thus becomes a two-step process (a minimal code sketch follows the list below):
  1. based on characteristics of the data, a goal-driven process generates (under-specified) visual structures that are optimal for conveying these characteristics;
  2. fully specified instances of these visual structures are generated randomly, possibly subject to further "arbitrary" constraints of style.
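
A minimal sketch of this two-step pipeline (the rules, parameters and house-style values are illustrative assumptions, not a worked-out design theory):

```python
import random

def generate_structure(scales):
    """Step 1, goal-driven: data characteristics fix the visual
    structure, but leave its 'esthetic' parameters open."""
    if set(scales) <= {"interval", "ratio"}:
        return {"type": "line_graph"}          # e.g. time vs. temperature
    return {"type": "bar_chart"}

def instantiate(structure, style):
    """Step 2, random: fill in the free parameters, subject to the
    arbitrary constraints of a house style."""
    design = dict(structure)
    design["color"] = random.choice(style["palette"])
    design["font"] = random.choice(style["fonts"])
    return design

house_style = {"palette": ["navy", "grey"], "fonts": ["Futura"]}
structure = generate_structure(["interval", "ratio"])
print(instantiate(structure, house_style))     # a different variant each run
```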

This last step is crucial. In the current approach to design, a designer either specifies a design down to the last pixel, or puts together a style guide that is so complex that it can only be used by other designers to create variants within a style. We suggest that these styles can be, and should be, formulated as (additional) mapping rules. Design systems can then impose specific style guides by applying these additional rule sets. Technological progress makes it possible to incorporate the creation of these variants directly into the production process of documents. Documents, letters, bills etc. are no longer printed on pre-printed forms, but as a whole, including letterheads and other design elements. The technology is thus in place for much more customized information design, not only on screen but also on paper. Random design is needed to make optimal use of these possibilities.

Conclusion

Information design is about providing users with tools for extending their means of communication, with others and with themselves. The essence of communication is self-correction, the ability to start speaking or writing and home in on a subject by continually listening to oneself or one’s partner and correcting the misunderstandings and gaps present in the expression so far. To allow this to take place also during visualization, indeterminacy should not be removed from the system, but be used as a force for exploration and creativity.

Information design can thus profit in at least two ways from the approach embodied in chance art, i.e. the random application of formal design rules. To the extent that design is involved with creating pleasing decorations, randomness can create much more variation – e.g. a different illustration in each individual letterhead – and thus lead to more diverse and more enjoyable esthetic experiences. By using a restricted grammar, this diversity can still remain recognizable as belonging to a specific style. To the extent that information design is about effective data visualization, the introduction of randomness in the process of automatic design increases the chances of finding more effective presentations.

In the information republic that we envisage, all citizens will have access to all raw information, digitally represented in standard database formats; and everyone will be able to represent that information for themselves or for others in whatever way they want. Visualizing individual data sets will be obsolete as a professional activity. It will become a nostalgic art form, like oil painting after the invention of photography. Information Design will be about developing algorithms for automatic and semi-automatic visualization. It will be about design systems and design theories, not about deciding for the rest of us what the world should look like.

References

Bertin, J. (1983) Semiology of Graphics. Madison: University of Wisconsin Press.

Brecht, G. (1957) Chance-Imagery. New York: Something Else Press, 1966.

Bruin, J. de, P. de Vink and J. van Wijk (1996) M - A Visual Simulation Tool. In: Simulation in the Medical Sciences, The Society for Computer Simulation, San Diego, 181-186.

Cage, J. (1973) Silence. Lectures and Writings by John Cage. Middletown, Connecticut: Wesleyan University Press.

Codd, E.F. (1970) A relational model of data for large shared data banks. Communications of the ACM 13(6), 377-387.

Cohen, H. (1979) What is an image? In: Proceedings International Joint Conference on Artificial Intelligence, Tokyo, pp. 1028-1057.

Dastani, M. (1997) A Formal Framework for Visual Representation. TVL 97, International Workshop on Theory of Visual Languages, Capri, Italy; pp. 117-126.

Date, C.J. (1975) An introduction to database systems, Addison-Wesley.

Eberson, H. (ed.) (1993): Artificial. Amsterdam: Trademark. (With contributions by M. Bruinsma, P. van Emde Boas, P. Groot, R. Scha and D. van Weelden.)

Engelhardt, Y., J. de Bruin, T. Janssen and R. Scha (1996): The visual grammar of information graphics. In: Artificial Intelligence in Design (AID '96), Workshop on Visual Representation, Reasoning and Interaction in Design, June 1996, Stanford University.

Engelhardt, Y. (1998) Meaningful space in diagrammatic graphics. These proceedings.

Harry, H. (1992): On the Role of Machines and Human Persons in the Art of the Future. Pose.

Kluitenberg, E. and H. Harry (1998): "Human art is dead. Long live the algorithmic art of the machine." Mute 9, pp. 14-21.

Leeuwenberg, E.L.J. (1971): A Perceptual Coding Language for Visual and Auditory Patterns. American Journal of Psychology, Vol. 84, 3.

Liere, R. van and J.J. van Wijk (1996): CSE - A modular architecture for computational steering. 7th Eurographics Workshop, Prague.

Mackinlay, J. (1986) Automating the design of graphical presentations of relational information. ACM Transactions on Graphics, Vol. 5, No.2.

Morellet, F. (1962): "For an experimental programmed painting" and "From the spectator to the spectator or the art of unpacking the picnic". In: Exhibition catalogue "Morellet", Nationalgalerie Berlin, 1977.

Nake, F. (1974): Aesthetik als Informationsverarbeitung. Grundlagen und Anwendungen der Informatik im Bereich aesthetischer Produktion und Kritik. Vienna/New York: Springer Verlag.

Roth, S.F. and J. Mattis (1990): Data characterization for intelligent graphics presentation. In: Proceedings of the 1990 Conference on Human Factors in Computing Systems, ACM/SIGCHI, New Orleans.

Roth, S.F., M.C. Chuah, S. Kerpedjiev, J.A. Kolojejchick and P. Lucas (1997): Towards an Information visualization workspace: combining multiple means of expression. Human-Computer Interaction Journal.

Scha, R. (1989): Computer/Art/Photography, Perspektief, 37, pp. 4-10.

Tufte, E.R. (1997): Visual Explanations. Graphics Press.

Twyman, M. (1979): A Schema for the Study of Graphical Language. In: P.A. Kolers, M.E.Wrolstad, H. Bouma (eds.) Processing of Visible Languages. Vol.1. New York, Plenum Press.

Vreedenburgh, E. and R. Scha (1994): Vers une autre Architecture. Zeezucht 7, 8, pp. 6-14.

Wang, D., Y. Engelhardt and H. Zeevat (1997): Formal Specification of a Graphic Design Theory. TVL 97, International Workshop on Theory of Visual Languages, Capri, Italy; pp. 97-116.

Wertheimer, M. (1938): Laws of Organization in Perceptual Forms. In: A Source Book of Gestalt Psychology, ed. W.D. Ellis, London, Routledge & Kegan Paul.

Wurman, R.S. (1996): Information Architects. Graphis Press.

Zhou, X.Z. and S.K. Feiner (1996): Data Characterization for Automatically Visualizing Heterogeneous Information. In: Proceedings InfoVis '96, San Francisco, IEEE, pp. 13-20.


Bios

Jos de Bruin is a scientist at the Dutch National Institute of Public Health and the Environment, Bilthoven. His research is concerned with data visualization and interactive simulations.

Remko Scha is head of the Department of Computational Linguistics in the Faculty of Humanities of the University of Amsterdam. His research is concerned with computational models of language processing and perception. He also developed an automatic guitar band ("The Machines") and an art generation algorithm ("Artificial").